From low-rank retractions to dynamical low-rank approximation and back

In algorithms for solving optimization problems constrained to a smooth manifold, retractions are a well-established tool to ensure that the iterates stay on the manifold. More recently, it has been demonstrated that retractions are a useful concept for other computational tasks on manifolds as well, including interpolation tasks. In this work, we consider the application of retractions to the numerical integration of differential equations on fixed-rank matrix manifolds. This is closely related to dynamical low-rank approximation (DLRA) techniques. In fact, any retraction leads to a numerical integrator and, vice versa, certain DLRA techniques bear a direct relation with retractions. As an example of the latter, we introduce a new retraction, called the KLS retraction, that is derived from the so-called unconventional integrator for DLRA. We also illustrate how retractions can be used to recover known DLRA techniques and to design new ones. In particular, this work introduces two novel numerical integration schemes that apply to differential equations on general manifolds: the accelerated forward Euler (AFE) method and the Projected Ralston–Hermite (PRH) method. Both methods build on retractions by using them as a tool for approximating curves on manifolds. The two methods are proven to have local truncation error of order three. Numerical experiments on classical DLRA examples highlight the advantages and shortcomings of these new methods.


Introduction
This work is concerned with the numerical integration of manifold-constrained ordinary differential equations (ODEs) and, in particular, the class of ODEs evolving on fixed-rank matrix manifolds encountered in dynamical low-rank approximation (DLRA) [KL07].

Outline. Section 2 collects preliminary material on differential geometry and DLRA. The link between retractions and existing DLRA techniques is discussed in Section 3. Then, Section 4 presents two new retraction-based integration schemes: the accelerated forward Euler method in Section 4.1 and the Projected Ralston-Hermite integration scheme in Section 4.2. Numerical experiments reported in Section 5 assess the accuracy and the stability with respect to small singular values of both methods and provide a performance comparison with state-of-the-art techniques. In Section 6, a discussion on future work directions concludes the manuscript.

Preliminaries

Differential geometry
In this section, we briefly introduce concepts from differential geometry needed in this work. The reader is referred to [Bou23] for details.
In the following, M denotes a manifold embedded into a finite-dimensional Euclidean space E. When necessary, M is endowed with the Riemannian submanifold geometry: for every x ∈ M, the tangent space T x M is equipped with the metric inherited from the inner product of E. The orthogonal complement of T x M in E is the normal space, denoted by N x M. The orthogonal projections from E onto T x M and N x M are respectively denoted by Π(x) and Π ⊥ (x) := I − Π(x). The disjoint union of all tangent spaces is called the tangent bundle and denoted by T M. Every ambient space point p ∈ E that is sufficiently close to the manifold admits a unique metric projection onto the manifold [AM12, Lemma 3.1] and we denote it by Π M (p) := arg min x∈M ‖p − x‖.
For a smooth manifold curve γ : R → M, its tangent vector or velocity vector at t ∈ R is denoted by γ̇(t) ∈ T γ(t) M. If the curve γ is interpreted as a curve in the ambient space E, or if γ maps to a vector space, we use the symbol γ ′ (t) to indicate its tangent vector at t ∈ R.
The Riemannian submanifold geometry of M is complemented with the Levi-Civita connection ∇, which determines the intrinsic acceleration of a smooth manifold curve γ as γ̈(t) := ∇ γ̇(t) γ̇(t), ∀ t ∈ R. We reserve the symbol γ ′′ (t) to indicate the extrinsic acceleration of the curve intended as a curve in the ambient space E.
Manifold of rank-r matrices. The leading example in this work is the case where E = R m×n and M = M r , the manifold of rank-r matrices. We represent rank-r matrices and tangent vectors to M r with the conventions used in [Van13] as well as in the MATLAB library Manopt [BMAS14] that was used to perform the numerical experiments. Any Y ∈ M r is represented in the compact factored form Y = UΣV ⊤ ∈ M r with U ∈ R m×r , V ∈ R n×r both column orthonormal and Σ = diag(σ 1 , . . ., σ r ) containing the singular values ordered as σ 1 ≥ · · · ≥ σ r . Any tangent vector T ∈ T Y M r is represented as T = UMV ⊤ + U p V ⊤ + UV p ⊤ , where M ∈ R r×r , U p ∈ R m×r and V p ∈ R n×r are such that U ⊤ U p = V ⊤ V p = 0. The manifold M r inherits the Euclidean metric from R m×n so each tangent space is endowed with the inner product ⟨W, T ⟩ = Tr(W ⊤ T ) and the Frobenius norm ‖W ‖ = ‖W ‖ F , for any W, T ∈ T Y M r .
Then, the orthogonal projection onto the tangent space at Y of any Z ∈ R m×n is given by

Π(Y )Z = UU ⊤ ZV V ⊤ + (I − UU ⊤ )ZV V ⊤ + UU ⊤ Z(I − V V ⊤ ). (1)

The metric projection onto the manifold M r of a given A ∈ R m×n is uniquely defined when σ r (A) > σ r+1 (A). It is computed as the rank-r truncated SVD of A and we denote it by Π Mr (A).
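Formula (1) is straightforward to implement once the orthonormal factors U and V are available. The following NumPy sketch (function and variable names are ours, not from the paper or from Manopt) applies the projection and can be checked against its defining properties:

```python
import numpy as np

def tangent_projection(U, V, Z):
    """Project an ambient matrix Z onto the tangent space of the rank-r
    manifold at Y = U @ Sigma @ V.T, following formula (1).
    U (m x r) and V (n x r) have orthonormal columns."""
    PU = U @ U.T          # projector onto the column space of Y
    PV = V @ V.T          # projector onto the row space of Y
    # equivalent rearrangement of (1): Z PV + PU Z - PU Z PV
    return Z @ PV + PU @ Z - PU @ Z @ PV
```

One can verify that the map is idempotent and leaves tangent vectors of the form UMV ⊤ + U p V ⊤ + UV p ⊤ unchanged.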
Retractions. A retraction is a smooth map R : T M → M : (x, v) → R x (v) defined in a neighborhood of the origin of each tangent space T x M such that R x (0) = x and DR x (0)[v] = v, for all v ∈ T x M. The defining properties of a retraction can be condensed into the fact that for any x ∈ M and v ∈ T x M the map τ → σ x,v (τ ) := R x (τ v) is well-defined for any sufficiently small τ and parametrizes a smooth manifold curve that satisfies σ x,v (0) = x and σ̇ x,v (0) = v. If the manifold is endowed with a Riemannian structure and it holds that σ̈ x,v (0) = 0 for any x and v, then the retraction is said to be of second order.

DLRA
To explain the basic idea of DLRA, let us consider an initial value problem governed by a smooth vector field F : R m×n → R m×n :

A′(t) = F (A(t)), A(0) = A 0 . (2)

DLRA aims at finding an approximation of the so-called ambient solution A(t) ∈ R m×n on the manifold of rank-r matrices, for fixed r ≪ min {m, n}, in order to improve computational efficiency while maintaining satisfactory accuracy. Indeed, representing the approximation of A(t) in factored form drastically reduces the storage complexity. The central challenge of DLRA consists in computing efficiently a factored low-rank approximation of the ambient solution without having to first estimate the ambient solution and then truncate it to an accurate rank-r approximation. In the following, the rank is fixed a priori and does not change throughout the integration interval. Then, the DLRA problem associated to the ambient equation (2) can be formalized as follows.
Problem 1. Given a smooth vector field F : R m×n → R m×n , an initial matrix A 0 ∈ R m×n and a target rank r such that σ r (A 0 ) > σ r+1 (A 0 ), the DLRA problem consists in determining t → Y (t) ∈ M r solving the following initial value problem:

Ẏ (t) = Π(Y (t))F (Y (t)), Y (0) = Y 0 , (3)

where Y 0 = Π Mr (A 0 ) ∈ M r is the rank-r truncated singular value decomposition of A 0 .
The origins of the above problem are rooted in the Dirac-Frenkel variational principle, by which the dynamics of (2) are optimally projected onto the tangent space of the manifold.
In fact, for any Y ∈ M r , the orthogonal projection of F (Y ) onto T Y M r returns the closest vector in the tangent space:

Π(Y )F (Y ) = arg min V ∈T Y Mr ‖F (Y ) − V ‖ F . (4)

This optimality criterion, together with the optimal choice of Y 0 , is a local first-order approximation of the computationally demanding optimality Y opt (t) = Π Mr (A(t)). The gap ‖(I − Π(Y ))F (Y )‖ F in (4) between the original dynamics and the projected dynamics of Problem 1 is known as the modeling error [KV19]. Given appropriate smoothness requirements on F and assuming the modeling error can be uniformly bounded in a neighborhood U ⊂ R m×n of the trajectory Y ([0, T ]) ⊂ M r of the exact solution of (3) as

max X∈U ∩Mr ‖(I − Π(X))F (X)‖ F ≤ ε,

then it can be shown, see e.g. [KV19, Theorem 2], that there exists a constant C > 0 depending on the final time T such that

‖A(t) − Y (t)‖ F ≤ C (δ 0 + t ε), ∀ t ∈ [0, T ],

where δ 0 = ‖A(0) − Y (0)‖ F denotes the initial approximation error.
In recent years, several computationally efficient numerical integration schemes to approximate the solution to Problem 1 have been proposed [LO14, CL22, KV19]. Their output is a time discretization of the solution, where Y k ∈ M r approximates Y (k∆t), for every k = 0, . . ., N, assuming -for simplicity- that a fixed step size ∆t = T /N is used. Existing error analysis results [KLW16, KV19] state that, provided the step size is chosen small enough, the error at the final time can be bounded as

‖A(T ) − Y N ‖ F ≤ C (δ 0 + ε + ∆t q ),

for some integer q ≥ 1 and a constant C > 0 that depends on the final time and the problem at hand. The integer q is called the convergence order of the time stepping method. The most direct strategy to numerically integrate Problem 1 is to write Y in a rank-r factorization and derive individual evolution equations for the factors [KL07]. However, the computational advantages of such an approach are undermined by the high stiffness of the resulting equations when the rth singular value of the approximation becomes small, which is often the case in applications. In turn, this enforces unreasonably small step sizes in order to guarantee the stability of explicit integrators. The projector-splitting schemes proposed by Lubich and Oseledets [LO14] were the first to remedy this issue. Since then, a collection of methods have been designed that achieve stability independently of the presence of small singular values [KLW16, CL22, CKL23, CL23, KNV21]. As we show in the following section, some of these DLRA algorithms are directly related to retractions.

DLRA algorithms and low-rank retractions
The concept of retraction was initially formulated in the context of numerical time integration on manifolds. In the work of Shub [Shu86], what is now called a retraction was proposed as a generic tool to develop extensions of the forward Euler method that incorporate manifold constraints. Indeed, any retraction admits the following first-order approximation property. Let t → Y (t) denote the exact solution to a manifold-constrained ODE such as (3) but on a general embedded manifold M. Provided the vector field governing the ODE is sufficiently smooth, the defining properties of a retraction imply that [Shu86, Theorem 1]

Y (∆t) = R Y (0) (∆t Ẏ (0)) + O(∆t 2 ).

This motivates the retraction-based forward Euler scheme

Y k+1 = R Y k (∆t Π(Y k )F (Y k )). (9)

Let us now specialize the discussion on the relation between retractions and the numerical integration of manifold-constrained ODEs to the DLRA differential equation of Problem 1 evolving on the fixed-rank manifold M r . Implementing the general scheme (9) requires the choice of a particular low-rank retraction. A substantial number of retractions are known for the fixed-rank manifold [AO15] and, as discussed in the following, several existing DLRA integration schemes are realized as (9) for a particular choice of retraction on M r .

Metric-projection retraction
A large class of methods to numerically integrate manifold-constrained ODEs fits into the class of projection methods; see [HLW10, §IV.4]. Each integration step is carried out in the embedding space with a Euclidean time-stepping method and is followed by the metric projection onto the constraint manifold. For the DLRA differential equation of Problem 1, projection methods have been studied in [KV19]. One step of the projected forward Euler method takes the form

Y k+1 = Π Mr (Y k + ∆t Π(Y k )F (Y k )), (10)

where Π Mr coincides with the rank-r truncated SVD. This integration scheme fits into the general class of retraction-based schemes (9). Indeed, given X ∈ M r and Z ∈ T X M r , the metric projection of the ambient space point X + Z defines a retraction [AM12, §3], which we refer to as the SVD retraction. Since Z ∈ T X M r , the matrix X + Z is of rank at most 2r. This allows for an efficient implementation of this retraction, as detailed in Algorithm 1. The projected forward Euler method has order q = 1 [KV19, Theorem 4]. To achieve higher orders of convergence, projected Runge-Kutta (PRK) methods have been proposed [KV19, §5]. The intermediate stages of such methods are obtained by replacing the forward Euler update in (10) with the updates of an explicit Runge-Kutta method. Although the update vectors may not belong to a single tangent space, the PRK recursion can still be written because the SVD retraction has the particular property that it remains well-defined not only for tangent vectors but also for sufficiently small general vectors Z ∈ R m×n (a property that makes it an extended retraction [AO15, §2.3]). We shall abbreviate the PRK method with s ≥ 1 stages as PRKs. Note that PRK1 coincides with the projected forward Euler method above.
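The rank-2r structure can be exploited so that only small factorizations are needed. The following NumPy sketch conveys the idea behind Algorithm 1 (our own naming, not a transcription of it): given the factors of X and the components (M, U p , V p ) of a tangent vector Z, the sum X + Z factors through 2r-dimensional subspaces.

```python
import numpy as np

def svd_retraction(U, S, V, M, Up, Vp):
    """SVD (metric-projection) retraction on the rank-r manifold:
    truncated SVD of X + Z, where X = U S V^T and the tangent vector is
    Z = U M V^T + Up V^T + U Vp^T.  Since X + Z has rank at most 2r,
    a QR of two tall blocks and an SVD of a 2r x 2r core suffice."""
    r = S.shape[0]
    # X + Z = [U, Up] @ K @ [V, Vp]^T with the 2r x 2r core K below
    K = np.block([[S + M, np.eye(r)], [np.eye(r), np.zeros((r, r))]])
    Qu, Ru = np.linalg.qr(np.hstack([U, Up]))
    Qv, Rv = np.linalg.qr(np.hstack([V, Vp]))
    W, sig, Zt = np.linalg.svd(Ru @ K @ Rv.T)
    # keep the r dominant singular triplets
    return Qu @ W[:, :r], np.diag(sig[:r]), Qv @ Zt[:r, :].T
```

The output factors reproduce the rank-r truncated SVD of the dense sum X + Z whenever σ r > σ r+1 .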
Algorithm 1 SVD retraction on M r . Input:

Projector-splitting KSL retraction
The expression (1) for the orthogonal projection of Z ∈ R m×n onto the tangent space at Y = UΣV ⊤ ∈ M r can be rearranged as

Π(Y )Z = ZV V ⊤ − UU ⊤ ZV V ⊤ + UU ⊤ Z. (12)

The projector-splitting scheme for DLRA introduced in [LO14] is derived by applying this decomposition to the right-hand side of (3) and using standard Lie-Trotter splitting. Each integration step comprises three integration substeps identified with the letters K, S and L, corresponding respectively to the three terms in (12). When each of the integration substeps is performed with a forward Euler update, we refer to this scheme as the KSL scheme. The scheme is proved to be first-order accurate independently of the presence of small singular values [KLW16, Theorem 2.1].
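A minimal NumPy sketch of one forward-Euler KSL step may help fix ideas. It is a simplified rendition with our own naming (the substeps of [LO14] integrate three ODEs; here each is replaced by a single forward Euler update, with the vector field re-evaluated at each intermediate factored iterate), not a transcription of the original scheme.

```python
import numpy as np

def ksl_step(U0, S0, V0, F, dt):
    """One forward-Euler KSL (Lie-Trotter projector-splitting) step.
    F maps an m x n matrix to an m x n matrix; the iterate is kept in
    factored form Y = U S V^T throughout."""
    Y0 = U0 @ S0 @ V0.T
    # K-step: update the column space
    K1 = U0 @ S0 + dt * F(Y0) @ V0
    U1, S_hat = np.linalg.qr(K1)
    # S-step: backward correction of the core (note the minus sign)
    S_tilde = S_hat - dt * U1.T @ F(U1 @ S_hat @ V0.T) @ V0
    # L-step: update the row space
    L1 = V0 @ S_tilde.T + dt * F(U1 @ S_tilde @ V0.T).T @ U1
    V1, S1t = np.linalg.qr(L1)
    return U1, S1t.T, V1
```

For dt = 0 the step reproduces the starting point exactly, and for small dt it agrees with the projected forward Euler update to second order in dt.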
As shown in [AO15, Theorem 3.3], one step of the KSL scheme actually defines a second-order retraction for the fixed-rank manifold. In fact, it coincides modulo third-order terms with the orthographic retraction presented in Section 3.3.1. The KSL retraction is denoted by R KSL and its computation is summarized in Algorithm 2. Hence, the KSL integration scheme fits into the general scheme (9) for Problem 1 as it can simply be written as

Y k+1 = R KSL Y k (∆t Π(Y k )F (Y k )).

Projector-splitting KLS retraction
Recently, a modification of the KSL projector-splitting method was proposed to improve its (parallel) performance while maintaining the stability and accuracy properties of the KSL method [CL22]. The scheme is known as the unconventional integrator and is a modification of the KSL scheme in which the L-step is performed before the S-step. Hence, when each of the integration substeps is performed with a forward Euler update, we refer to it as the KLS scheme. The KLS scheme comes with the computational advantage of allowing the K-step and the L-step to be performed in parallel without compromising first-order accuracy and stability with respect to small singular values [CL22, Theorem 4].
As we prove in the next section, one step of the KLS scheme also defines a retraction for the fixed-rank manifold. We denote it by R KLS and its computation is detailed in Algorithm 3. Then, as for the other schemes so far, the KLS integration scheme for DLRA takes the simple form

Y k+1 = R KLS Y k (∆t Π(Y k )F (Y k )).

Algorithm 3 KLS retraction. Input:
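In the same spirit as the KSL sketch above, a minimal NumPy rendition of one forward-Euler KLS step reads as follows (our naming; a simplified sketch of the unconventional integrator of [CL22], not a transcription of Algorithm 3):

```python
import numpy as np

def kls_step(U0, S0, V0, F, dt):
    """One forward-Euler step of the KLS (unconventional) scheme: the
    K- and L-steps are independent (parallelizable), followed by an
    S-step that updates the core in the new bases."""
    Y0 = U0 @ S0 @ V0.T
    G = F(Y0)
    # K-step and L-step can run in parallel: new column and row bases
    U1, _ = np.linalg.qr(U0 @ S0 + dt * G @ V0)
    V1, _ = np.linalg.qr(V0 @ S0.T + dt * G.T @ U0)
    # S-step: Galerkin update of the core in the new bases
    S_tilde = (U1.T @ U0) @ S0 @ (V0.T @ V1)
    S1 = S_tilde + dt * U1.T @ F(U1 @ S_tilde @ V1.T) @ V1
    return U1, S1, V1
```

As with KSL, the step is exact for dt = 0 and first-order consistent with the projected dynamics for small dt.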

The KLS retraction and the orthographic retraction
The goal of this section is to prove that the update rule of the KLS scheme, as detailed by Algorithm 3, defines a second-order retraction.The strategy to reach this goal is based on the observation that the KLS update is obtained as a small perturbation of another commonly encountered retraction, known as the orthographic retraction.
Algorithm 4 Orthographic retraction. Input:

The orthographic retraction. Specialized to the fixed-rank matrix manifold M r , the orthographic retraction consists in perturbing the point X ∈ M r in the ambient space as X + Z ∈ R m×n and projecting back onto the manifold, but only along vectors from the normal space of the starting point, see Figure 1. Formally, this reads

R ORTH X (Z) = arg min Y ∈Mr ∩(X+Z+N X Mr ) ‖X + Z − Y ‖ F .

A closed-form expression for the solution of this optimization problem for the case of M r is established in [AO15, §3.2] and is the output of Algorithm 4. In virtue of the analysis carried out in [AM12], the orthographic retraction is a second-order retraction. A remarkable property of the orthographic retraction is that its local inverse can be computed easily. As suggested by the construction illustrated in Figure 1, the inverse orthographic retraction is obtained by projecting the ambient space difference onto the tangent space, i.e.,

(R ORTH X ) −1 (Y ) = Π(X)(Y − X).

This opens the possibility to use the inverse orthographic retraction in practical numerical procedures such as Hermite interpolation of a manifold curve [SK22], used in the definition of the Projected Ralston-Hermite scheme in Section 4.2.
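For concreteness, the closed form of [AO15, §3.2] and the inverse map can be sketched in NumPy as follows (our naming; the identity in the comment, that the retracted point equals X + Z plus a normal-space correction U p (Σ + M) −1 V p ⊤ , follows from the closed form):

```python
import numpy as np

def orthographic_retraction(U, S, V, M, Up, Vp):
    """Orthographic retraction on the rank-r manifold, in the factored
    conventions of this paper: start at X = U S V^T, move along the
    tangent vector Z = U M V^T + Up V^T + U Vp^T, and correct along the
    normal space at X.  Requires S + M invertible (small Z)."""
    S1 = S + M
    K = U @ S1 + Up                      # m x r
    L = V @ S1.T + Vp                    # n x r
    # the retracted point is K @ inv(S1) @ L.T = X + Z + Up inv(S1) Vp^T
    return K, np.linalg.inv(S1), L

def inverse_orthographic(U, V, X, Y):
    """Inverse orthographic retraction: the tangent projection of Y - X
    at X, with U, V the orthonormal factors of X."""
    PU, PV = U @ U.T, V @ V.T
    D = Y - X
    return D @ PV + PU @ D - PU @ D @ PV
```

Since the correction term lies in the normal space at X, the inverse map recovers Z exactly.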
A careful inspection of the computations for the orthographic retraction reported in Algorithm 4 reveals that it is identical to the update rule of the KLS scheme given in Algorithm 3, up to the computation of Σ 1 . Let Σ KLS 1 and Σ ORTH 1 denote the quantities computed respectively at step 5 of Algorithm 3 and step 3 of Algorithm 4. Note that Σ ORTH 1 can be rewritten without explicitly computing the factors S U and S V . This highlights that the quantities Σ 1 computed in the KLS scheme and in the orthographic retraction differ only by terms of higher order in Z. Using this observation together with the second-order property of the orthographic retraction allows us to show that the KLS procedure defines a second-order retraction. This link with the orthographic retraction mirrors the analogous observation made for the closely related KSL scheme [AO15, Theorem 3.3].
Proposition 1.The procedure of Algorithm 3 defines a second-order retraction (called the KLS retraction).
Proof. The proof relies on a necessary and sufficient condition for a retraction to be a second-order retraction stated in [AM12, Proposition 3]. On a general Riemannian manifold M, a mapping R : T M → M is a second-order retraction if and only if for all (x, v) ∈ T M the curve t → R x (tv) is well-defined for all sufficiently small t and satisfies

R x (tv) = Exp x (tv) + O(t 3 ),

where t → Exp x (tv) is a parametrization of the geodesic passing through x with velocity v obtained with the Riemannian exponential map at x [Bou23, Definition 10.16].
Given X ∈ M r and Z ∈ T X M r , the orthonormalizations of the first two lines of the KLS scheme in Algorithm 3 are uniquely defined provided the matrices to orthonormalize have full rank.This is the case for any Z sufficiently small by lower semi-continuity of the matrix rank.By smoothness of the orthonormalization process, we know that the matrix Σ 1 depends smoothly on Z. Since for Z = 0 we have Σ 1 = Σ 0 , assumed to be full rank, the matrix Σ 1 has full rank for sufficiently small Z.
Hence, the KLS update R KLS X (Z) is uniquely and smoothly defined for any Z in a neighborhood of the origin of T X M r and belongs to M r . Now, consider the curves t → R ORTH X (tZ) and t → R KLS X (tZ), well-defined for sufficiently small t. Since these curves share the left and right singular vectors U 1 (t) and V 1 (t), their difference is given by U 1 (t)(Σ KLS 1 (t) − Σ ORTH 1 (t))V 1 (t) ⊤ . Since the difference Σ KLS 1 (t) − Σ ORTH 1 (t) is smooth and can be shown to be of order o(t 3 ), it follows that R KLS X (tZ) = R ORTH X (tZ) + o(t 3 ). Using the result [AM12, Proposition 2.3] recalled at the beginning of the proof and the second-order property of the orthographic retraction, we obtain that the retraction curve t → R ORTH X (tZ) approximates the geodesic t → Exp X (tZ), as R ORTH X (tZ) = Exp X (tZ) + O(t 3 ). Combining the two results leads to

R KLS X (tZ) = Exp X (tZ) + O(t 3 ).

Using once again the result from [AM12, Proposition 2.3], this implies that R KLS X is indeed a second-order retraction.

New retraction-based time stepping algorithms
Having seen in Section 3 that existing DLRA time integration algorithms can be directly related to particular low-rank retractions, we now discuss the opposite direction: how low-rank retractions can be used to produce new integration schemes.
Before proceeding to the derivation of the new methods, we first briefly describe the rationale behind them for the initial value problem (2) evolving in R m×n . We let A(∆t) denote the exact solution at time t = ∆t and Ã(∆t) its approximation obtained by performing one step of a given numerical integration scheme starting from A(0). The scheme is said to have order q if the local truncation error satisfies

‖Ã(∆t) − A(∆t)‖ F = O(∆t q+1 ).

In other words, any curve Ã(•) on R m×n that is a sufficiently good approximation of A(•) in the vicinity of 0 defines a suitable numerical integration scheme. In the two methods we propose, the curves Ã are constructed through manifold interpolation, based on retractions. By definition, see Section 2.1, any retraction induces manifold curves with prescribed initial position and velocity. If, additionally, the retraction is of second order, it is possible to also prescribe an initial acceleration.
When setting x = γ(0), v = γ̇(0) and a = γ̈(0) for a smooth manifold curve γ, it is shown in [Sé23, Proposition 3.22] that the Riemannian distance between γ and the retraction curve constructed in Proposition 2 satisfies

dist(γ(t), R x (tv + (t 2 /2) a)) = O(t 3 ),

if R is a second-order retraction.

An accelerated forward Euler scheme
The first method we propose for DLRA is obtained by accelerating the projected forward Euler scheme (10) through a second-order correction.

Euclidean analog
For illustration, let us first derive the accelerated forward Euler method for the initial value problem (2) evolving in R m×n . If the solution to (2) is sufficiently smooth, we get from its Taylor expansion that

A(∆t) = A 0 + ∆t A′(0) + (∆t 2 /2) A′′(0) + O(∆t 3 ).

Assuming we can compute A′′(0) = DF (A 0 )[F (A 0 )], the differential of the vector field along F (A 0 ), the Euclidean accelerated forward Euler (AFE) scheme is defined by the update

A k+1 = A k + ∆t F (A k ) + (∆t 2 /2) DF (A k )[F (A k )]. (29)

The Euclidean AFE scheme is of order q = 2 since the local truncation error is by construction of order O(∆t 3 ). The scheme is conditionally stable with the same stability domain as any two-stage explicit Runge-Kutta method of order 2 [HW10, Theorem 2.2]. That being said, this scheme requires the evaluation of the Jacobian at each step, which renders the AFE scheme non-competitive on Euclidean spaces. In fact, for Euclidean ODEs there are simple second-order methods using only evaluations of the vector field, such as explicit Runge-Kutta methods.
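The Euclidean AFE update can be sketched and its second-order convergence observed on a simple linear test problem. The example below is ours (for the hypothetical choice F (A) = −A, the differential is simply DF (A)[V ] = −V ):

```python
import numpy as np

def afe_step(A, F, dF, dt):
    """One Euclidean AFE update:
    A_{k+1} = A_k + dt F(A_k) + dt^2/2 DF(A_k)[F(A_k)],
    where dF(A, V) evaluates the differential DF(A)[V]."""
    FA = F(A)
    return A + dt * FA + 0.5 * dt**2 * dF(A, FA)

def integrate(A0, F, dF, T, N):
    """Apply N AFE steps of size T/N starting from A0."""
    A, dt = A0, T / N
    for _ in range(N):
        A = afe_step(A, F, dF, dt)
    return A
```

For F (A) = −A, halving the step size divides the final error by roughly four, consistent with order q = 2.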
In the following, we generalize the AFE scheme to the DLRA differential equation of Problem 1 by first showing how to compute the acceleration of the exact solution. As we will see below, in Proposition 6, it is the sum of two terms: the tangent component of the ambient acceleration and the acceleration due to the curvature of the manifold. The latter is expressed using the Weingarten map, a classical concept of Riemannian geometry that we briefly introduce in the next section.

The Weingarten map
As before, we consider a Riemannian submanifold M embedded into a finite-dimensional Euclidean space E; see [Lee18, §8] for a more general setting.We let X T (M) and X N (M) denote the set of smooth tangent and normal vector fields on M, respectively.
The Weingarten map is constructed from the second fundamental form. The latter measures the discrepancy between the ambient Euclidean connection and the Riemannian connection ∇ on the submanifold M. For every W, Z ∈ X T (M) and x ∈ M, we have that ∇ W Z(x) = Π(x)D W Z(x), where D is the Euclidean connection corresponding to the standard directional derivative. Hence, the vector II(W, Z)(x) := Π ⊥ (x)D W Z(x) is normal to the manifold at x, and the bilinear map II thus defined is the second fundamental form.

Definition 4. The Weingarten map W is the multilinear form defined by

⟨N, II(W, Z)⟩ = ⟨W(W, N ), Z⟩, ∀ Z ∈ X T (M),

for every W ∈ X T (M) and N ∈ X N (M).
Since ∇ W Z(x) can be shown to depend only on the value of the vector field W at x, using the properties of the Levi-Civita connection, both the second fundamental form and the Weingarten map can be defined pointwise, i.e., depending only on the values of the vector fields at x [Lee18, Proposition 8.1]. Given w, z ∈ T x M and n ∈ N x M, we thus write II(w, z) := II(W, Z)(x) and W(w, n) := W(W, N )(x) for any W, Z ∈ X T (M) and N ∈ X N (M) satisfying W (x) = w, Z(x) = z and N (x) = n.
The following characterization relates the Weingarten map at x with the differential of the tangent space projection.
Proposition 5 ([AMT13, Theorem 1]). For any x ∈ M, w ∈ T x M and n ∈ N x M, the Weingarten map satisfies

W(w, n) = Π(x) DΠ(x)[w] q,

for any q ∈ T x E ≃ E such that Π ⊥ (x)q = n.
Expression for the fixed-rank manifold. Depending on the conventions used to represent points and tangent vectors, many equivalent expressions are known for the Weingarten map of the fixed-rank manifold, see for example [AMT13, FL18]. In our conventions, for any Y = UΣV ⊤ ∈ M r , any tangent vector T = UMV ⊤ + U p V ⊤ + UV p ⊤ ∈ T Y M r and any normal vector N ∈ N Y M r , it reads

W(T, N ) = N V p Σ −1 V ⊤ + UΣ −1 U p ⊤ N.

This expression nicely highlights two notable features that are known for the fixed-rank manifold: it is a ruled surface with unbounded curvature. In fact, along the subspace associated to the UMV ⊤ term, M r is flat, while the curvature along the other directions grows unbounded as σ r (Y ) → 0.
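The closed form W(T, N ) = N V p Σ −1 V ⊤ + UΣ −1 U p ⊤ N (our reading of the conventions above) can be tested numerically against the finite-difference characterization of Proposition 5. The sketch below is our own construction: the curve t → Π Mr (Y + tw) stays on the manifold and has velocity w at t = 0, so a centered difference of t → Π(γ(t))q approximates DΠ(Y )[w]q.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n_, r = 7, 6, 3
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n_, r)))
S = np.diag([3.0, 2.0, 1.0])
Y = U @ S @ V.T

def proj(U, V, Z):
    """Tangent-space projection at the point with factors U, V."""
    PU, PV = U @ U.T, V @ V.T
    return Z @ PV + PU @ Z - PU @ Z @ PV

def factors(A, r):
    """Orthonormal factors of the rank-r truncation of A."""
    W, _, Zt = np.linalg.svd(A)
    return W[:, :r], Zt[:r, :].T

# tangent vector w with components (M, Up, Vp) and an ambient q
M = rng.standard_normal((r, r))
Up = (np.eye(m) - U @ U.T) @ rng.standard_normal((m, r))
Vp = (np.eye(n_) - V @ V.T) @ rng.standard_normal((n_, r))
w = U @ M @ V.T + Up @ V.T + U @ Vp.T
q = rng.standard_normal((m, n_))
n_vec = q - proj(U, V, q)                 # normal part of q

# candidate closed form (assumed conventions)
Sinv = np.linalg.inv(S)
W_analytic = n_vec @ Vp @ Sinv @ V.T + U @ Sinv @ Up.T @ n_vec

# finite-difference version of Proposition 5: Pi(x) D_w [Pi(.) q]
h = 1e-6
Uu, Vu = factors(Y + h * w, r)
Ud, Vd = factors(Y - h * w, r)
W_fd = proj(U, V, (proj(Uu, Vu, q) - proj(Ud, Vd, q)) / (2 * h))
```

The two quantities agree up to the finite-difference error, which is far below the scale of the vectors involved.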

The accelerated forward Euler (AFE) integration scheme for DLRA
With the definitions introduced in Section 4.1.2, we can compute the acceleration of the DLRA solution curve, which allows us to generalize the AFE integration scheme to Problem 1.
Proposition 6. If a smooth curve on M r is defined by Ẏ = Π(Y )F (Y ) for some smooth vector field F : R m×n → R m×n , then its intrinsic acceleration can be computed as

Ÿ = Π(Y ) DF (Y )[Ẏ ] + W(Ẏ , Π ⊥ (Y )F (Y )).

Proof. By definition of the Levi-Civita connection for Riemannian submanifolds, we have Ÿ = Π(Y ) d/dt (Π(Y )F (Y )). Using the product rule and the fact that Π(Y ) 2 = Π(Y ), it follows from the last equality of Proposition 5 that Ÿ = Π(Y )DF (Y )[Ẏ ] + Π(Y )DΠ(Y )[Ẏ ]F (Y ) = Π(Y )DF (Y )[Ẏ ] + W(Ẏ , Π ⊥ (Y )F (Y )).

Hence, to mimic the Euclidean AFE update defined by equation (29), we need to construct a smooth manifold curve Y AFE such that Y AFE (∆t) = Y (∆t) + O(∆t 3 ). Proposition 2 shows that we can construct such a curve using a second-order retraction. Let R II indicate any second-order retraction; then the manifold analogue of the AFE update (29), i.e. the AFE scheme for DLRA, reads

Y k+1 = R II Y k (∆t Ẏ k + (∆t 2 /2) Ÿ k ), (38)

where Ẏ k = Π(Y k )F (Y k ) and

Ÿ k = Π(Y k )DF (Y k )[Ẏ k ] + W(Ẏ k , Π ⊥ (Y k )F (Y k )). (39)

As a consequence of (28), this scheme admits a local truncation error of order O(∆t 3 ). All the retractions for the fixed-rank manifold presented in Section 3 have the second-order property. In principle, all of them are suited to be used as R II in (38); however, the experiments reported in Section 5 suggest the orthographic retraction is the most convenient in terms of speed, accuracy and stability.

The Projected Ralston-Hermite integration scheme
As put forth in [SK22], retractions can also be used to define a manifold curve with prescribed endpoints and endpoint velocities. For tangent bundle data points (x 0 , v 0 ), (x 1 , v 1 ) ∈ T M and parameters t 0 < t 1 , we denote by H the retraction-based Hermite (RH) interpolant defined by [SK22, Corollary 7], which satisfies

H(t i ) = x i , Ḣ(t i ) = v i , i = 0, 1. (40)

We will employ the notation H(t; (x 0 , v 0 , t 0 ), (x 1 , v 1 , t 1 )) to underline the dependence of H on the interpolation data. The curve H is constructed by a manifold extension of the well-known De Casteljau algorithm. But instead of using geodesic segments as building blocks of the procedure, the RH interpolant is defined with endpoint curves constructed with the use of a retraction and its local inverse, making the method more efficient and more broadly applicable. An efficient evaluation of the curve H requires a retraction whose inverse admits a computationally affordable evaluation procedure. On the fixed-rank manifold, the orthographic retraction is a suitable candidate to construct the RH interpolant.
As stated in [SK22, Theorem 17], the RH interpolant achieves O(∆t 4 ) approximation error as ∆t → 0 in the case where the interpolation data is sampled from a smooth manifold curve at t 0 and t 1 = t 0 + ∆t.The Projected Ralston-Hermite (PRH) scheme aims at leveraging this approximation power to define a numerical integration update rule with small local truncation error.Let us derive the PRH method for the initial value problem (2) evolving in R m×n .

Euclidean analog
The cubic Hermite interpolant H P (t) = H P (t; (A 0 , V 0 , t 0 ), (A 1 , V 1 , t 1 )) is the unique third-degree polynomial curve in R m×n which satisfies H P (t i ) = A i , H P ′ (t i ) = V i , for i = 0, 1. The polynomial curve H P can be used to define the following multistep integration scheme for the initial value problem (2):

A k+1 = H P (t k+1 ; (A k−1 , F (A k−1 ), t k−1 ), (A k , F (A k ), t k )). (41)

Working out the expression for H P allows rewriting the recursive relation (41) as

A k+1 = 5A k−1 − 4A k + ∆t (2F (A k−1 ) + 4F (A k )). (42)

As pointed out in [HNW93, §III.3], this scheme has a local truncation error O(∆t 4 ), the highest possible order for an explicit 2-step method using these terms. However, it is not zero-stable and thus does not produce a convergent scheme of order 3. Nevertheless, this update rule can be combined with suitably chosen intermediate steps to recover stability. Consider the family of multistep methods obtained by taking an intermediate forward Euler step of length α ∈ (0, 1) combined with the update rule (41) as

Ā k+α = A k + α∆t F (A k ),
A k+1 = H P (t k + ∆t; (A k , F (A k ), t k ), (Ā k+α , F (Ā k+α ), t k + α∆t)). (44)

These schemes are all part of the family of explicit Runge-Kutta methods as it can be shown that

A k+1 = A k + ∆t [(1 + 1/α − 1/α 2 ) F (A k ) + (1/α 2 − 1/α) F (A k + α∆t F (A k ))].

For any α ∈ (0, 1), the scheme satisfies the first-order conditions of Runge-Kutta methods.
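The Euler-predictor/Hermite-extrapolation family above can be sketched in NumPy (our naming); for α = 2/3 the update coincides with the classical Ralston step to machine precision:

```python
import numpy as np

def hermite_eval(t, A0, V0, t0, A1, V1, t1):
    """Cubic Hermite interpolant with H(ti) = Ai and H'(ti) = Vi."""
    h = t1 - t0
    s = (t - t0) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * A0 + h10 * h * V0 + h01 * A1 + h11 * h * V1

def hermite_family_step(A, F, dt, alpha):
    """Forward Euler predictor of length alpha*dt, then evaluation of
    the cubic Hermite interpolant at t + dt (extrapolation)."""
    FA = F(A)
    Amid = A + alpha * dt * FA
    return hermite_eval(dt, A, FA, 0.0, Amid, F(Amid), alpha * dt)

def ralston_step(A, F, dt):
    """Classical explicit second-order Ralston step."""
    k1 = F(A)
    k2 = F(A + (2.0 / 3.0) * dt * k1)
    return A + dt * (0.25 * k1 + 0.75 * k2)
```

The agreement follows from evaluating the Hermite basis at s = 1/α = 3/2, which yields exactly the Ralston weights (1/4, 3/4).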
Choosing α to satisfy also the second-order conditions narrows down the family to the scheme with α = 2/3. This scheme is an explicit second-order Runge-Kutta method known as the Ralston scheme, whose Butcher table has nodes c = (0, 2/3), coefficient a 21 = 2/3 and weights b = (1/4, 3/4).

The Projected Ralston-Hermite (PRH) integration scheme for DLRA

Expressed in the form (44), the update rule of the Ralston scheme consists of a forward Euler step followed by a Hermite interpolation step. We can leverage this formulation and the RH interpolant (40) to extend the Ralston method to the manifold setting. Let R denote any retraction and R I denote any retraction that can be used in practice to evaluate the RH interpolant (i.e., whose inverse can be efficiently computed). To indicate which retraction is used to construct the retraction-based Hermite interpolant H, we add it to its list of arguments. The Projected Ralston-Hermite (PRH) scheme for Problem 1 is then defined as

Ȳ k+2/3 = R Y k ((2∆t/3) Π(Y k )F (Y k )),
Y k+1 = H(t k + ∆t; (Y k , Π(Y k )F (Y k ), t k ), (Ȳ k+2/3 , Π(Ȳ k+2/3 )F (Ȳ k+2/3 ), t k + 2∆t/3); R I ).

A suitable candidate for both retractions is R = R I = R ORTH , the orthographic retraction.
As the experiments in Section 5 suggest, this generalization of the Ralston scheme to the manifold setting maintains its second-order accuracy.

The APRH integration scheme
For the sake of completeness, a third scheme can be obtained by combining the AFE and the PRH schemes. Replacing the intermediate forward Euler step of the PRH scheme with an AFE update defines what we call the Accelerated Projected Ralston-Hermite (APRH) scheme. It is defined by the PRH recursion with the intermediate point replaced by

Ȳ k+2/3 = R II Y k ((2∆t/3) Ẏ k + ((2∆t/3) 2 /2) Ÿ k ),

where Ÿ k is given by (39).

Numerical experiments
In this section, we illustrate the performance of the accelerated forward Euler (AFE) method, the Projected Ralston-Hermite (PRH) method and the accelerated Projected Ralston-Hermite (APRH) method. Experiments were executed with Matlab 2022b on a laptop computer with an Intel i7 CPU (1.8GHz in single-thread mode) with 8GB of RAM, 1MB of L2 cache and 8MB of L3 cache. The implementation leverages the differential geometry tools of the Manopt library [BMAS14]. In particular, the orthographic retraction provided by Manopt is used for the implementation of AFE, PRH and APRH. Implementations of the KSL and KLS retractions as described by Algorithms 2 and 3 were added to the fixed-rank manifold factory. For the implementation of the projected Runge-Kutta methods of [KV19], we also added an implementation of the truncated SVD extended retraction, accepting as input tangent vectors of arbitrary tangent spaces. See Table 1 for a summary of the acronyms of the methods appearing in the numerical experiments. For the sake of completeness, we recall that an s-stage Projected Runge-Kutta (PRKs) method with given Butcher table (a ij , b i ), as provided in [KV19], is defined as

Y k,i = R(Y k + ∆t ∑ j<i a ij Π(Y k,j )F (Y k,j )), i = 1, . . ., s,
Y k+1 = R(Y k + ∆t ∑ i b i Π(Y k,i )F (Y k,i )), (PRKs)

where R denotes the metric projection onto the fixed-rank manifold M r . We refer to [KV19, §5] for a detailed description of its efficient implementation, along with a recapitulation of the Butcher table for each s-stage method up to the third order.

Differential Lyapunov equation
The modeling error (4) introduced by DLRA is associated with the normal component of the vector field of the original differential equation. The effect of this error on the performance of DLRA integrators can be assessed by considering a class of matrix differential equations, the so-called differential Lyapunov equations [UV20, §6.1], which take the form

A′(t) = LA(t) + A(t)L ⊤ + Q, A(0) = A 0 , (48)

for some A 0 , L, Q ∈ R n×n . Indeed, if A 0 has rank exactly r and the matrix Q is zero, then A(t) is also of rank r for every t ∈ [0, T ] [HM94, Lemma 1.22]. Therefore, the norm of Q is proportional to the modeling error.
In the following, L is the discretized 1D Laplace operator on a uniform grid, that is, L is the tridiagonal matrix with −2 on the main diagonal and 1 on the first off-diagonals. For the source term, we take Q = η Q̃/‖Q̃‖ F for some η > 0, where Q̃ is a full-rank matrix generated from its singular value decomposition with randomly chosen singular vectors and prescribed singular values decaying as σ i (Q̃) = 10 2−i , for i = 1, . . ., n. The initial condition is taken to be of rank exactly r and is also assembled from a randomly generated singular value decomposition with a prescribed geometric decay of the non-zero singular values: σ i (A 0 ) = 3 2−i , for all i = 1, . . ., r, and σ i (A 0 ) = 0, for all i = r + 1, . . ., n.
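The experimental setup described above can be sketched as follows (a NumPy rendition with our own naming; the paper's experiments use MATLAB and Manopt):

```python
import numpy as np

def lyapunov_setup(n, r, eta, rng):
    """Assemble L, Q and A0 as described above: L is the 1D discrete
    Laplacian, Q a normalized full-rank source scaled by eta, and A0 a
    rank-r initial condition with geometrically decaying singular values."""
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    # source term with prescribed singular values 10^{2-i}
    W1, _ = np.linalg.qr(rng.standard_normal((n, n)))
    W2, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Qt = W1 @ np.diag(10.0 ** (2 - np.arange(1, n + 1))) @ W2.T
    Q = eta * Qt / np.linalg.norm(Qt)     # Frobenius normalization
    # rank-r initial condition with sigma_i = 3^{2-i}, i = 1..r
    U1, _ = np.linalg.qr(rng.standard_normal((n, n)))
    U2, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.zeros(n)
    s[:r] = 3.0 ** (2 - np.arange(1, r + 1))
    A0 = U1 @ np.diag(s) @ U2.T
    return L, Q, A0

def lyapunov_field(L, Q):
    """Right-hand side of the differential Lyapunov equation (48)."""
    return lambda A: L @ A + A @ L.T + Q
```

Any of the integrators sketched earlier can then be driven by `lyapunov_field(L, Q)`; the paper uses n = 100 and r = 12, while smaller sizes suffice for quick checks.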
In Figure 2, we report the results with $n = 100$ and $r = 12$ of the following experiments. For different values of $\eta$, we numerically integrate the rank-$r$ DLRA differential equation (3) applied to (48) with different numerical schemes and different time steps up to $T = 0.5$. A reference solution to the ambient equation (48) is computed with the MATLAB routine ode45 between each time step, using a time step equal to the smallest among those considered for the numerical integrators. We then plot, as a function of the step size, the 2-norm discrepancy between the reference solution at final time and its approximation obtained by numerical integration. The numerical results for the KSL and the KLS schemes were very similar to those of PRK1; hence, they were omitted to avoid overcrowding the plots.
The panels of Figure 2 correspond to the cases (a) $\eta = 0$, (b) $\eta = 0.01$, (c) $\eta = 0.1$ and (d) $\eta = 1.0$. When the source term is zero, the reference solution is also of rank exactly $r$, as can be seen from the value of the best-approximation error in panel (a). In this regime, the AFE and the PRH schemes both exhibit $O(\Delta t^2)$ error convergence, while the APRH scheme seems to reach an asymptotic $O(\Delta t^3)$ trend. The trade-off between accuracy and computational effort shown in Figure 3-(a) indicates that, in this simple setting, the PRH, AFE and APRH schemes perform comparably to PRK2. Turning on the source term introduces a non-negligible best-approximation error due to the growth of singular values that were initially zero, as can be seen in the bottom plots of panels (b), (c) and (d). The larger the source term's norm, the faster and the greater these singular values grow. The numerical integrators then converge to the exact solution of the projected system, and so the error with respect to the ambient solution stagnates at a value slightly higher than the best 2-norm approximation error. While the PRH scheme preserves the $O(\Delta t^2)$ trend up to some oscillations as $\eta$ increases, the AFE and APRH schemes appear to suffer from instability when the normal component of the vector field is too large. In this more realistic scenario, where the normal component of the vector field is non-negligible, only the PRH scheme remains comparable to PRK2 in terms of the trade-off between accuracy and effort; see Figure 3-(b) and Table 2.

Robustness to small singular values
A fundamental prerequisite for competitive DLRA integrators is resilience to the presence of small singular values in the solution. A detailed discussion on the topic can be found in [KLW16]. In applications, the ambient solution very often exhibits an exponential decay of singular values. Hence, a good low-rank approximation is possible, but the occurrence of small singular values is inevitable for DLRA to be accurate: a rank-$r$ approximation of the solution must match the $r$th singular value of the ambient solution, which is small if the approximation error is small.
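The last point is the Eckart-Young theorem in action: the 2-norm error of the best rank-$r$ approximation equals $\sigma_{r+1}$, so an accurate approximation forces small singular values into the iterates. A small NumPy sketch with an illustrative exponential decay (sizes and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Matrix with exponentially decaying singular values sigma_i = 2^{-i}.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 2.0 ** -np.arange(1, n + 1)
A = U @ np.diag(sigma) @ V.T

# Best rank-r approximation (Eckart-Young): its 2-norm error is
# sigma_{r+1}, so accuracy forces sigma_r to be small.
r = 10
Ar = (U[:, :r] * sigma[:r]) @ V[:, :r].T
err = np.linalg.norm(A - Ar, 2)
```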
The smaller the singular values of the solution, the greater the stiffness of the DLRA differential equation (3): the Lipschitz constant of the vector field $F$ gets multiplied by a factor proportional to the inverse of the smallest singular value of the approximation [KLW16]. The approximation error at final time consists mainly of the integration error, which can be reduced by decreasing the step size, while the modeling error is affected only by the choice of $r$. A scheme is said to be robust to small singular values if the integration error is independent of the choice of $r$. In practice, one must observe that the trend of the error as a function of the step size is unaffected by the choice of $r$ for step sizes where the modeling error is negligible compared to the integration error.
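This stiffness can be observed numerically: a finite-difference estimate of the variation of the tangent-space projection $P_Y$ grows like the inverse of the smallest singular value of $Y$. The following sketch is ours; the names, sizes and finite-difference step are illustrative assumptions.

```python
import numpy as np

def tangent_projection(U, V, Z):
    """Orthogonal projection of Z onto the tangent space of the
    rank-r manifold at Y = U S V^T (U, V orthonormal)."""
    PUZ = U @ (U.T @ Z)
    return PUZ + (Z - PUZ) @ (V @ V.T)

def projection_variation(sigma_r, n=40, r=5, t=1e-6, seed=0):
    """Finite-difference estimate of how fast Y -> P_Y(Z) varies with Y;
    it scales like 1/sigma_r (cf. [KLW16])."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    s = np.geomspace(1.0, sigma_r, r)          # singular values of Y
    Y = U @ np.diag(s) @ V.T
    W = rng.standard_normal((n, n)); W /= np.linalg.norm(W)  # perturbation
    Z = rng.standard_normal((n, n)); Z /= np.linalg.norm(Z)  # test vector
    U2, _, V2t = np.linalg.svd(Y + t * W)      # factors of the perturbed Y
    P0 = tangent_projection(U, V, Z)
    P1 = tangent_projection(U2[:, :r], V2t[:r, :].T, Z)
    return np.linalg.norm(P1 - P0) / t
```

With these parameters, shrinking $\sigma_r$ by two orders of magnitude increases the estimated variation by a comparable factor.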
Figure 4 shows the results for the experiment described in the previous paragraph with a curve of the form (49) with randomly generated $\Omega_U$ and $\Omega_V$, initial singular values $\sigma_i = 2^{-i}$ and $n = 100$. The panels correspond, from left to right, to the AFE, the PRH and the APRH schemes. Note that for the AFE and the APRH schemes, we use the exact expression for the second derivative of (49). The results for AFE show the ideal outcome: the error curves for increasing values of $r$ are superimposed until the modeling-error plateau determined by the value of $r$ is reached. These results empirically suggest that the AFE integration scheme is robust to small singular values.

We further assess the accuracy of the retraction-based Hermite interpolant in the presence of small singular values. The first interpolation point $Y_0$ is assembled from a singular value decomposition with randomly generated orthogonal matrices $U_0$ and $V_0$ and with singular values $\sigma_i$ logarithmically spaced on the interval $[\sigma_r, 1]$, for some $\sigma_r \leq 1$. To obtain the second interpolation point, we first move away from $Y_0$ with the orthographic retraction along a random tangent vector $Z \in T_{Y_0}\mathcal{M}_r$ with $\|Z\|_F = 1$ to get $\tilde Y_1 = R_{Y_0}(\sigma_r Z)$. The second interpolation point $Y_1$ is then obtained from $\tilde Y_1$ by rescaling its singular values by random factors $\xi_i$ drawn from a uniform distribution on $[1/2, 2]$. This way, the singular value decay of both $Y_0$ and $Y_1$ mimics a situation encountered in one step of the PRH and APRH integration schemes when the smallest singular value of the current approximation is of the order of $\sigma_r$. Then, we randomly generate $Z_0 \in T_{Y_0}\mathcal{M}_r$ and $Z_1 \in T_{Y_1}\mathcal{M}_r$ with $\|Z_0\|_F = \|Z_1\|_F = 1$ and form the retraction-based Hermite interpolant (40). For different values of the smallest singular value $\sigma_r$, we measure the discrepancies $\|Z_0 - \dot H(0)\|_F$ and $\|Z_1 - \dot H(1)\|_F$, where the derivatives of $H$ are obtained by finite differences. The experiment is repeated for each $\sigma_r$ on 100 randomly generated instances, and the error distribution is plotted against $\sigma_r$ in Figure 5. These results unequivocally indicate the fragility of the retraction-based Hermite interpolant on the fixed-rank manifold when small singular values are present in the interpolation points. As $\sigma_r$ decreases, the velocity error at $\tau = 0$ increases, and even more severely at $\tau = 1$. The fact that the error is non-negligible even for moderately small values of $\sigma_r$ suggests that the PRH and APRH integration schemes may occasionally employ inaccurate retraction-based Hermite interpolants. This may contribute to the oscillatory behavior of the error observed for PRH and APRH in Figures 2 and 4.

Conclusion
This work contributes to strengthening the connection between retractions and numerical integration methods for manifold ODEs, and especially DLRA techniques. In particular, we show that the so-called unconventional integration scheme [CL22] defines a second-order retraction which approximates the orthographic retraction up to high-order terms. It remains an open question whether the same observation can be made for the recently proposed parallelized version of KLS [CKL23]. We also derive three numerical integration schemes expressed in terms of retractions and showcase their performance on classical problem instances of DLRA. The derivation and the numerical results show that the methods can achieve second-order error convergence with respect to the time integration step. However, the methods have shown mixed results. While the AFE and the APRH schemes exhibit instability in the presence of large normal components of the ambient vector field, the PRH scheme appears more resilient in this respect. On the other hand, the occurrence of small singular values in the approximation had no apparent effect on the performance of AFE, but for the PRH and APRH methods, small singular values may explain occasional deviations from the expected second-order convergence behavior. We observe that the PRH scheme delivers performance similar, both in computational time and accuracy, to its existing counterpart, PRK2. However, the higher-order accelerated version, APRH, is found to be less favorable compared to analogous schemes such as PRK3. This is largely due to the additional computational cost incurred by the Weingarten map.
For other low-rank tensor formats, such as the Tucker or the tensor-train formats, retractions have also been proposed [KSV14, Ste16]. However, to the best of our knowledge, no retraction with an efficiently computable inverse is known, and the orthographic retraction has remained elusive due to the complexity of the normal-space structure of these manifolds. Yet, the KLS scheme has been extended to low-rank tensor manifolds [CL22, §5]. Hence, assuming the connection with the orthographic retraction carries over to the tensor setting, it may be possible to recover the orthographic retraction for such tensor manifolds as a small perturbation of the KLS update. The ability to compute the inverse orthographic retraction cheaply would then make the retraction-based endpoint curves and the numerical integration schemes presented in this work available for low-rank tensor manifolds as well.
The vector $((I - \Pi)D_W Z)(x)$ belongs to the normal space at $x$ and depends smoothly on $x$. Hence, the function $(I - \Pi)D_W Z$ is a smooth normal vector field on $\mathcal{M}$.

Definition 3. The second fundamental form $\mathrm{II} : \mathfrak{X}_T(\mathcal{M}) \times \mathfrak{X}_T(\mathcal{M}) \to \mathfrak{X}_N(\mathcal{M})$ is the symmetric bilinear map defined by $\mathrm{II}(W, Z) = (I - \Pi)D_W Z$.

Figure 4: Error at final time $\|Y_{\Delta t}(T) - A(T)\|_2$ versus the step size $\Delta t$ of the DLRA integration applied to (51) for reconstructing the curve (49), for different values of the rank.

Figure 5: Robustness to small singular values of the retraction-based Hermite interpolant (54). The solid line is the median of the error over 100 randomly generated instances for each value of $\sigma_r$, while the dashed and dotted lines correspond to the $[0.05, 0.25, 0.75, 0.95]$ percentiles of the sampled error.

Table 1: Summary of the acronyms of the methods considered in the numerical experiments.

Table 2: Average time in milliseconds per step for the experiments of Figure 2-(d).