An unconventional robust integrator for dynamical low-rank approximation

We propose and analyse a numerical integrator that computes a low-rank approximation to large time-dependent matrices that are either given explicitly via their increments or are the unknown solution to a matrix differential equation. Furthermore, the integrator is extended to the approximation of time-dependent tensors by Tucker tensors of fixed multilinear rank. The proposed low-rank integrator is different from the known projector-splitting integrator for dynamical low-rank approximation, but it retains the important robustness to small singular values that has so far been known only for the projector-splitting integrator. The new integrator also offers some potential advantages over the projector-splitting integrator: It avoids the backward time integration substep of the projector-splitting integrator, which is a potentially unstable substep for dissipative problems. It offers more parallelism, and it preserves symmetry or anti-symmetry of the matrix or tensor when the differential equation does. Numerical experiments illustrate the behaviour of the proposed integrator.


Introduction
For the approximation of huge time-dependent matrices (or tensors) that are the solution to a matrix differential equation, dynamical low-rank approximation [11,12] projects the right-hand side function of the differential equation to the tangent space of matrices (or tensors) of a fixed rank at the current approximation. This yields differential equations for the factors of an SVD-like decomposition of the time-dependent low-rank approximation. The direct numerical integration of these differential equations by standard methods such as explicit or implicit Runge-Kutta methods is highly problematic: in the typical presence of small singular values in the approximation, it leads to severe step size restrictions proportional to the smallest nonzero singular value. This difficulty does not arise with the projector-splitting integrator proposed in [15], which foregoes a direct time discretization of the differential equations for the factors and instead splits the orthogonal projection onto the tangent space, which is an alternating sum of subprojections. This approach leads to an efficiently implementable integrator that is robust to small singular values [10,15]. It has been extended, together with its robustness properties, from the matrix case to Tucker tensors in [14,17], to tensor trains / matrix product states in [16,7], and to general tree tensor networks in [5].
In the present paper we propose and analyse a different integrator that is shown to have the same robust error behaviour as the projector-splitting integrator. This new integrator can apparently not be interpreted as a splitting integrator or be included in another familiar class of integrators. Its substeps look formally similar to those of the projector-splitting integrator but are arranged in a different, less sequential way. Like in the projector-splitting integrator, the differential equations in the substeps are linear if the original differential equation is linear, even though the projected differential equation becomes nonlinear. The new integrator bears some similarity also to the constant-mean-field integrator of [3] and the splitting integrator of [9].
Beyond the robustness to small singular values, the new integrator has some favourable further properties that are not shared with the projector-splitting integrator. Maybe most importantly, it has no backward time integration substep as in the projector-splitting integrator. This appears advantageous in strongly dissipative problems, where the backward time integration step represents an unstable substep. Moreover, the new integrator has enhanced parallelism in its substeps, and in the Tucker tensor case even a reduced serial computational cost. It preserves symmetry or anti-symmetry of the matrix or tensor when the differential equation does. It reduces to the (anti-)symmetry-preserving low-rank integrator of [4] in this case.
On the other hand, unlike the projector-splitting integrator it cannot be efficiently extended to a time-reversible integrator. When applied to the time-dependent Schrödinger equation (as an integrator for the MCTDH method of quantum molecular dynamics; cf. [2,3,14]), the new integrator preserves the norm, but it does not conserve the energy, as was shown in [14] for the projector-splitting integrator.
In Section 2 we recapitulate dynamical low-rank approximation and the projector-splitting integrator for the matrix case. We restate its exactness property and its robust error bound.
In Section 3 we present the new low-rank matrix integrator and show that it has the same exactness property and robust error bound as the matrix projector-splitting integrator.
In Section 4 we recapitulate dynamical low-rank approximation by Tucker tensors of fixed multilinear rank and the extension of the projector-splitting integrator to the Tucker tensor case.
In Section 5 we present the new low-rank Tucker tensor integrator and show that it has the same exactness property and robust error bound as the Tucker tensor projector-splitting integrator.
In Section 6 we illustrate the behaviour of the new low-rank matrix and Tucker tensor integrators by numerical experiments.
While we describe the integrator for real matrices and tensors, the algorithm and its properties extend in a straightforward way to complex matrices and tensors, requiring only some care in using transposes U^⊤ versus adjoints U^* = Ū^⊤.
Throughout the paper, we use the convention to denote matrices by boldface capital letters and tensors by italic capital letters.

Recap: the matrix projector-splitting integrator
Dynamical low-rank approximation of time-dependent matrices [11] replaces the exact solution A(t) ∈ R^{m×n} of a (too large) matrix differential equation

Ȧ(t) = F(t, A(t)),  A(t_0) = A_0,   (1)

by the solution Y(t) ∈ R^{m×n} of rank r of the differential equation projected to the tangent space of the manifold M of rank-r matrices at the current approximation,

Ẏ(t) = P(Y(t)) F(t, Y(t)),  Y(t_0) = Y_0,   (2)

where the initial rank-r matrix Y_0 is typically obtained from a truncated singular value decomposition (SVD) of A_0. (We note that F(t, Y) = Ȧ(t) if A(t) is given explicitly.)
For the actual computation with rank-r matrices, they are represented in the non-unique factorized SVD-like form

Y(t) = U(t) S(t) V(t)^⊤,

where the slim matrices U(t) ∈ R^{m×r} and V(t) ∈ R^{n×r} each have r orthonormal columns, and the small matrix S(t) ∈ R^{r×r} is invertible. The orthogonal tangent space projection P(Y) can be written explicitly as an alternating sum of three subprojections onto the co-range, the intersection of co-range and range, and the range of the rank-r matrix Y [11]:

P(Y) Z = Z V V^⊤ - U U^⊤ Z V V^⊤ + U U^⊤ Z.

The projector-splitting integrator of [15] splits the right-hand side of (2) according to the three subprojections in the stated ordering and solves the subproblems consecutively in the usual way of a Lie-Trotter or Strang splitting. This approach yields an efficient time-stepping algorithm that updates the factors in the SVD-like decomposition of the rank-r matrices in every time step, alternating between solving differential equations for matrices of the dimension of the factor matrices and orthogonal decompositions of slim matrices.
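As an aside, the alternating-sum tangent space projection can be sketched in a few lines of NumPy. This is our illustration, not code from the paper; the function name is ours:

```python
import numpy as np

def tangent_projection(U, V, Z):
    """Orthogonal projection of Z onto the tangent space of the rank-r
    manifold at Y = U S V^T:  P(Y)Z = Z V V^T - U U^T Z V V^T + U U^T Z."""
    ZV = Z @ V              # co-range part of Z
    UtZ = U.T @ Z           # range part of Z
    return ZV @ V.T - U @ (U.T @ ZV) @ V.T + U @ UtZ
```

Being an orthogonal projection, it is idempotent, and tangent vectors of the form U A V^⊤ are left unchanged.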
One time step from t_0 to t_1 = t_0 + h starting from a factored rank-r matrix Y_0 = U_0 S_0 V_0^⊤ proceeds as follows:
1. K-step: Update U_0 → U_1, S_0 → Ŝ_1:
Integrate from t = t_0 to t_1 the m × r matrix differential equation

K̇(t) = F(t, K(t) V_0^⊤) V_0,  K(t_0) = U_0 S_0.

Perform a QR factorization K(t_1) = U_1 Ŝ_1.
2. S-step: Update Ŝ_1 → S̃_0:
Integrate from t = t_0 to t_1 the r × r matrix differential equation

Ṡ(t) = -U_1^⊤ F(t, U_1 S(t) V_0^⊤) V_0,  S(t_0) = Ŝ_1,

and set S̃_0 = S(t_1).
3. L-step: Update V_0 → V_1, S̃_0 → S_1:
Integrate from t = t_0 to t_1 the n × r matrix differential equation

L̇(t) = F(t, U_1 L(t)^⊤)^⊤ U_1,  L(t_0) = V_0 S̃_0^⊤.

Perform a QR factorization L(t_1) = V_1 S_1^⊤.
Then, the approximation after one time step is given by Y_1 = U_1 S_1 V_1^⊤. To proceed further, Y_1 is taken as the starting value for the next step, and so on.
The projector-splitting integrator has very favourable properties. First, it reproduces rank-r matrices exactly:

Theorem 1 (Exactness, [15]) Suppose that A(t) has rank r for t_0 ≤ t ≤ t_1 and that the r × r matrix V(t_1)^⊤ V(t_0) is invertible. With Y_0 = A(t_0), the projector-splitting integrator for F(t, Y) = Ȧ(t), with all substeps solved exactly, reproduces the exact solution: Y_1 = A(t_1).
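For concreteness, one K-S-L step can be sketched in NumPy. This is our illustration, not code from the paper: each substep ODE is solved with a single explicit Euler step (which is exact whenever the substep right-hand side is constant, as in the exactness test below), and the QR sign convention is fixed for determinism:

```python
import numpy as np

def qr_pos(A):
    """QR factorization with R having a positive diagonal (fixes signs)."""
    Q, R = np.linalg.qr(A)
    d = np.sign(np.diag(R)); d[d == 0] = 1.0
    return Q * d, R * d[:, None]

def projector_splitting_step(U0, S0, V0, F, t0, h):
    """One K-S-L step of the projector-splitting integrator; each substep
    ODE is solved with a single explicit Euler step."""
    # K-step: K = U S, integrate Kdot = F(t, K V0^T) V0, then QR
    K0 = U0 @ S0
    K1 = K0 + h * F(t0, K0 @ V0.T) @ V0
    U1, S1hat = qr_pos(K1)
    # S-step (note the minus sign: this substep runs backward in time)
    S0til = S1hat - h * U1.T @ F(t0, U1 @ S1hat @ V0.T) @ V0
    # L-step: L = V S^T, integrate Ldot = F(t, U1 L^T)^T U1, then QR
    L0 = V0 @ S0til.T
    L1 = L0 + h * F(t0, U1 @ L0.T).T @ U1
    V1, S1T = qr_pos(L1)
    return U1, S1T.T, V1
```

For F(t, Y) = Ȧ(t) with Ȧ constant and A(t) of rank r, every substep right-hand side is constant, so the single Euler step solves each substep exactly and the exactness property can be observed numerically.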
Even more remarkable, the algorithm is robust to the presence of small singular values of the solution or its approximation, as opposed to standard integrators applied to (2) or the equivalent differential equations for the factors U(t), S(t), V(t), which contain a factor S(t) −1 on the right-hand sides [11,Prop. 2.1]. The appearance of small singular values is ubiquitous in low-rank approximation, because the smallest singular value retained in the approximation cannot be expected to be much larger than the largest discarded singular value of the solution, which is required to be small for good accuracy of the low-rank approximation.

Theorem 2 (Robust error bound, [10, Theorem 2.1]) Let A(t) denote the solution of the matrix differential equation (1). Assume that the following conditions hold in the Frobenius norm ‖·‖ = ‖·‖_F:
1. F is Lipschitz-continuous and bounded: for all Y, Ỹ ∈ R^{m×n} and 0 ≤ t ≤ T,

‖F(t, Y) - F(t, Ỹ)‖ ≤ L ‖Y - Ỹ‖,  ‖F(t, Y)‖ ≤ B.

2. The non-tangential part of F(t, Y) is ε-small:

‖(I - P(Y)) F(t, Y)‖ ≤ ε

for all Y ∈ M in a neighbourhood of A(t) and 0 ≤ t ≤ T.
3. The error in the initial value is δ-small:

‖Y_0 - A_0‖ ≤ δ.

Let Y_n denote the rank-r approximation to A(t_n) at t_n = nh obtained after n steps of the projector-splitting integrator with step-size h > 0. Then, the error satisfies

‖Y_n - A(t_n)‖ ≤ c_0 δ + c_1 ε + c_2 h

for all n with t_n = nh ≤ T, where the constants c_i only depend on L, B, and T. In particular, the constants are independent of singular values of the exact or approximate solution.
In [10, Section 2.6.3] it is shown that an inexact solution of the matrix differential equations in the projector-splitting integrator leads to an additional error that is bounded in terms of the local errors in the inexact substeps, again with constants that do not depend on small singular values.
Numerical experiments with the matrix projector-splitting integrator and comparisons with standard numerical integrators are reported in [15,10].

A new robust low-rank matrix integrator
We now present a different integrator that has the same exactness and robustness properties as the projector-splitting integrator but which differs in the following favourable properties: 1. The solution of the differential equations for the m × r and n × r matrices can be done in parallel, and also the two QR decompositions can be done in parallel. 2. The differential equation for the small r × r matrix is solved forward in time, not backwards. 3. The integrator preserves (skew-)symmetry if the differential equation does.
While item 1. can clearly speed up the computation, item 2. is of interest for strongly dissipative problems, for which the S-step of the projector-splitting algorithm, with the minus sign in its differential equation, is an unstable substep. Such a substep does not appear in the new algorithm. We mention that in [1], the problem of the backward substep for parabolic problems has recently been addressed in a different way.
On the other hand, contrary to the projector-splitting integrator, there is apparently no efficient way to construct a time-reversible integrator from this new integrator.

Formulation of the algorithm
One time step of integration from time t_0 to t_1 = t_0 + h starting from a factored rank-r matrix Y_0 = U_0 S_0 V_0^⊤ computes an updated factorization Y_1 = U_1 S_1 V_1^⊤ as follows.

K-step: Update U_0 → U_1:
Integrate from t = t_0 to t_1 the m × r matrix differential equation

K̇(t) = F(t, K(t) V_0^⊤) V_0,  K(t_0) = U_0 S_0.

Perform a QR factorization K(t_1) = U_1 R_1 and compute the r × r matrix M = U_1^⊤ U_0.
L-step: Update V_0 → V_1:
Integrate from t = t_0 to t_1 the n × r matrix differential equation

L̇(t) = F(t, U_0 L(t)^⊤)^⊤ U_0,  L(t_0) = V_0 S_0^⊤.

Perform a QR factorization L(t_1) = V_1 R̃_1 and compute the r × r matrix N = V_1^⊤ V_0.
S-step: Update S_0 → S_1:
Integrate from t = t_0 to t_1 the r × r matrix differential equation

Ṡ(t) = U_1^⊤ F(t, U_1 S(t) V_1^⊤) V_1,  S(t_0) = M S_0 N^⊤,

and set S_1 = S(t_1). The approximation after the time step is Y_1 = U_1 S_1 V_1^⊤.
The m × r, n × r and r × r matrix differential equations in the substeps are solved approximately using a standard integrator, e.g., an explicit or implicit Runge-Kutta method or an exponential integrator when F is dominantly linear.
We note that the L-step equals the K-step for the transposed function G(t, Y) = F(t, Y^⊤)^⊤ and transposed starting values. Unlike in the projector-splitting algorithm, the triangular factors of the QR decompositions are not reused. The S-step can be viewed as a Galerkin method for the differential equation (1) in the space of matrices U_1 S V_1^⊤ generated by the updated basis matrices. In contrast to the projector-splitting integrator, there is no minus sign on the right-hand side of the differential equation for S(t). We further note that U_1 of the new integrator is identical to U_1 of the projector-splitting integrator, but V_1 is in general different.
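The step just formulated can be sketched in NumPy as follows. This is our illustration, not code from the paper: each substep ODE is solved with a single explicit Euler step (exact when the substep right-hand side is constant), and a sign-fixed QR is used for determinism:

```python
import numpy as np

def qr_pos(A):
    """QR factorization with R having a positive diagonal (fixes signs)."""
    Q, R = np.linalg.qr(A)
    d = np.sign(np.diag(R)); d[d == 0] = 1.0
    return Q * d, R * d[:, None]

def new_integrator_step(U0, S0, V0, F, t0, h):
    """One step of the new integrator with single Euler substeps."""
    # K-step and L-step are independent and could run in parallel
    K1 = U0 @ S0 + h * F(t0, U0 @ S0 @ V0.T) @ V0
    U1, _ = qr_pos(K1)            # the triangular factor is discarded
    M = U1.T @ U0
    L0 = V0 @ S0.T
    L1 = L0 + h * F(t0, U0 @ L0.T).T @ U0
    V1, _ = qr_pos(L1)
    N = V1.T @ V0
    # S-step: forward in time, in the updated bases, no minus sign
    S = M @ S0 @ N.T
    S1 = S + h * U1.T @ F(t0, U1 @ S @ V1.T) @ V1
    return U1, S1, V1
```

With F = 0 the step reproduces Y_0 exactly, and for F(t, Y) = Ȧ(t) with Ȧ constant and A(t) of rank r the Euler substeps are exact, so the exactness property can be observed numerically.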

Remark 1
There exists a modification where all three differential equations for K, L and S can be solved in parallel. That variant solves the K- and L-steps as above, but in the S-step it solves instead an r × r matrix differential equation that is formulated in terms of quantities available at time t_0, so that it can be integrated concurrently with the K- and L-steps. This modified integrator can be shown to have the same exactness property as proved below for the integrator formulated above, and also a similar robust error bound under the condition that the inverses of the matrices M and N are bounded by a constant. This condition can, however, be guaranteed only for step sizes that are small in comparison to the smallest nonzero singular value. In our numerical experiments this method did not behave as reliably as the method proposed above, and despite its interesting properties it will therefore not be discussed further in the following.

Exactness property and robust error bound
We will prove the following remarkable results for the integrator of Section 3.1.

Theorem 3
The exactness property of Theorem 1 holds verbatim also for the new integrator.

Theorem 4 The robust error bound of Theorem 2 holds verbatim also for the new integrator.
As in [10, Section 2.6.3], it can be further shown that an inexact solution of the matrix differential equations in the substeps of the new integrator leads to an additional error that is bounded in terms of the local errors in the inexact substeps, again with constants that do not depend on small singular values.

Proof of Theorem 3
For the proof of Theorem 3 we need the following auxiliary result, which extends an analogous result in [4] for symmetric matrices.

Lemma 1 Suppose that A(t) ∈ R^{m×n} has rank r for t_0 ≤ t ≤ t_1, with a factorization A(t) = U(t) S(t) V(t)^⊤, where U(t) ∈ R^{m×r} and V(t) ∈ R^{n×r} have orthonormal columns and S(t) ∈ R^{r×r} is invertible. Assume further that the r × r matrix V(t_1)^⊤ V_0 is invertible. If Y_0 = A(t_0), then the K-step, applied with F(t, Y) = Ȧ(t) and solved exactly, yields

U_1 U_1^⊤ A(t_1) = A(t_1).

Proof The solution of the K-step at time t_1 is

K(t_1) = U_0 S_0 + ∫_{t_0}^{t_1} Ȧ(t) V_0 dt = A(t_0) V_0 + (A(t_1) - A(t_0)) V_0 = A(t_1) V_0.

Hence,

K(t_1) = U(t_1) [S(t_1) (V(t_1)^⊤ V_0)].

By assumption, the factor in big square brackets is invertible. Computing a QR-decomposition of this term, we have

S(t_1) (V(t_1)^⊤ V_0) = Q R,

where Q ∈ R^{r×r} is an orthogonal matrix and R ∈ R^{r×r} is invertible and upper triangular. The QR-factorization of K(t_1) thus yields U_1 = U(t_1) Q. To conclude,

U_1 U_1^⊤ A(t_1) = U(t_1) Q Q^⊤ U(t_1)^⊤ A(t_1) = U(t_1) U(t_1)^⊤ A(t_1) = A(t_1),

which is the stated result. ⊓⊔

Proof (of Theorem 3) Since the L-step is the K-step for the transposed matrix A(t)^⊤, which has the same rank as A(t), it follows from Lemma 1 that

A(t_1) V_1 V_1^⊤ = A(t_1).   (4)

The integrator yields in the S-step

S_1 = M S_0 N^⊤ + ∫_{t_0}^{t_1} U_1^⊤ Ȧ(t) V_1 dt = U_1^⊤ A(t_0) V_1 + U_1^⊤ (A(t_1) - A(t_0)) V_1 = U_1^⊤ A(t_1) V_1.

The result after a time step of the new integrator is

Y_1 = U_1 S_1 V_1^⊤ = U_1 U_1^⊤ A(t_1) V_1 V_1^⊤ = A(t_1) V_1 V_1^⊤ = A(t_1),

where the last equality holds because of (4). ⊓⊔

Proof of Theorem 4
Under the assumptions of Theorem 2, we introduce the quantity

ϑ = ĉ h (ε + h),   (5)

which is the local error bound of the projector-splitting integrator after one time step, as proved in [10, Theorem 2.1].

Lemma 2 Assume that conditions 1.-3. of Theorem 2 are fulfilled. Then,

‖A_1 - U_1 U_1^⊤ A_1‖ ≤ ϑ.

The result is proved in the course of the proof of Lemma 1 in [4]. We give the proof here for the convenience of the reader.

Proof The local error analysis in [10] shows that the r × n matrix Z = S_1^{ps} V_1^{ps,⊤}, where S_1^{ps} and V_1^{ps} are the matrices computed in the third substep of the projector-splitting algorithm, satisfies

‖A_1 - U_1 Z‖ ≤ ϑ.

The square of the left-hand side can be split into two terms:

‖A_1 - U_1 Z‖² = ‖(I - U_1 U_1^⊤)(A_1 - U_1 Z)‖² + ‖U_1 U_1^⊤ (A_1 - U_1 Z)‖² = ‖A_1 - U_1 U_1^⊤ A_1‖² + ‖U_1 (U_1^⊤ A_1 - Z)‖².

Hence,

‖A_1 - U_1 U_1^⊤ A_1‖ ≤ ‖A_1 - U_1 Z‖ ≤ ϑ.

This yields the stated result. ⊓⊔

Lemma 3 Let A_1, U_1 and V_1 be defined as above. The following estimate holds:

‖A_1 - U_1 U_1^⊤ A_1 V_1 V_1^⊤‖ ≤ 2ϑ.

Proof The L-step is the K-step for the transposed function G(t, Y) = F(t, Y^⊤)^⊤, which again fulfills conditions 1.-3. of Theorem 2. Conditions 1. and 3. hold because of the invariance of the Frobenius norm under transposition. Condition 2. holds because

‖(I - P(Y^⊤)) G(t, Y^⊤)‖ = ‖((I - P(Y)) F(t, Y))^⊤‖ ≤ ε,

where we used the identity (P(Y) Z)^⊤ = P(Y^⊤) Z^⊤. From Lemma 2, applied to the transposed problem, we thus have

‖A_1^⊤ - V_1 V_1^⊤ A_1^⊤‖ ≤ ϑ, i.e., ‖A_1 - A_1 V_1 V_1^⊤‖ ≤ ϑ.   (6)

This implies that

‖A_1 - U_1 U_1^⊤ A_1 V_1 V_1^⊤‖ ≤ ‖A_1 - A_1 V_1 V_1^⊤‖ + ‖(A_1 - U_1 U_1^⊤ A_1) V_1 V_1^⊤‖.

Since ‖V_1 V_1^⊤‖_2 = 1, the result follows from (6) and Lemma 2. ⊓⊔

In the following lemma, we show that the approximation given after one time step is O(h(h + ε)) close to the solution of system (1) when the starting values coincide.

Lemma 4 (Local error)
If A_0 = Y_0, the following local error bound holds:

‖Y_1 - A(t_1)‖ ≤ c h (ε + h),

where the constant c only depends on L and B and a bound of the step size. In particular, the constant is independent of singular values of the exact or approximate solution.
Proof With a few crucial modifications, the proof is similar to that of [4, Lemma 2]. We report here the full proof for completeness and convenience of the reader. By the identity Y_1 = U_1 S_1 V_1^⊤ and Lemma 3 we have that

‖Y_1 - A_1‖ ≤ ‖U_1 (S_1 - U_1^⊤ A_1 V_1) V_1^⊤‖ + ‖U_1 U_1^⊤ A_1 V_1 V_1^⊤ - A_1‖ ≤ ‖S_1 - U_1^⊤ A_1 V_1‖ + 2ϑ.

The analysis of the local error thus reduces to estimating S_1 - U_1^⊤ A_1 V_1. To this end, we introduce the following quantity: for t_0 ≤ t ≤ t_1,

Â(t) = U_1 U_1^⊤ A(t) V_1 V_1^⊤.

We write

A(t) - Â(t) = (A_1 - Â(t_1)) - [ ∫_t^{t_1} (F(s, A(s)) - U_1 U_1^⊤ F(s, A(s)) V_1 V_1^⊤) ds ],

where R(t) denotes the term in big brackets. Lemma 3 and the bound B of F yield, for t_0 ≤ t ≤ t_1,

‖A_1 - Â(t_1)‖ ≤ 2ϑ,  ‖R(t)‖ ≤ 2 ‖ ∫_t^{t_1} F(s, A(s)) ds ‖ ≤ 2Bh.

Hence the remainder term is bounded by

‖A(t) - Â(t)‖ ≤ 2ϑ + 2Bh.

It follows that F(t, A(t)) can be written as

F(t, A(t)) = F(t, Â(t)) + D(t)

with the defect D(t) = F(t, A(t)) - F(t, Â(t)). With the Lipschitz constant L of F, the defect is bounded by

‖D(t)‖ ≤ L ‖A(t) - Â(t)‖ ≤ 2L (ϑ + Bh).

We compare the two differential equations

Ṡ_A(t) = U_1^⊤ F(t, U_1 S_A(t) V_1^⊤) V_1 + U_1^⊤ D(t) V_1,  S_A(t_0) = U_1^⊤ A_0 V_1,
Ṡ(t) = U_1^⊤ F(t, U_1 S(t) V_1^⊤) V_1,  S(t_0) = M S_0 N^⊤ = U_1^⊤ A_0 V_1.

By construction, the solution of the first differential equation at time t_1 is S_A(t_1) = U_1^⊤ A_1 V_1. The solution of the second differential equation is S_1 as given by the S-step of the integrator. With the Gronwall inequality we obtain

‖S_1 - U_1^⊤ A_1 V_1‖ ≤ e^{Lh} ∫_{t_0}^{t_1} ‖D(s)‖ ds ≤ 2 e^{Lh} L h (ϑ + Bh).

The result now follows using the definition of ϑ. ⊓⊔

Using the Lipschitz continuity of the function F, we pass from the local to the global errors by the standard argument of Lady Windermere's fan [8, Section II.3] and thus conclude the proof of Theorem 4.

Symmetric and skew-symmetric low-rank matrices
We now assume that the right-hand side function in (1) is such that one of the following conditions holds: for all Y ∈ R^{n×n},

F(t, Y^⊤) = F(t, Y)^⊤,   (7)

or

F(t, -Y^⊤) = -F(t, Y)^⊤.   (8)

Under these conditions, solutions to (1) with symmetric or skew-symmetric initial data remain symmetric or skew-symmetric, respectively, for all times. We also have preservation of (skew-)symmetry for the new integrator, which does not hold for the projector-splitting integrator.

Theorem 5 Let Y_0 = U_0 S_0 U_0^⊤ ∈ R^{n×n} be symmetric or skew-symmetric and assume that the function F satisfies property (7) or (8), respectively. Then, the approximation Y_1 obtained after one time step of the new integrator is symmetric or skew-symmetric, respectively.

Proof Let us just consider the skew-symmetric case (8). (The symmetric case is analogous.) The L-step is the K-step for the transposed function G(t, Y) = F(t, Y^⊤)^⊤, and so the skew-symmetry of S_0 and property (8) imply that L(t_1) = -K(t_1), which further yields V_1 = U_1 and M = N. These identities show that the initial value and the right-hand side function of the differential equation for S(t) are skew-symmetric, which implies that S(t) and hence S_1 are still skew-symmetric. Altogether, the algorithm gives us the skew-symmetric result Y_1 = U_1 S_1 U_1^⊤. ⊓⊔

Under condition (7) or (8), the new integrator coincides with the (skew-)symmetry-preserving low-rank matrix integrator of [4].
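The preservation of skew-symmetry can be checked numerically. The sketch below is our construction, not from the paper: it uses the right-hand side F(Y) = B Y + Y B^⊤, which maps skew-symmetric matrices to skew-symmetric matrices and satisfies the transposition condition used in the proof, and verifies that one step of the new integrator (with Euler substeps) maps a skew-symmetric Y_0 = U_0 S_0 U_0^⊤ to a skew-symmetric Y_1:

```python
import numpy as np

def qr_pos(A):
    """QR factorization with R having a positive diagonal (fixes signs)."""
    Q, R = np.linalg.qr(A)
    d = np.sign(np.diag(R)); d[d == 0] = 1.0
    return Q * d, R * d[:, None]

def new_integrator_step(U0, S0, V0, F, t0, h):
    """One step of the new integrator with single Euler substeps."""
    K1 = U0 @ S0 + h * F(t0, U0 @ S0 @ V0.T) @ V0
    U1, _ = qr_pos(K1)
    M = U1.T @ U0
    L0 = V0 @ S0.T
    L1 = L0 + h * F(t0, U0 @ L0.T).T @ U0
    V1, _ = qr_pos(L1)
    N = V1.T @ V0
    S = M @ S0 @ N.T
    S1 = S + h * U1.T @ F(t0, U1 @ S @ V1.T) @ V1
    return U1, S1, V1

rng = np.random.default_rng(3)
n, r = 10, 2
B = rng.standard_normal((n, n))
F = lambda t, Y: B @ Y + Y @ B.T   # skew-symmetric Y gives skew-symmetric F(Y)
U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
S0 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric and invertible
U1, S1, V1 = new_integrator_step(U0, S0, U0, F, 0.0, 0.01)
Y1 = U1 @ S1 @ V1.T
```

Even with approximate (Euler) substeps, the relation L(t_1) = -K(t_1) holds exactly in this setting, so Y_1 is skew-symmetric up to round-off.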

Recap: the Tucker tensor projector-splitting integrator
The solution A(t) ∈ R^{n_1×···×n_d} of a tensor differential equation

Ȧ(t) = F(t, A(t)),  A(t_0) = A_0,   (9)

is approximated by tensors Y(t) ∈ R^{n_1×···×n_d} of multilinear rank r = (r_1, …, r_d), which solve the differential equation projected to the tangent space of the manifold of tensors of multilinear rank r at the current approximation,

Ẏ(t) = P(Y(t)) F(t, Y(t)),  Y(t_0) = Y_0,   (10)

where Y_0 is a rank-r approximation to A_0. Tensors Y(t) of multilinear rank r are represented in the Tucker form [6], written here in a notation following [13]:

Y(t) = C(t) ×_{i=1}^d U_i(t),  i.e.,  y_{i_1,…,i_d}(t) = Σ_{j_1,…,j_d} c_{j_1,…,j_d}(t) u^{(1)}_{i_1,j_1}(t) ⋯ u^{(d)}_{i_d,j_d}(t),

where the slim basis matrices U_i(t) ∈ R^{n_i×r_i} have orthonormal columns and the smaller core tensor C(t) ∈ R^{r_1×···×r_d} is of full multilinear rank r.
The orthogonal tangent space projection P(Y ) is given as an alternating sum of 2d − 1 subprojections [14], and like in the matrix case, a projector-splitting integrator with favourable properties can be formulated and efficiently implemented [14,17]. The algorithm runs through the modes i = 1, . . . , d and solves differential equations for matrices of the dimension of the slim basis matrices and for the core tensor, alternating with orthogonalizations of slim matrices. Like the matrix projector-splitting integrator, also the Tucker tensor projector-splitting integrator has the exactness property and a robust error bound independently of small singular values of matricizations of the core tensor [17, Theorems 4.1 and 5.1].

A new robust low-rank Tucker tensor integrator
The low-rank numerical integrator defined in Section 3 for the matrix case extends in a natural way to the Tucker tensor format, and this extension still has the exactness property and robust error bounds that are independent of small singular values of matricizations of the core tensor.
In comparison with the Tucker integrator of [14] and [17], the new Tucker tensor integrator has the following favourable properties: 1. The solution of the differential equations for the n i × r i matrices can be done in parallel for i = 1, . . . , d, and also the QR decompositions can be done in parallel. 2. No differential equations are solved backward in time. No differential equations for r i × r i matrices need to be solved.

The integrator preserves (anti-)symmetry if the differential equation does.
On the other hand, in contrast to the projector-splitting Tucker integrator there is apparently no efficient way to construct a time-reversible integrator from this new Tucker integrator.

Formulation of the algorithm
One time step of integration from time t_0 to t_1 = t_0 + h starting from a Tucker tensor of multilinear rank (r_1, …, r_d) in factorized form, Y_0 = C_0 ×_{i=1}^d U_i^0, computes an updated Tucker tensor of multilinear rank (r_1, …, r_d) in factorized form, Y_1 = C_1 ×_{i=1}^d U_i^1, in the following way:
1. Update the basis matrices U_i^0 → U_i^1 for i = 1, …, d in parallel:
Perform a QR factorization of the transposed i-mode matricization of the core tensor,

Mat_i(C_0)^⊤ = Q_i S_i^{0,⊤},

so that Mat_i(Y_0) = U_i^0 S_i^0 V_i^{0,⊤} with V_i^0 = (⊗_{j≠i} U_j^0) Q_i. Integrate from t = t_0 to t_1 the n_i × r_i matrix differential equation

K̇_i(t) = F_i(t, K_i(t) V_i^{0,⊤}) V_i^0,  K_i(t_0) = U_i^0 S_i^0,

where F_i(t, ·) = Mat_i ∘ F(t, ·) ∘ Ten_i denotes the i-mode matricization of F. Perform a QR factorization K_i(t_1) = U_i^1 R_i and compute the r_i × r_i matrix M_i = U_i^{1,⊤} U_i^0.
2. Update the core tensor C_0 → C_1:
Integrate from t = t_0 to t_1 the r_1 × · · · × r_d tensor differential equation

Ċ(t) = F(t, C(t) ×_{i=1}^d U_i^1) ×_{i=1}^d U_i^{1,⊤},  C(t_0) = C_0 ×_{i=1}^d M_i,

and set C_1 = C(t_1). Then, the approximation after the time step is Y_1 = C_1 ×_{i=1}^d U_i^1. To continue in time, we take Y_1 as starting value for the next step and perform another step of the integrator.
We observe that, in contrast to the Tucker integrators of [17,14], the factors U i ∈ R ni×ri are updated simultaneously for i = 1, . . . , d.
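The basic tensor operations used in this step, the i-mode matricization Mat_i and the i-mode product with a matrix, can be sketched in NumPy as follows (our helper functions, under a row-major unfolding convention; the Kronecker ordering used in the paper may differ):

```python
import numpy as np

def unfold(T, i):
    """i-mode matricization Mat_i(T): mode i indexes the rows."""
    return np.moveaxis(T, i, 0).reshape(T.shape[i], -1)

def fold(M, i, shape):
    """Inverse of unfold, for a tensor of the given target shape."""
    moved = (shape[i],) + tuple(np.delete(shape, i))
    return np.moveaxis(M.reshape(moved), 0, i)

def mode_product(T, A, i):
    """i-mode product T x_i A: multiply the i-th mode of T by the matrix A."""
    shape = list(T.shape)
    shape[i] = A.shape[0]
    return fold(A @ unfold(T, i), i, tuple(shape))
```

Chaining mode_product over all modes evaluates a Tucker representation C ×_{i=1}^d U_i entrywise, which can be cross-checked against a direct summation.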

Exactness property
The following result extends the exactness results of Theorem 3 and [17, Theorem 4.1] to the new Tucker tensor integrator.

Theorem 6 (Exactness) Suppose that A(t) has multilinear rank (r_1, …, r_d) for t_0 ≤ t ≤ t_1 and that, for each i = 1, …, d, the invertibility condition of Lemma 1 holds for the i-mode matricization A^{(i)}(t) = Mat_i(A(t)). With Y_0 = A(t_0), the new Tucker integrator for F(t, Y) = Ȧ(t), with all substeps solved exactly, reproduces the exact solution: Y_1 = A(t_1).

Proof For each i = 1, …, d, Lemma 1 applied to the i-mode matricization yields U_i^1 U_i^{1,⊤} A^{(i)}(t_1) = A^{(i)}(t_1). We tensorize in the i-th mode and obtain

A(t_1) ×_i U_i^1 U_i^{1,⊤} = A(t_1), and hence A(t_1) ×_{i=1}^d U_i^1 U_i^{1,⊤} = A(t_1).

With Y_0 = A(t_0) we obtain from the second substep of the algorithm

C_1 = C(t_0) + ∫_{t_0}^{t_1} Ȧ(t) ×_{i=1}^d U_i^{1,⊤} dt = A(t_1) ×_{i=1}^d U_i^{1,⊤},

so that

Y_1 = C_1 ×_{i=1}^d U_i^1 = A(t_1) ×_{i=1}^d U_i^1 U_i^{1,⊤} = A(t_1),

which proves the exactness. ⊓⊔

Robust error bound
The robust error bounds from Theorem 4 and [17, Theorem 5.1] extend to the new Tucker tensor integrator as follows. The norm B of a tensor B used here is the Euclidean norm of the vector of entries of B.

Theorem 7 (Robust error bound) Let A(t) denote the solution of the tensor differential equation (9). Assume the following:
1. F is Lipschitz-continuous and bounded.
2. The non-tangential part of F(t, Y) is ε-small:

‖(I - P(Y)) F(t, Y)‖ ≤ ε

for all Y of multilinear rank (r_1, …, r_d) in a neighbourhood of A(t) and 0 ≤ t ≤ T.
3. The error in the initial value is δ-small:

‖Y_0 - A_0‖ ≤ δ.

Let Y_n denote the approximation of multilinear rank (r_1, …, r_d) to A(t_n) at t_n = nh obtained after n steps of the new Tucker integrator with step-size h > 0. Then, the error satisfies

‖Y_n - A(t_n)‖ ≤ c_0 δ + c_1 ε + c_2 h

for all n with t_n = nh ≤ T, where the constants c_i only depend on the Lipschitz constant L and bound B of F, on T, and on the dimension d. In particular, the constants are independent of singular values of matricizations of the exact or approximate solution.
The proof of Theorem 7 proceeds similarly to the proof of Theorem 4 for the matrix case. We begin with two key lemmas and are then in a position to analyse the local error produced after one time step. We denote the solution value at t_1 by A_1. The basis matrix computed in the first part of the integrator is denoted by U_i^1 for each i = 1, …, d.
Lemma 5 For each i = 1, …, d, the i-mode matricized problem satisfies conditions 1.-3. of Theorem 2, and hence, by Lemma 2,

‖A_1^{(i)} - U_i^1 U_i^{1,⊤} A_1^{(i)}‖ ≤ ϑ, where A_1^{(i)} = Mat_i(A_1).

Proof The boundedness and Lipschitz condition of the matrix-valued function F_i follow from the boundedness and Lipschitz condition of the tensor-valued function F. Condition 2. follows with the help of the correspondingly defined projection P_i, which is an orthogonal projection onto a subspace of the tangent space at Y^{(i)} = Mat_i(Y) of the manifold of rank-r_i matrices of dimension n_i × n_{¬i}, with n_{¬i} = ∏_{j≠i} n_j. Denoting the orthogonal projection onto this tangent space by P^{(i)}(Y^{(i)}), we thus have

‖(I - P^{(i)}(Y^{(i)})) F_i(t, Y^{(i)})‖ ≤ ‖(I - P_i) F_i(t, Y^{(i)})‖ ≤ ε.

Condition 3. holds due to the invariance of the Frobenius norm under matricization, and so we obtain the stated result. ⊓⊔

Lemma 6 The following estimate holds with ϑ of (5):

‖A_1 - A_1 ×_{i=1}^d U_i^1 U_i^{1,⊤}‖ ≤ c ϑ,

where c only depends on d and a bound for hL.

Proof From Lemma 5 and Lemma 2,

‖A_1^{(i)} - U_i^1 U_i^{1,⊤} A_1^{(i)}‖ ≤ ϑ for i = 1, …, d.

The norm is invariant under tensorization, and so the bound is equivalent to

‖A_1 - A_1 ×_i U_i^1 U_i^{1,⊤}‖ ≤ ϑ.

To conclude, we observe

‖A_1 - A_1 ×_{i=1}^d U_i^1 U_i^{1,⊤}‖ ≤ ‖A_1 - A_1 ×_1 U_1^1 U_1^{1,⊤}‖ + ‖(A_1 - A_1 ×_{i=2}^d U_i^1 U_i^{1,⊤}) ×_1 U_1^1 U_1^{1,⊤}‖ ≤ ϑ + ‖A_1 - A_1 ×_{i=2}^d U_i^1 U_i^{1,⊤}‖,

and the result follows by an iteration of this argument. ⊓⊔
We are now in a position to analyse the local error produced after one time step of the integrator.

Lemma 7 (Local error)
If A 0 = Y 0 , the following local error bound holds for the new Tucker tensor integrator: whereĉ only depends on d and a bound of hL. In particular, the constant is independent of singular values of the exact or approximate solution.
We omit the proof because, up to minor modifications analogous to those in the proof of Lemma 4, the result follows as in [4,Section 5.3] on using the two previous lemmas.
Using the Lipschitz continuity of the function F , we pass from the local to the global errors by the standard argument of Lady Windermere's fan [8,Section II.3] and thus conclude the proof of Theorem 7.

Symmetric and anti-symmetric low-rank Tucker tensors
For permutations σ ∈ S_d, we use the notation σ(Y) for the tensor with entries σ(Y)_{i_1,…,i_d} = y_{i_{σ(1)},…,i_{σ(d)}}. We now assume that the right-hand side function in (9) is such that one of the following conditions holds for all permutations σ ∈ S_d and all tensors Y ∈ R^{n×···×n} of order d:

F(t, σ(Y)) = σ(F(t, Y)),   (12)

or

F(t, sign(σ) σ(Y)) = sign(σ) σ(F(t, Y)).   (13)

Under these conditions, solutions to (9) with symmetric or anti-symmetric initial data remain symmetric or anti-symmetric, respectively, for all times. We also have preservation of (anti-)symmetry for the new integrator, which does not hold for the projector-splitting integrator.
Theorem 8 Let Y 0 be symmetric or anti-symmetric and assume that the function F satisfies property (12) or (13), respectively. Then, the approximation Y 1 obtained after one time step of the new integrator is symmetric or anti-symmetric, respectively.
The simple proof is similar to the matrix case and is therefore omitted. Under condition (12) or (13), the new integrator coincides with the (anti-)symmetry-preserving low-rank Tucker tensor integrator of [4].

Numerical Experiments
In this section, we present results of different numerical experiments. The experiments were carried out in Matlab R2017a with the Tensorlab package v3.0 [18].

Robustness with respect to small singular values
In the first example, the time-dependent matrix is given explicitly as

A(t) = e^{tW_1} e^t D e^{tW_2}.

The matrix D ∈ R^{N×N} is diagonal with entries d_j = 2^{-j}. The matrices W_1 ∈ R^{N×N} and W_2 ∈ R^{N×N} are skew-symmetric and randomly generated. We note that e^t 2^{-j}, j = 1, …, N, are the singular values of A(t). We choose N = 100 and final time T = 1. We compare the new low-rank integrator presented in Section 3 with a numerical solution obtained with the classical fourth-order explicit Runge-Kutta method applied to the system of differential equations for dynamical low-rank approximation as derived in [11]. The numerical results for different ranks are shown in Figure 1. In contrast to the Runge-Kutta method, the new low-rank integrator does not require a step-size restriction in the presence of small singular values. The same favourable behaviour was shown for the projector-splitting integrator in [10].
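A small-scale NumPy sketch of this test matrix (our code: a small N instead of N = 100, and a matrix exponential computed via eigendecomposition) confirms that the singular values of A(t) = e^{tW_1} e^t D e^{tW_2} are e^t 2^{-j}, since the exponentials of the skew-symmetric matrices are orthogonal:

```python
import numpy as np

def expm_eig(M):
    """Matrix exponential via eigendecomposition (adequate for the small,
    generically diagonalizable matrices used in this illustration)."""
    lam, V = np.linalg.eig(M)
    return ((V * np.exp(lam)) @ np.linalg.inv(V)).real

N = 8
rng = np.random.default_rng(0)
G1, G2 = rng.standard_normal((2, N, N))
W1, W2 = G1 - G1.T, G2 - G2.T              # random skew-symmetric matrices
D = np.diag(2.0 ** -np.arange(1, N + 1))   # d_j = 2^{-j}

def A(t):
    """A(t) = exp(t W1) e^t D exp(t W2)."""
    return expm_eig(t * W1) @ (np.exp(t) * D) @ expm_eig(t * W2)
```

This is the prototypical setting of the robustness experiment: the singular values decay geometrically, so any fixed rank r leaves a smallest retained singular value that is tiny.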

Error behaviour
In the second example, we integrate a (non-stiff) discrete Schrödinger equation in imaginary time,

Ẏ(t) = -H[Y(t)],  Y(t_0) = Y_0.

The function H arises from the Hamiltonian H = -½ Δ_discrete + V(x) on an equidistant space grid with the torsional potential V(x_1, …, x_d). For each i = 1, …, d, the orthonormal matrices U_i^0 ∈ R^{N×N} are randomly generated. The core tensor C_0 ∈ R^{N×N×N} has only non-zero diagonal elements, set equal to (C_0)_{jjj} = 10^{-j} for j = 1, …, N in the case d = 3, and analogously in the matrix case d = 2.
The reference solution was computed with the Matlab solver ode45 and stringent tolerance parameters {'RelTol', 1e-10, 'AbsTol', 1e-10}. The differential equations appearing in the definition of a step of the new matrix and Tucker integrators have all been solved with a single step of a second- or fourth-order explicit Runge-Kutta method. We choose N = 100, final time T = 0.1 and d = 2, 3. The multilinear rank is chosen such that r_1 = r_2 = · · · = r_d. The singular values of the matricization in the first mode of the reference solution and the absolute errors ‖Y_n - A(t_n)‖_F at time t_n = T of the approximate solutions for different ranks, calculated with different step-sizes and different time integration methods, are shown in Figure 2 for the matrix case (d = 2) and in Figure 3 for the tensor case (d = 3). The figures clearly show that solving the substeps with higher accuracy allows us to take larger step-sizes to achieve a prescribed error.
6.3 Comparison with the matrix projector-splitting integrator over different ranks
In the last example, we compare the matrix projector-splitting integrator with the new matrix integrator of Section 3. Here, the complex case is considered: in the definition of the sub-problems appearing in the new matrix integrator, it is sufficient to replace the transpose with the conjugate transpose.
We consider a Schrödinger equation as in [10, Section 4.3]. The problem is discretized with a Fourier collocation method on a grid of N × N points; the solution is essentially supported within Ω = [-7.5, 7.5]². We choose the final time T = 5 and N = 128, which makes the problem moderately stiff. First, we compute a reference solution with an Arnoldi method and a tiny time-step size h = 10^{-4}. Then, for each rank from 1 to 20, we compute a low-rank approximation with the matrix projector-splitting integrator and the new matrix integrator. The lower-dimensional sub-problems appearing in the definition of the two integrators are solved with an Arnoldi method and time-step size h = 0.005. For each rank, the absolute error in Frobenius norm of the two approximations at the final time T = 5, with respect to the reference solution, is shown in Figure 4.