Introduction

We study systems of linear ordinary differential equations of Schrödinger type

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi '(t) = -\,{\mathrm {i}}\,H(t)\,\psi (t)\,, \qquad t \in [t_0, t_{{\mathrm {end}}}]\,, \\ \psi (t_0) = \psi _0\; \text { given}\,, \end{array}\right. } \end{aligned}$$
(1)

with a time-dependent Hermitian matrix \(H :{\mathbb {R}} \rightarrow {\mathbb {C}}^{d \times d}\). The exact flow of (1) is denoted by \({\mathcal {E}}(t,u_0)\) in the following. High-dimensional systems of this form arise for instance in the design of oxide solar cells [31], describing the movement and interaction of electrons within Hubbard-type models of solid state physics, where the explicit time-dependence originates from an external electric field associated with the impact of a photon, or as typical semiclassical models arising in quantum control [40]. In the former application, the computational challenge results from the high dimension of the resulting system: for a model with n discrete locations, the state space has dimension \(4^n\), so a model with a claim to physical relevance quickly reaches the limitations of modern supercomputers. In the latter application, the semiclassical parameter is chosen very small, which mandates a fine spatial discretization, again resulting in very large systems of ODEs. The main motivation for the present study is thus to identify the computationally most efficient numerical time integrators for the considered problem class, in order to make large-scale simulations of high accuracy feasible on current computer hardware.

The aim of this paper is to compare two approaches to the numerical time integration of problems of the form (1). Popular integrators for time-dependent linear homogeneous differential equations are based on the Magnus expansion [23, 35], or on commutator-free exponential-based integrators [1]. These have been found to excel over classical Magnus integrators (introduced as numerical methods in [21]) for example in [3] and will therefore be used in the present study. In contrast, non-autonomous problems can also be solved by interpreting the independent variable t as a separate component, which in splitting methods can be frozen over a time-step and propagated separately. This approach is discussed extensively in [16,17,18, 20, 39] and references therein. The success of the splitting approach critically depends on the structure of the underlying problem. If the operator H(t) naturally suggests a splitting, where the time-dependent part is cheap to compute for fixed t, this may offer computational advantages when t is propagated along with one sub-operator. However, if only t is split off, the required number of compositions in a splitting approach may be prohibitive from a computational point of view. Also, if H has a special structure which can be exploited to increase the efficiency, the introduction of the additional variable t may destroy this structure [16]. We will corroborate these general observations on a number of practical examples, see also [19] for an abstract discussion of the computational effort.

The present comparisons involve methods that have been used in previous studies, but not in a comprehensive assessment of their efficiency on a number of application-motivated examples. The efficiency of adaptive splitting methods has been studied by the authors for instance in [2, 9], and adaptive Magnus-type methods are discussed in [3, 7]. This work adds an assessment of adaptive splitting and Magnus-type methods with respect to their applicability and respective merits. By providing a meticulous comparison on several significant examples from applications, we give a balanced account of the advantages and disadvantages of the two numerical approaches. The Hubbard model is of high interest in solid state physics, and therefore a search for the best numerical approach among several contenders is of practical relevance.

In Sect. 2 of this manuscript, we specify the model problems that we subsequently use in our comparisons; in Sect. 3 we briefly recapitulate the numerical approaches employed; and in Sect. 4 we give the results of our numerical comparisons. The main criterion to assess computational efficiency is the CPU time required to reach a prescribed accuracy, as the considered numerical approaches are fundamentally different in their structure and do not readily admit other metrics.

Model Problems

We consider a Rosen–Zener model related to quantum optics, a Hubbard model of the impact of light on a solid, and a semiclassical problem typical for quantum control.

Rosen–Zener problem As the first example, we consider a Rosen–Zener model from [19], which appears in quantum optics, see also [33]. The associated Schrödinger equation in the interaction picture is given by (1) with

$$\begin{aligned}&H(t) = f_1(t) \sigma _1 \otimes I_{k\times k} + f_2(t) \sigma _2 \otimes R \in {\mathbb {C}}^{2k\times 2k},\quad k=50\,, \end{aligned}$$
(2a)
$$\begin{aligned}&\sigma _1 = \left( \begin{array}{cc} 0 &{} 1 \\ 1 &{} 0 \end{array} \right) ,\qquad \sigma _2 = \left( \begin{array}{cc} 0 &{} -{\mathrm {i}}\\ {\mathrm {i}}&{} 0 \end{array} \right) ,\end{aligned}$$
(2b)
$$\begin{aligned}&R = {\mathrm {tridiag}}(1,0,1) \in {\mathbb {R}}^{k\times k},\end{aligned}$$
(2c)
$$\begin{aligned}&f_1(t) = V_0 \cos (\omega t) \left( \cosh (t/T_0) \right) ^{-1},\quad f_2(t) = V_0 \sin (\omega t) \left( \cosh (t/T_0) \right) ^{-1},\end{aligned}$$
(2d)
$$\begin{aligned}&\omega =\tfrac{1}{2},\ T_0 =1,\ V_0 = 1, \end{aligned}$$
(2e)

where the initial condition is chosen as \(\psi (0)=(1,\dots ,1)^T.\) The integration interval is \(t \in [-5,5]\).
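For concreteness, the Hamiltonian (2) can be assembled with a few Kronecker products; the following NumPy sketch (the function name and keyword defaults are our own) builds the dense \(2k\times 2k\) matrix.

```python
import numpy as np

def rosen_zener_H(t, k=50, V0=1.0, T0=1.0, omega=0.5):
    """Assemble the Rosen-Zener Hamiltonian (2) as a dense 2k x 2k matrix."""
    sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    sigma2 = np.array([[0.0, -1j], [1j, 0.0]], dtype=complex)
    I_k = np.eye(k)
    # R = tridiag(1, 0, 1): ones on the sub- and superdiagonals
    R = np.diag(np.ones(k - 1), 1) + np.diag(np.ones(k - 1), -1)
    f1 = V0 * np.cos(omega * t) / np.cosh(t / T0)
    f2 = V0 * np.sin(omega * t) / np.cosh(t / T0)
    return f1 * np.kron(sigma1, I_k) + f2 * np.kron(sigma2, R)
```

Since \(f_1\), \(f_2\) are real and both Kronecker factors are Hermitian, the assembled matrix is Hermitian for every t, as required by (1).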

Hubbard model for solar cells Next, we consider a Hubbard model describing the movement and interaction of electrons within an oxide solar cell [25, 31] built from \({{\mathrm {LaVO}}}_3\), with \(H(t)\in {\mathbb {C}}^{4900\times 4900}\). The explicit time-dependence here originates from an external electric field associated with the impact of a photon.

This model is given by a finite-dimensional Hamiltonian in second quantization of the form

$$\begin{aligned} H=\frac{1}{2} \sum _{ij \sigma } v_{ij} c_{j \sigma }^\dagger c_{i \sigma }^{} + \sum _{ij \sigma \sigma '} U_{ij} {{\hat{n}}}_{i\sigma } {{\hat{n}}}_{j\sigma '}. \end{aligned}$$
(3)

Here, the annihilation and creation operators \(c_{i \sigma }^{}\) and \(c_{j \sigma }^\dagger \) take an electron away from site i with spin \(\sigma \in \{\uparrow ,\downarrow \}\) and add it on site j.

The impact of a photon exciting the system out of equilibrium can be described by a classical electric field pulse, which introduces time-dependence to the Hamiltonian (3), see [25]. We choose \({\mathrm {e}}^{{\mathrm {i}} \,\omega (t)}\) with \(\omega (t) =\tfrac{1}{10} \exp \left( -\tfrac{1}{6} (t-6)^2\cos \left( \tfrac{7\pi }{4}(t-6)\right) \right) \), which appears in off-diagonal entries of H(t) depending on the geometry underlying the model of the investigated solid. The model is described in detail in [27].

The oscillating and quickly attenuating electric field generated by the external potential in this model makes adaptive time-stepping a relevant issue. Time integration proceeds on the interval \(t\in [0,30]\).

Quantum control A model typical for quantum control of atomic systems, discussed in [28] (see also [40]), features a potential which depends explicitly on time,

$$\begin{aligned}&{\mathrm {i}}\,\partial _t \psi (x,t) = \varepsilon {\varDelta }\psi (x,t) + \varepsilon ^{-1} V(x,t)\,\psi (x,t)=H\psi (x,t)\,, \qquad t>0\,, \end{aligned}$$
(4a)
$$\begin{aligned}&\psi (x,0) = \psi _0(x)\,, \end{aligned}$$
(4b)

with \(V(x,t)\) and the initial condition chosen as

$$\begin{aligned} \psi _0(x)&= (\delta \pi )^{-\frac{1}{4}}{\mathrm {e}}^{\frac{{\mathrm {i}}\,k_0(x-x_0)}{\delta }-\frac{(x-x_0)^2}{2\delta }},\\ V(x,t)&= V_0(x)+\rho (3t-1)\rho (\sin (2\pi (x-t))) + 10^{-3}\, t \sin (20\pi x),\\ V_0(x)&= \rho (4x)\sin (20\pi x),\\ \rho (x)&= {\left\{ \begin{array}{ll} {\mathrm {e}}^{\frac{-1}{1-x^2}}, &{} |x|<1,\\ 0, &{}\text {otherwise,} \end{array}\right. } \end{aligned}$$

where \(x_0=-0.3\), \(k_0=0.1\), \(\delta =10^{-3}\) and \(\varepsilon \) in (4a) assumes the values \(2^{-6},\ 2^{-8},\ 2^{-10},\ 2^{-12}\). The spatial interval \([-1,1]\) is discretized using a Fourier pseudospectral method at 2048 points for periodic boundary conditions. The computation terminates at \(t_{{\mathrm {end}}}=0.75.\)
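As an illustration, the grid, the bump function \(\rho \), the potential, and the initial wave packet can be set up as follows (a sketch with our own naming; the actual implementation from the repositories cited in Sect. 4 may differ). On this grid the wave packet is normalized to one to high accuracy.

```python
import numpy as np

# Fourier pseudospectral grid: 2048 points on [-1, 1) with periodic BCs
N = 2048
x = -1.0 + 2.0 * np.arange(N) / N
dx = 2.0 / N
x0, k0, delta = -0.3, 0.1, 1e-3

def rho(y):
    """Smooth bump: exp(-1/(1-y^2)) for |y| < 1, zero otherwise."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    out = np.zeros_like(y)
    mask = np.abs(y) < 1.0
    out[mask] = np.exp(-1.0 / (1.0 - y[mask] ** 2))
    return out

def V(xv, t):
    """Time-dependent potential of the control problem (4)."""
    V0x = rho(4.0 * xv) * np.sin(20.0 * np.pi * xv)
    return (V0x + rho(3.0 * t - 1.0) * rho(np.sin(2.0 * np.pi * (xv - t)))
            + 1e-3 * t * np.sin(20.0 * np.pi * xv))

# Gaussian wave packet (initial condition)
psi0 = (delta * np.pi) ** (-0.25) * np.exp(
    1j * k0 * (x - x0) / delta - (x - x0) ** 2 / (2.0 * delta))
```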

Adaptive Time Integration

Splitting Methods

Splitting methods constitute a popular divide-and-conquer approach for numerical time integration of (1) when the Hamiltonian is partitioned, i.e., \(-{\mathrm {i}}H(t) = A(t) + B(t)\) and the operators A and B have different properties which promise computational advantages when propagated independently. This is for instance typical for the splitting of a Schrödinger operator into kinetic and potential part.

In our context, we will use splitting methods by making the problem formally autonomous: t is considered as an additional solution component and the equation \(t'=1\) is added. In this setting, time can be advanced separately, or simultaneously with one suboperator if that suboperator is autonomous. More precisely, in the definition of the splitting, the operators become

$$ \psi \longleftarrow \left( \begin{array}{c} \psi \\ t \end{array} \right) ,\quad A \longleftarrow \left( \begin{array}{c} A \\ 1 \end{array} \right) ,\quad B \longleftarrow \left( \begin{array}{c} B \\ 0 \end{array} \right) . $$

The same holds mutatis mutandis when t is propagated together with B.

For autonomous problems, splitting methods have the following form: at the (time-)semi-discrete level, s-stage exponential splitting methods use multiplicative combinations of the partial flows \( {\mathcal {E}}_A(t,u_0):u_0 \mapsto u(t)\) with \(u'(t)=A(u(t)),\ u(t_0)=u_0\), and \( {\mathcal {E}}_B(t,u_0):u_0 \mapsto u(t) \) with \(u'(t)=Bu(t),\ u(t_0)=u_0\). For a single step \( (t_0,u_0) \mapsto (t_0+h,u_1) \) with step-size \( h \), this reads

$$\begin{aligned} u_{1} := {\mathcal {S}}(h,u_0) = {\mathcal {E}}_B(b_s h,\cdot ) \circ {\mathcal {E}}_A(a_s h,\cdot ) \circ \ldots \circ {\mathcal {E}}_B(b_1 h,\cdot ) \circ {\mathcal {E}}_A(a_1 h,u_0), \end{aligned}$$
(5)

where the coefficients \( a_j,b_j,\ j=1,\ldots ,s \) are determined from order conditions to achieve a desired order of consistency [23].
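A single step (5) for an autonomous linear splitting \(u' = (A+B)u\) can be sketched in a few lines of NumPy/SciPy (dense matrix exponentials for illustration only; names are our own):

```python
import numpy as np
from scipy.linalg import expm

def splitting_step(A, B, h, u0, a, b):
    """One step of the s-stage exponential splitting (5):
    u1 = E_B(b_s h) E_A(a_s h) ... E_B(b_1 h) E_A(a_1 h) u0,
    where E_A(t) = expm(t A) and E_B(t) = expm(t B)."""
    u = u0
    for aj, bj in zip(a, b):
        u = expm(aj * h * A) @ u   # partial flow of A
        u = expm(bj * h * B) @ u   # partial flow of B
    return u

# the classical Strang splitting: s = 2, a = (1/2, 1/2), b = (1, 0), order 2
strang_a, strang_b = [0.5, 0.5], [1.0, 0.0]
```

For commuting A and B the composition is exact, which provides a cheap sanity check of such an implementation.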

Local Error Estimators for Splitting Methods

As the basis for adaptation of the time-steps, three classes of local error estimators are used in this study. These have different advantages depending on the context in which they are applied [5].

(i)

    Embedded pairs of splitting formulae were first considered in [32] and are based on reusing a number of evaluations from the basic integrator. In this paper, we focus on the pairs [6, Emb 4/3 AK p] of orders four and three and [6, Emb 5/4 AK (ii)] of orders five and four, which were found to be the most successful in earlier work [2].

(ii)

    A defect-based error estimator has been proposed and analyzed in [8, 10,11,12]. To construct an error estimator associated with a splitting method of order \(p \ge 1\), an integral representation of the local error involving the defect \({\mathcal {D}}\) of the numerical approximation is evaluated by means of a Hermite quadrature formula. Since the p-th order conditions ensure that the first \(p-1\) derivatives of \({\mathcal {D}}\) vanish at \(t=t_0\), this leads to a local error estimator involving a single evaluation of the defect,

    $$\begin{aligned} {\mathcal {P}}(t,u_0) = \tfrac{1}{p+1}\,t\,{\mathcal {D}}(t,u_0) \,\approx \, {\mathcal {L}}(t,u_0) = {\mathcal {S}}(t,u_0)-{\mathcal {E}}(t,u_0)\,. \end{aligned}$$
    (6)

    This device works generally for splittings of any order into an arbitrary number of operators if Fréchet derivatives of the subflows are available, see [4]. We use the defect-based error estimator in conjunction with the integrators in [6, Emb 4/3 AK p] and [6, Emb 5/4 AK (ii)], since these are close to optimal.

(iii)

    For adjoint pairs of formulae of odd order p, an asymptotically correct error estimator can be computed at the same cost as for the basic method, see [5]. Since the error estimator is easy to construct and evaluate in this case, we employ the pair [6, PP 5/6 A] of orders 5/6, which was found to be efficient for high accuracy demands for instance in [2]. For comparison, we will also employ this optimized method in conjunction with the defect-based error estimator.

All the error estimates we use in our comparisons are asymptotically correct, i.e., the deviation of the error estimator from the true error tends to zero faster than does the error.
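To illustrate the defect-based estimator (6) and its asymptotic correctness, consider the first-order Lie splitting \({\mathcal {S}}(h) = {\mathrm {e}}^{hB}{\mathrm {e}}^{hA}\) of an autonomous linear problem, where the defect is available in closed form. The following sketch (our own construction for illustration, not taken from the cited references) checks that the estimator deviates from the true local error by one order more than the error itself:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 6
# a random skew-Hermitian system matrix A + B split into two parts
GA = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
GB = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A, B = -1j * (GA + GA.conj().T), -1j * (GB + GB.conj().T)
u0 = rng.standard_normal(d) + 1j * rng.standard_normal(d)

def estimator_and_error(h):
    S = expm(h * B) @ expm(h * A)               # Lie splitting, order p = 1
    # defect D(h) = S'(h) - (A + B) S(h), evaluated in closed form
    D = B @ S + expm(h * B) @ A @ expm(h * A) - (A + B) @ S
    P = h / 2.0 * (D @ u0)                      # estimator (6) with p = 1
    L = S @ u0 - expm(h * (A + B)) @ u0         # true local error
    return P, L

P, L = estimator_and_error(1e-4)
deviation = np.linalg.norm(P - L) / np.linalg.norm(L)
```

Here the local error is \(O(h^2)\) while the deviation \({\mathcal {P}}-{\mathcal {L}}\) is \(O(h^3)\), so the relative deviation shrinks linearly with h.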

Commutator-free Magnus-type Integrators

A successful and widely used class of integration methods is given by higher-order commutator-free Magnus-type integrators (CFM) [1, 22]. These approximate the exact flow in terms of products of exponentials of linear combinations of the system matrix evaluated at different times, avoiding the evaluation and storage of commutators. They have been found to excel over classical Magnus integrators in applications of interest to us in [3].

One step of a CFM scheme for (1) starting at \((t_0,u_0)\) is defined by

$$\begin{aligned} u_{1} = {\mathcal {S}}(\tau ;t_0)\,u_0\,, \end{aligned}$$

with the ansatz [1, 22]

$$\begin{aligned} \begin{aligned} {\mathcal {S}}(\tau ;t_0)&= {\mathcal {S}}_J(\tau ) \cdots {\mathcal {S}}_1(\tau ) = {\mathrm {e}}^{{\varOmega }_J (\tau )} \,\cdots \, {\mathrm {e}}^{{\varOmega }_1(\tau )}\,, \\[1mm] {\varOmega }_j(\tau )&= \tau B_j(\tau ),~~ j=1, \ldots , J, \\ B_{j}(\tau )&= \sum _{k=1}^K a_{jk}\, H_{k}(\tau ),\quad H_{k}(\tau ) = -{\mathrm {i}}H(t_0+c_k \tau )\,, \end{aligned} \end{aligned}$$
(7)

where the coefficients \(a_{jk}\), \(c_k\) are determined from the order conditions (a system of polynomial equations in the coefficients) such that the method attains convergence order p, see for example [19] and references therein. Algorithms to efficiently generate the order conditions are described for instance in [26]. Since such a system of equations generally does not determine a unique solution, numerical optimization techniques are employed, for example minimizing the leading local error term of the resulting integrator.
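As a concrete instance of (7), the classical fourth-order commutator-free scheme with \(J=K=2\) and Gauss nodes can be sketched as follows. We quote the standard coefficients from the literature; the optimized method CF4oH used below has the same structure but different, optimized coefficients.

```python
import numpy as np
from scipy.linalg import expm

SQ3 = np.sqrt(3.0)
C_NODES = [0.5 - SQ3 / 6.0, 0.5 + SQ3 / 6.0]         # Gauss nodes c_k
A_COEFF = [[0.25 + SQ3 / 6.0, 0.25 - SQ3 / 6.0],     # row a_{1k}
           [0.25 - SQ3 / 6.0, 0.25 + SQ3 / 6.0]]     # row a_{2k}

def cf4_step(H, t0, tau, u0):
    """One step u1 = e^{Omega_2} e^{Omega_1} u0 of a classical CF4 scheme
    for psi' = -i H(t) psi, cf. (7)."""
    Hk = [-1j * H(t0 + ck * tau) for ck in C_NODES]
    u = u0
    for aj in A_COEFF:                               # apply e^{Omega_1} first
        u = expm(tau * (aj[0] * Hk[0] + aj[1] * Hk[1])) @ u
    return u
```

Each exponent is a real linear combination of skew-Hermitian matrices, so every factor, and hence the whole step, is unitary; for constant H the scheme reduces to the exact propagator \({\mathrm {e}}^{-{\mathrm {i}}\tau H}\).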

In this study, we use the methods referred to as CF4oH and CF6n in [3], since these two were found to be the most efficient CFM methods in that study.

Local Error Estimation for Magnus-type Methods

As a basis for adaptive time-stepping, defect-based error estimators for CFM methods and for classical Magnus integrators have been introduced in [7]. For the defect

$$\begin{aligned} {\mathcal {D}}(\tau )={\mathcal {S}}'(\tau ;t_0)-A(t_0+\tau ){\mathcal {S}}(\tau ;t_0) \end{aligned}$$
(8)

it holds that

$$\begin{aligned} {\mathcal {D}}(0)={\mathcal {D}}'(0)=\cdots ={\mathcal {D}}^{(p-1)}(0)=0, \end{aligned}$$

for an order p method.

The local error \({\mathcal {L}}(\tau )\psi _0:=({\mathcal {S}}(\tau ;t_0)-{\mathcal {E}}(\tau ;t_0))\psi _0\) can be expressed via the variation-of-constants formula as

$$\begin{aligned} {\mathcal {L}}(\tau )=\int _0^\tau {\varPi }(\tau ,\sigma ){\mathcal {D}}(\sigma )\,{\mathrm {d}}\sigma ={{\mathscr {O}}}(\tau ^{p+1}),\quad {\varPi }(\tau ,\sigma )={\mathcal {E}}(\tau -\sigma ;t_0+\sigma ). \end{aligned}$$

For the practical evaluation of the defect, the derivative of matrix exponentials of the form

$$ \tfrac{{\mathrm {d}}}{{\mathrm {d}}\tau }{\mathrm {e}}^{\tau B(\tau )} = {\varGamma }(\tau )\,{\mathrm {e}}^{\tau B(\tau )} $$

is required. The function \({\varGamma }\) can be expressed as an infinite series or alternatively as an integral. These are approximated by truncation or numerical Hermite quadrature, respectively, to yield a computable quantity \({{\tilde{{\varGamma }}}}\) and an approximate defect \({\tilde{{\mathcal {D}}}}\). The resulting computable error estimator is denoted by \({\tilde{{\mathcal {P}}}}\). The asymptotical correctness of the error estimators was established in [7].

In the numerical experiments reported in Sect. 4 below, truncation of the Taylor expansion has been used throughout.

Adaptive Lanczos Method

The crucial computational step in any of the Magnus-type methods described above is the evaluation of the action of a matrix exponential

$$\begin{aligned} E(t)v={\mathrm {e}}^{-{\mathrm {i}}t {\varOmega }}v,\quad {\varOmega }\, \text {Hermitian}, \, t \, \text {fixed}. \end{aligned}$$
(9)

Note that this subproblem is also solved in the splitting approximation of the problems (2) and (3), whereas in (4), the pseudospectral space discretization renders this substep the trivial exponentiation of a diagonal matrix. The standard Krylov approximation to \({\mathrm {e}}^{-{\mathrm {i}}t {\varOmega }} v\) reads

$$\begin{aligned} S_m(t)v = V_m\,{\mathrm {e}}^{-{\mathrm {i}}t T_m}\,V_m^*\,v = V_m\,{\mathrm {e}}^{-{\mathrm {i}}t T_m}e_1, \end{aligned}$$
(10)

with \(T_m=(\tau _{i,j})\) tridiagonal and \(V_m\) an orthonormal basis of the Krylov space \({\mathcal {K}}_m({\varOmega },v) = {\mathrm {span}}\{v,{\varOmega }v,\ldots ,{\varOmega }^{m-1}v\} \subseteq {\mathbf {C}}^n\); the last identity in (10) holds for \(\Vert v\Vert _2 = 1\). For Hermitian or skew-Hermitian matrices \({\varOmega }\), the Lanczos method [36] constitutes a computationally efficient realization.
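A basic implementation of (10) reads as follows (a sketch with our own naming, without reorthogonalization or breakdown handling; the norm factor \(\Vert v\Vert _2\) is carried explicitly):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm(Omega, v, t, m):
    """Krylov approximation (10) to e^{-i t Omega} v, Omega Hermitian,
    via m steps of the Lanczos process (no reorthogonalization)."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)                 # diagonal of T_m
    off = np.zeros(max(m - 1, 0))       # sub-/superdiagonal of T_m
    V[:, 0] = v / beta
    w = Omega @ V[:, 0]
    alpha[0] = np.vdot(V[:, 0], w).real
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        off[j - 1] = np.linalg.norm(w)
        V[:, j] = w / off[j - 1]
        w = Omega @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real
        w = w - alpha[j] * V[:, j] - off[j - 1] * V[:, j - 1]
    T = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * (V @ (expm(-1j * t * T) @ e1))
```

Only the small m-by-m matrix \(T_m\) is exponentiated; the large matrix \({\varOmega }\) enters solely through matrix-vector products.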

In [30], a time-stepping strategy was introduced which is based on the defect of the approximation. Due to the success of this strategy documented there, we use it throughout in the Magnus-type integrators. The asymptotically correct error estimator is based on the defect operator

$$ D_m(t) = -{\mathrm {i}}\,{\varOmega }\,S_m(t) - S_m'(t) \in {\mathbf {C}}^{n \times n}. $$

The local error operator \(L_m(t)=E(t)-S_m(t)\) can be represented as

$$ L_m(t)v = \int _0^t E(t-s)\,D_m(s)v\,{\mathrm {d}}s. $$

Numerical quadrature applied to this defect-based integral representation yields a computable, asymptotically correct local error bound (see [30]):

$$\begin{aligned}&\Vert L_m(t)v \Vert _2 \le \tau _{m+1,m}\gamma _m\,\frac{t^{m}}{m!},\\&\Vert L_m(t)v \Vert _2 = \tau _{m+1,m}\gamma _m\,\frac{t^m}{m!} + {{\mathcal {O}}}(t^{m+1}), \end{aligned}$$

with \(\gamma _m= \prod _{j=1}^{m-1} (T_m)_{j+1,j}.\)

As the error tolerance for the Lanczos matrix exponentiation, we prescribe \(10^{-12}\). This makes it possible to realize highly accurate time-stepping on the basis of this approximation, with tolerance requirements as strict as \(10^{-12}\).

Step-size Selection

Based on a local error estimator, the time step-size is adapted such that the tolerance is expected to be satisfied in the following step. If \(h_{\text {old}}\) denotes the present step-size, the next step-size \(h_{\text {new}}\) in an order p method is predicted as (see [24, 37])

$$\begin{aligned} h_{\text {new}} = h_{\text {old}} \cdot \min \Big \{\alpha _{\text {max}},\max \Big \{\alpha _{\text {min}}, \alpha \,\Big (\dfrac{\text {tol}}{{\mathcal {P}}(h_{\text {old}})}\Big )^{\frac{1}{p+1}}\,\Big \}\Big \}, \end{aligned}$$
(11)

where we choose the parameters as \( \alpha = 0.9 \), \( \alpha _{\text {min}} = 0.25 \), \( \alpha _{\text {max}} = 4.0 \), and \({\mathcal {P}}(h_{\text {old}})\) is an asymptotically correct estimator for the local error in the previous time-step. This established and widely used strategy incorporates safety factors to avoid oscillatory and unstable step-size behavior.
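The controller (11) amounts to a one-line formula; a minimal sketch (our own naming) with the parameter values used here:

```python
def next_stepsize(h_old, err_est, tol, p,
                  alpha=0.9, alpha_min=0.25, alpha_max=4.0):
    """Predict the next step-size from formula (11) for a method of
    order p, with safety factor alpha and clipping of the step-size
    ratio to [alpha_min, alpha_max] to avoid oscillations."""
    fac = alpha * (tol / err_est) ** (1.0 / (p + 1))
    return h_old * min(alpha_max, max(alpha_min, fac))
```

If the error estimate exactly meets the tolerance, the step-size is merely reduced by the safety factor; grossly exceeding or undershooting the tolerance triggers the clipping bounds.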

Table 1 Runtime for the Rosen–Zener model (2)
Table 2 Runtime for Hubbard model (3)
Table 3 Runtime for the quantum control problem (4) with \(\varepsilon = 2^{-6}\)
Table 4 Runtime for the quantum control problem (4) with \(\varepsilon = 2^{-8}\)
Table 5 Runtime for the quantum control problem (4) with \(\varepsilon = 2^{-10}\)
Table 6 Runtime for the quantum control problem (4) with \(\varepsilon = 2^{-12}\)

Numerical Results

Here, we give the results of our experimental comparisons of the numerical methods described in Sect. 3. The numerical results have been obtained based on implementations which can be found at

https://github.com/HaraldHofstaetter/TimeDependentLinearODESystems.jl and https://github.com/HaraldHofstaetter/TSSM.jl.

As a measure of computational efficiency, we resort to CPU time on the Vienna Scientific Cluster. Its third-generation cluster VSC-3 has 2020 nodes, each equipped with 2 processors (Intel Xeon E5-2650v2, 2.6 GHz, 8 cores). The runtimes given below are averages over 100 identical runs, each on a single compute node. Runtime seems to be the most reasonable measure of computational efficiency due to the very different nature of the two numerical approaches. Two local error tolerances, \(10^{-5}\) and \(10^{-9}\), are prescribed for all examples; for (4), the tolerance \(10^{-12}\) could also be reached.

Rosen–Zener model. In Table 1 we show the results for the Rosen–Zener model (2). For the splitting methods, only the time variable is split off and the Hamiltonian is exponentiated as a whole; in modern computer arithmetic, a conceivable splitting into real and imaginary parts does not promise a computational advantage. We observe that the most efficient exponential-based method is CF4oH, while Emb 4/3 AK p is the best splitting method for the larger tolerance \(10^{-5}\), and PP 5/6 A excels for tolerance \(10^{-9}\). Note that the number of time-steps does not immediately correspond to the computational effort: the commutator-free Magnus-type method of order six requires the fewest steps, but is more expensive per step and thus not the fastest integrator. The fastest exponential-based integrator is almost twice as fast as the best splitting method.

Hubbard model. For the Hubbard model of solar cells (3) we obtain a similar picture. Again, only the time variable is split off. Table 2 shows the runtimes for tolerances \(10^{-5}\) and \(10^{-9}\). The fourth order commutator-free Magnus-type integrator CF4oH is the most efficient for both tolerances, and again, Emb 4/3 AK p is the best splitting method for the larger tolerance, and PP 5/6 A for the stricter tolerance. The best exponential-based method again excels over the best splitting method.

Quantum control. The results for the semiclassical problem (4) show a different picture than the previous investigations. The reason is obvious: the problem (4) suggests a natural splitting into kinetic and potential parts, and hence t can be propagated efficiently alongside the autonomous kinetic operator. We vary \(\varepsilon \) from \(\varepsilon =2^{-6}\) to \(\varepsilon =2^{-12}\) in Tables 3, 4, 5 and 6. For this example, a tolerance of \(10^{-12}\) could additionally be achieved and is added to the numerical results. Throughout, the best splitting method is Emb 5/4 AK (ii), and the best Magnus-type method is CF4oH. For larger \(\varepsilon \), splitting methods are clearly to be preferred, but this advantage is significantly diminished for the more oscillatory problems at smaller \(\varepsilon \); indeed, Magnus-type integrators are known to excel for oscillatory problems. For larger \(\varepsilon \) and particularly larger tolerances, Emb 5/4 AK (ii) is far more efficient than the best exponential-based method, but for smaller \(\varepsilon \) this advantage is diminished, and for \(\varepsilon =2^{-10}\) and \(2^{-12}\) and tolerance \(10^{-12}\), CF4oH is even slightly faster. The reason may be the additional splitting error, which reduces the accuracy attainable for a given computational effort.

Conclusions

We have studied the differences between two fundamentally diverse approaches for the solution of linear non-autonomous systems of differential equations. Exponential-based methods related to the Magnus expansion are contrasted with splitting methods, where the time variable is split off and suitably propagated. Both approaches make it possible to construct asymptotically correct estimators for the local time-stepping error and to implement adaptive time-stepping on this basis. Which method is more efficient depends on the problem structure. If only the time variable is split off, the additional substeps induced in the splitting procedure seem not to be justified from the point of view of computational efficiency. However, if the problem naturally suggests a splitting into a time-dependent and a time-independent part, the splitting approach may be more efficient. For highly oscillatory problems, on the other hand, the splitting error grows too large and Magnus-type integrators are again to be preferred. Our findings may also have an impact on the study of time-dependent differential equations of other classes such as differential-algebraic equations [15], functional and stochastic differential equations [34], or fractional differential equations [29, 38], see also [13, 14].