On the convergence of Lawson methods for semilinear stiff problems

Since their introduction in 1967, Lawson methods have achieved constant interest in the time discretization of evolution equations. The methods were originally devised for the numerical solution of stiff differential equations. Meanwhile, they constitute a well-established class of exponential integrators. The popularity of Lawson methods is in some contrast to the fact that they may have a bad convergence behaviour, since they do not satisfy any of the stiff order conditions. The aim of this paper is to explain this discrepancy. It is shown that non-stiff order conditions together with appropriate regularity assumptions imply high-order convergence of Lawson methods. Note, however, that the term regularity here includes the behaviour of the solution at the boundary. For instance, Lawson methods will behave well in the case of periodic boundary conditions, but they will show a dramatic order reduction for, e.g., Dirichlet boundary conditions. The precise regularity assumptions required for high-order convergence are worked out in this paper and related to the corresponding assumptions for splitting schemes. In contrast to previous work, the analysis is based on expansions of the exact and the numerical solution along the flow of the homogeneous problem. Numerical examples for the Schrödinger equation are included.


Introduction
Exponential integrators are a well-established class of methods for the numerical solution of semilinear stiff differential equations. If the stiff initial value problem stems from a spatial semi-discretization of an evolutionary partial differential equation (PDE), the very form of the domain of the spatial differential operator enters the convergence analysis. The stiff order conditions, which guarantee a certain order of convergence independently of the considered problem, must be independent of the domain of this operator (which, in general, involves certain boundary conditions). This is the main reason why stiff order conditions for exponential integrators are quite involved (see Hochbruck & Ostermann (2005a) and Luan & Ostermann (2013)).
For particular problems, however, fewer conditions are required for obtaining a certain order of convergence. (The same is true for ordinary differential equations (ODEs), where linear problems, e.g., require fewer order conditions for Runge-Kutta methods than nonlinear ones.) It was already observed in Hochbruck & Ostermann (2005b) that periodic boundary conditions do not give any order reduction in exponential integrators of collocation type, in contrast to homogeneous Dirichlet boundary conditions, which restrict the order of convergence considerably (close to the stage order, depending on the precise situation). Full-order convergence for periodic boundary conditions was also noticed in Kassam & Trefethen (2005) and Besse et al. (2017).
A similar behaviour can be observed for Lawson methods, which are obtained by a linear variable transformation from (explicit) Runge-Kutta methods (see Lawson (1967) and Section 2 below). These methods are very attractive, since they can be easily constructed from any known Runge-Kutta method. Unfortunately, Lawson methods exhibit a strong order reduction in general. For particular problems, however, they show full order of convergence (see Cano & González-Pachón (2015), Balac et al. (2016), and Montanelli & Bootland (2016)). By construction, Lawson methods do satisfy the order conditions for non-stiff problems. Such conditions will be called non-stiff or conventional order conditions henceforth. However, Lawson methods do not satisfy any of the stiff order conditions, as detailed in Hochbruck & Ostermann (2005a), Hochbruck & Ostermann (2010), and Luan & Ostermann (2013). This fact can result in a dramatic order reduction, even down to order one for parabolic problems with homogeneous Dirichlet boundary conditions.
So far, the derivation of (stiff) order conditions for exponential integrators was based on standard expansions of the exact and the numerical solution. There, the main assumption on the problem is that the exact solution and its composition with the nonlinearity are both sufficiently smooth in time (see Hochbruck & Ostermann (2005a) and Luan & Ostermann (2013)). Any additional regularity in space is not of immediate benefit in this analysis. This is in contrast to splitting methods, where spatial regularity usually shows up in the form of commutator bounds (see, e.g., Jahnke & Lubich (2000)).
In this paper, we study the convergence behaviour of Lawson methods for semilinear problems. One of the main contributions of this paper is a different expansion of the solution. It is still based on the variation-of-constants formula, but the nonlinearity is expanded along the flow of the homogeneous problem. This expansion can be derived in a systematic way using trees, as in Hairer et al. (1993) and Luan & Ostermann (2013). The expansion of the exact solution is carried out in terms of elementary integrals, that of the numerical solution in terms of elementary quadrature rules. We show that conventional, non-stiff order conditions together with (problem-dependent) assumptions on the exact solution give full order of convergence. This involves regularity of the solution in space and time. Our main result for Lawson methods is stated in Theorems 4.6 and 4.7. We prove that Lawson methods converge with order p if the order of the underlying Runge-Kutta method is at least p and the solution satisfies appropriate regularity assumptions. These conditions are studied in detail for methods of orders one and two, respectively, and they are related to the corresponding conditions that arise in the analysis of splitting methods. In particular, this is worked out for the nonlinear Schrödinger equation. Our error analysis also reveals a different behaviour between the first-order Lawson method and the exponential Euler method, which is visible in numerical experiments.
The outline of the paper is as follows. In Section 2, we recall the construction of Lawson methods. The expansion of the numerical and the exact solution in terms of elementary integrals is given in Section 3. There, we also introduce the analytic (finite-dimensional) framework which typically occurs when discretizing a semilinear parabolic or hyperbolic PDE in space. Order conditions and convergence results are given in Section 4. The resulting regularity assumptions are discussed in Section 5. These assumptions are related to the corresponding conditions for splitting methods. Numerical examples that illustrate the required regularity assumptions and the proven convergence behaviour are also presented.

Lawson methods
Consider a semilinear system of stiff differential equations

u'(t) + A u(t) = g(t, u(t)), u(0) = u_0, (2.1)

where the stiffness stems from the linear part of the equation, i.e., from A, which is either an unbounded linear operator or its spatial discretization, i.e., a matrix. The precise assumptions on A and g will be given in Section 3. For the numerical solution of (2.1), Lawson (1967) considered the change of variables v(t) = e^{tA} u(t). Note that when applied to evolution equations, this transformation has to be understood in a formal way, since e^{tA} might not be a meaningful object in our general framework.
Inserting the new variables into (2.1) gives the transformed differential equation

v'(t) = e^{tA} g(t, e^{-tA} v(t)). (2.2)

For the solution of this problem, an s-stage explicit Runge-Kutta method with coefficients b_i, c_i, a_{ij} is considered. The method is assumed to satisfy the simplifying assumptions c_1 = 0 and

c_i = \sum_{j=1}^{i-1} a_{ij}, i = 1, ..., s. (2.3)

Transforming the Runge-Kutta discretization of (2.2) back to the original variables yields the corresponding Lawson method for (2.1),

U_{ni} = e^{-c_i hA} u_n + h \sum_{j=1}^{i-1} a_{ij} e^{-(c_i - c_j)hA} g(t_n + c_j h, U_{nj}),
u_{n+1} = e^{-hA} u_n + h \sum_{i=1}^{s} b_i e^{-(1-c_i)hA} g(t_n + c_i h, U_{ni}). (2.4)

Here, u_n is the numerical approximation to the exact solution u(t) at time t = t_n = nh, and h is the step size. Note that this method makes explicit use of the action of the matrix exponential function.
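To make the scheme (2.4) concrete, here is a minimal sketch (not taken from the paper) of one Lawson step for the matrix case u' + Au = g(t, u); the Butcher coefficients a, b, c are those of the underlying explicit Runge-Kutta method, and the matrix exponentials are evaluated densely for simplicity.

```python
import numpy as np
from scipy.linalg import expm

def lawson_step(A, g, u, t, h, a, b, c):
    """One step of an explicit s-stage Lawson method for u' + A u = g(t, u).

    a, b, c are the Butcher coefficients of the underlying explicit
    Runge-Kutta method (with c[0] = 0 and c[i] = sum_j a[i][j]).
    This is an illustrative sketch, not an efficient implementation:
    in practice the matrix exponentials would be precomputed or applied
    via a spectral decomposition.
    """
    s = len(b)
    U = [None] * s   # internal stages
    G = [None] * s   # nonlinearity evaluated at the stages
    for i in range(s):
        Ui = expm(-c[i] * h * A) @ u
        for j in range(i):
            Ui = Ui + h * a[i][j] * expm(-(c[i] - c[j]) * h * A) @ G[j]
        U[i] = Ui
        G[i] = g(t + c[i] * h, Ui)
    unew = expm(-h * A) @ u
    for i in range(s):
        unew = unew + h * b[i] * expm(-(1 - c[i]) * h * A) @ G[i]
    return unew
```

With s = 1, b = [1], c = [0] this reduces to the Lawson Euler method; for g = 0 it reproduces the exact flow e^{-hA}u, and for A = 0 it reduces to the underlying Runge-Kutta method.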
Depending on the properties of A, the nodes c_1, ..., c_s have to fulfill particular assumptions; see Assumption 3.1 in the next section. Because of these actions of the matrix exponential, Lawson methods form a particular class of exponential integrators. For a review of such integrators, we refer to Hochbruck & Ostermann (2010).
For a non-stiff ordinary differential equation (2.1), it is obvious that the order of the Runge-Kutta method applied to (2.2) coincides with that of the corresponding Lawson method applied to (2.1). It is the aim of this paper to show that this is also true in the stiff situation, if appropriate regularity assumptions hold (we will explain the meaning of regularity in the context of discretized PDEs in Section 5).

Expansion of the exact and the numerical solution
By adding t' = 1 to (2.1), the differential equation is transformed to autonomous form. It is well known that Runge-Kutta methods of order at least one satisfying (2.3) are invariant under this transformation. Therefore, we restrict ourselves henceforth to the autonomous problem

u'(t) + A u(t) = g(u(t)), u(0) = u_0. (3.1)

Let X be a Hilbert space or a Banach space with norm ‖·‖. Our main assumptions on A and g are as follows.
ASSUMPTION 3.1 The operator A belongs to a family F of linear operators on X for which there exists a constant C_F such that

‖e^{-tA}‖ ≤ C_F (3.2)

holds uniformly for all A ∈ F and all admissible t.

The set of infinitesimal generators of non-expansive (semi)groups in X is a possible choice for the family F. In addition, the above assumption is typically satisfied in situations where (3.1) stems from a spatial discretization of a semilinear parabolic or hyperbolic partial differential equation. The important fact here is that the constant C_F is independent of the spatial mesh width for finite difference and finite element methods, and independent of the number of ansatz functions in spectral methods. As our error bounds derived below do not depend on A itself but only on the constant C_F, they also apply to spatially discretized systems.

ASSUMPTION 3.2 For a given integer p ≥ 0, the nonlinearity g is p times differentiable with bounded derivatives in a neighborhood of the solution of (3.1).
We recall that the solution of (3.1) can be represented in terms of the variation-of-constants formula

u(t_n + h) = e^{-hA} u(t_n) + h \int_0^1 e^{-(1-σ)hA} g(u(t_n + σh)) dσ.

Applying this formula recursively and expanding the nonlinearity along the flow of the homogeneous problem yields the expansion (3.3) of the exact solution. Note that here and throughout the whole section, the constant in the Landau symbol O only depends on C_F and the derivatives of g, but not explicitly on A itself, i.e., not on the stiffness. Also note that this expansion differs considerably from previous work (see, e.g., Hochbruck & Ostermann (2005a); Luan & Ostermann (2013)), where the nonlinearity g(u(t)) was expanded with respect to t.
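The variation-of-constants formula underlying this expansion can be checked numerically in the linear matrix case u' + Au = Bu; the matrices, the step size, and the trapezoidal quadrature below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the variation-of-constants formula for u' + A u = B u:
#   u(h) = e^{-hA} u(0) + h * int_0^1 e^{-(1-s)hA} B u(sh) ds,
# with the integral approximated by a fine trapezoidal rule.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # skew-symmetric, so e^{-tA} is orthogonal
B = np.array([[0.3, 0.1], [0.0, 0.2]])
u0 = np.array([1.0, 2.0])
h = 0.5

u = lambda t: expm(t * (B - A)) @ u0      # exact solution of u' = (B - A) u
s = np.linspace(0.0, 1.0, 2001)
vals = np.array([expm(-(1.0 - si) * h * A) @ (B @ u(si * h)) for si in s])
ds = s[1] - s[0]
integral = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
lhs = u(h)                                # left-hand side of the formula
rhs = expm(-h * A) @ u0 + h * integral    # right-hand side of the formula
```

Up to the quadrature error of the trapezoidal rule, lhs and rhs agree.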
Next we perform a similar expansion of the numerical solution (2.4), which yields (again in the autonomous case) the expansion (3.5). As we have used the variation-of-constants formula and its discrete counterpart, respectively, the expansions of the exact and the numerical solution reflect the well-known tree structure of (explicit) Runge-Kutta methods. In the following we use the classic trees which are well established for studying the non-stiff order conditions for Runge-Kutta methods; see (Hairer et al., 1993, Section II.2), (Hairer et al., 2006, Section III.1), and the references given there. By T we denote the set of unlabeled rooted trees. We recall that these trees are defined recursively: the single node • is a tree, and if τ_1, ..., τ_k ∈ T, then [τ_1, ..., τ_k] ∈ T. Here, [τ_1, ..., τ_k] (a k-tuple without ordering) denotes the tree which is obtained by concatenating the roots of the trees τ_1, ..., τ_k via k branches with a new node. This node becomes the root of the tree [τ_1, ..., τ_k].
For τ ∈ T the elementary differential D(τ) of a smooth function g is defined recursively in the following way. For τ = • we have D(•)(w) = g(w), and for τ = [τ_1, ..., τ_k] we have D(τ)(w) = g^{(k)}(w)(D(τ_1)(w), ..., D(τ_k)(w)). By ρ(τ) we denote the order of the tree τ, which is defined as the number of nodes of τ ∈ T. The trees of order less than or equal to p are denoted by T_p. Here, •_j refers to the variables corresponding to the jth subtree τ_j.
Finally, we define for all τ ∈ T the elementary integrals (see Definition 3.3). For example, the tree with two nodes is [•]. It is straightforward to verify that the elementary integrals satisfy a simple recurrence relation. Our assumptions on g and A ensure that the integrand is bounded for w in a neighborhood of the exact solution of (3.1) and h sufficiently small.

REMARK 3.4 In the nonstiff situation, where A ≡ 0, all evaluations of g or its derivatives are at the fixed value w.
The following theorem shows how the expansion (3.3) can be expressed as a (truncated) B-series.
Proof. The proof is done by induction. For p = 0, the claim follows from the variation-of-constants formula. The induction step follows the lines of the proof of (Hairer et al., 2006, Lemma III.1.9) with the following modifications: we use the variation-of-constants formula and truncate the series in such a way that only the first p derivatives of g enter the expansion. We omit the details.

Now we proceed analogously for the numerical solution, starting with the definition of elementary quadrature rules.

DEFINITION 3.6 For τ ∈ T we define the multivariate quadrature operators I(τ) and I_i(τ), i = 1, ..., s, recursively in the following way.
(a) For τ = • and a univariate function f, we set the corresponding weighted quadrature rule. (b) For τ = [τ_1, ..., τ_k] and a multivariate function f in ρ(τ) variables, we set the rule recursively. Finally, we define the elementary quadrature rules accordingly. It is easy to see from the recursive definitions that the elementary quadrature rules satisfy an analogous recurrence relation. This allows us to express the expansion (3.5) of the numerical solution in terms of elementary quadrature rules.
THEOREM 3.7 The numerical solution of (3.1) satisfies an expansion analogous to that of Theorem 3.5, where the numerical B-series for w ∈ X is defined in terms of the elementary quadrature rules.

Proof. The proof is done analogously to the proof of Theorem 3.5.

Order conditions and convergence
In this section we present a systematic way of deriving general stiff convergence results for Lawson methods based on trees.
The expansions of the exact and the numerical solution in terms of elementary integrals and elementary quadrature rules derived in the previous section allow us to study the local error in the same way as for classical Runge-Kutta methods. In fact, we show that the orders of these quadrature rules determine the local error of the Lawson method. A similar strategy was used in the analysis of splitting methods by Jahnke & Lubich (2000). General stiff order conditions have been derived for exponential Runge-Kutta methods in Luan & Ostermann (2013) and for splitting methods in Hansen & Ostermann (2016).
As usual, we say that the Lawson method is of (stiff) order p if the local error satisfies a bound of the form ‖u_1 - u(t_1)‖ ≤ C h^{p+1} uniformly for smooth nonlinearities and operators A satisfying Assumption 3.1, meaning that the constant C depends on the constant C_F defined in (3.2) but not on A itself.
THEOREM 4.1 The Lawson method (2.4) is of order p if, for all τ ∈ T_p, the elementary quadrature rule I(τ) approximates the corresponding elementary integral G(τ) to order p.
Proof. From Theorems 3.5 and 3.7, the local error is a sum of the differences between the elementary quadrature rules and the corresponding elementary integrals. This proves the statement.
REMARK 4.2 The above derivation can easily be generalized to exponential integrators with a fixed linearization; cf. Hochbruck & Ostermann (2010). If one replaces b_i e^{-(1-c_i)hA} by b_i(-hA) and a_{ij} e^{-(c_i-c_j)hA} by a_{ij}(-hA) in Definition 3.6, Theorems 3.7 and 4.1 also hold for general exponential Runge-Kutta methods. If the stiff order conditions derived in Hochbruck & Ostermann (2005a) and Luan & Ostermann (2013) are satisfied up to order p, then the conditions of Theorem 4.1 are satisfied for all τ ∈ T_p.
EXAMPLE 4.3 For the exponential Euler method, where s = 1, c_1 = 0, and b_1(z) = ϕ_1(z), the condition for order one requires that ‖g(u_0) - g_σ(u_0)‖ ≤ Ch. In the linear case, where g(u) = Bu, this can be rewritten accordingly. Hence the condition is fulfilled if Au_0 is uniformly bounded, i.e., u_0 ∈ D(A). For the convergence, we thus need (4.2). It might be interesting to compare (4.2) to the condition given in (Hochbruck & Ostermann, 2010, Lemma 2.13), which was proved by a Taylor series expansion of g(u(t)). For linear problems, it reads as in (4.3). Hence both results require the same regularity, namely that Au(t) is uniformly bounded. Note, however, that (4.3) does not involve the ϕ_1 function. The latter decays like 1/z as z → ∞ in the closed left half-plane; hence components corresponding to eigenvalues with large negative real part are damped.
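The different damping behaviour of the two first-order schemes can be observed in a small illustrative experiment (not from the paper): both methods are applied to a stiff linear problem u' + Au = Bu with a diagonal stiff part; all problem data below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.linalg import expm

def integrate(method, A, B, u0, T, n):
    """Integrate u' + A u = B u with the Lawson Euler or exponential Euler method."""
    h = T / n
    I = np.eye(len(u0))
    eA = expm(-h * A)
    # phi_1(-hA) = (-hA)^{-1} (e^{-hA} - I); A is invertible in this example
    phi1 = np.linalg.solve(-h * A, eA - I)
    u = u0.astype(float)
    for _ in range(n):
        if method == "lawson":
            u = eA @ (u + h * (B @ u))           # Lawson Euler
        else:
            u = eA @ u + h * (phi1 @ (B @ u))    # exponential Euler
    return u

A = np.diag([1.0, 50.0])                         # stiff diagonal linear part (illustrative)
B = np.array([[0.0, 0.4], [0.4, 0.0]])
u0 = np.array([1.0, 1.0])
T = 1.0
uex = expm(T * (B - A)) @ u0                     # exact solution
errors = {m: [np.linalg.norm(integrate(m, A, B, u0, T, n) - uex) for n in (50, 100)]
          for m in ("lawson", "exp_euler")}
```

For this smooth, well-resolved problem both methods converge with order one; the ϕ_1 weight of the exponential Euler method additionally damps the contribution of the stiff component.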
Proof. First note that for problems with A ≡ 0, the integrand satisfies Ψ(τ)(·, w) ≡ D(τ)(w). On the one hand, classical Runge-Kutta theory implies that the local error (4.1) behaves as O(h^{p+1}) for any sufficiently smooth g. On the other hand, the elementary differentials D(τ) are known to be independent. Hence we obtain that the elementary quadrature rules reproduce the corresponding elementary integrals. The statement follows because the integrand Ψ(τ)(·, w) ≡ D(τ)(w) is constant. This yields (4.4).
The following theorem provides a sufficient condition for Lawson methods to be of (stiff) order p. Here, C^{k,1}(R^d, X) denotes the space of k times continuously differentiable functions which have a Lipschitz continuous kth derivative.

THEOREM 4.6 Let the integrand Ψ(τ) of G(τ) satisfy condition (4.5) for all τ ∈ T_p, where u(t) ∈ X is the solution of (3.1), 0 ≤ t ≤ T. If the underlying Runge-Kutta method is of (conventional) order p, then the Lawson method (2.4) is of (stiff) order p.
This result now allows us to prove an error bound for Lawson methods which is uniform for all problems (3.1) with A satisfying Assumption 3.1.

THEOREM 4.7 Let u be the solution of (3.1) and let the assumptions of Theorem 4.6 be satisfied. If the underlying Runge-Kutta method is of (conventional) order p, then there exists h_0 > 0 such that the global error satisfies ‖u_n - u(t_n)‖ ≤ C h^p for all 0 < h ≤ h_0 and 0 ≤ t_n ≤ T, where C and h_0 are independent of n, h, and A.
Proof. We define an equivalent norm on X in which e^{-tA} is non-expansive. By assumption, g is locally Lipschitz continuous. Then (4.6) and Theorem 3.7 show that the Lawson method is locally Lipschitz continuous with respect to the initial value, with a Lipschitz constant of size 1 + O(h). This implies the required stability.
The error bound follows in a standard way using Lady Windermere's fan.

Regularity conditions and applications
It remains to discuss the regularity conditions (4.5) and to give some applications. We first examine the conditions for orders one and two, respectively. The extension to higher orders is a tedious but straightforward exercise. It turns out that these regularity conditions can all be expressed in terms of commutators, very much like in the case of splitting methods.
In order to obtain simple sufficient conditions, we replace the space C^{k,1}(Ω, X) in condition (4.5) by the subspace of (k + 1)-times partially differentiable functions with uniformly bounded partial derivatives on Ω in the following discussion. This is also justified by the fact that Lipschitz continuous functions are almost everywhere differentiable (Rademacher's theorem).

Condition for order one
Since p = ρ(τ) = 1, we only have to consider the tree τ = • in (4.5). Differentiating with respect to σ yields (5.2), where [F_A, g] denotes the Lie commutator of g and F_A(w) = Aw, defined in (5.3). From this calculation, we conclude the following result. If the bound (5.4) holds with a constant C that is allowed to depend on C_F, then a Lawson method of non-stiff order one also has stiff order one.
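For the linear vector field g(w) = Bw, the commutator structure behind (5.2) can be illustrated with a quick finite-difference check (an illustrative sketch, not part of the paper's analysis): transporting B along the homogeneous flow and differentiating produces exactly the matrix commutator.

```python
import numpy as np
from scipy.linalg import expm

# For the linear vector field g(w) = B w, the Lie commutator [F_A, g] reduces
# to the matrix commutator [A, B] = AB - BA.  Finite-difference check of
#   d/ds ( e^{sA} B e^{-sA} ) = e^{sA} [A, B] e^{-sA}   at s = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.5, 0.2], [0.1, -0.3]])
comm = A @ B - B @ A

eps = 1e-6
conj = lambda s: expm(s * A) @ B @ expm(-s * A)
deriv = (conj(eps) - conj(-eps)) / (2 * eps)   # central difference in s
# deriv agrees with comm up to O(eps^2) plus roundoff
```

This is the mechanism by which the commutator enters the order-one condition.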

Conditions for order two
Stiff order two is achieved if we require the following two regularity conditions. We commence with the first condition and exploit the fact that ∂_σ Ψ(•)(σ, w) is of exactly the same form as (5.1) with g replaced by the vector field [F_A, g]. Hence from (5.2) we obtain the corresponding representation. Therefore, the bound (5.5) should hold with a constant C that is independent of A.
Next, we move to the second condition. Differentiating with respect to σ_1 and σ_2 yields the corresponding derivatives, since by definition (5.3) the derivative of the commutator satisfies (5.6). Moreover, we have two further relations, respectively. From these two relations, we infer that the bounds (5.7) should hold with a constant C that is independent of A.
From the above calculations, we conclude the following result. If the conditions (5.4), (5.5), and (5.7) hold with a constant C that does not depend on A, then a Lawson method of non-stiff order two also has stiff order two.

Conditions for higher order
The following lemma (Lemma 5.1) provides the formulas needed to derive the order conditions for orders larger than two in a systematic way.
Proof. Both parts are proved by induction on m.
(a) For m = 1 the statement was proved in (5.2). The induction step is proved by the same arguments as were used for m = 2 above.
(b) To prove the statement for m = 1, we first note the representation for a general tree τ. Since ∂_η Ψ_η(τ) = -hA Ψ_η(τ) for any tree τ, we obtain the stated formula. On the other hand, by definition (5.3), we have (5.8). Using induction on k, it is easy to see that (5.9) holds. This proves the claim for m = 1. If it holds for some m ≥ 1, then it also holds for m + 1, since the same calculation can be done with [F_A, g]^{(k)} in the role of g^{(k)}.

The lemma thus shows that all derivatives arising in the order conditions can be obtained recursively from the tree structure. Moreover, only commutators, iterated commutators, and their derivatives appear.

Specialisation to linear problems
For the linear evolution equation u'(t) + Au(t) = Bu(t) with a bounded operator B on X, the above conditions (5.4), (5.5), and (5.7) simplify a bit. Having g(u) = Bu, the Lie commutator coincides with the operator commutator of A and B. A first-order Lawson method is of stiff order one if (5.10a) holds. For second order, the conditions read (5.10b)–(5.10d). We recall that such conditions also arise in the analysis of splitting methods, cf. Jahnke & Lubich (2000). Using Lemma 5.1, the above analysis can easily be generalized to higher order, since for linear problems, only long trees have to be considered. For all other trees, which have at least one node with two branches, the integrand Ψ vanishes.
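A short calculation (sketched in the comment below; the matrices are illustrative choices, not from the paper) shows that the one-step defect of the Lawson Euler method for u' + Au = Bu has the leading term (h²/2)(B² + [A, B])u_0, so uniform first order indeed hinges on a bound for the commutator [A, B]:

```python
import numpy as np
from scipy.linalg import expm

# One-step defect of the Lawson Euler method for u' + A u = B u.
# Expanding in powers of h:
#   e^{h(B-A)} u0 - e^{-hA}(I + hB) u0 = (h^2/2) (B^2 + [A, B]) u0 + O(h^3),
# where [A, B] = AB - BA.  We verify the leading term numerically.
A = np.array([[0.0, 2.0], [-2.0, 0.0]])
B = np.array([[0.3, 0.1], [0.2, -0.1]])
u0 = np.array([1.0, 2.0])
h = 1e-3

defect = expm(h * (B - A)) @ u0 - expm(-h * A) @ (u0 + h * (B @ u0))
leading = 0.5 * (B @ B + (A @ B - B @ A)) @ u0
# defect / h^2 agrees with leading up to O(h)
```

When [A, B] applied to the solution stays bounded uniformly in the discretization, the defect is O(h²) per step and the method is of stiff order one, in agreement with (5.10a).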

Nonlinear Schrödinger equations
For the time discretization of nonlinear Schrödinger equations (5.11), split-step methods are commonly viewed as the method of choice. In recent years, however, exponential integrators have been considered as a viable alternative for the solution of (5.11). For instance, Besse et al. (2017) studied exponential integrators in the context of Bose–Einstein condensates; Cano & González-Pachón (2015) and Balac et al. (2016) reported favorable results for Lawson integrators of the form discussed in this paper. Rigorous convergence results, however, are still missing for these methods.
As an application of our analysis, we will use the above regularity conditions (5.4), (5.5), and (5.7) to verify second-order convergence of Lawson methods. We refrain from any particular space discretization and argue in an abstract Hilbert space framework. Note, however, that our reasoning carries over to spatial discretizations (by spectral methods, e.g.) without any difficulty.
For this purpose, we consider (5.11) with periodic boundary conditions on the d-dimensional torus and a smooth potential. Then it is well known (see, e.g., (Kato, 1995, Thm. 4.1)) that the problem is well posed in H^m for m > d/2. The regularity of an initial value u_0 ∈ H^m is thus preserved along the solution. Henceforth we choose m > d/2.
Second-order Strang splitting for (5.11) with f(u) = ±u was rigorously analysed in Lubich (2008). There it was shown that commutator relations similar to our conditions (5.4), (5.5), and (5.7) play a crucial role in the convergence proof for Strang splitting. The analysis given here shows that Lawson methods converge under the same regularity assumptions as splitting schemes. This will now be worked out in detail for first- and second-order methods.
Let A = −i∆ and let g denote the nonlinearity of (5.11), with Fréchet derivative g'. The first commutator [F_A, g] then takes the form (5.12). We next show that the commutator can be bounded in H^m if the solution is in H^{m+2}, for m ≥ 0.
LEMMA 5.2 Let Ω ⊂ R^d, d ≤ 3, be a bounded Lipschitz domain. Then there exists a constant C, which only depends on Ω and d, such that (5.13) holds.

Proof. Note that by the Sobolev embedding theorem we have the bounds (5.14a)–(5.14d); cf. (Lubich, 2008, Section 8).
For m = 0, the bound (5.13) follows by using (5.14a) for the first two terms and (5.14b) for the last one in the explicit expression (5.12) of [F_A, g]. For m = 1, we apply (5.14c) to all terms, and for m ≥ 2 the bound follows from (5.14d).
For Lawson methods, a first-order convergence bound in H m thus requires H m+2 regularity of the exact solution, which is the same regularity as required for the first-order Lie splitting.
For second-order methods, one has to estimate the double commutator [F_A, [F_A, g]]. A simple calculation shows that a bound in H^m requires H^{m+4} regularity of the exact solution. This situation is exactly the same as for second-order Strang splitting (see Lubich (2008)). Using (5.12), we conclude that the derivative of the commutator [F_A, g] can again be bounded in H^m for u, w ∈ H^{m+2}. We thus conclude that Lawson methods require the same regularity for second-order convergence as Strang splitting.
After space discretization (by finite differences, finite elements, or spectral methods), the evolution equation (3.1) becomes an ordinary differential equation (5.15) with a matrix A_N ∈ C^{N×N} and a discretization g_N : C^N → C^N of g, where N denotes the number of degrees of freedom. In order to satisfy Assumption 3.1, the space discretization is required to provide matrices A_N such that

‖e^{-tA_N}‖ ≤ C_F (5.16)

holds with a constant C_F uniform in N and t ∈ R.
In the previous sections we showed that full order of convergence is only guaranteed if certain regularity conditions are satisfied. The aim of the following numerical examples is to show that order reduction can indeed be observed numerically if some of these regularity assumptions are violated. In fact, such order reductions can even be observed for linear problems. Hence we refrain from presenting numerical examples for semilinear problems here; numerous such examples can be found in the literature mentioned above. We also restrict ourselves to the first-order schemes covered by our analysis, the exponential Euler and the Lawson Euler method, since they already show interesting (and different) convergence behaviour.
We consider the linear Schrödinger equation (5.17) with periodic boundary conditions and discretize it in space by a Fourier spectral method on an equidistant grid. Let N be even and denote by F_N the discrete Fourier matrix. Then the matrix A_N is obtained by diagonalizing A = −i∆ in Fourier space, A_N = F_N^{-1} diag(im²) F_N, m = −N/2 + 1, ..., N/2. With this notation, the exact solution of (5.15) is given by

u(t) = e^{t(−A_N + B_N)} u_0. (5.18)

EXAMPLE 5.3 The aim of the first example is to explain that the concept of regularity is relevant even in the ODE context. In order to show what regularity means here, for each N = 2^7, ..., 2^12 we choose a regularity parameter α ≥ 0 and a vector r = (r_m), m = −N/2 + 1, ..., N/2, in C^N of Fourier coefficients whose entries are random numbers uniformly distributed in the unit disc. We then define an initial function as the trigonometric polynomial (5.19). In the limit N → ∞, this sequence of trigonometric polynomials converges to a function in the Sobolev space H^α_per = H^α_per((−π, π)) equipped with the corresponding weighted norm of the Fourier coefficients. In Figure 1 we plot ‖u_0‖_{µ,N} for different values of µ over the number of Fourier modes N. The three graphs clearly show that ‖u_0‖_{µ,N} is bounded independently of the number N of Fourier modes only for µ ≤ α. This corresponds to the continuous case, where, obviously, the Sobolev norm ‖u‖_µ is bounded for all functions u ∈ H^α_per only for µ ≤ α. The example clearly shows that regularity of the corresponding continuous function is crucial to obtain error bounds which do not deteriorate in the limit N → ∞.
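The construction of initial data with prescribed discrete regularity can be sketched as follows; the damping exponent −α − 1/2 is an assumption in the spirit of (5.19), not the paper's exact scaling.

```python
import numpy as np

def random_initial_data(N, alpha, rng):
    """Random trigonometric polynomial mimicking H^alpha regularity.

    Fourier coefficients are random numbers in the unit square of C, damped
    by (1 + |m|)^(-alpha - 1/2).  This scaling is an assumption in the
    spirit of (5.19): the discrete H^mu seminorm then stays bounded in N
    for mu < alpha and grows with N for mu > alpha.
    """
    m = np.arange(-N // 2 + 1, N // 2 + 1)
    r = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)
    return m, r * (1.0 + np.abs(m)) ** (-alpha - 0.5)

def sobolev_seminorm(m, coeffs, mu):
    """Discrete H^mu (semi)norm computed from the Fourier coefficients."""
    return np.sqrt(np.sum((1.0 + np.abs(m)) ** (2 * mu) * np.abs(coeffs) ** 2))
```

Plotting sobolev_seminorm over increasing N for several values of mu reproduces the qualitative behaviour shown in Figure 1.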
After these introductory explanations, we now fix the spatial discretization and set N = 2048. We consider (5.17) with the periodic potential (5.21a) and starting values whose corresponding initial function is contained in H^α_per. The leading error terms of the new analysis for the exponential Euler and the Lawson Euler method are given in (4.2) and (5.10a), respectively. For comparison, we also added the leading error term (4.3) from our previous work.

FIG. 2. Discrete L^∞((0,1), L²(Ω)) error of the numerical solution of (5.17) with periodic potential (5.21a) for the exponential Euler method (top) and the Lawson Euler method (bottom), for starting values in H^α_per. The values of p in the legend show the numerically observed orders of the schemes: α = 0, p = 0.83; α = 0.5, p = 1.00; α = 1, p = 1.00; α = 1.5, p = 1.00; α = 2, p = 1.00.
Since B : H^α_per → H^α_per is a bounded perturbation of A, the exact solution of the continuous problem is guaranteed to stay in H^α_per for initial values in H^α_per, α ≥ 0. For the discrete problem, e^{-σhA_N} and e^{σh(−A_N+B_N)} are unitary matrices, which means that they leave all discrete Sobolev norms ‖·‖_{α,N} invariant. Thus the expression in (5.10a) can be bounded by

‖e^{-(1-σ)hA_N} [A_N, B_N] e^{-σhA_N} u(t)‖_{0,N} ≤ c_1 ‖e^{-σhA_N} u(t)‖_{1,N} = c_1 ‖u(t)‖_{1,N}.

Here, the first inequality was proved in (Jahnke & Lubich, 2000, Lemma 3.1) with a constant c_1 independent of N and A_N.
Hence, the (sufficient but not necessary) order condition (5.10a) for the Lawson Euler method yields order-one convergence for initial values bounded in ‖·‖_α for α ≥ 1. Numerically, we observe an order reduction for α = 0 for the Lawson Euler method, while the exponential Euler method, which requires initial values in H²_per = D(A), cf. (4.2) or (4.3), shows order reduction for α ≤ 1. For α = 0 the error of the exponential Euler method has an irregular behaviour for larger step sizes. To better visualise the order, we added thin lines (blue in the colored version) to all curves related to α = 0. The slopes p of these lines are also given in the legends (blue in the colored version).
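An experiment in the spirit of this section can be reproduced in a few lines; the potential 1 + cos x (standing in for (5.21a)), the Fourier-coefficient scaling, and the small value of N are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def schrodinger_demo(N=64, alpha=2.0, T=1.0, steps=(50, 100), seed=1):
    """Lawson Euler vs. exponential Euler for i u_t = -u_xx + V u, periodic.

    The problem is written as u' + A u = B u with A = F^{-1} diag(i m^2) F
    and B = -i diag(V).  The potential 1 + cos(x) and the coefficient
    scaling (1+|m|)^(-alpha-1/2) are illustrative assumptions.
    """
    m = np.fft.fftfreq(N, 1.0 / N)                 # integer wave numbers
    x = 2 * np.pi * np.arange(N) / N - np.pi
    V = 1.0 + np.cos(x)
    lam = 1j * m**2                                # spectrum of A = -i*Laplacian
    F = np.fft.fft(np.eye(N), axis=0)              # discrete Fourier matrix
    Finv = np.conj(F).T / N
    A = Finv @ np.diag(lam) @ F
    B = np.diag(-1j * V)
    rng = np.random.default_rng(seed)
    r = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)
    u0 = Finv @ (r * (1.0 + np.abs(m)) ** (-alpha - 0.5))   # H^alpha-type data

    uex = expm(T * (B - A)) @ u0                   # exact solution, cf. (5.18)
    errs = {"lawson": [], "exp_euler": []}
    for n in steps:
        h = T / n
        eAv = np.exp(-h * lam)                     # e^{-hA}, diagonal in Fourier space
        phi1 = np.where(np.abs(lam) > 0, (1.0 - eAv) / (h * lam + (lam == 0)), 1.0)
        ul, ue = u0.copy(), u0.copy()
        for _ in range(n):
            ul = Finv @ (eAv * (F @ (ul + h * (-1j * V * ul))))              # Lawson Euler
            ue = Finv @ (eAv * (F @ ue) + h * phi1 * (F @ (-1j * V * ue)))   # exponential Euler
        errs["lawson"].append(np.linalg.norm(ul - uex))
        errs["exp_euler"].append(np.linalg.norm(ue - uex))
    return errs
```

For smooth data (large alpha) both schemes converge with order one; lowering alpha toward 0 reproduces the order reduction discussed above.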
Motivated by the expansion (3.3) of the exact solution, we define elementary integrals.

DEFINITION 3.3 For τ ∈ T and 0 ≤ ζ ≤ 1 we define the elementary integral G_ζ(τ), its integrand Ψ_ζ(τ), and the multivariate integration operator I_ζ(τ) recursively in the following way.
(a) For τ = • and a univariate function f, we set the corresponding weighted integral.

LEMMA 4.5 Let τ ∈ T and κ ∈ N_0^{ρ(τ)}. Here [•]_k = [•, ..., •] denotes the bush with k leaves. Using σ = I_σ(•)1 and the recursive definition of I_ζ(τ), we obtain the stated identity.

This norm is equivalent to ‖·‖, and in the corresponding operator norm e^{-tA} is non-expansive. If e^{-tA} only generates a bounded semigroup, then taking the supremum only over t ≥ 0 shows that −A generates a contraction semigroup in the equivalent norm.
FIG. 1. Illustration of discrete regularity: the discrete H^µ_per Sobolev seminorm ‖u_0‖_{µ,N} is plotted against the number N of Fourier modes, where u_0 is chosen as in (5.20) (and thus corresponds to a function in H^α_per).