Mean-square stability analysis of approximations of stochastic differential equations in infinite dimensions

Mean-square stability analysis treats the (asymptotic) behaviour of the second moment of solutions to stochastic differential equations. This property is discussed for approximations of infinite-dimensional stochastic differential equations, and necessary and sufficient conditions ensuring mean-square stability are given. They are applied to typical discretization schemes such as combinations of spectral Galerkin, finite element, Euler–Maruyama, Milstein, Crank–Nicolson, and forward and backward Euler methods. Furthermore, results on the relation to stability properties of corresponding analytical solutions are provided. Simulations of the stochastic heat equation illustrate the theory.


Introduction
An interesting quantity of a stochastic differential equation (SDE) or a stochastic partial differential equation (SPDE) is the qualitative behaviour of its second moment for large times. Both types of equations can be interpreted as SDEs on a (here separable) Hilbert space (H, ⟨·,·⟩_H). More specifically, let us consider a complete filtered probability space (Ω, A, (F_t, t ≥ 0), P) satisfying the "usual conditions" and the model problem

(1) dX(t) = (AX(t) + F X(t)) dt + G(X(t)) dL(t)

with F_0-measurable, square-integrable initial condition X(0) = X_0. Here, A : D(A) → H is the generator of a C_0-semigroup S = (S(t), t ≥ 0) on H and F is a linear and bounded operator on H, i.e., F ∈ L(H). Furthermore, L denotes a U-valued Q-Lévy process on the real separable Hilbert space (U, ⟨·,·⟩_U) that is assumed to be a square-integrable martingale as considered in [27], with covariance Q ∈ L(U) of trace class, and we let G ∈ L(H; L(U; H)).
Since analytical solutions to SDEs are rarely available, approximations in time and possibly in space by numerical methods have to be considered. The main focus of research in recent years has been on strong and weak convergence as the discretization parameters Δt in time and h in space tend to zero. However, this property does not guarantee that the approximation shares the (asymptotic) mean-square stability properties of the analytical solution. For finite-dimensional SDEs it is known that the specific choice of Δt is essential. The goal of this manuscript is to generalize the theory of asymptotic mean-square stability analysis to a Hilbert space setting. We develop a theory for approximation schemes that has a priori no relation to the original equation (1) and its properties. Later on, we discuss which conditions on (1) and its approximation lead to similar behaviour. An important application of mean-square stability is in multilevel Monte Carlo methods, where combinations of approximations on different space and time grids are computed. If the solution is mean-square unstable on any of the included levels, the estimator fails to behave as it should, see, e.g., [1].
The mean-square stability analysis of numerical approximations of SDEs started with approximations of the one-dimensional geometric Brownian motion, see, e.g., [29, 14, 15]. As has been pointed out in [10, 11], the analysis of higher-dimensional systems and their approximations is also necessary, since the asymptotic behaviour of the corresponding mean-square processes of systems with commuting and non-commuting matrices often differs. The tools to perform mean-square stability analysis of SDE approximations presented in [11] could in principle be used for approximations of infinite-dimensional SDEs by a method of lines approach: after projection onto an N_h-dimensional space, the mean-square stability properties of the resulting finite-dimensional SDEs and their approximations can be determined by considering the eigenvalues of (N_h² × N_h²)-dimensional matrices. However, due to the computational complexity as N_h → ∞, neither the symbolic nor the numerical computation of these eigenvalues can be done for arbitrarily large systems. For this reason, we use an approach based on tensor-product-space-valued processes and properties of tensorized linear operators.
The outline of this article is as follows: Section 2 sets up a theory of mean-square stability analysis for discrete stochastic processes derived from recursions as they appear in approximations of infinite-dimensional SDEs. In the main result, necessary and sufficient conditions for asymptotic mean-square stability are shown. These results are then applied in Section 3 to numerical approximations of (1) based on spatial Galerkin discretization schemes and time discretizations with Euler–Maruyama and Milstein methods, using backward/forward Euler and Crank–Nicolson as rational semigroup approximations. We conclude this work by presenting simulations of stochastic heat equations with spectral Galerkin and finite element methods in Section 4 that illustrate the theory.

Asymptotic mean-square stability analysis
This section is devoted to the setup of asymptotic mean-square stability for families of stochastic processes in discrete time given by recursion schemes as they typically appear in approximations of (1). We derive necessary and sufficient conditions ensuring asymptotic mean-square stability that can be checked in practice, as shown later in Section 3.
Let (V_h, h ∈ (0, 1]) be a family of finite-dimensional subspaces V_h ⊂ H with dim(V_h) = N_h ∈ N, indexed by a refinement parameter h. With the inner product induced by ⟨·,·⟩_H, V_h becomes a Hilbert space with norm ||·||_H. For a linear operator D : V_h → V_h, the operator norm ||D||_{L(V_h)} is therefore given by sup_{v ∈ V_h} ||Dv||_H / ||v||_H and can be seen to coincide with ||D P_h||_{L(H)}, where P_h denotes the orthogonal projection onto V_h. Let us further consider the time interval [0, ∞) and, for convenience, equidistant time steps t_j = jΔt, j ∈ N_0, with fixed step size Δt > 0. Hence, t → ∞ is equivalent to j → ∞. Assume that we are given a sequence of V_h-valued random variables (X^j_h, j ∈ N_0) determined by the linear recursion scheme

(2) X^{j+1}_h = (D^det_{Δt,h} + D^{stoch,j}_{Δt,h}) X^j_h,

where D^det_{Δt,h} ∈ L(V_h) is deterministic and D^{stoch,j}_{Δt,h} is an L(V_h)-valued random variable for all j. In terms of the SDE (1), one can think of D^det_{Δt,h} as the approximation of the solution operator of the deterministic part, while D^{stoch,j}_{Δt,h} approximates the stochastic part. Although, in general, any not necessarily equidistant time discretization (t_j, j ∈ N_0) that satisfies t_j → ∞ as j → ∞ would be sufficient for the following theory, we see in the given SDE example that D^det_{Δt,h} would be j-dependent in this case, which we want to avoid for the sake of readability.
Inspired by properties of standard approximation schemes for (1), we impose the following assumptions on the family (D^{stoch,j}_{Δt,h}, j ∈ N_0).

Assumption 2.1. For all j ∈ N_0, the operator D^{stoch,j}_{Δt,h} is F_{t_{j+1}}-measurable and square-integrable with E[D^{stoch,j}_{Δt,h} | F_{t_j}] = 0.
For the recursion scheme (2), an equilibrium (solution) is given by the zero solution, which is defined as X^j_{h,e} = 0 for all j ∈ N_0. We define mean-square stability of the zero solution of (2) in what follows.

Definition 2.2. Let X_h = (X^j_h, j ∈ N_0) be given by (2) for fixed h and Δt. The zero solution (X^j_{h,e} = 0, j ∈ N_0) of (2) is called mean-square stable if, for every ε > 0, there exists δ > 0 such that E[||X^j_h||²_H] < ε for all j ∈ N_0 whenever E[||X^0_h||²_H] < δ. It is called asymptotically mean-square stable if it is mean-square stable and there exists δ > 0 such that lim_{j→∞} E[||X^j_h||²_H] = 0 whenever E[||X^0_h||²_H] < δ. For convenience, the abbreviation (asymptotic) mean-square stability for the (asymptotic) mean-square stability of the zero solution of (2) or (1) is used if it is clear from the context.
When applied to Y^j = X^j_h, the following lemma provides an equivalent condition for mean-square stability in terms of the tensor-product-space-valued process (X^j_h ⊗ X^j_h, j ∈ N_0). Here, for a general Hilbert space H, the abbreviation H^(2) = H ⊗ H is used, and H^(2) is defined as the completion of the algebraic tensor product with respect to the norm induced by

⟨v, w⟩_{H^(2)} = Σ_{i=1}^N Σ_{j=1}^M ⟨v_{1,i}, w_{1,j}⟩_H ⟨v_{2,i}, w_{2,j}⟩_H,

where v = Σ_{i=1}^N v_{1,i} ⊗ v_{2,i} and w = Σ_{j=1}^M w_{1,j} ⊗ w_{2,j} are representations of elements v and w in the algebraic tensor product.
Lemma 2.3. Let V_h be a finite-dimensional subspace of H. Then, for any sequence (Y^j, j ∈ N_0) of square-integrable V_h-valued random variables, lim_{j→∞} E[||Y^j||²_H] = 0 if and only if lim_{j→∞} E[Y^j ⊗ Y^j] = 0 in H^(2).

Proof. By Parseval's identity, for an orthonormal basis (ψ_1, ..., ψ_{N_h}) of V_h, we have E[||Y^j||²_H] = Σ_{i=1}^{N_h} E[⟨Y^j, ψ_i⟩²_H] and similarly ⟨E[Y^j ⊗ Y^j], ψ_i ⊗ ψ_k⟩_{H^(2)} = E[⟨Y^j, ψ_i⟩_H ⟨Y^j, ψ_k⟩_H]. Therefore, one implication is immediately obtained, while the other follows from the fact that |E[⟨Y^j, ψ_i⟩_H ⟨Y^j, ψ_k⟩_H]| ≤ E[||Y^j||²_H].

This lemma enables us to show the following sufficient condition for asymptotic mean-square stability.
Theorem 2.4. Let X_h = (X^j_h, j ∈ N_0) given by (2) satisfy Assumption 2.1 and set S_j = D^det_{Δt,h} ⊗ D^det_{Δt,h} + E[D^{stoch,j}_{Δt,h} ⊗ D^{stoch,j}_{Δt,h}]. Then the zero solution of (2) is asymptotically mean-square stable if lim_{j→∞} ||S_j ⋯ S_0||_{L(V^(2)_h)} = 0.

Proof. Let us first remark that S_j ∈ L(V^(2)_h) for all j ∈ N_0 by the properties of D^det_{Δt,h} and D^{stoch,j}_{Δt,h} and of the Hilbert tensor product. In order to show asymptotic mean-square stability, it suffices to show E[X^j_h ⊗ X^j_h] → 0 as j → ∞ by Lemma 2.3. For this, consider

E[X^{j+1}_h ⊗ X^{j+1}_h] = E[((D^det_{Δt,h} + D^{stoch,j}_{Δt,h}) X^j_h) ⊗ ((D^det_{Δt,h} + D^{stoch,j}_{Δt,h}) X^j_h)].

The mixed terms vanish by the observation that X^j_h and D^det_{Δt,h} are F_{t_j}-measurable and E[D^{stoch,j}_{Δt,h} | F_{t_j}] = 0 by Assumption 2.1.
Applying Assumption 2.1 once more, we therefore conclude E[X^{j+1}_h ⊗ X^{j+1}_h] = S_j E[X^j_h ⊗ X^j_h] for all j ∈ N_0, and hence E[X^j_h ⊗ X^j_h] = S_{j−1} ⋯ S_0 E[X^0_h ⊗ X^0_h]. Mean-square stability is shown with the computation

||E[X^j_h ⊗ X^j_h]||_{H^(2)} ≤ ||S_{j−1} ⋯ S_0||_{L(V^(2)_h)} E[||X^0_h||²_H].

For asymptotic mean-square stability, note that E[X^j_h ⊗ X^j_h] → 0 for any initial condition, for which a sufficient condition is given by lim_{j→∞} ||S_j ⋯ S_0||_{L(V^(2)_h)} = 0. Therefore, the proof is finished.
In many examples the operators (D^{stoch,j}_{Δt,h}, j ∈ N_0) have a constant covariance, i.e., they satisfy for all j ∈ N_0

(3) E[D^{stoch,j}_{Δt,h} ⊗ D^{stoch,j}_{Δt,h}] = E[D^{stoch,0}_{Δt,h} ⊗ D^{stoch,0}_{Δt,h}].

Often, as in the following example, they are even independent and identically distributed, which implies (3).
Example 2.5. Consider the one-dimensional geometric Brownian motion

dX(t) = λX(t) dt + σX(t) dβ(t)

driven by an adapted, real-valued Brownian motion (β(t), t ≥ 0) with initial condition X(0) = x_0 ∈ R and λ, σ ∈ R. The solution can be approximated by the explicit Euler–Maruyama scheme

X_{j+1} = X_j + λX_j Δt + σX_j Δβ_j

for j ∈ N_0, where Δβ_j = β(t_{j+1}) − β(t_j), or by the Milstein scheme

X_{j+1} = X_j + λX_j Δt + σX_j Δβ_j + (σ²/2) X_j ((Δβ_j)² − Δt).

Then the deterministic operators in (2), D^det_{Δt,EM} = D^det_{Δt,Mil} = 1 + λΔt, are equal for both schemes, and the corresponding approximations of the stochastic integrals are given by

D^{stoch,j}_{Δt,EM} = σΔβ_j and D^{stoch,j}_{Δt,Mil} = σΔβ_j + (σ²/2)((Δβ_j)² − Δt).

Both families of stochastic approximation operators satisfy Assumption 2.1 and do not depend on j. We observe that the equidistant time step Δt is essential here.
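The two schemes in this example can be checked numerically. The following sketch (our own illustration, not part of the original analysis; all function names are ours) computes the closed-form one-step mean-square amplification factors of both schemes, which follow from the Gaussian moments of Δβ_j, and a Monte Carlo estimate of the second moment:

```python
import math
import random

def ms_factor_em(lam, sigma, dt):
    """One-step mean-square amplification factor of Euler-Maruyama for
    dX = lam*X dt + sigma*X dbeta: E[X_{j+1}^2] = factor * E[X_j^2]."""
    return (1.0 + lam * dt) ** 2 + sigma ** 2 * dt

def ms_factor_mil(lam, sigma, dt):
    """The Milstein scheme adds the positive term sigma^4*dt^2/2."""
    return ms_factor_em(lam, sigma, dt) + 0.5 * sigma ** 4 * dt ** 2

def second_moment(lam, sigma, dt, steps, paths, scheme="em", x0=1.0, seed=0):
    """Monte Carlo estimate of E[X_steps^2] for the chosen scheme."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            db = rng.gauss(0.0, math.sqrt(dt))
            incr = lam * dt + sigma * db
            if scheme == "mil":
                incr += 0.5 * sigma ** 2 * (db * db - dt)
            x += x * incr
        acc += x * x
    return acc / paths

# lam = -2, sigma = 1: the SDE itself is mean-square stable (2*lam + sigma^2 < 0),
# and with dt = 1/2 both discrete factors are below one as well.
print(ms_factor_em(-2.0, 1.0, 0.5))   # 0.5
print(ms_factor_mil(-2.0, 1.0, 0.5))  # 0.625
print(ms_factor_em(-2.0, 1.0, 2.0))   # 11.0 -> unstable for this large dt
```

The factor below one guarantees geometric decay of the second moment, which the sampled estimate reproduces.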
Having this example in mind, we are able to give a necessary and sufficient condition for asymptotic mean-square stability when assuming (3) and therefore to specify Theorem 2.4. The condition relies on the spectrum of a single linear operator S ∈ L(V^(2)_h).

Corollary 2.6. Let X_h = (X^j_h, j ∈ N_0) given by (2) satisfy Assumption 2.1 and (3), and set S = D^det_{Δt,h} ⊗ D^det_{Δt,h} + E[D^{stoch,0}_{Δt,h} ⊗ D^{stoch,0}_{Δt,h}]. Then the zero solution of (2) is asymptotically mean-square stable if and only if ρ(S) < 1, where ρ(S) denotes the spectral radius of S.

Proof. Setting S_j = S for all j ∈ N_0 in Theorem 2.4, we obtain by the same arguments lim_{j→∞} E[X^j_h ⊗ X^j_h] = 0 if and only if lim_{j→∞} S^j = 0, which is equivalent to ρ(S) < 1 by the same arguments as, e.g., in [8, 17, 11]. This completes the proof of the first statement. Since ρ(S) ≤ ||S||_{L(V^(2)_h)}, a sufficient condition for asymptotic mean-square stability is given by ||S||_{L(V^(2)_h)} < 1.

In the framework of SDE approximations, note that this corollary is an SPDE version, formulated with operators, of the results for finite-dimensional linear systems in [11]. There, the proposed method relies on a matrix eigenvalue problem. For SPDE approximations, this approach is not suitable, since the dimension of the considered eigenvalue problem increases heavily with space refinement. More precisely, for h > 0, the spectral radius of an (N_h² × N_h²)-matrix has to be computed. To overcome this problem, we perform, in what follows, mean-square stability analysis of SPDE approximations based on operators as introduced above.
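For very small N_h, the tensorized operator S can still be assembled explicitly and its spectral radius computed; the following minimal sketch (our own toy problem, two modes driven by a single shared Brownian motion and discretized by forward Euler, with illustrative parameter values) makes the N_h² blow-up concrete while verifying Corollary 2.6 in a case where the answer is known analytically:

```python
def kron(A, B):
    """Kronecker product of two square matrices (nested lists); this is the
    matrix representation of the tensorized operator A (x) B."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_radius(S, iters=500):
    """Power iteration on S^T S gives the 2-norm of S; for the diagonal
    operator assembled below this coincides with the spectral radius."""
    n = len(S)
    St = [[S[j][i] for j in range(n)] for i in range(n)]
    M = matmul(St, S)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam ** 0.5

# Toy problem: dx_k = -a_k x_k dt + b_k x_k dbeta with one shared Brownian
# motion, so E[D_stoch (x) D_stoch] = dt * B (x) B.
dt, a, b = 0.1, [1.0, 4.0], [0.5, 0.25]
Ddet = [[1.0 - dt * a[0], 0.0], [0.0, 1.0 - dt * a[1]]]
B = [[b[0], 0.0], [0.0, b[1]]]
S = [[x + dt * y for x, y in zip(r1, r2)]
     for r1, r2 in zip(kron(Ddet, Ddet), kron(B, B))]
# Diagonal entries are (1 - dt*a_k)(1 - dt*a_l) + dt*b_k*b_l; the largest,
# 0.835, is the spectral radius, so the zero solution is stable here.
```

With N_h modes the matrix already has N_h⁴ entries, which is exactly why the operator-based approach of the text is preferred.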

Application to Galerkin methods
We continue by applying the previous results to the analysis of some classical numerical approximations of (1), which admits by results in [27, Chapter 9] an (up to modification) unique mild càdlàg solution, given for t ≥ 0 by

(4) X(t) = S(t)X_0 + ∫_0^t S(t−s) F X(s) ds + ∫_0^t S(t−s) G(X(s)) dL(s).

We assume further that the operator −A : D(A) ⊂ H → H is densely defined, self-adjoint, and positive definite with compact inverse. This implies that −A has a non-decreasing sequence of positive eigenvalues (λ_i, i ∈ N) with an orthonormal basis of eigenfunctions (e_i, i ∈ N) in H, and fractional powers of −A are well-defined: for r ≥ 0, Ḣ^r = D((−A)^{r/2}) equipped with the norm ||v||_r = ||(−A)^{r/2} v||_H defines a separable Hilbert space (see, e.g., [20, Appendix B]).

Let the sequence (V_h, h ∈ (0, 1]) of finite-dimensional subspaces be as in Section 2 with V_h ⊂ Ḣ¹, and define the discrete operator A_h : V_h → V_h by

⟨−A_h v_h, w_h⟩_H = ⟨(−A)^{1/2} v_h, (−A)^{1/2} w_h⟩_H for all v_h, w_h ∈ V_h.

This definition implies that −A_h is self-adjoint and positive definite on V_h and therefore has a sequence of orthonormal eigenfunctions (e_{h,i}, i = 1, ..., N_h) and positive, non-decreasing eigenvalues (λ_{h,i}, i = 1, ..., N_h) (see, e.g., [20, Chapter 3]). By using basic properties of the Rayleigh quotient, we bound the smallest eigenvalue λ_{h,1} of −A_h from below by the smallest eigenvalue λ_1 of −A through

(5) λ_{h,1} ≥ λ_1,

since V_h ⊂ H, cf. [9]. This estimate turns out to be useful when comparing asymptotic mean-square stability of (1) and its approximation later in this section.
Let the covariance Q of the Lévy process L be self-adjoint, positive semidefinite, and of trace class. Then results in [27, Chapter 4] imply the existence of an orthonormal basis (f_i, i ∈ N) of U and a non-increasing sequence of non-negative real numbers (µ_i, i ∈ N) such that Qf_i = µ_i f_i for all i ∈ N, and L admits the Karhunen–Loève expansion

(6) L(t) = Σ_{i∈N} √µ_i L_i(t) f_i,

where (L_i, i ∈ N) is a family of real-valued, square-integrable, uncorrelated Lévy processes satisfying E[(L_i(t))²] = t for all t ≥ 0. Note that due to the martingale property of L, the real-valued Lévy processes satisfy E[L_i(t)] = 0 for all t ≥ 0 and i ∈ N. This implies, together with the stationarity of the Lévy increments ΔL^j_i = L_i(t_{j+1}) − L_i(t_j), that E[ΔL^j_i] = 0 and E[(ΔL^j_i)²] = Δt for all i ∈ N and j ∈ N_0. Since the series representation of L can be infinite, an approximation of L might be required to implement a fully discrete approximation scheme, which is typically done by truncation of the Karhunen–Loève expansion, i.e., for κ ∈ N, set L^κ(t) = Σ_{i=1}^κ √µ_i L_i(t) f_i. Note that the choice of κ is essential and should be coupled with the overall convergence of the numerical scheme, as discussed in, e.g., [3, 5, 25]. Within this work, we consider numerical methods based on the original Karhunen–Loève expansion (6) of L. However, this does not restrict the applicability of the results, since L^κ fits in the framework by setting µ_i = 0 for all i > κ.
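A truncated Karhunen–Loève increment can be sampled directly from the expansion above. The sketch below is our own illustration, assuming for concreteness a Wiener noise with the sine basis f_i(x) = √2 sin(iπx) and an illustrative decay µ_i = i^{−2}; all function names are ours:

```python
import math
import random

def kl_increment(kappa, dt, mu, rng):
    """One increment of the truncated Karhunen-Loeve expansion: the list of
    coefficients sqrt(mu_i) * dbeta_i for i = 1, ..., kappa."""
    return [math.sqrt(mu(i)) * rng.gauss(0.0, math.sqrt(dt))
            for i in range(1, kappa + 1)]

def evaluate(coeffs, x):
    """Evaluate the sampled field at x in [0, 1] using the sine basis
    f_i(x) = sqrt(2) * sin(i*pi*x) (an illustrative choice)."""
    return sum(c * math.sqrt(2.0) * math.sin((i + 1) * math.pi * x)
               for i, c in enumerate(coeffs))

mu = lambda i: i ** -2.0          # illustrative eigenvalue decay
rng = random.Random(42)
coeffs = kl_increment(100, 0.01, mu, rng)
field_mid = evaluate(coeffs, 0.5)
# The truncated trace sum(mu_i, i <= kappa) controls the total noise
# intensity retained by the truncation.
truncated_trace = sum(mu(i) for i in range(1, 101))
```

As the text notes, in practice κ should be coupled to the discretization parameters rather than fixed ad hoc as here.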
As a standard example in this context we consider the stochastic heat equation, which is used for the simulations in Section 4.

Example 3.1. Let H = L²([0, 1]) and let A = ν∆, ν > 0, be the Laplace operator with homogeneous Dirichlet boundary conditions. Then the SPDE

dX(t) = ν∆X(t) dt + G(X(t)) dL(t)

is referred to as the (homogeneous) stochastic heat equation. It is known (see, e.g., [20, Chapter 6]) that the eigenvalues and eigenfunctions of the operator −A are given by

λ_i = νπ²i² and e_i(x) = √2 sin(iπx), i ∈ N, x ∈ [0, 1].

We first assume, for simplicity, that U = H = L²([0, 1]) and that the operator Q diagonalizes with respect to the eigenbasis of −A, i.e., f_i = e_i for all i ∈ N. For this choice, we consider the operator G = G_1 that gives rise to a geometric Brownian motion in infinite dimensions, cf. [20, Section 6.4]. It is defined for all u, v ∈ H by

G_1(u)v = Σ_{i∈N} ⟨v, e_i⟩_H ⟨u, e_i⟩_H e_i.
As a second example, we let U = Ḣ¹ with the same diagonalization assumption as before. Here, we let the operator G = G_2 be a Nemytskii operator, defined pointwise for u ∈ H and v ∈ Ḣ¹ by (G_2(u)v)(x) = u(x)v(x) for x ∈ [0, 1]. To see that G_1 ∈ L(H; L(U; H)), note that for u, v ∈ H, by the triangle inequality and the Cauchy–Schwarz inequality, we have ||G_1(u)v||_H ≤ ||u||_H ||v||_H. Next, for G_2 with v ∈ H and u ∈ Ḣ¹, it holds that

||G_2(v)u||_H ≤ ||u||_∞ ||v||_H ≤ √2 (Σ_{i∈N} λ_i^{-1})^{1/2} ||u||_1 ||v||_H.

Here, the first inequality follows by bounding |u(x)|, while the second is an application of the Cauchy–Schwarz inequality, using the fact that the sequence (λ_i^{-1}, i ∈ N) is summable.

3.1. Time discretization with rational approximations. Let us first recall that a rational approximation of order p of the exponential function is a rational function R : C → C satisfying that there exist constants C, δ > 0 such that

|R(z) − e^z| ≤ C|z|^{p+1} for all z ∈ C with |z| < δ.

Since R is rational, there exist polynomials r_n and r_d such that R = r_d^{-1} r_n. We want to consider rational approximations of the semigroup S generated by the operator −A and of its approximations −A_h as they were considered in [30]. With the introduced notation, the linear operator R(ΔtA_h) is given for all v_h ∈ V_h by

(7) R(ΔtA_h)v_h = Σ_{i=1}^{N_h} R(−Δtλ_{h,i}) ⟨v_h, e_{h,i}⟩_H e_{h,i}.

Let us start with the mean-square stability properties of a Galerkin Euler–Maruyama method, which is given by the recursion

(8) X^{j+1}_h = D^det_{Δt,h} X^j_h + D^{EM,j}_{Δt,h} X^j_h

for j ∈ N_0 with initial condition X^0_h = P_h X_0, where

(9) D^det_{Δt,h} = R(ΔtA_h) + Δt r_d(ΔtA_h)^{-1} P_h F and D^{EM,j}_{Δt,h} = r_d(ΔtA_h)^{-1} P_h G(·) ΔL^j

with ΔL^j = L(t_{j+1}) − L(t_j). Note that the linear operators (D^{EM,j}_{Δt,h}, j ∈ N_0) satisfy all assumptions of Corollary 2.6 since they only depend on the Lévy increments (ΔL^j, j ∈ N_0). For this type of numerical approximation, the result from Corollary 2.6 can be specified:

Proposition 3.2. The zero solution of the numerical method (8) is asymptotically mean-square stable if and only if ρ(S) < 1, where S = D^det_{Δt,h} ⊗ D^det_{Δt,h} + E[D^{EM,0}_{Δt,h} ⊗ D^{EM,0}_{Δt,h}] and the second term is determined by q = Σ_{i∈N} µ_i f_i ⊗ f_i.

Proof. Since E[ΔL^j ⊗ ΔL^j] = Δt q by Lemma A.1, the proof is completed with Corollary 2.6.
The still rather abstract condition can be specified to an explicit sufficient condition.
Corollary 3.3. A sufficient condition for asymptotic mean-square stability of (8) is obtained by bounding ||S||_{L(V^(2)_h)} from above.

Proof. We first note that, by the triangle inequality and the properties of the linear operator induced by the rational approximation R defined in (7), we can bound the deterministic part D^det_{Δt,h} ⊗ D^det_{Δt,h} and, similarly, the stochastic part. Combining both bounds, we obtain the claimed condition, which is sufficient by Corollary 2.6.
We continue with the higher-order Milstein scheme. Applying [3] in our context yields

(10) X^{j+1}_h = D^det_{Δt,h} X^j_h + (D^{EM,j}_{Δt,h} + D^{M,j}_{Δt,h}) X^j_h,

where D^det_{Δt,h} and D^{EM,j}_{Δt,h} are as in (9) and D^{M,j}_{Δt,h} denotes the iterated-integral correction term.

Remark 3.4. In order to compute the iterated integrals of D^{M,j}_{Δt,h}, one may assume (cf. [3, 16]) that for all H-valued, adapted stochastic processes χ = (χ(t), t ≥ 0) and all i, j ∈ N, the diffusion operator G satisfies the commutativity condition

G(G(χ)f_i)f_j = G(G(χ)f_j)f_i.

Under this assumption, which is satisfied in Example 3.1, D^{M,j}_{Δt,h} simplifies to an expression in terms of increments and quadratic covariations. Here, [L_k, L_ℓ]_t denotes the quadratic covariation of L_k and L_ℓ evaluated at t ≥ 0, which is straightforward to compute when L_k, L_ℓ are jump-diffusion processes (cf. [3]). For the simulation of more general Lévy processes in the context of SPDE approximation, we refer to [13, 7].
As for the Euler-Maruyama scheme, Corollary 2.6 can be specified for this Milstein scheme.
Proposition 3.5. Assume that the bilinear mapping C(u_1, u_2) = r_d(ΔtA_h)^{-1} P_h G(G(·)u_1)u_2 for u_1, u_2 ∈ U can be uniquely extended to a mapping C ∈ L(U^(2), L(V_h)). Then the zero solution of (10) is asymptotically mean-square stable if and only if ρ(S) < 1, with C as above and q as in Proposition 3.2.

Proof. The operators in L(V^(2)_h) are well-defined by the same arguments as in Proposition 3.2. Since D^{stoch,j}_{Δt,h} = D^{EM,j}_{Δt,h} + D^{M,j}_{Δt,h}, we obtain four terms for E[D^{stoch,j}_{Δt,h} ⊗ D^{stoch,j}_{Δt,h}]. The first term and D^det_{Δt,h} ⊗ D^det_{Δt,h} are given in Proposition 3.2. We conclude for the second term with Lemma A.2 that E[Δ^(2)L^j ⊗ ΔL^j] = 0, and analogously the same for the third term. Finally, Lemma A.2 yields the expression for E[Δ^(2)L^j ⊗ Δ^(2)L^j], and the statement follows directly from Corollary 2.6.
Remark 3.6. The assumption on C in Proposition 3.5 holds for the operators G_1 and G_2 in the setting of Example 3.1. One can dispense with this assumption by using that the bound on G ∈ L(H; L(U; H)) allows for an extension of the bilinear mapping to the projective tensor product space U ⊗_π U, cf. [18]. One would then have to assume additional regularity on L to ensure that Δ^(2)L^j in the proof of Proposition 3.5 lies in the space L²(Ω; U ⊗_π U). Alternatively, one considers finite-dimensional truncated noise, which leads to equivalent norms.

3.2. Examples of rational approximations. Let us next consider specific choices of rational approximations R and investigate their influence on mean-square stability. First, we derive sufficient conditions based on Corollary 3.3 for Euler–Maruyama schemes with standard rational approximations. More specifically, we consider the backward Euler scheme R(z) = (1 − z)^{-1}, the Crank–Nicolson scheme R(z) = (1 + z/2)(1 − z/2)^{-1}, and the forward Euler scheme R(z) = 1 + z.

Theorem 3.7. Consider the approximation scheme (8).
By the same arguments, we obtain for the forward Euler scheme that |R(−Δtλ_{h,i})| is maximized either at z = −Δtλ_{h,1} or z = −Δtλ_{h,N_h}. Therefore, since r_d^{-1}(z) = 1, the claim follows again with Corollary 3.3, which finishes the proof.
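The behaviour of the three stability functions on the negative real axis, which drives the case distinctions above, can be tabulated directly (a small sketch of our own; function names are ours):

```python
def R_be(z):
    """Backward Euler, R(z) = 1 / (1 - z)."""
    return 1.0 / (1.0 - z)

def R_cn(z):
    """Crank-Nicolson, R(z) = (1 + z/2) / (1 - z/2)."""
    return (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

def R_fe(z):
    """Forward Euler, R(z) = 1 + z."""
    return 1.0 + z

# For z = -dt*lambda with lambda > 0, backward Euler and Crank-Nicolson
# satisfy |R(z)| < 1 for every z < 0, while forward Euler needs
# |1 + z| < 1, i.e. dt*lambda < 2 -- the source of the step size restriction.
for z in (-0.5, -2.0, -10.0):
    print(z, abs(R_be(z)), abs(R_cn(z)), abs(R_fe(z)))
```

This is only the deterministic part of the stability analysis; the noise terms in Theorem 3.7 tighten these bounds further.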
For the Milstein scheme, Proposition 3.5 yields the following sufficient condition.

Proposition 3.8. Under the assumptions of Proposition 3.5, the Milstein scheme (10) with R(z) = (1 − z)^{-1} is asymptotically mean-square stable under a condition analogous to that of Theorem 3.7(1) with an additional positive term stemming from the iterated integrals.

Proof. In the same way as in the proof of Corollary 3.3, we bound ||S||_{L(V^(2)_h)}. Hence, our assumption ensures that ||S||_{L(V^(2)_h)} < 1, which by Corollary 2.6 is sufficient for asymptotic mean-square stability.
Note that the sufficient condition for the Milstein scheme is more restrictive than the sufficient condition presented in Theorem 3.7(1) for the backward Euler-Maruyama method due to the additional positive term in the estimate in Proposition 3.8.

3.3. Relation to the mild solution.
To connect existing results on asymptotic mean-square stability of (1) to the results for discrete schemes in Section 3.2, we have to restrict ourselves to Q-Wiener processes W = (W(t), t ≥ 0) due to the framework for analytical solutions in [24]. Specifically, we consider

(11) dX(t) = (AX(t) + F X(t)) dt + G(X(t)) dW(t).

The following special case of [24, Proposition 3.1.1] provides a sufficient condition for the asymptotic mean-square stability of (1) by a Lyapunov functional approach.

Theorem 3.9. Assume that X_0 = x_0 ∈ Ḣ¹ is deterministic and that there exists c > 0 such that

2⟨Av + Fv, v⟩_H + ||G(v)Q^{1/2}||²_{L_2(U;H)} ≤ −c ||v||²_H for all v ∈ Ḣ².

Then the zero solution of (11) is asymptotically mean-square stable.
We use this theorem to derive simultaneous sufficient mean-square stability conditions for (11) and the corresponding backward Euler scheme (8).
Corollary 3.10. Assume that X_0 = x_0 ∈ Ḣ¹ is deterministic. Then the zero solutions of (11) and its approximation (8) with R(z) = (1 − z)^{-1} are asymptotically mean-square stable for all h and Δt if

(12) −2λ_1 + 2||F||_{L(H)} + Tr(Q) ||G||²_{L(H;L(U;H))} < 0.

Proof. We show first that (12) yields asymptotic mean-square stability of (11) by reducing it to the assumption in Theorem 3.9. For the second term there, note that for any v ∈ Ḣ²,

||G(v)Q^{1/2}||²_{L_2(U;H)} = Σ_{i∈N} µ_i ||G(v)f_i||²_H ≤ Tr(Q) ||G||²_{L(H;L(U;H))} ||v||²_H,

where the first equality follows from the properties of the trace. The first term satisfies

2⟨Av + Fv, v⟩_H ≤ (−2λ_1 + 2||F||_{L(H)}) ||v||²_H,

using the definition of ||·||_1. Altogether, we therefore obtain the assumption of Theorem 3.9, i.e., with (12) asymptotic mean-square stability of (11), by setting c = 2λ_1 − 2||F||_{L(H)} − Tr(Q)||G||²_{L(H;L(U;H))} > 0. We continue with (8), observing first that λ_{h,1} ≥ λ_1 by (5). Therefore, the condition in Theorem 3.7(1) is satisfied, which finishes the proof, since the first term there is negative by assumption and so is the second, using (12) and λ_{h,1} ≥ λ_1.

Note that under (12) in Corollary 3.10, the backward Euler–Maruyama scheme preserves the qualitative behaviour of the analytical solution without any restriction on h and Δt. Hence, it can be applied to numerical methods requiring different refinement parameters in parallel, such as multilevel Monte Carlo methods, which efficiently approximate quantities E[ϕ(X(T))] (see, e.g., [6, 4] for details). Here, it is essential that the behaviour is preserved on all refinement levels [1].
On the other hand, note that in the homogeneous case, i.e., F = 0, the stability condition in Theorem 3.7(1) reduces to a condition that is satisfied for all sufficiently large Δt, so that even if (1) is asymptotically mean-square unstable, its approximation (8) can always be rendered stable by choosing Δt large enough. In that case the analytical solution and its approximation have a different qualitative behaviour for large times.
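This spurious stabilization can already be seen in a scalar reduction. The sketch below is our own illustration: the factor is the mean-square amplification of a semi-implicit backward Euler–Maruyama step for the test equation dx = −λx dt + √µ x dβ, and the parameter values are ours:

```python
def be_ms_factor(lam, mu, dt):
    """Mean-square amplification factor of the semi-implicit backward Euler
    Euler-Maruyama step x_{j+1} = (x_j + sqrt(mu)*x_j*dbeta) / (1 + lam*dt)
    for the scalar test equation dx = -lam*x dt + sqrt(mu)*x dbeta."""
    return (1.0 + mu * dt) / (1.0 + lam * dt) ** 2

# lam = 1, mu = 3: the continuous equation is mean-square unstable
# (-2*lam + mu = 1 > 0), yet the scheme is stable for large dt.
print(be_ms_factor(1.0, 3.0, 0.01))  # > 1: matches the unstable equation
print(be_ms_factor(1.0, 3.0, 10.0))  # < 1: spurious stabilization
```

As Δt → 0 the factor behaves like 1 + (µ − 2λ)Δt and recovers the continuous behaviour, while the (1 + λΔt)^{-2} damping dominates for large Δt.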
Remark 3.11.Based on Theorem 3.7, it is also possible to examine the relation between asymptotic mean-square stability of (11) and its approximation by the other rational approximations.However, due to the nature of the sufficient conditions in Theorem 3.7, analogous results to Corollary 3.10 include restrictions on h and ∆t.
For the Milstein scheme considered in Proposition 3.8, we can also derive a sufficient condition for simultaneous mean-square stability that does not rely on h and Δt. However, due to the additional term in Proposition 3.8, the condition becomes slightly more restrictive than in Corollary 3.10. More precisely, we obtain the following:

Corollary 3.12. Assume that X_0 = x_0 ∈ Ḣ¹ is deterministic and F = 0. Then the zero solutions of (11) and its Milstein approximation (10) with R(z) = (1 − z)^{-1} are asymptotically mean-square stable for all h and Δt if the correspondingly strengthened version of (12) from Proposition 3.8 holds.

Proof. The asymptotic mean-square stability of (11) follows by Corollary 3.10, since the assumed condition implies −2λ_1 + Tr(Q)||G||²_{L(H;L(U;H))} < 0. The sufficient condition for (10) in Proposition 3.8 can be rewritten in the same form, so that it is implied as well. Thus, asymptotic mean-square stability of (10) follows.

Simulations
In this section we adopt the setting of Example 3.1 and use numerical simulations to illustrate our theoretical results. More specifically, we consider the stochastic heat equation driven by the Q-Wiener process

W(t) = Σ_{i∈N} √µ_i β_i(t) e_i,

where (β_i, i ∈ N) is a sequence of independent, real-valued Brownian motions, and assume µ_i = C_µ i^{−α} with C_µ > 0 and α > 1. Here, C_µ scales the noise intensity and α controls the space regularity of W, see, e.g., [25, 23].
4.1. Spectral Galerkin methods. For G = G_1 in Example 3.1, we obtain with the approach presented in [20, Section 6.4] the infinite-dimensional counterpart of the geometric Brownian motion

(13) X(t) = Σ_{i∈N} x_i(t) e_i,

where each of the coefficients x_i(t) is the solution to the one-dimensional geometric Brownian motion

dx_i(t) = −λ_i x_i(t) dt + √µ_i x_i(t) dβ_i(t).

Furthermore, the second moment is given by E[x_i(t)²] = x_i(0)² exp((−2λ_i + µ_i)t). Consequently, asymptotic mean-square stability of (13) holds if and only if −2λ_i + µ_i < 0 for all i ∈ N. By using the explicit representations of the eigenvalues λ_i and µ_i, this corresponds to 2νπ²i² > C_µ i^{−α} for all i ∈ N, i.e., (13) is asymptotically mean-square unstable if and only if −2νπ² + C_µ ≥ 0. For the spectral Galerkin approximation, we choose V_h = span(e_1, ..., e_{N_h}), N_h < ∞. Thus, we consider X_h(t) = Σ_{k=1}^{N_h} x_k(t) e_k. To obtain a fully discrete scheme, we approximate the one-dimensional geometric Brownian motions in time by the three considered rational approximations in Theorem 3.7.
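The mode-wise condition above is easy to check numerically; in the sketch below (our own helper functions), checking the first mode in fact suffices, since λ_i grows while µ_i decays:

```python
import math

def mode_exponent(i, nu, c_mu, alpha):
    """Mean-square exponent -2*lambda_i + mu_i of spectral mode i for the
    heat equation, with lambda_i = nu*pi^2*i^2 and mu_i = c_mu * i^(-alpha)."""
    return -2.0 * nu * math.pi ** 2 * i ** 2 + c_mu * i ** (-alpha)

def is_ms_stable(nu, c_mu, alpha, n_modes=1000):
    """All modes decay iff mode 1 does, because lambda_i is increasing and
    mu_i is decreasing; a finite range is scanned here for illustration."""
    return all(mode_exponent(i, nu, c_mu, alpha) < 0.0
               for i in range(1, n_modes + 1))

print(is_ms_stable(1.0, 1.0, 3.0))    # True:  -2*pi^2 + 1 < 0
print(is_ms_stable(1.0, 20.0, 3.0))   # False: -2*pi^2 + 20 > 0
```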
Using a Milstein scheme instead, we obtain for S in Proposition 3.5, with similar computations as before, a diagonal operator as well, using that the commutativity condition in Remark 3.4 is fulfilled and that Δ[β_k, β_ℓ]_j = Δt δ_{kℓ}. Note that for both operators S, the eigenvalues Λ_{k,ℓ} with k ≠ ℓ satisfy |Λ_{k,ℓ}| ≤ |Λ_{s,s}|, where s is such that |R(−Δtλ_s)| = max_{j=1,...,N_h} |R(−Δtλ_j)|. Hence, ρ(S) < 1 is equivalent to |Λ_{k,k}| < 1 for all k = 1, ..., N_h. In Table 1, the eigenvalues Λ_{k,k} and sufficient and necessary conditions for asymptotic mean-square stability are collected.
As noted above, (13) is asymptotically mean-square stable if and only if the condition −2λ_1 + µ_1 < 0 holds. As can be seen from Table 1 and the choice of the eigenvalues, the Euler–Maruyama scheme (8) with backward Euler and Crank–Nicolson rational approximations shares this property without any restriction on V_h and Δt. In Figure 1(a), the qualitative behaviour of the Euler–Maruyama method with the three rational approximations in Theorem 3.7 is compared. We choose ν = 1, N_h = 15, and µ_i = i^{−3} for i ∈ N, i.e., C_µ = 1 and α = 3. Since −2λ_1 + C_µ = −2π² + 1 < 0, the analytical solution to (13) is asymptotically mean-square stable.
For the approximation of E[||X^j_h||²_H] we use a Monte Carlo simulation with M = 10⁶ samples, i.e., we approximate

E[||X^j_h||²_H] ≈ (1/M) Σ_{i=1}^M Σ_{k=1}^{N_h} (x^{j,(i)}_k)²,

where (x^{j,(i)}_k, i = 1, ..., M) consists of independent samples of numerical approximations of x_k(t_j) with the different schemes. The reference solution is E[||X(t_j)||²_H] = Σ_{k=1}^{N_h} x_k(0)² exp((−2λ_k + µ_k)t_j). As can be seen in Figure 1(a), the backward Euler and the Crank–Nicolson scheme reproduce the mean-square stability of (13) already for large time step sizes (Δt = 1/25), but the forward Euler scheme requires a 44 times smaller Δt. Here, the finest time step size is given by Δt = 1/1100, which satisfies the restrictive bound in Table 1 such that ρ(S) < 1.

Table 1. Spectral Galerkin methods with corresponding eigenvalues Λ_{k,k} and asymptotic mean-square stability conditions.
Due to a rapid amplification of oscillations caused by negative values of X j h for coarser time step sizes outside the stability region (∆t = 1/1000 and 1/1050), the mean-square process deviates rapidly from the reference solution at a certain time point.
In Figure 1(b), the qualitative behaviour of the Euler–Maruyama and the Milstein scheme with a backward Euler rational approximation is compared on the time interval [0, 5]. The parameters ν = 8/(5π⁴) and µ_i = (3/10) i^{−3} are chosen such that the Milstein scheme is asymptotically mean-square unstable for Δt = 1.25 and asymptotically mean-square stable for Δt = 0.25, while the Euler–Maruyama scheme is asymptotically mean-square stable for both choices. These theoretical results are reproduced in the simulation.
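A reduced version of the Monte Carlo experiment of this section can be sketched as follows (our own code, with far fewer samples and modes than the M = 10⁶ used in the figures; the exact per-mode backward Euler mean-square factor is compared with a sampled second moment):

```python
import math
import random

def lambda_k(k, nu=1.0):
    """Eigenvalue lambda_k = nu * pi^2 * k^2 of the spectral mode k."""
    return nu * math.pi ** 2 * k ** 2

def be_factor(k, dt, c_mu=1.0, alpha=3.0, nu=1.0):
    """Exact one-step mean-square factor (1 + mu_k*dt)/(1 + lambda_k*dt)^2
    of the backward Euler Euler-Maruyama scheme per spectral mode."""
    mu_k = c_mu * k ** (-alpha)
    return (1.0 + mu_k * dt) / (1.0 + lambda_k(k, nu) * dt) ** 2

def mc_second_moment(n_modes, dt, steps, paths, seed=0):
    """Monte Carlo estimate of E[||X_h^j||^2] with unit initial coefficients."""
    rng = random.Random(seed)
    total = 0.0
    for k in range(1, n_modes + 1):
        mu_k = k ** -3.0
        denom = 1.0 + lambda_k(k) * dt
        for _ in range(paths):
            x = 1.0
            for _ in range(steps):
                x = (x + math.sqrt(mu_k) * x * rng.gauss(0.0, math.sqrt(dt))) / denom
            total += x * x
    return total / paths

# Exact second moment after 10 steps vs. a small Monte Carlo estimate;
# every per-mode factor is below one, so both decay towards zero.
exact = sum(be_factor(k, 0.2) ** 10 for k in range(1, 4))
estimate = mc_second_moment(3, 0.2, 10, 2000)
```

The sampled estimate is noisy for so few paths, but it reproduces the rapid mean-square decay predicted by the per-mode factors.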

4.2. Galerkin finite element methods. Let us continue with G = G_2 in Example 3.1 and a Galerkin finite element setting, similar to that of [22]. That is, we let V_h be the span of piecewise linear functions on an equidistant grid of [0, 1] with N_h interior nodes, so that V_h is an N_h-dimensional subspace of Ḣ¹ with refinement parameter h = (N_h + 1)^{−1}. With the exception that U = Ḣ¹, all other parameters are as in Figure 1(a) of Section 4.1.
In contrast to the setting in Section 4.1, the solution and its approximation are no longer sums of one-dimensional geometric Brownian motions, and thus analytical necessary and sufficient conditions for ρ(S) < 1 are not available. We therefore consider the results of Theorem 3.7 instead. The required quantities for the setting of this section were derived in [20, Section 6.1]. For the convenience of the reader, the sufficient conditions of Theorem 3.7 for the considered approximation schemes are collected in simplified form in Table 2, expressed in terms of stability parameters ρ_BE, ρ_CN and ρ_FE.
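For the piecewise linear finite element discretization on a uniform mesh with consistent mass matrix, the discrete Laplacian eigenvalues are known in closed form. The sketch below (our own function names; the formula is the standard one for this P1 setting) verifies numerically that the discrete eigenvalues lie above the continuous ones, consistent with the bound λ_{h,1} ≥ λ_1 used in Section 3:

```python
import math

def fe_eigenvalue(i, h):
    """Discrete eigenvalue of the piecewise linear finite element Laplacian
    on a uniform mesh of [0, 1] with consistent mass matrix:
    lambda_{h,i} = (6/h^2) * (1 - cos(i*pi*h)) / (2 + cos(i*pi*h))."""
    th = i * math.pi * h
    return 6.0 / h ** 2 * (1.0 - math.cos(th)) / (2.0 + math.cos(th))

h = 1.0 / 100.0
lam_h1 = fe_eigenvalue(1, h)
lam_1 = math.pi ** 2
# Taylor expansion gives lambda_{h,1} = pi^2 * (1 + (pi*h)^2/12 + ...),
# so the discrete eigenvalue approaches pi^2 from above as h -> 0.
print(lam_h1, lam_1)
```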
We replace ||G_2||_{L(H;L(U;H))} in these conditions with the upper bound derived in Example 3.1. Note that Corollary 3.10 with these parameters implies simultaneous asymptotic mean-square stability of (13) and the finite element backward Euler scheme (8).

Table 2. Finite element methods with sufficient conditions for ρ(S) < 1.
As in Section 4.1, we compare the mean-square behaviour of the backward Euler and the forward Euler scheme in Figure 2(a), but now for the finite element discretization up to T = 2.5. We observe that increasing the time step size by a very small amount, i.e., from Δt = 0.00066 to Δt = 0.00067, causes the forward Euler system to switch from stable to unstable behaviour. This agrees with the theory in Table 3, as ρ_FE changes sign in that interval, i.e., stability is only guaranteed for the smaller time step. Therefore, we conclude that the sufficient condition is sharp in our model problem.

Table 3. Specific values of Table 2 for varying Δt.

Appendix A.

Lemma A.1. Let L be a U-valued, square-integrable Lévy process and let, for 0 ≤ a < b, ΔL = L(b) − L(a) and Δt = b − a. Then

E[ΔL ⊗ ΔL] = Δt q, where q = Σ_{i∈N} µ_i f_i ⊗ f_i.

Proof. We first note that ΔL ⊗ ΔL is well-defined as a member of L¹(Ω; U^(2)) since

E[|√(µ_k µ_ℓ) ΔL_k ΔL_ℓ|] ≤ √(µ_k µ_ℓ) E[(ΔL_k)²]^{1/2} E[(ΔL_ℓ)²]^{1/2} = √(µ_k µ_ℓ) Δt

for k, ℓ ∈ N. Thus, since the processes (L_i, i ∈ N) are uncorrelated with E[(ΔL_i)²] = Δt, we obtain

E[ΔL ⊗ ΔL] = Σ_{k,ℓ∈N} √(µ_k µ_ℓ) E[ΔL_k ΔL_ℓ] f_k ⊗ f_ℓ = Δt Σ_{k∈N} µ_k f_k ⊗ f_k = Δt q.

Lemma A.2. Let L be a U-valued, square-integrable Lévy process and set for 0 ≤ a < b with Δt = b − a

Δ^(2)L = Σ_{k,ℓ∈N} √(µ_k µ_ℓ) ∫_a^b ∫_a^s dL_k(r) dL_ℓ(s) (f_k ⊗ f_ℓ) ∈ L²(Ω; U^(2)).

Then
(1) E[Δ^(2)L ⊗ ΔL] = 0,
(2) E[Δ^(2)L ⊗ Δ^(2)L] = (Δt²/2) Σ_{k,ℓ∈N} µ_k µ_ℓ (f_k ⊗ f_ℓ) ⊗ (f_k ⊗ f_ℓ).

Proof. Since L is stationary, we may assume without loss of generality that a = 0 and b = t > 0. We first note that

E[∫_0^t L_i(s−) dL_j(s) ∫_0^t L_k(s−) dL_ℓ(s)] = E[⟨∫_0^· L_i(s−) dL_j(s), ∫_0^· L_k(s−) dL_ℓ(s)⟩_t] = E[∫_0^t L_i(s−) L_k(s−) d⟨L_j, L_ℓ⟩_s].

To simplify this expression, we use the angle bracket process (⟨X, Y⟩_t, t ≥ 0), which for two real-valued semimartingales X and Y with (locally) integrable quadratic covariation [X, Y] is defined as the unique compensator which makes ([X, Y]_t − ⟨X, Y⟩_t, t ≥ 0) a local martingale. For this, we have the polarization identity, which can be found, along with an introduction to this process, e.g., in [28, Section III.5]. For square-integrable martingales M, it holds (see, e.g., [19, Section 8.9]) that E[⟨M, M⟩_t] = E[M²(t)], and therefore, by the polarization identity, if N is another square-integrable martingale, then E[⟨M, N⟩_t] = E[M(t)N(t)]. Applying this to the Lévy integral, which is a martingale, we obtain the identity above, where the last equality is a property of the angle bracket process and the stochastic integral, see [19, Section 8.9]. Now, when j = ℓ, we have, since L_j is a Lévy process and E[L_j(s)²] = s, that ⟨L_j, L_ℓ⟩_s = ⟨L_j, L_j⟩_s = s by [27, Chapter 8]. When j ≠ ℓ, on the other hand, L_j L_ℓ is a square-integrable martingale by [27]. Therefore, [L_j, L_ℓ] is also a square-integrable martingale (with zero mean), because L_j L_ℓ − [L_j, L_ℓ] is a square-integrable martingale. Since (⟨L_j, L_ℓ⟩_s, s ≥ 0) is the unique compensator of [L_j, L_ℓ], it must follow that ⟨L_j, L_ℓ⟩_s = 0 for all s ≥ 0. Thus,

E[∫_0^t L_i(s−) L_k(s−) d⟨L_j, L_ℓ⟩_s]

is non-zero only if j = ℓ, and in that case

E[∫_0^t L_i(s−) L_k(s−) d⟨L_j, L_j⟩_s] = ∫_0^t E[L_i(s−) L_k(s−)] ds.

In conclusion, we have obtained

(14) E[∫_0^t ∫_0^s dL_i(r) dL_j(s) ∫_0^t ∫_0^s dL_k(r) dL_ℓ(s)] = t²/2 for j = ℓ and i = k, and 0 otherwise,

which yields by the monotone convergence theorem that Δ^(2)L ∈ L²(Ω; U^(2)) with

||Δ^(2)L||²_{L²(Ω;U^(2))} = (t²/2) (Σ_{k∈N} µ_k)² < ∞.

This entails that Δ^(2)L ⊗ ΔL ∈ L¹(Ω; U^(2) ⊗ U), since

E[||Δ^(2)L ⊗ ΔL||] ≤ ||Δ^(2)L||_{L²(Ω;U^(2))} ||ΔL||_{L²(Ω;U)}

by the Cauchy–Schwarz inequality. Similarly, it holds that Δ^(2)L ⊗ Δ^(2)L ∈ L¹(Ω; U^(2) ⊗ U^(2)). Therefore, we obtain

E[Δ^(2)L ⊗ ΔL] = Σ_{k,ℓ,m∈N} √(µ_k µ_ℓ µ_m) E[ΔL_m ∫_0^t ∫_0^s dL_k(r) dL_ℓ(s)] (f_k ⊗ f_ℓ) ⊗ f_m,

and, in the same way as the first observation of this proof,

E[ΔL_m ∫_0^t ∫_0^s dL_k(r) dL_ℓ(s)] = E[⟨∫_0^· dL_m(s), ∫_0^· L_k(s−) dL_ℓ(s)⟩_t] = E[∫_0^t L_k(s−) d⟨L_m, L_ℓ⟩_s] = 0.

This is justified since ⟨L_m, L_ℓ⟩_s = 0 unless m = ℓ, and in that case the expectation of the integral is still zero since L_k has zero expectation. This shows the first claim. We note that by (14),

E[Δ^(2)L ⊗ Δ^(2)L] = (t²/2) Σ_{k,ℓ∈N} µ_k µ_ℓ (f_k ⊗ f_ℓ) ⊗ (f_k ⊗ f_ℓ),

which shows the second claim.