## 1 Introduction

An interesting property of a stochastic differential equation (SDE) or a stochastic partial differential equation (SPDE) is the qualitative behaviour of its second moment for large times. Both types of equations can be interpreted as SDEs on a (here separable) Hilbert space $$(H, \left\langle \cdot , \cdot \right\rangle _{H} )$$. More specifically, let us consider a complete filtered probability space $$(\varOmega , \mathscr {A}, (\mathscr {F}_t,{t\ge 0}), P)$$ satisfying the “usual conditions” and the model problem

\begin{aligned} \mathrm {d}X(t) = (AX(t) + FX(t)) \, \mathrm {d}t + G(X(t)) \, \mathrm {d}L(t) \end{aligned}
(1.1)

with $$\mathscr {F}_0$$-measurable, square-integrable initial condition $$X(0) = X_0$$. Here, the operator $$A: \mathscr {D}(A) \rightarrow H$$ is the generator of a $$C_0$$-semigroup $$S=(S(t), t \ge 0)$$ on H, and F is a linear and bounded operator on H, i.e., $$F \in L(H)$$. Furthermore, L denotes a U-valued Q-Lévy process on the real separable Hilbert space $$(U, \left\langle \cdot , \cdot \right\rangle _{U} )$$ with trace-class covariance $$Q \in L(U)$$ that is assumed to be a square-integrable martingale as considered in , and $$G \in L(H;L(U;H))$$.

We recall from  that an equilibrium (solution) of (1.1) is the zero solution $$(X_\mathrm {e}(t) = 0, t\ge 0)$$. It is called mean-square stable if, for every $$\varepsilon > 0$$, there exists $$\delta > 0$$ such that $${{\mathrm{\mathbb {E}}}}[\Vert X(t) \Vert _H^2] < \varepsilon$$ for all $$t\ge 0$$ whenever $${{\mathrm{\mathbb {E}}}}[\Vert X_0\Vert _H^2] < \delta$$. It is further asymptotically mean-square stable if it is mean-square stable and there exists $$\delta > 0$$ such that $$\mathbb {E}[\Vert X_0\Vert _H^2] < \delta$$ implies $$\lim _{t \rightarrow \infty } {{\mathrm{\mathbb {E}}}}[ \Vert X(t) \Vert _H^2] = 0$$. A lot of effort has been dedicated to the asymptotic mean-square stability analysis in finite and infinite dimensions, see e.g., [2, 17, 24, 26].

Since analytical solutions to SDEs are rarely available, approximations in time and possibly in space by numerical methods have to be considered. The main focus of research in recent years has been on strong and weak convergence when the discretization parameters $$\Delta t$$ in time and h in space tend to zero. However, this property does not guarantee that the approximation shares the same (asymptotic) mean-square stability properties as the analytical solution. For finite-dimensional SDEs it is known that the specific choice of $$\Delta t$$ is essential. The goal of this manuscript is to generalize the theory of asymptotic mean-square stability analysis to a Hilbert space setting. We develop a theory for approximation schemes that a priori has no relation to the original Eq. (1.1) and its properties. Later on, we discuss which conditions on (1.1) and its approximation lead to similar behaviour. An important application of mean-square stability is in multilevel Monte Carlo methods, where combinations of approximations on different space and time grids are computed. If the approximation is mean-square unstable on any of the included levels, the estimator fails to behave as expected, see, e.g., .

The mean-square stability analysis of numerical approximations of SDEs started with approximations of the one-dimensional geometric Brownian motion, see, e.g., [14, 15, 29]. As pointed out in [10, 11], the analysis of higher-dimensional systems and their approximations is also necessary, since the asymptotic behaviour of the corresponding mean-square processes of systems with commuting and non-commuting matrices often differs. The tools to perform mean-square stability analysis of SDE approximations presented in  could in principle be used for approximations of infinite-dimensional SDEs by a method of lines approach: after projection onto an $$N_h$$-dimensional space, the mean-square stability properties of the resulting finite-dimensional SDEs and their approximations can be determined by considering the eigenvalues of $$N_h^2 \times N_h^2$$-dimensional matrices. However, due to the growing computational complexity as $$N_h \rightarrow \infty$$, neither the symbolic nor the numerical computation of these eigenvalues is feasible for arbitrarily large systems. For this reason, we use an approach based on tensor-product-space-valued processes and properties of tensorized linear operators.

The outline of this article is as follows: Sect. 2 sets up a theory of mean-square stability analysis for discrete stochastic processes derived from recursions as they appear in approximations of infinite-dimensional SDEs. In the main result, necessary and sufficient conditions for asymptotic mean-square stability are shown. These results are then applied in Sect. 3 to numerical approximations of (1.1) based on spatial Galerkin discretization schemes and time discretizations with Euler–Maruyama and Milstein methods using backward/forward Euler and Crank–Nicolson as rational semigroup approximations. We conclude this work presenting simulations of stochastic heat equations with spectral Galerkin and finite element methods in Sect. 4 that illustrate the theory.

## 2 Asymptotic mean-square stability analysis

This section is devoted to the setup of asymptotic mean-square stability for families of stochastic processes in discrete time given by recursion schemes as they typically show up in approximations of (1.1). We derive necessary and sufficient conditions ensuring asymptotic mean-square stability that can be checked in practice as it is shown later in Sect. 3.

Let $$(V_h, h \in (0,1])$$ be a family of finite-dimensional subspaces $$V_h \subset H$$ with $${\text {dim}}(V_h) = N_h\in \mathbb {N}$$ indexed by a refinement parameter h. With an inner product induced by $$\left\langle \cdot , \cdot \right\rangle _{H}$$, $$V_h$$ becomes a Hilbert space with norm $$\Vert \cdot \Vert _{H}$$. For a linear operator $$D: V_h \rightarrow V_h$$, the operator norm $$\Vert D \Vert _{L(V_h)}$$ is therefore given by $$\sup _{v \in V_h} \Vert D v \Vert _{H}/\Vert v \Vert _{H}$$ and can be seen to coincide with $$\Vert D P_h \Vert _{L(H)}$$, where $$P_h$$ denotes the orthogonal projection onto $$V_h$$.

Let us further consider the time interval $$[0,\infty )$$ and for convenience equidistant time steps $$t_j = j \Delta t$$, $$j \in \mathbb {N}_0$$, with fixed step size $$\Delta t > 0$$. Hence, $$t \rightarrow \infty$$ is equivalent to $$j\rightarrow \infty$$. Assume that we are given a sequence of $$V_h$$-valued random variables $${(X_h^j, j \in \mathbb {N}_0)}$$ determined by the linear recursion scheme

\begin{aligned} X_h^{j+1} = D_{\Delta t, h}^{{\text {det}}}X_h^j + D_{\Delta t, h}^{{\text {stoch}},j} X_h^j \end{aligned}
(2.1)

with $$\mathscr {F}_0$$-measurable initial condition $$X_h^0 \in L^2(\varOmega ;V_h)$$, i.e., $${{\mathrm{\mathbb {E}}}}[\Vert X_h^0 \Vert _{V_h}^2]< \infty$$. Here $${D_{\Delta t, h}^{{\text {det}}}\in L(V_h)}$$ and $$D_{\Delta t, h}^{{\text {stoch}},j}$$ is an $$L(V_h)$$-valued random variable for all j.

In terms of SDE (1.1), one can think of $$D_{\Delta t, h}^{{\text {det}}}$$ as the approximation of the solution operator of the deterministic part

\begin{aligned} \mathrm {d}X(t) = (AX(t) + F X(t)) \, \mathrm {d}t, \quad t \in [t_j,t_{j+1}) \end{aligned}

and $$D_{\Delta t, h}^{{\text {stoch}},j}$$ approximates the stochastic part

\begin{aligned} \mathrm {d}X(t) = G(X(t)) \, \mathrm {d}L(t), \quad t \in [t_j,t_{j+1}). \end{aligned}

Although, in general, any (not necessarily equidistant) time discretization $$(t_j, j \in \mathbb {N}_0)$$ satisfying $$t_j \rightarrow \infty$$ as $$j \rightarrow \infty$$ would suffice for the following theory, the SDE example above shows that $$D_{\Delta t, h}^{{\text {det}}}$$ would then be j-dependent, which we avoid for the sake of readability.

Inspired by properties of standard approximation schemes for (1.1), we put the following assumptions on the family $$(D_{\Delta t, h}^{{\text {stoch}},j}, j \in \mathbb {N}_0)$$.

### Assumption 2.1

Let $$h, \Delta t > 0$$ be fixed. The family $$(D_{\Delta t, h}^{{\text {stoch}},j},j\in \mathbb {N}_0)$$ is $$\mathscr {F}$$-compatible in the sense of [12, 21], i.e., $$D_{\Delta t, h}^{{\text {stoch}},j}$$ is $$\mathscr {F}_{t_{j+1}}$$-measurable and $${{\mathrm{\mathbb {E}}}}[ D_{\Delta t, h}^{{\text {stoch}},j} | \mathscr {F}_{t_j} ] = 0$$ for all $$j \in \mathbb {N}_0$$. Furthermore, for all $$j \in \mathbb {N}_0$$, let

\begin{aligned} \Vert D_{\Delta t, h}^{{\text {stoch}},j} \Vert _{L^2(\varOmega ;L(V_h))} = {{\mathrm{\mathbb {E}}}}[ \Vert D_{\Delta t, h}^{{\text {stoch}},j} \Vert _{L(V_h)}^2 ]^{1/2} < \infty \end{aligned}

and

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ \left. D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right| \mathscr {F}_{t_j}\right] = {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right] , \end{aligned}

where $$\otimes$$ denotes the tensor product.

For the recursion scheme (2.1) an equilibrium (solution) is given by the zero solution, which is defined as $$X_{h,\mathrm {e}}^j = 0$$ for all $$j\in \mathbb {N}_0$$. We define mean-square stability of the zero solution of (2.1) in what follows.

### Definition 2.1

Let $$X_h= (X_h^j,j \in \mathbb {N}_0)$$ be given by (2.1) for fixed h and $$\Delta t$$. The zero solution $$(X_{h,\mathrm {e}}^j = 0,j\in \mathbb {N}_0)$$ of (2.1) is called mean-square stable if, for every $$\varepsilon > 0$$, there exists $$\delta > 0$$ such that $${{\mathrm{\mathbb {E}}}}[\Vert X_h^j \Vert _H^2] < \varepsilon$$ for all $$j\in \mathbb {N}_0$$ whenever $${{\mathrm{\mathbb {E}}}}[\Vert X_h^0\Vert _H^2] < \delta$$.

It is called asymptotically mean-square stable if it is mean-square stable and there exists $$\delta > 0$$ such that $$\mathbb {E}[\Vert X_h^0\Vert _H^2] < \delta$$ implies $$\lim _{j \rightarrow \infty } {{\mathrm{\mathbb {E}}}}[ \Vert X_h^j \Vert _H^2]=0$$. Furthermore, it is called asymptotically mean-square unstable if it is not asymptotically mean-square stable.

For convenience, the abbreviation (asymptotic) mean-square stability is used for the (asymptotic) mean-square stability of the zero solution of (2.1) or (1.1) if it is clear from the context.

When applied to $$Y_j = X^j_h$$, the following lemma provides an equivalent condition for mean-square stability in terms of the tensor-product-space-valued process $${X_h^j \otimes X_h^j \in V_h^{(2)}}$$. Here, for a general Hilbert space H, the abbreviation $${H^{(2)} = H \otimes H}$$ is used and $$H^{(2)}$$ is defined as the completion of the algebraic tensor product with respect to the norm induced by

\begin{aligned} \left\langle v , w \right\rangle _{H \otimes H} = \sum ^N_{i = 1} \sum ^M_{j = 1} \left\langle v_{1,i} , w_{1,j} \right\rangle _{H} \left\langle v_{2,i} , w_{2,j} \right\rangle _{H} , \end{aligned}

where $$v = \sum ^N_{i=1} v_{1,i} \otimes v_{2,i}$$ and $$w = \sum ^M_{j=1} w_{1,j} \otimes w_{2,j}$$ are representations of elements v and w in the algebraic tensor product.

### Lemma 2.1

Let $$V_h$$ be a finite-dimensional subspace of H. Then, for any sequence $$(Y_j, j \in \mathbb {N}_0)$$ of $$V_h$$-valued, square-integrable random variables, $$\lim _{j \rightarrow \infty } {{\mathrm{\mathbb {E}}}}[ Y_j \otimes Y_j ] = 0$$ if and only if $$\lim _{j\rightarrow \infty } {{\mathrm{\mathbb {E}}}}[\Vert Y_j \Vert _{H}^2] = 0$$.

### Proof

By Parseval’s identity, for an orthonormal basis $$(\psi _1,\dots , \psi _{N_h})$$ of $$V_h$$, we have

\begin{aligned} \left\| {{\mathrm{\mathbb {E}}}}\big [ Y_j \otimes Y_j \big ] \right\| _{H^{(2)}}^2&= \sum ^{N_h}_{k, \ell = 1 } \left| {{\mathrm{\mathbb {E}}}}\left[ \left\langle Y_j \otimes Y_j , \psi _k \otimes \psi _\ell \right\rangle _{H^{(2)}} \right] \right| ^2 \\&= \sum ^{N_h}_{k, \ell = 1 } \left| {{\mathrm{\mathbb {E}}}}\big [ \langle Y_j, \psi _k \rangle _{H} \langle Y_j, \psi _\ell \rangle _{H}\big ] \right| ^2 \end{aligned}

and similarly

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ \Vert Y_j \Vert _H^2 \right] = \sum _{k=1}^{N_h} {{\mathrm{\mathbb {E}}}}\left[ \left\langle Y_j , \psi _k \right\rangle _{H} ^2\right] . \end{aligned}

Therefore, one implication is immediately obtained, while the other follows from the fact that

\begin{aligned} \left\| {{\mathrm{\mathbb {E}}}}\left[ Y_j \otimes Y_j \right] \right\| _{H^{(2)}} \le {{\mathrm{\mathbb {E}}}}\left[ \left\| Y_j \otimes Y_j \right\| _{H^{(2)}}\right] = {{\mathrm{\mathbb {E}}}}\left[ \Vert Y_j \Vert _H^2 \right] . \end{aligned}

This finishes the proof. $$\square$$
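The identification underlying Lemma 2.1 can be checked numerically. The following is a minimal sketch, assuming the toy choice $$V_h = \mathbb{R}^3$$: the Hilbert tensor product is identified with $$3\times 3$$ matrices, $${{\mathrm{\mathbb {E}}}}[Y \otimes Y]$$ with $${{\mathrm{\mathbb {E}}}}[YY^\top]$$, and the $$H^{(2)}$$-norm with the Frobenius norm, so the inequality from the proof can be verified on samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the proof of Lemma 2.1 on V_h = R^3 (an assumed example):
# identify Y (x) Y with the matrix Y Y^T and || . ||_{H^(2)} with the
# Frobenius norm.
N_h, M = 3, 200_000
Y = 0.1 * rng.standard_normal((M, N_h))       # samples of a V_h-valued r.v.

second_moment = np.mean(np.sum(Y**2, axis=1))  # E[||Y||_H^2]
tensor_mean = Y.T @ Y / M                      # empirical E[Y (x) Y]
tensor_norm = np.linalg.norm(tensor_mean)      # ||E[Y (x) Y]||_{H^(2)}

# ||E[Y (x) Y]||_{H^(2)} <= E[||Y (x) Y||_{H^(2)}] = E[||Y||_H^2],
# as used in the proof; the inequality also holds for empirical means.
assert tensor_norm <= second_moment + 1e-12
```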

This lemma enables us to show the following sufficient condition for asymptotic mean-square stability.

### Theorem 2.1

Let $$X_h = (X_h^j,j\in \mathbb {N}_0)$$ given by (2.1) satisfy Assumption 2.1 and set

\begin{aligned} \mathscr {S}_j = D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right] . \end{aligned}

Then the zero solution of (2.1) is asymptotically mean-square stable, if

\begin{aligned} \lim _{j\rightarrow \infty } \Vert \mathscr {S}_j \cdots \mathscr {S}_0 \Vert _{L(V^{(2)}_h)} = 0. \end{aligned}

### Proof

Let us first remark that $$\mathscr {S}_j \in L(V^{(2)}_h)$$ for all $$j \in \mathbb {N}_0$$ by the properties of $$D_{\Delta t, h}^{{\text {det}}}$$ and $$D_{\Delta t, h}^{{\text {stoch}},j}$$ and of the Hilbert tensor product. In order to show asymptotic mean-square stability, it suffices to show $${{\mathrm{\mathbb {E}}}}[X_h^j \otimes X_h^j]\rightarrow 0$$ as $$j \rightarrow \infty$$ by Lemma 2.1. For this, consider

\begin{aligned} {{\mathrm{\mathbb {E}}}}[X_h^{j+1} \otimes X_h^{j+1}]&= {{\mathrm{\mathbb {E}}}}\left[ \left( D_{\Delta t, h}^{{\text {det}}}+ D_{\Delta t, h}^{{\text {stoch}},j}\right) X_h^j \otimes \left( D_{\Delta t, h}^{{\text {det}}}+ D_{\Delta t, h}^{{\text {stoch}},j}\right) X_h^j \right] \\&= {{\mathrm{\mathbb {E}}}}\left[ \left( D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}\right) \left( X_h^j\otimes X_h^j\right) \right] \\& +\, {{\mathrm{\mathbb {E}}}}\left[ \left( D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right) \left( X_h^j\otimes X_h^j\right) \right] \\& +\, \mathbb {E}\left[ \left( D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {stoch}},j}\right) \left( X_h^j \otimes X_h^j\right) \right] \\& +\, \mathbb {E}\left[ \left( D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {det}}}\right) \left( X_h^j \otimes X_h^j\right) \right] . \end{aligned}

The mixed terms vanish by the observation that

\begin{aligned}&\mathbb {E}\left[ \left( D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {stoch}},j}\right) \left( X_h^j\otimes X_h^j\right) \right] \\&\quad = \mathbb {E}\left[ \left( D_{\Delta t, h}^{{\text {det}}}\otimes {{\mathrm{\mathbb {E}}}}[D_{\Delta t, h}^{{\text {stoch}},j}|\mathscr {F}_{t_j}]\right) \left( X_h^j\otimes X_h^j\right) \right] = 0, \end{aligned}

since $$X_h^j$$ and $$D_{\Delta t, h}^{{\text {det}}}$$ are $$\mathscr {F}_{t_j}$$-measurable and $${{\mathrm{\mathbb {E}}}}[D_{\Delta t, h}^{{\text {stoch}},j}|\mathscr {F}_{t_j}] = 0$$ by Assumption 2.1.

Applying Assumption 2.1 once more, we therefore conclude

\begin{aligned}&{{\mathrm{\mathbb {E}}}}[X_h^{j+1} \otimes X_h^{j+1}]\\&\quad = {{\mathrm{\mathbb {E}}}}\left[ \Bigl (D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j} \Bigr ) \left( X_h^j \otimes X_h^j\right) \right] \\&\quad = {{\mathrm{\mathbb {E}}}}\left[ \Bigl (D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}|\mathscr {F}_{t_j}\right] \Bigr ) \left( X_h^j \otimes X_h^j\right) \right] \\&\quad = \Bigl (D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right] \Bigr ){{\mathrm{\mathbb {E}}}}\left[ X_h^j \otimes X_h^j\right] . \end{aligned}

and obtain

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ X_h^{j+1} \otimes X_h^{j+1}\right] = \mathscr {S}_j {{\mathrm{\mathbb {E}}}}\left[ X_h^j \otimes X_h^j\right] = (\mathscr {S}_j \cdots \mathscr {S}_0) {{\mathrm{\mathbb {E}}}}\left[ X_h^0 \otimes X_h^0\right] . \end{aligned}

Since $$\lim _{j\rightarrow \infty }\Vert \mathscr {S}_j \cdots \mathscr {S}_0\Vert _{L(V_h^{(2)})}=0$$, mean-square stability is shown with the computation

\begin{aligned}&\mathbb {E}\left[ \Vert X_h^{j+1} \Vert _H^2\right] ^2 \\&\quad = \Bigl (\sum _{k=1}^{N_h} {{\mathrm{\mathbb {E}}}}\left[ \langle X_h^{j+1},\psi _k \rangle ^2_H\right] \Bigr )^2 \le N_h \sum _{k=1}^{N_h} {{\mathrm{\mathbb {E}}}}\left[ \langle X_h^{j+1},\psi _k \rangle ^2_H\right] ^2 \\&\quad \le N_h \left\| \mathbb {E}\left[ X_h^{j+1} \otimes X_h^{j+1} \right] \right\| _{H^{(2)}}^2 \le N_h \Vert \mathscr {S}_j \cdots \mathscr {S}_0 \Vert _{L(V_h^{(2)})}^2 \mathbb {E}\left[ \Vert X_h^0 \Vert _H^2\right] ^2. \end{aligned}

For asymptotic mean-square stability, note that for any $$\mathscr {F}_0$$-measurable initial value $$X_h^0\in L^2(\varOmega ;V_h)$$ it holds that $$\lim _{j\rightarrow \infty } {{\mathrm{\mathbb {E}}}}[X_h^j \otimes X_h^j] = 0$$ if and only if

\begin{aligned} \lim _{j\rightarrow \infty } \left\| \left( \mathscr {S}_j \cdots \mathscr {S}_0\right) {{\mathrm{\mathbb {E}}}}\left[ X_h^0 \otimes X_h^0\right] \right\| _{V_h^{(2)}} =0, \end{aligned}

for which a sufficient condition is given by $$\lim _{j\rightarrow \infty } \Vert \mathscr {S}_j \cdots \mathscr {S}_0 \Vert _{L(V^{(2)}_h)} = 0$$. Therefore, the proof is finished. $$\square$$

In many examples the operators $$(D_{\Delta t, h}^{{\text {stoch}},j},j\in \mathbb {N}_0)$$ have a constant covariance, i.e., they satisfy for all $$j \in \mathbb {N}_0$$

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right] = {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},0} \otimes D_{\Delta t, h}^{{\text {stoch}},0} \right] . \end{aligned}
(2.2)

Often, as in the following example, they are even independent and identically distributed, which implies (2.2).

### Example 2.1

Consider the one-dimensional geometric Brownian motion driven by an adapted, real-valued Brownian motion $$(\beta (t),t\ge 0)$$

\begin{aligned} \mathrm {d}X(t) = \lambda X(t) \mathrm {d}t + \sigma X(t) \mathrm {d}\beta (t), \quad t\ge 0, \end{aligned}

with initial condition $$X(0) = x_0 \in \mathbb {R}$$ and $$\lambda ,\sigma \in \mathbb {R}$$. The solution can be approximated by the explicit Euler–Maruyama scheme

\begin{aligned} X_{j+1}&= X_j + \lambda \Delta t X_j + \sigma \Delta \beta ^j X_j, \end{aligned}

for $$j \in \mathbb {N}_0$$, where $$\Delta \beta ^j = \beta (t_{j+1}) - \beta (t_j)$$, or by the Milstein scheme

\begin{aligned} X_{j+1} = X_j + \lambda \Delta t X_j + \sigma \Delta \beta ^j X_j + 2^{-1} \sigma ^2 \left( (\Delta \beta ^j)^2 - \Delta t\right) X_j. \end{aligned}

Then the deterministic operators in (2.1)

\begin{aligned} D_{\Delta t, \text {EM}}^{{\text {det}}} = D_{\Delta t, \text {Mil}}^{{\text {det}}} = 1 + \lambda \Delta t \end{aligned}

are equal for both schemes, and the corresponding approximations of the stochastic integrals are given by

\begin{aligned} D_{\Delta t, \text {EM}}^{{\text {stoch}},j} = \sigma \Delta \beta ^j, \quad D_{\Delta t, \text {Mil}}^{{\text {stoch}},j} = \sigma \Delta \beta ^j + 2^{-1} \sigma ^2 \left( (\Delta \beta ^j)^2 - \Delta t\right) \end{aligned}

for $$j \in \mathbb {N}_0$$. Both families of stochastic approximation operators satisfy Assumption 2.1 and

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, \text {EM}}^{{\text {stoch}},j} \otimes D_{\Delta t, \text {EM}}^{{\text {stoch}},j}\right] = \sigma ^2 \Delta t, \quad {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, \text {Mil}}^{{\text {stoch}},j} \otimes D_{\Delta t, \text {Mil}}^{{\text {stoch}},j}\right] = \sigma ^2 \Delta t\left( 1 + 2^{-1} \sigma ^2 \Delta t\right) \end{aligned}

do not depend on j. We observe that the equidistant time step $$\Delta t$$ is essential here.

Having this example in mind, we are able to give a necessary and sufficient condition for asymptotic mean-square stability when assuming (2.2) and therefore to specify Theorem 2.1. The condition relies on the spectrum of a single linear operator $$\mathscr {S}\in L(V^{(2)}_h)$$.

### Corollary 2.1

Let $$X_h = (X_h^j,j\in \mathbb {N}_0)$$ given by (2.1) satisfy Assumption 2.1 and (2.2). Then the zero solution of  (2.1) is asymptotically mean-square stable if and only if

\begin{aligned} \mathscr {S}= D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},0} \otimes D_{\Delta t, h}^{{\text {stoch}},0}\right] \in L(V^{(2)}_h) \end{aligned}

satisfies $$\rho ( \mathscr {S}) = \max _{i=1,\dots ,N_h^2} |\lambda _i| < 1$$, where $$\lambda _1,\dots ,\,\lambda _{N_h^2}$$ are the eigenvalues of $$\mathscr {S}$$.

Furthermore, it is asymptotically mean-square stable if $$\Vert \mathscr {S} \Vert _{L(V_h^{(2)})} < 1$$.

### Proof

Setting $$\mathscr {S}_j = \mathscr {S}$$ for all $$j \in \mathbb {N}_0$$ in Theorem 2.1, we obtain by the same arguments

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ X_h^{j+1} \otimes X_h^{j+1}\right] = \left( \mathscr {S}_j \cdots \mathscr {S}_0\right) {{\mathrm{\mathbb {E}}}}\left[ X_h^0 \otimes X_h^0\right] = \mathscr {S}^{j+1} {{\mathrm{\mathbb {E}}}}\left[ X_h^0 \otimes X_h^0\right] . \end{aligned}

As a consequence, $$\lim _{j\rightarrow \infty } {{\mathrm{\mathbb {E}}}}[X_h^j \otimes X_h^j] = 0$$ if and only if $$\lim _{j\rightarrow \infty } \mathscr {S}^{j} = 0$$ which is equivalent to $$\rho (\mathscr {S}) < 1$$ by the same arguments as, e.g., in [8, 11, 17]. This completes the proof of the first statement. Since $$\rho (\mathscr {S}) \le \Vert \mathscr {S}\Vert _{L(V_h^{(2)})}$$, a sufficient condition for asymptotic mean-square stability is given by $$\Vert \mathscr {S} \Vert _{L(V_h^{(2)})} < 1$$. $$\square$$

In the framework of SDE approximations, note that this corollary is an operator-based SPDE version of the results for finite-dimensional linear systems in . There, the proposed method relies on a matrix eigenvalue problem. For SPDE approximations, this approach is not suitable, since the dimension of the eigenvalue problem grows rapidly under spatial refinement. More precisely, for $$h>0$$, the spectral radius of an $$(N_h^2 \times N_h^2)$$-matrix has to be computed. To overcome this problem, we perform, in what follows, a mean-square stability analysis of SPDE approximations based on the operators introduced above.
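For small systems the matrix eigenvalue criterion is still practical. The following sketch assumes a two-dimensional linear system $$\mathrm{d}X = AX\,\mathrm{d}t + BX\,\mathrm{d}\beta$$ with matrices $$A$$, $$B$$ chosen for illustration, discretized by Euler–Maruyama, and assembles $$\mathscr{S}$$ as a Kronecker ($$N_h^2\times N_h^2$$) matrix:

```python
import numpy as np

# Corollary 2.1 for an assumed 2-dimensional system dX = A X dt + B X dbeta
# with Euler-Maruyama: D_det = I + dt*A and D_stoch_j = B * dbeta_j, so
# identifying V_h^(2) with Kronecker products gives
#   S = D_det (x) D_det + dt * B (x) B,
# and the zero solution is asymptotically mean-square stable iff rho(S) < 1.
A = np.array([[-3.0, 1.0], [0.0, -2.0]])   # assumed drift matrix
B = np.array([[0.5, 0.0], [0.1, 0.5]])     # assumed noise matrix
dt = 0.1

D_det = np.eye(2) + dt * A
S = np.kron(D_det, D_det) + dt * np.kron(B, B)   # (N_h^2 x N_h^2)-matrix
rho = np.max(np.abs(np.linalg.eigvals(S)))       # spectral radius
assert rho < 1.0   # asymptotically mean-square stable for this choice
```

The matrix `S` already has $$N_h^4 = 16$$ entries for $$N_h = 2$$, which illustrates why this route is abandoned for spatial discretizations with $$N_h \rightarrow \infty$$.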

## 3 Application to Galerkin methods

We continue by applying the previous results to the analysis of some classical numerical approximations of (1.1), which by results in [27, Chapter 9] admits a mild càdlàg solution, unique up to modification, given for $$t \ge 0$$ by

\begin{aligned} X(t) = S(t) X_0 + \int ^t_0 S(t-s)F(X(s)) \, \mathrm {d}s+ \int ^t_0 S(t-s)G(X(s)) \, \mathrm {d}L(s).\qquad \end{aligned}
(3.1)

We assume further that the operator $$-A: \mathscr {D}(-A) \subset H \rightarrow H$$ of (1.1) is densely defined, self-adjoint, and positive definite with compact inverse. This implies that $$-A$$ has a non-decreasing sequence of positive eigenvalues $${(\lambda _i, i \in \mathbb {N})}$$ with a corresponding orthonormal basis of eigenfunctions $$(e_i, i \in \mathbb {N})$$ of H, and fractional powers of $$-A$$ are given by

\begin{aligned} (-A)^{r/2} e_i = \lambda _{i}^{r/2} e_i \end{aligned}

for all $$i \in \mathbb {N}$$ and $$r > 0$$. For each $$r>0$$, $$\dot{H}^r = \mathscr {D}((-A)^{r/2})$$ with inner product $$\left\langle \cdot , \cdot \right\rangle _{r} = \left\langle (-A)^{r/2}\cdot , (-A)^{r/2}\cdot \right\rangle _{H}$$ defines a separable Hilbert space (see, e.g., [20, Appendix B]).

We assume that the family $$(V_h, h \in (0,1])$$ of finite-dimensional subspaces fulfils $$V_h\subset \dot{H}^1\subset H$$ and define the discrete operator $$-A_h : V_h \rightarrow V_h$$ by

\begin{aligned} \left\langle -A_h v_h , w_h \right\rangle _{H} = \left\langle v_h , w_h \right\rangle _{1} = \left\langle (-A)^{1/2} v_h , (-A)^{1/2} w_h \right\rangle _{H} \end{aligned}

for all $$v_h, w_h \in V_h$$. This definition implies that $$-A_h$$ is self-adjoint and positive definite on $$V_h$$ and therefore has a sequence of orthonormal eigenfunctions $$(e_{h,i}, i = 1,\ldots ,N_h)$$ and positive non-decreasing eigenvalues $$(\lambda _{h,i}, i = 1,\ldots ,N_h)$$ (see e.g., [20, Chapter 3]). By using basic properties of the Rayleigh quotient, we bound the smallest eigenvalue $$\lambda _{h,1}$$ of $$-A_h$$ from below by the smallest eigenvalue $$\lambda _1$$ of $$-A$$ through

\begin{aligned} \lambda _{h,1} = \min _{v_h \in V_h \setminus \{ 0 \}} \frac{ \left\langle v_h , v_h \right\rangle _{1} }{\Vert v_h \Vert _{H}^2} \ge \min _{v \in H \setminus \{ 0 \}} \frac{ \left\langle v , v \right\rangle _{1} }{\Vert v \Vert _{H}^2} = \lambda _1, \end{aligned}
(3.2)

since $$V_h \subset H$$, cf. . This estimate turns out to be useful when comparing asymptotic mean-square stability of (1.1) and its approximation later in this section.
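The bound (3.2) can be observed numerically. The following is a sketch assuming linear finite elements for $$-A = -\mathrm{d}^2/\mathrm{d}x^2$$ on $$[0,1]$$ with homogeneous Dirichlet boundary conditions, where $$\lambda_1 = \pi^2$$; the node count is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import eigh

# Sketch of (3.2) with assumed linear finite elements on [0,1]:
# the smallest eigenvalue of -A_h, defined by the generalized problem
# K v = lambda M v with stiffness K and mass M, is bounded below by
# lambda_1 = pi^2, the smallest eigenvalue of -A = -d^2/dx^2.
N = 50                      # number of interior nodes (assumed)
h = 1.0 / (N + 1)
K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h         # stiffness
M = (4.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) * h / 6.0   # mass

eigenvalues = eigh(K, M, eigvals_only=True)   # spectrum of -A_h
lam_h1 = eigenvalues[0]
assert lam_h1 >= np.pi**2                     # the Rayleigh-quotient bound (3.2)
```

As expected for a conforming method, `lam_h1` lies slightly above $$\pi^2$$ and approaches it as $$h \rightarrow 0$$.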

Let the covariance of the Lévy process L be self-adjoint, positive semidefinite, and of trace class. Then results in [27, Chapter 4] imply the existence of an orthonormal basis $$(f_i, i \in \mathbb {N})$$ of U and a non-increasing sequence of non-negative real numbers $$(\mu _i, i \in \mathbb {N})$$ such that for all $$i \in \mathbb {N}$$, $$Q f_i = \mu _i f_i$$ with $${{\mathrm{Tr}}}(Q) = \sum _{i=1}^\infty \mu _i < \infty$$ and L admits a Karhunen–Loève expansion

\begin{aligned} L(t) = \sum ^\infty _{i=1} \sqrt{\mu _i} L_i(t) f_i, \end{aligned}
(3.3)

where $$(L_i, i \in \mathbb {N})$$ is a family of real-valued, square-integrable, uncorrelated Lévy processes satisfying $${{\mathrm{\mathbb {E}}}}[(L_i(t))^2]=t$$ for all $$t \ge 0$$. Note that due to the martingale property of L, the real-valued Lévy processes satisfy $${{\mathrm{\mathbb {E}}}}[L_i(t)] = 0$$ for all $$t \ge 0$$ and $$i \in \mathbb {N}$$. This implies, together with the stationarity of the Lévy increments $$\Delta L^j_i = L_i(t_{j+1})-L_i(t_j)$$, that for all $$i \in \mathbb {N}$$ and $$j \in \mathbb {N}_0$$,

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ \Delta L_i^j\right] = {{\mathrm{\mathbb {E}}}}\left[ \Delta L_1^0\right] = {{\mathrm{\mathbb {E}}}}\left[ L_1(\Delta t)\right] = 0. \end{aligned}

Since the series representation of L can be infinite, an approximation of L might be required to implement a fully discrete approximation scheme, which is typically done by truncation of the Karhunen–Loève expansion, i.e., for $$\kappa \in \mathbb {N}$$, set the truncated process $${L^\kappa (t) = \sum _{i=1}^\kappa \sqrt{\mu _i} L_i(t) f_i}$$. Note that the choice of $$\kappa$$ is essential and should be coupled with the overall convergence of the numerical scheme as is discussed in, e.g., [3, 5, 25]. Within this work, we consider numerical methods based on the original Karhunen–Loève expansion (3.3) of L. However, this does not restrict the applicability of the results since $$L^\kappa$$ fits in the framework by setting $$\mu _i = 0$$ for all $$i > \kappa$$.
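Sampling increments of the truncated process is straightforward once the spectral data of Q are fixed. A sketch, assuming Brownian motions $$L_i = \beta_i$$ and the illustrative eigenvalues $$\mu_i = i^{-2}$$: the coordinates of $$\Delta L^j$$ in the basis $$(f_i)$$ are independent $$N(0, \mu_i \Delta t)$$ variables, and $${{\mathrm{\mathbb {E}}}}[\Vert \Delta L^j\Vert_U^2] = \Delta t \, {{\mathrm{Tr}}}(Q)$$ for the truncated process.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sampling increments of the truncated Karhunen-Loeve expansion
# L^kappa(t) = sum_{i <= kappa} sqrt(mu_i) L_i(t) f_i, assuming Brownian
# L_i and the illustrative trace-class eigenvalues mu_i = i^{-2}.
kappa, dt, M = 10, 0.01, 100_000
mu = 1.0 / np.arange(1, kappa + 1) ** 2
increments = np.sqrt(mu * dt) * rng.standard_normal((M, kappa))

# Second-moment check: E[||Delta L^j||_U^2] = dt * Tr(Q).
trace_Q = mu.sum()
estimate = np.mean(np.sum(increments**2, axis=1))
assert abs(estimate - dt * trace_Q) / (dt * trace_Q) < 0.05
```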

As a standard example in this context, we consider the stochastic heat equation, which is used for the simulations in Sect. 4.

### Example 3.1

(Stochastic heat equation) Let $$H=L^2([0,1])$$ be the separable Hilbert space of square-integrable functions on [0, 1]. On this space we consider the operator $$A=\nu \Delta$$, where $$\nu > 0$$ and $$\Delta$$ denotes the Laplace operator with homogeneous Dirichlet boundary conditions, which is the generator of a $$C_0$$-semigroup, cf. [20, Example 2.21]. The equation

\begin{aligned} \mathrm {d}X(t) = \nu \Delta X(t) \, \mathrm {d}t + G(X(t)) \, \mathrm {d}L(t) \end{aligned}

is referred to as the (homogeneous) stochastic heat equation.

It is known (see, e.g., [20, Chapter 6]) that the eigenvalues and eigenfunctions of the operator $$-A$$ are given by

\begin{aligned} \lambda _i = \nu i^2\pi ^2, \quad e_i(y) = \sqrt{2} \sin (i \pi y), \quad i \in \mathbb {N}, y\in [0,1]. \end{aligned}

We first assume, for simplicity, that $$U=H=L^2([0,1])$$ and that the operator Q diagonalizes with respect to the eigenbasis of $$-A$$, i.e., $$f_i = e_i$$ for all $$i \in \mathbb {N}$$. For this choice, we consider the operator $$G=G_1$$ that gives rise to a geometric Brownian motion in infinite dimensions, cf. [20, Sect. 6.4]. It is defined for all $$u,v \in H$$ by

\begin{aligned} G_1(v)u = \sum _{i=1}^\infty \langle v,e_i \rangle _H \langle u,e_i \rangle _H e_i . \end{aligned}

As a second example, we let $$U=\dot{H}^1$$ with the same diagonalization assumption as before, i.e., $$f_i = \lambda ^{-1/2}_i e_i$$ for all $$i \in \mathbb {N}$$, so that $$(f_i, i \in \mathbb {N})$$ is an orthonormal basis of $$\dot{H}^1$$. Here, we let the operator $$G=G_2$$ be a Nemytskii operator which is defined pointwise for $$x \in [0,1]$$, $$u \in \dot{H}^1$$ and $$v \in H$$ by

\begin{aligned} (G_2 (v)u)[x] = v(x) u(x). \end{aligned}

To see that $$G \in L(H;L(U;H))$$ in both cases, note first that for $$u,v \in H$$, the triangle and Cauchy–Schwarz inequalities yield for $$G_1$$

\begin{aligned} \Vert G_1(v) u \Vert _{H}&\le \sum _{i=1}^\infty |\langle v,e_i \rangle _H | |\langle u,e_{i} \rangle _H | \le \left( \sum _{i=1}^\infty \langle v,e_i \rangle _H^2 \right) ^{1/2} \left( \sum _{i=1}^\infty \langle u,e_i \rangle _H ^2 \right) ^{1/2} = \Vert v \Vert _{H} \Vert u \Vert _{H}. \end{aligned}

Next, for $$G_2$$ with $$v \in H$$ and $$u \in \dot{H}^1$$, it holds that

\begin{aligned} \Vert G_2(v) u \Vert _{H}^2&= \int ^1_0 u(x)^2 v(x)^2 \, \mathrm {d}x = \int ^1_0 \left( \sum ^\infty _{i=1} \lambda _i^{1/2} \left\langle u , e_i \right\rangle _{H} \lambda _i^{-1/2} e_i (x) \right) ^2 v(x)^2 \, \mathrm {d}x \\&\le \left( \sum ^\infty _{i=1} \lambda _i | \left\langle u , e_i \right\rangle _{H} |^2 \right) \int ^1_0 \left( \sum ^\infty _{i=1} \lambda _i^{-1} e_i(x)^2 \right) v(x)^2 \, \mathrm {d}x \\&\le \Vert u \Vert _{\dot{H}^1}^{2} \left( 2 \sum ^\infty _{i=1} \lambda _i^{-1} \right) \int ^1_0 v(x)^2 \, \mathrm {d}x = \left( 2 \sum ^\infty _{i=1} \lambda _i^{-1} \right) \Vert u \Vert _{\dot{H}^1}^{2} \Vert v \Vert _{H}^{2}. \end{aligned}

Here, the first inequality is an application of the Cauchy–Schwarz inequality, while the second follows from the fact that the sequence $${(|e_i(x)|, i \in \mathbb {N})}$$ is bounded by $$\sqrt{2}$$ for all $$x \in [0,1]$$. Therefore, we obtain

\begin{aligned} \Vert G_1 \Vert _{L(H;L(H))} \le 1, \quad \Vert G_2 \Vert _{L(H;L(\dot{H}^1,H))} \le \left( 2 \sum ^\infty _{i=1} \lambda _i^{-1} \right) ^{1/2}. \end{aligned}
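For the eigenvalues of Example 3.1, $$\lambda_i = \nu i^2 \pi^2$$, the series in the bound for $$G_2$$ even evaluates in closed form via the Basel sum $$\sum_{i\ge 1} i^{-2} = \pi^2/6$$, giving $$2\sum_{i\ge1}\lambda_i^{-1} = 1/(3\nu)$$. A quick numerical confirmation (the truncation index is an arbitrary choice):

```python
import math

# Closed-form evaluation of the G_2 bound for lambda_i = nu * i^2 * pi^2:
#   2 * sum_i lambda_i^{-1} = 2/(nu*pi^2) * pi^2/6 = 1/(3*nu),
# so ||G_2|| <= 1/sqrt(3*nu).  Checked against a truncated sum.
nu = 1.0
partial = 2.0 * sum(1.0 / (nu * i**2 * math.pi**2) for i in range(1, 10**5))
assert abs(partial - 1.0 / (3.0 * nu)) < 1e-4        # tail is O(1/N)

bound_G2 = math.sqrt(partial)                        # ~ 1/sqrt(3) for nu = 1
assert abs(bound_G2 - 1.0 / math.sqrt(3.0 * nu)) < 1e-4
```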

### 3.1 Time discretization with rational approximations

Let us first recall that a rational approximation of order p of the exponential function is a rational function $$R: \mathbb {C}\rightarrow \mathbb {C}$$ satisfying that there exist constants $$C,\delta > 0$$ such that for all $$z \in \mathbb {C}$$ with $$|z| < \delta$$

\begin{aligned} |R(z) - \exp (z)| \le C |z|^{p+1}. \end{aligned}

Since R is rational there exist polynomials $$r_n$$ and $$r_d$$ such that $$R = r_d^{-1} r_n$$. We want to consider rational approximations of the semigroup S generated by the operator $$-A$$ and of its approximations $$-A_h$$ as they were considered in . With the introduced notation, the linear operator $$R(\Delta t A_h)$$ is given for all $$v_h \in V_h$$ by

\begin{aligned} R(\Delta t A_h)v_h = r_\mathrm {d}^{-1}(\Delta t A_h) r_\mathrm {n}(\Delta t A_h)v_h = \sum _{k=1} ^{N_h} \frac{r_\mathrm {n}(-\Delta t \lambda _{h,k})}{r_\mathrm {d}(-\Delta t \lambda _{h,k})} \left\langle v_h , e_{h,k} \right\rangle _{H} e_{h,k}. \end{aligned}
(3.4)
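The diagonal action in (3.4) can be illustrated in a few lines of NumPy. This is a minimal sketch, not part of the scheme itself: the eigenvalues, coefficients, and the backward Euler choice of $$r_\mathrm {n}$$, $$r_\mathrm {d}$$ below are illustrative assumptions.

```python
import numpy as np

def apply_rational(coeffs, lambdas, dt, r_n, r_d):
    # (3.4): R(dt*A_h) acts diagonally in the discrete eigenbasis (e_{h,k}),
    # scaling the k-th coefficient <v_h, e_{h,k}>_H by r_n(-dt*l_k)/r_d(-dt*l_k).
    z = -dt * np.asarray(lambdas, dtype=float)
    return (r_n(z) / r_d(z)) * np.asarray(coeffs, dtype=float)

# Illustration with the backward Euler choice R(z) = (1 - z)^{-1},
# i.e. r_n(z) = 1 and r_d(z) = 1 - z, and two hypothetical eigenvalues.
coeffs = np.array([1.0, 0.5])
lambdas = np.array([np.pi**2, 4.0 * np.pi**2])
out = apply_rational(coeffs, lambdas, dt=0.01,
                     r_n=lambda z: np.ones_like(z), r_d=lambda z: 1.0 - z)
```

Note that no matrix is ever formed: once the discrete eigenpairs are known, applying $$R(\Delta t A_h)$$ is a componentwise operation.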

To start, let us consider the mean-square stability properties of a Galerkin Euler–Maruyama method, which is given by the recursion

\begin{aligned} X_h^{j+1}&= \left( D_{\Delta t, h}^{{\text {det}}}+ D_{\Delta t, h}^{{\text {EM}},j}\right) X_h^j \end{aligned}
(3.5)

for $$j \in \mathbb {N}_0$$ with initial condition $$X_h^0 = P_h X_0$$, where

\begin{aligned} \begin{aligned} D_{\Delta t, h}^{{\text {det}}}&= R(\Delta t A_h)+ r_\mathrm {d}^{-1}(\Delta t A_h) \Delta t P_hF, \\ D_{\Delta t, h}^{{\text {stoch}},j}&= D_{\Delta t, h}^{{\text {EM}},j} = r_\mathrm {d}^{-1}(\Delta t A_h) P_h G(\cdot ) \Delta L^{j} \end{aligned} \end{aligned}
(3.6)

with $$\Delta L^{j} = L(t_{j+1}) - L(t_j)$$. Note that the linear operators $$(D_{\Delta t, h}^{{\text {stoch}},j}, j \in \mathbb {N}_0)$$ satisfy all assumptions of Corollary 2.1 since they only depend on the Lévy increments $$(\Delta L^j,j\in \mathbb {N}_0)$$. For this type of numerical approximation, the result from Corollary 2.1 can be specified:

### Proposition 3.1

The zero solution of the numerical method (3.5) is asymptotically mean-square stable if and only if

\begin{aligned} \mathscr {S}= D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ \Delta t \, (C \otimes C) q \in L(V^{(2)}_h) \end{aligned}

satisfies that $$\rho (\mathscr {S})<1$$, where $$q = \sum _{k=1}^{\infty } \mu _k f_k \otimes f_k \in U^{(2)}$$ and $$C \in L(U; L(V_h))$$ with

\begin{aligned} Cu = r_\mathrm {d}^{-1}(\Delta t A_h) P_h G(\cdot ) u. \end{aligned}

### Proof

Note that since $$V_h$$ is finite-dimensional, $$L(V_h) = L_{\text {HS}} (V_h)$$ so $$(C \otimes C)$$ is well-defined as an element of $$L(U^{(2)}, L^{(2)}_{\text {HS}} (V_h)) \subset L(U^{(2)}, L(V^{(2)}_h))$$ by [18, Lemma 3.1(ii)], which yields for $$j\in \mathbb {N}$$

\begin{aligned} {{\mathrm{\mathbb {E}}}}\Big [&D_{\Delta t, h}^{{\text {EM}},j} \otimes D_{\Delta t, h}^{{\text {EM}},j}\Big ] = {{\mathrm{\mathbb {E}}}}[ C \Delta L^{j} \otimes C \Delta L^{j}] = ( C \otimes C ) {{\mathrm{\mathbb {E}}}}[\Delta L^{j} \otimes \Delta L^{j}]. \end{aligned}

Since $${{\mathrm{\mathbb {E}}}}[\Delta L^{j} \otimes \Delta L^{j}] = \Delta t \, q$$ by Lemma 5.1, the proof follows from Corollary 2.1. $$\square$$

This still rather abstract condition can be turned into an explicit sufficient condition.

### Corollary 3.1

A sufficient condition for asymptotic mean-square stability of (3.5) is

\begin{aligned}&\Bigl (\, \max _{k = 1,\dots ,N_h} |R(-\Delta t \lambda _{h,k})| + \max _{k = 1,\dots ,N_h} |r_\mathrm {d}^{-1}(-\Delta t \lambda _{h,k})| \Delta t \Vert F \Vert _{L(H)} \Bigr )^2 \\&\quad + \max _{k = 1,\dots ,N_h} |r_\mathrm {d}^{-1}(-\Delta t \lambda _{h,k})|^2 \Delta t {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 < 1. \end{aligned}

### Proof

We first note that by the triangle inequality and the properties of the linear operator induced by the rational approximation R defined in Eq. (3.4) we obtain that

\begin{aligned} \Vert D_{\Delta t, h}^{{\text {det}}} \Vert _{L(V_h)}&= \Vert R(\Delta t A_h)+ r_\mathrm {d}^{-1}(\Delta t A_h) \Delta t P_hF \Vert _{L(V_h)} \\&\le \max _{k=1,\dots ,N_h} |R(-\Delta t \lambda _{h,k})| + \max _{k = 1,\dots ,N_h} |r_\mathrm {d}^{-1}(-\Delta t \lambda _{h,k})| \Delta t \Vert F \Vert _{L(H)} \end{aligned}

and, similarly

\begin{aligned} \Vert C \Vert _{L(U;L(V_h))} \le \max _{k=1,\dots ,N_h} |r_\mathrm {d}^{-1}(-\Delta t \lambda _{h,k})| \Vert G \Vert _{L(H;L(U;H))}. \end{aligned}

Since

\begin{aligned} \Vert (C \otimes C) q \Vert _{L(V_h^{(2)})} \le \sum _{k=1}^{\infty } \mu _k \Vert C f_k \Vert _{L(V_h)}^2 \le {{\mathrm{Tr}}}(Q) \Vert C \Vert _{L(U;L(V_h))}^2 \end{aligned}

and $$\Vert D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}} \Vert _{L(V^{(2)}_h)} = \Vert D_{\Delta t, h}^{{\text {det}}} \Vert _{L(V_h)}^2$$, we obtain the claimed condition, which is sufficient by Corollary 2.1. $$\square$$
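The left-hand side of the sufficient condition in Corollary 3.1 is straightforward to evaluate numerically. The following sketch does so for the backward Euler choice of R; the eigenvalues, trace, and operator norms are hypothetical stand-ins.

```python
import numpy as np

def em_stability_bound(dt, lambdas_h, R, rd_inv, norm_F, trace_Q, norm_G):
    # Left-hand side of the sufficient condition in Corollary 3.1; the zero
    # solution of (3.5) is asymptotically mean-square stable if this is < 1.
    z = -dt * np.asarray(lambdas_h, dtype=float)
    max_R = np.max(np.abs(R(z)))
    max_rd = np.max(np.abs(rd_inv(z)))
    return (max_R + max_rd * dt * norm_F)**2 + max_rd**2 * dt * trace_Q * norm_G**2

# Backward Euler, where R(z) = r_d^{-1}(z) = (1 - z)^{-1}, with hypothetical
# spectral eigenvalues lambda_k = pi^2 k^2 and F = 0, Tr(Q) = 1.2, ||G|| = 1.
lambdas_h = np.pi**2 * np.arange(1, 16)**2
R_be = lambda z: 1.0 / (1.0 - z)
bound = em_stability_bound(0.1, lambdas_h, R_be, R_be,
                           norm_F=0.0, trace_Q=1.2, norm_G=1.0)
```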

We continue with the higher-order Milstein scheme, which applied in our context reads

\begin{aligned} \begin{aligned} X_h^{j+1} = (D_{\Delta t, h}^{{\text {det}}}+ D_{\Delta t, h}^{{\text {EM}},j} + D_{\Delta t, h}^{{\text {M}},j})X_h^j, \end{aligned} \end{aligned}
(3.7)

where $$D_{\Delta t, h}^{{\text {det}}}$$ and $$D_{\Delta t, h}^{{\text {EM}},j}$$ are as in (3.6) and

\begin{aligned} D_{\Delta t, h}^{{\text {M}},j}&= \sum _{k,\ell = 1}^\infty r_\mathrm {d}^{-1}(\Delta t A_h) \sqrt{\mu _k \mu _\ell }P_h G(G(\cdot )f_k)f_\ell \int _{t_{j}}^{t_{j+1}} \int _{t_{j}}^s \mathrm {d}L_k(r) \, \mathrm {d}L_\ell (s). \end{aligned}

### Remark 3.1

In order to compute the iterated integrals of $$D_{\Delta t, h}^{{\text {M}},j}$$, one may assume (cf. [3, 16]) that for all H-valued, adapted stochastic processes $$\chi = (\chi (t),t\ge 0)$$ and all $$i,j\in \mathbb {N}$$, the diffusion operator G satisfies the commutativity condition

\begin{aligned} G(G(\chi )f_j ) f_i = G(G(\chi )f_i)f_j. \end{aligned}

Under this assumption, which is satisfied in Example 3.1, $$D_{\Delta t, h}^{{\text {M}},j}$$ simplifies to

\begin{aligned} D_{\Delta t, h}^{{\text {M}},j}&= \frac{1}{2}\sum _{k,\ell = 1}^\infty \sqrt{\mu _k \mu _\ell } r_\mathrm {d}^{-1}(\Delta t A_h) P_h G(G(X_h^j)f_k)f_\ell (\Delta L_{k}^{j} \Delta L_{\ell }^{j} - \Delta [L_k,L_\ell ]^j), \end{aligned}

where $$\Delta [L_k,L_\ell ]^j = [L_k,L_\ell ]_{t_{j+1}} - [L_k,L_\ell ]_{t_{j}}$$. Here, $$[L_k,L_\ell ]_{t}$$ denotes the quadratic covariation of $$L_k$$ and $$L_\ell$$ evaluated at $$t \ge 0$$, which is straightforward to compute when $$L_k, L_\ell$$ are jump-diffusion processes (cf. ). For the simulation of more general Lévy processes in the context of SPDE approximation, we refer to [7, 13].

As for the Euler–Maruyama scheme, Corollary 2.1 can be specified for this Milstein scheme.

### Proposition 3.2

Assume that the mapping $$C'(u_1,u_2) = r_\mathrm {d}^{-1}(\Delta t A_h) P_h G(G(\cdot )u_1)u_2$$ for $$u_1, u_2 \in U$$ can be uniquely extended to a mapping $$C' \in L(U^{(2)},L(V_h))$$. Then the zero solution of (3.7) is asymptotically mean-square stable if and only if

\begin{aligned} \mathscr {S}= D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}+ \Delta t \, (C \otimes C) q +\frac{\Delta t^2}{2} (C' \otimes C') q' \end{aligned}

satisfies that $$\rho (\mathscr {S}) < 1$$. Here, $$q' = \sum _{k,\ell =1}^{\infty } \mu _k \mu _\ell (f_k \otimes f_{\ell }) \otimes (f_k \otimes f_{\ell }) \in U^{(4)}$$ and C and q as in Proposition 3.1.

### Proof

Note that $$C' \otimes C' : U^{(4)} \rightarrow L(V^{(2)}_h)$$ and $$C' \otimes C : U^{(2)} \otimes U \rightarrow L(V^{(2)}_h)$$ are well-defined by the same arguments as in Proposition 3.1. Since $$D_{\Delta t, h}^{{\text {stoch}},j} = D_{\Delta t, h}^{{\text {EM}},j} + D_{\Delta t, h}^{{\text {M}},j}$$, we obtain for $$j\in \mathbb {N}_0$$

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {stoch}},j} \otimes D_{\Delta t, h}^{{\text {stoch}},j}\right]&= {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {EM}},j} \otimes D_{\Delta t, h}^{{\text {EM}},j}\right] + {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {M}},j} \otimes D_{\Delta t, h}^{{\text {EM}},j}\right] \\&\quad + {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {EM}},j} \otimes D_{\Delta t, h}^{{\text {M}},j}\right] + {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {M}},j} \otimes D_{\Delta t, h}^{{\text {M}},j}\right] . \end{aligned}

The first term and $$D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}}$$ are given in Proposition 3.1. For the second term, writing $$\Delta ^{(2)} L = \sum _{k,\ell =1}^\infty \sqrt{\mu _k \mu _\ell } \left( \int _{t_{j}}^{t_{j+1}} \int _{t_{j}}^s \mathrm {d}L_k(r) \, \mathrm {d}L_\ell (s) \right) f_k \otimes f_{\ell }$$, Lemma 5.2 yields

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {M}},j} \otimes D_{\Delta t, h}^{{\text {EM}},j}\right]&= {{\mathrm{\mathbb {E}}}}\bigl [C' \Delta ^{(2)} L^j \otimes C \Delta L^j \bigr ] = (C' \otimes C) {{\mathrm{\mathbb {E}}}}\bigl [\Delta ^{(2)} L^j \otimes \Delta L^j\bigr ] = 0 \end{aligned}

and analogously the same for the third term. Finally, Lemma 5.2 also implies

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ D_{\Delta t, h}^{{\text {M}},j} \otimes D_{\Delta t, h}^{{\text {M}},j}\right] = {{\mathrm{\mathbb {E}}}}\bigl [C' \Delta ^{(2)} L^j \otimes C' \Delta ^{(2)} L^j \bigr ] = (C' \otimes C') {{\mathrm{\mathbb {E}}}}\bigl [\Delta ^{(2)} L^j \otimes \Delta ^{(2)} L^j \bigr ] \end{aligned}

and the statement follows directly from Corollary 2.1. $$\square$$

### Remark 3.2

The assumption on $$C'$$ in Proposition 3.2 holds for the operators $$G_1$$ and $$G_2$$ in the setting of Example 3.1. One can get rid of this assumption by using that the bound on $$G \in L(H;L(U;H))$$ allows for an extension of the bilinear mapping to the projective tensor product space $$U \otimes _\pi U$$, cf. . One would then have to assume additional regularity on L to ensure that $$\Delta ^{(2)} L^j$$ in the proof of Proposition 3.2 is in the space $$L^2(\varOmega ;U \otimes _\pi U)$$. Alternatively, one considers finite-dimensional truncated noise, which leads to equivalent norms.

### 3.2 Examples of rational approximations

Let us next consider specific choices of rational approximations R and investigate their influence on mean-square stability. First, we derive sufficient conditions based on Corollary 3.1 for Euler–Maruyama schemes with standard rational approximations. More specifically, we consider the backward Euler, the Crank–Nicolson, and the forward Euler scheme.

### Theorem 3.1

Consider the approximation scheme (3.5).

1. 1.

(Backward Euler scheme) Let R be given by $$R(z) = (1-z)^{-1}$$. Then (3.5) is asymptotically mean-square stable if

\begin{aligned} \frac{(1 + \Delta t \Vert F \Vert _{L(H)})^2 + \Delta t {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2}{(1 + \Delta t \lambda _{h,1})^2} < 1. \end{aligned}
2. 2.

(Crank–Nicolson scheme) Let R be given by $$R(z) = (1 + z/2)/(1-z/2)$$. Then (3.5) is asymptotically mean-square stable if

\begin{aligned}&\left( \max _{k \in \{ 1,N_h\}} \bigg | \frac{1- \Delta t \lambda _{h,k}/2}{1 + \Delta t \lambda _{h,k}/2} \bigg | + \Delta t \frac{\Vert F\Vert _{L(H)}}{(1 + \Delta t \lambda _{h,1}/2)}\right) ^2 + \Delta t \frac{ {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2}{(1 + \Delta t \lambda _{h,1}/2)^2} < 1. \end{aligned}
3. 3.

(Forward Euler scheme) Let R be given by $$R(z) = 1+z$$. Then (3.5) is asymptotically mean-square stable if

\begin{aligned} \Bigl ( \max _{\ell \in \{1,N_h\}} |1 - \Delta t \lambda _{h,\ell }| + \Delta t \Vert F \Vert _{L(H)} \Bigr )^2 + \Delta t {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 < 1. \end{aligned}

### Proof

Let us start with the backward Euler scheme. Since the functions $$r_\mathrm {d}^{-1}(z)$$ and R(z) coincide and $$|R(-\Delta t \lambda _{h,k} )| \le |R(-\Delta t \lambda _{h,1})|$$ holds for all $$k = 1, \dots ,N_h$$, Corollary 3.1 yields asymptotic mean-square stability if

\begin{aligned} (1 + \Delta t \lambda _{h,1})^{-2} \left( (1 + \Delta t \Vert F \Vert _{L(H)})^2 + \Delta t {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 \right) < 1. \end{aligned}

For the Crank–Nicolson scheme, note that the function R is decreasing on $$\mathbb {R}^-$$ and that $$R(z) \in [-1,1]$$ for all $$z\in \mathbb {R}^-$$. Thus, the maximizing eigenvalue is either the largest, $$\lambda _{h,N_h}$$, or the smallest, $$\lambda _{h,1}$$, and therefore,

\begin{aligned} | R(- \Delta t \lambda _{h,k})| \le \max _{\ell \in \{1,N_h\}} | R (- \Delta t \lambda _{h,\ell })|. \end{aligned}

Since $$|r_\mathrm {d}^{-1}(- \Delta t \lambda _{h,k})| \le |r_\mathrm {d}^{-1}(- \Delta t \lambda _{h,1})|$$ for all $$k = 1,\dots ,N_h$$, the claim follows with Corollary 3.1.

By the same arguments, we obtain for the forward Euler scheme that $$|R(-\Delta t \lambda _{h,i})|$$ is maximized either at $$z=-\Delta t\lambda _{h,1}$$ or $$z=-\Delta t\lambda _{h,N_h}$$. Therefore, since $$r_\mathrm {d}^{-1} (z) = 1$$, the claim follows again with Corollary 3.1, which finishes the proof. $$\square$$
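The three sufficient conditions of Theorem 3.1 can be coded directly, which is convenient for locating stability regions in $$\Delta t$$. In this sketch the eigenvalues $$\lambda _{h,1} = \pi ^2$$, $$\lambda _{h,N_h} = 225\pi ^2$$ and the values $$F = 0$$, $${{\mathrm{Tr}}}(Q) \approx 1.2021$$, $$\Vert G \Vert = 1$$ are illustrative assumptions.

```python
import numpy as np

def be_lhs(dt, lam1, nF, trQ, nG):
    # Theorem 3.1(1), backward Euler: stable if the returned value is < 1
    return ((1 + dt*nF)**2 + dt*trQ*nG**2) / (1 + dt*lam1)**2

def cn_lhs(dt, lam1, lamN, nF, trQ, nG):
    # Theorem 3.1(2), Crank-Nicolson: the max is attained at the smallest
    # or the largest eigenvalue, as in the proof above
    r = max(abs((1 - dt*l/2) / (1 + dt*l/2)) for l in (lam1, lamN))
    return (r + dt*nF/(1 + dt*lam1/2))**2 + dt*trQ*nG**2/(1 + dt*lam1/2)**2

def fe_lhs(dt, lam1, lamN, nF, trQ, nG):
    # Theorem 3.1(3), forward Euler
    r = max(abs(1 - dt*lam1), abs(1 - dt*lamN))
    return (r + dt*nF)**2 + dt*trQ*nG**2

lam1, lamN = np.pi**2, 225.0 * np.pi**2
args = (0.0, 1.2021, 1.0)   # hypothetical ||F||, Tr(Q), ||G||
```

In this setting the backward Euler condition holds for arbitrarily large $$\Delta t$$, while the forward Euler condition forces $$\Delta t$$ to be small, in line with the discussion of Fig. 1a below.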

For the Milstein scheme, Proposition 3.2 yields the following sufficient condition.

### Proposition 3.3

Under the assumptions of Proposition 3.2, the Milstein scheme (3.7) with $$R(z) = (1-z)^{-1}$$ is asymptotically mean-square stable if

\begin{aligned}&(1 + \Delta t \Vert F \Vert _{L(H)})^2 + \Delta t {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 + \frac{\Delta t^2}{2} {{\mathrm{Tr}}}(Q)^2 \Vert G \Vert _{L(H;L(U;H))}^4 \\&\quad < (1+\Delta t \lambda _{h,1})^2. \end{aligned}

### Proof

In the same way as in the proof of Corollary 3.1, we bound

\begin{aligned} \Vert (C' \otimes C') q' \Vert _{L(V^{(2)}_h)}&\le \Vert C' \Vert ^2_{L(U^{(2)};L(V_h))} {{\mathrm{Tr}}}(Q)^2 \\&\le (1+\Delta t \lambda _{h,1})^{-2} \Vert G \Vert _{L(H;L(U;H))}^4 {{\mathrm{Tr}}}(Q)^2. \end{aligned}

Hence, our assumption ensures that $$\Vert \mathscr {S}\Vert _{L(V_h^{(2)})} <1$$, which by Corollary 2.1 is sufficient for asymptotic mean-square stability. $$\square$$

Note that the sufficient condition for the Milstein scheme is more restrictive than the sufficient condition presented in Theorem 3.1(1) for the backward Euler–Maruyama method due to the additional positive term in the estimate in Proposition 3.3.

### 3.3 Relation to the mild solution

To connect existing results on asymptotic mean-square stability of (1.1) to the results for discrete schemes in Sect. 3.2, we have to restrict ourselves to Q-Wiener processes $$W = (W(t),t\ge 0)$$ due to the framework for analytical solutions in . Specifically, we consider

\begin{aligned} \mathrm {d}X(t) = (AX(t) + F X(t)) \, \mathrm {d}t + G(X(t)) \, \mathrm {d}W(t). \end{aligned}
(3.8)

The following special case of [24, Proposition 3.1.1] provides a sufficient condition for the asymptotic mean-square stability of (1.1) by a Lyapunov functional approach.

### Theorem 3.2

Assume that $$X_0=x_0 \in \dot{H}^1$$ is deterministic and there exists $$c>0$$ such that

\begin{aligned} 2\langle v,A v + F(v)\rangle _H + {{\mathrm{Tr}}}[G(v)Q(G(v))^* ] \le -c\Vert v\Vert ^2_H \end{aligned}

for all $$v\in \dot{H}^2$$. Then the zero solution of (3.8) is asymptotically mean-square stable.

We use this theorem to derive simultaneous sufficient mean-square stability conditions for (3.8) and the corresponding backward Euler scheme (3.5).

### Corollary 3.2

Assume that $$X_0=x_0 \in \dot{H}^1$$ is deterministic. Then the zero solutions of (3.8) and its approximation (3.5) with $$R(z) = (1-z)^{-1}$$ are asymptotically mean-square stable for all h and $$\Delta t$$ if

\begin{aligned} 2\left( \Vert F \Vert _{L(H)}- \lambda _1 \right) + {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 < 0. \end{aligned}
(3.9)

### Proof

We show first that (3.9) yields asymptotic mean-square stability of (3.8) by reducing it to the assumption in Theorem 3.2. For the second term there, note that for any $$v \in \dot{H}^2$$,

\begin{aligned}&{{\mathrm{Tr}}}[G(v)Q(G(v))^*] \\&\quad ={{\mathrm{Tr}}}[(G(v))^*G(v)Q] = \sum _{k=1}^\infty \langle G(v)Q f_k , G(v)f_k \rangle \\&\quad \le \sum _{k=1}^\infty \mu _k \Vert G\Vert _{L(H;L(U;H))}^2 \Vert v\Vert ^2_H \Vert f_k \Vert _U^2 = {{\mathrm{Tr}}}(Q) \Vert G\Vert _{L(H;L(U;H))}^2 \Vert v\Vert _H^2, \end{aligned}

where the first equality follows from the properties of the trace. The first term satisfies

\begin{aligned}&\langle v,A v + F(v)\rangle = \langle v,F(v)\rangle + \langle v,A v\rangle \le \Vert F \Vert _{L(H)} \Vert v\Vert ^2_H - \Vert v\Vert ^2_1 \\&\quad \le (\Vert F \Vert _{L(H)} -\lambda _1) \Vert v\Vert _H^2 \end{aligned}

using the definition of $$\Vert \cdot \Vert _1$$. Altogether, we therefore obtain

\begin{aligned}&2\langle v,A v + F(v)\rangle _H + {{\mathrm{Tr}}}[G(v)Q(G(v))^*] \\&\quad \le \bigl ( 2 \left( \Vert F \Vert _{L(H)} - \lambda _1 \right) + {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 \bigr ) \Vert v \Vert _{H}^2, \end{aligned}

i.e., with (3.9) asymptotic mean-square stability of (3.8) by setting

\begin{aligned} c = - \bigl ( 2 \left( \Vert F \Vert _{L(H)} - \lambda _1 \right) + {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 \bigr ). \end{aligned}

We continue with (3.5) observing first that $$\lambda _{h,1} \ge \lambda _1$$ by (3.2). Therefore, the condition in Theorem 3.1(1) yields

\begin{aligned} \Delta t \bigl ( 2 \left( \Vert F \Vert _{L(H)} - \lambda _1 \right) + {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 \bigr ) + {\Delta t}^2 \bigl ( \Vert F \Vert _{L(H)}^2 - \lambda _1^2 \bigr ) < 0. \end{aligned}

This is satisfied and finishes the proof since the first term is negative by assumption and so is the second using (3.9) and

\begin{aligned}&\Vert F \Vert _{L(H)}^2 - \lambda _1^2\\&\quad = \bigl ( \Vert F \Vert _{L(H)} + \lambda _1 \bigr ) \bigl ( \Vert F \Vert _{L(H)} - \lambda _1 \bigr ) \\&\quad \le \left( \Vert F \Vert _{L(H)} + \lambda _1 \right) \bigl ( \left( \Vert F \Vert _{L(H)}- \lambda _1 \right) + 2^{-1} {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 \bigr ) < 0. \end{aligned}

$$\square$$
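The interplay established in Corollary 3.2 can be checked numerically: whenever (3.9) holds, the rearranged backward Euler condition from Theorem 3.1(1) is negative for every $$\Delta t$$ and every $$\lambda _{h,1} \ge \lambda _1$$. The concrete parameter values below are hypothetical.

```python
import numpy as np

def lhs_39(nF, lam1, trQ, nG):
    # Left-hand side of (3.9); a negative value implies asymptotic
    # mean-square stability of both (3.8) and the backward Euler scheme (3.5)
    return 2.0 * (nF - lam1) + trQ * nG**2

def be_gap(dt, lam_h1, nF, trQ, nG):
    # Theorem 3.1(1) rearranged: the backward Euler sufficient condition
    # holds if and only if this gap is negative
    return (1.0 + dt*nF)**2 + dt*trQ*nG**2 - (1.0 + dt*lam_h1)**2

# Hypothetical parameters: ||F|| = 0.5, lambda_1 = pi^2, Tr(Q) = ||G|| = 1.
nF, lam1, trQ, nG = 0.5, np.pi**2, 1.0, 1.0
stable_39 = lhs_39(nF, lam1, trQ, nG) < 0
gaps = [be_gap(dt, lh1, nF, trQ, nG)
        for dt in (1e-3, 1e-1, 1.0, 10.0, 100.0)
        for lh1 in (lam1, 2.0 * lam1, 100.0)]
```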

Note that under (3.9) in Corollary 3.2, the backward Euler–Maruyama scheme preserves the qualitative behaviour of the analytical solution without any restriction on h and $$\Delta t$$. Hence, it can be applied within numerical methods that require different refinement parameters in parallel, such as multilevel Monte Carlo, which efficiently approximates quantities $$\mathbb {E}[\varphi (X(T))]$$ (see, e.g., [4, 6] for details). Here, it is essential that the behaviour is preserved on all refinement levels .

On the other hand, note that in the homogeneous case, i.e., $$F=0$$, the stability condition in Theorem 3.1(1) reduces to

\begin{aligned} {{\mathrm{Tr}}}(Q) \Vert G \Vert _{L(H;L(U;H))}^2 < \lambda _{h,1} (2 + \Delta t \lambda _{h,1}) \end{aligned}

so that even if (1.1) is asymptotically mean-square unstable, its approximation (3.5) can always be rendered stable by choosing $$\Delta t$$ large enough. In that case, the analytical solution and its approximation exhibit different qualitative behaviour for large times.

### Remark 3.3

Based on Theorem 3.1, it is also possible to examine the relation between asymptotic mean-square stability of (3.8) and its approximation by the other rational approximations. However, due to the nature of the sufficient conditions in Theorem 3.1, analogous results to Corollary 3.2 include restrictions on h and $$\Delta t$$.

For the Milstein scheme considered in Proposition 3.3, we can also derive a sufficient condition for simultaneous mean-square stability that does not depend on h and $$\Delta t$$. However, due to the additional term in Proposition 3.3, the condition is slightly more restrictive than that in Corollary 3.2. More precisely, we obtain the following:

### Corollary 3.3

Assume that $$X_0=x_0 \in \dot{H}^1$$ is deterministic and $$F = 0$$. Then the zero solutions of (3.8) and its Milstein approximation (3.7) with $$R(z) = (1-z)^{-1}$$ are asymptotically mean-square stable for all h and $$\Delta t$$ if

\begin{aligned} -\sqrt{2} \lambda _1 + {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))}< 0. \end{aligned}

### Proof

The asymptotic mean-square stability of (3.8) follows by Corollary 3.2 since

\begin{aligned} -2 \lambda _1 +{\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))}< - \sqrt{2} \lambda _1 + {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))}< 0. \end{aligned}

The sufficient condition for (3.7) in Proposition 3.3 can be rewritten as

\begin{aligned} \Delta t ( - 2 \lambda _{h,1} + {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))} ) + \Delta t^2( -2 \lambda _{h,1}^2 + {\text {Tr}}(Q)^2\Vert G \Vert ^4_{L(H;L(U;H))}) < 0. \end{aligned}

The first summand is negative since

\begin{aligned} - 2 \lambda _{h,1} + {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))}< -\sqrt{2}\lambda _1 + {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))} < 0. \end{aligned}

The assumption $$\sqrt{2} \lambda _1 > {\text {Tr}}(Q) \Vert G \Vert ^2_{L(H;L(U;H))}$$ implies for the second summand that

\begin{aligned} -2 \lambda _{h,1}^2 + {\text {Tr}}(Q)^2\Vert G \Vert ^4_{L(H;L(U;H))} < 0. \end{aligned}

Thus, asymptotic mean-square stability of (3.7) follows. $$\square$$

## 4 Simulations

In this section we adopt the setting of Example 3.1 and use numerical simulations to illustrate our theoretical results. More specifically, we consider the stochastic heat equation

\begin{aligned} \mathrm {d}X(t) = \nu \Delta X(t) \, \mathrm {d}t + G(X(t)) \, \mathrm {d}W(t). \end{aligned}
(4.1)

with $$X_0(x) = \sqrt{30} x(1-x)$$, so that $${{\mathrm{\mathbb {E}}}}[\Vert X_0 \Vert _H^2] = 1$$. We consider a Q-Wiener process $$W(t) = \sum _{i=1}^\infty \sqrt{\mu _i} \beta _i(t) e_i$$, where $$(\beta _i,i\in \mathbb {N})$$ is a sequence of independent, real-valued Brownian motions, and assume $$\mu _i = C_\mu i^{-\alpha }$$ with $$C_\mu > 0$$ and $$\alpha > 1$$. Here, $$C_\mu$$ scales the noise intensity and $$\alpha$$ controls the space regularity of W, see, e.g., [23, 25].

### 4.1 Spectral Galerkin methods

For $$G=G_1$$ in Example 3.1, we obtain with the approach presented in [20, Sect. 6.4] the infinite-dimensional counterpart of the geometric Brownian motion

\begin{aligned} X(t) = \sum _{i=1}^\infty \langle X(t),e_i \rangle _H e_i = \sum _{i=1}^\infty x_i(t) e_i, \end{aligned}

where each of the coefficients $$x_i(t)$$ is the solution to the one-dimensional geometric Brownian motion

\begin{aligned} \mathrm {d}x_i(t) = -\lambda _i x_i(t) \, \mathrm {d}t + \sqrt{\mu _i} x_i(t) \, \mathrm {d}\beta _i(t). \end{aligned}

Furthermore, the second moment is given by

\begin{aligned} {{\mathrm{\mathbb {E}}}}\left[ \Vert X(T) \Vert _H^2 \right] = \sum _{i=1}^\infty {{\mathrm{\mathbb {E}}}}\left[ |x_i(T)|^2\right] = \sum _{i=1}^\infty \langle X_0,e_i \rangle _H^2\exp ((-2\lambda _i + \mu _i)T). \end{aligned}

As a consequence, the asymptotic mean-square stability of (4.1) holds if and only if $$-2\lambda _i + \mu _i < 0$$ for all $$i\in \mathbb {N}$$. By using the explicit representation of the eigenvalues $$\lambda _i$$ and $$\mu _i$$, this corresponds to $$-2 \nu i^2\pi ^2 + C_\mu i^{-\alpha } < 0$$ which is equivalent to the condition $$-2\lambda _1 + \mu _1 = -2\nu \pi ^2 + C_\mu < 0$$, i.e., (4.1) is asymptotically mean-square unstable if and only if $$C_\mu > 2\nu \pi ^2$$.
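The closed-form second moment makes this dichotomy easy to verify numerically. A minimal sketch, in which the truncation at 20 modes and the equally weighted initial coefficients are illustrative assumptions:

```python
import numpy as np

def second_moment(T, nu, C_mu, alpha, x0_coeffs):
    # Closed form E||X(T)||_H^2 = sum_i <X_0,e_i>_H^2 exp((-2*lambda_i + mu_i)*T)
    # with lambda_i = nu*pi^2*i^2 and mu_i = C_mu * i^(-alpha).
    i = np.arange(1, len(x0_coeffs) + 1, dtype=float)
    exponent = (-2.0 * nu * np.pi**2 * i**2 + C_mu * i**(-alpha)) * T
    return float(np.sum(np.asarray(x0_coeffs)**2 * np.exp(exponent)))

coeffs = np.ones(20) / np.sqrt(20)                      # hypothetical X_0, E||X_0||^2 = 1
stable = second_moment(5.0, 1.0, 1.0, 3.0, coeffs)      # C_mu = 1  < 2*nu*pi^2: decay
unstable = second_moment(5.0, 1.0, 30.0, 3.0, coeffs)   # C_mu = 30 > 2*nu*pi^2: growth
```

Since only the first mode can become unstable, the sign of $$-2\nu \pi ^2 + C_\mu$$ alone decides between decay and growth of the second moment.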

For the spectral Galerkin approximation, we choose $$V_h = {\text {span}}(e_1,\dots ,e_{N_h})$$, $${N_h < \infty }$$, and thus consider $$X_h(t) = \sum _{k=1}^{N_h} x_k(t)e_k$$. To obtain a fully discrete scheme, we approximate the one-dimensional geometric Brownian motions in time by the three rational approximations considered in Theorem 3.1 and Proposition 3.3. Propositions 3.1 and 3.2 yield asymptotic mean-square stability if and only if the corresponding linear operators $$\mathscr {S}$$ satisfy $$\rho (\mathscr {S}) <1$$. In the first case, the operator is given for $$k,\ell = 1,\dots ,N_h$$ by

\begin{aligned} \mathscr {S}(e_k \otimes e_\ell )&= (D_{\Delta t, h}^{{\text {det}}}\otimes D_{\Delta t, h}^{{\text {det}}})(e_k \otimes e_\ell ) + \Delta t \bigl ( (C\otimes C)q \bigr )(e_k \otimes e_\ell ) \\&= (D_{\Delta t, h}^{{\text {det}}}e_k \otimes D_{\Delta t, h}^{{\text {det}}}e_\ell ) + \Delta t \sum _{m=1}^\infty \mu _m \bigl (((Ce_m)e_k) \otimes ((Ce_m)e_\ell )\bigr ). \end{aligned}

Since

\begin{aligned} D_{\Delta t, h}^{{\text {det}}}e_k = R(\Delta t A_h)e_k = \sum _{r=1}^{N_h}R(-\Delta t \lambda _r)\langle e_k , e_r \rangle _H e_r = R(-\Delta t \lambda _k)e_k \end{aligned}

and

\begin{aligned} (Ce_m)e_k&= r_\mathrm {d}^{-1}(\Delta t A_h) P_h G_1(e_k)e_m = r_\mathrm {d}^{-1}(\Delta t A_h) P_h\left( \sum _{n=1}^\infty \langle e_k,e_n\rangle _H \langle e_m,e_n\rangle _H e_n\right) \\&= \delta _{k, m} r_\mathrm {d}^{-1}(\Delta t A_h) e_k = \delta _{k, m} r_\mathrm {d}^{-1}(-\Delta t \lambda _k)e_k, \end{aligned}

the corresponding eigenvalues $$\varLambda _{k,\ell }$$ are

\begin{aligned} \varLambda _{k,\ell } = R(-\Delta t \lambda _k) R(-\Delta t \lambda _\ell ) + \delta _{k, \ell }\,\Delta t \mu _k \, r_\mathrm {d}^{-1}(-\Delta t \lambda _k)r_\mathrm {d}^{-1}(-\Delta t \lambda _\ell ). \end{aligned}

Using a Milstein scheme instead, similar computations as before, combined with the observations that the commutativity condition in Remark 3.1 is fulfilled and that $$\Delta [\beta _k,\beta _\ell ]^j =\delta _{k, \ell } \Delta t$$, yield for $$\mathscr {S}$$ in Proposition 3.2

\begin{aligned} \varLambda _{k,\ell } = R(-\Delta t \lambda _k) R(-\Delta t \lambda _\ell ) + \delta _{k, \ell }\,r_\mathrm {d}^{-1}(-\Delta t \lambda _k)r_\mathrm {d}^{-1}(-\Delta t \lambda _\ell )\left( \Delta t \mu _k + \Delta t^2\mu _k^2/2\right) . \end{aligned}

Note that for both operators $$\mathscr {S}$$, the eigenvalues $$\varLambda _{k,\ell }$$ with $$k\ne \ell$$ satisfy

\begin{aligned} |\varLambda _{k,\ell }| = |R(-\Delta t \lambda _k) R(-\Delta t \lambda _\ell )| \le R(-\Delta t \lambda _s)^2 \le \varLambda _{s,s}, \end{aligned}

where $$|R(-\Delta t \lambda _s)| = \max _{j = 1,\dots ,N_h}|R(-\Delta t \lambda _j)|$$. Hence, $$\rho (\mathscr {S}) < 1$$ is equivalent to $$|\varLambda _{k,k}| <1$$ for all $$k=1,\dots ,N_h$$. In Table 1 the eigenvalues $$\varLambda _{k,k}$$ and sufficient and necessary conditions for asymptotic mean-square stability are collected.

As noted above, (4.1) is asymptotically mean-square stable if and only if the condition $$-2 \lambda _1 + \mu _1 < 0$$ holds. As can be seen from Table 1 and the choice of the eigenvalues, the Euler–Maruyama scheme (3.5) with backward Euler and Crank–Nicolson rational approximations shares this property without any restriction on $$V_h$$ and $$\Delta t$$. In Fig. 1a the qualitative behaviour of the Euler–Maruyama method with the three rational approximations in Theorem 3.1 is compared. We choose the parameters $$\nu =1$$, $$N_h = 15$$, and $$\mu _i = i^{-3}$$ for $$i \in \mathbb {N}$$, i.e., $$C_\mu = 1$$ and $$\alpha = 3$$. Since $$-2\lambda _1 + C_\mu = -2\pi ^2 + 1 < 0$$, the analytical solution to (4.1) is asymptotically mean-square stable.

For the approximation of $$\mathbb {E}[\Vert X_h^j\Vert _H^2]$$ we use $$M = 10^6$$ samples in a Monte Carlo simulation, i.e., we approximate

\begin{aligned} \mathbb {E}\left[ \Vert X_h^j\Vert _H^2\right] \approx \text {MS}_X(t_j) = \frac{1}{M} \sum _{i=1 }^M \sum _{k=1}^{N_h} |\widehat{x}_k^{j,(i)}|^2, \end{aligned}

where $$(\widehat{x}_k^{j,(i)},i=1,\dots ,M)$$ consists of independent samples of numerical approximations of $$x_k(t_j)$$ with different schemes. The reference solution is

\begin{aligned} \mathbb {E}\left[ \Vert X_h(t) \Vert _H^2\right] = \sum _{k=1}^{N_h} \mathbb {E}\left[ |x_k(t)|^2\right] = \sum _{k=1}^{N_h} \langle X_0,e_k \rangle _H^2 \exp \left( (-2\lambda _k + \mu _k)t \right) . \end{aligned}

As can be seen in Fig. 1a, the backward Euler and the Crank–Nicolson scheme reproduce the mean-square stability of (4.1) already for large time step sizes ($$\Delta t = 1/25$$), whereas the forward Euler scheme requires a time step size 44 times smaller. Here, the finest time step size is $$\Delta t = 1/1100$$, which satisfies the restrictive bound in Table 1 such that $$\rho (\mathscr {S}) < 1$$. For coarser time step sizes outside the stability region (i.e., $$\Delta t = 1/1000$$ and $$1/1050$$), a rapid amplification of oscillations caused by negative values of $$X_h^j$$ makes the mean-square process deviate quickly from the reference solution after a certain time point.

In Fig. 1b the qualitative behaviour of the Euler–Maruyama and Milstein schemes with a backward Euler rational approximation on the time interval [0, 5] is compared. The parameters $$\nu = 8/(5\pi ^4)$$ and $$\mu _i = 3/10\, i^{-3}$$ are chosen such that the Milstein scheme is asymptotically mean-square unstable for $$\Delta t = 1.25$$ and asymptotically mean-square stable for $$\Delta t = 0.25$$ while the Euler–Maruyama scheme is asymptotically mean-square stable for both choices. These theoretical results are reproduced in the simulation.

### 4.2 Galerkin finite element methods

Let us continue with $$G = G_2$$ in Example 3.1 and a Galerkin finite element setting, similar to that of . That is, we let $$V_h$$ be the span of piecewise linear functions on an equidistant grid of [0, 1] with $$N_h$$ interior nodes, so that $$V_h$$ is an $$N_h$$-dimensional subspace of $$\dot{H}^1$$ with refinement parameter $$h=(N_h+1)^{-1}$$. With the exception that $$U=\dot{H}^1$$, all other parameters are as in Fig. 1a of Sect. 4.1.

In contrast to the setting in Sect. 4.1, the solution and its approximation are no longer sums of one-dimensional geometric Brownian motions and thus, analytical necessary and sufficient conditions for $$\rho (\mathscr {S}) < 1$$ are not available. We therefore consider the results of Theorem 3.1 instead. With the setting of this section,

\begin{aligned} \lambda _{h,i} = 12 \nu h^{-2} \left( 2 + \cos (i \pi h) \right) ^{-1} \left( \sin (i \pi h / 2) \right) ^2 \end{aligned}

for $$i \in \mathbb {N}$$, which was derived in [20, Sect. 6.1]. For the convenience of the reader, the sufficient conditions of Theorem 3.1 for the considered approximation schemes are collected in simplified form in Table 2, expressed in terms of stability parameters $$\rho _{\text {BE}}$$, $$\rho _{\text {CN}}$$ and $$\rho _{\text {FE}}$$. By setting $${\hat{g}} = \Bigl ( 2 \sum ^\infty _{i=1} \lambda _i^{-1} \Bigr )^{1/2}$$, we replace $$\Vert G_2 \Vert _{L(H;L(U;H))}$$ in these conditions with the upper bound derived in Example 3.1. Note that Corollary 3.2 with these parameters implies simultaneous asymptotic mean-square stability of (4.1) and the finite element backward Euler scheme (3.5).
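The discrete eigenvalues can be coded directly from the formula above. As a numerical illustration, the sketch below checks two properties used in this section: convergence to the Dirichlet Laplacian eigenvalues $$\nu \pi ^2 i^2$$ as $$h \rightarrow 0$$ and the bound $$\lambda _{h,i} \ge \lambda _i$$ invoked via (3.2); the grid sizes are arbitrary choices.

```python
import numpy as np

def lambda_fem(i, h, nu):
    # lambda_{h,i} = 12*nu*h^{-2} * (2 + cos(i*pi*h))^{-1} * sin(i*pi*h/2)^2
    return (12.0 * nu / h**2
            * np.sin(i * np.pi * h / 2.0)**2 / (2.0 + np.cos(i * np.pi * h)))

# Consistency: lambda_{h,1} -> nu*pi^2 as h -> 0, and for fixed h the
# discrete eigenvalues lie above the continuous ones (cf. (3.2)).
err = abs(lambda_fem(1, 1e-4, 1.0) - np.pi**2)
above = all(lambda_fem(i, 1.0 / 64, 1.0) >= np.pi**2 * i**2
            for i in range(1, 11))
```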

As in Sect. 4.1 we compare the mean-square behaviour of the backward Euler and the forward Euler scheme in Fig. 2a, but now for the finite element discretization up to $$T=2.5$$. We observe that increasing the time step size by a very small amount, i.e., from $$\Delta t = 0.00066$$ to $$\Delta t = 0.00067$$, causes the forward Euler system to switch from stable to unstable behaviour. This agrees with the theory in Table 3, as $$\rho _{\text {FE}}$$ changes sign in that interval, i.e., stability is only guaranteed for the smaller time step. We therefore conclude that the sufficient condition is sharp for our model problem.
For the approximation of $$\mathbb {E}[\Vert X_h^j\Vert _H^2]$$, we use the same method as before but take $$M=10^4$$ samples in the Monte Carlo approximation. For the computation of the norm in H, we use the fact that, for a given representation $$X_h^j = \sum _{m=1}^{N_h} x_m \phi _m$$ with respect to the hat functions $$\{\phi _m, m=1,\ldots ,N_h\}$$ spanning $$V_h$$,
\begin{aligned} \Vert X_h^j\Vert _H^2 = \sum _{m=1}^{N_h} \sum _{n=1}^{N_h} x_m x_n \left\langle \phi _m , \phi _n \right\rangle _{H} . \end{aligned}
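For the equidistant hat-function basis, the inner products $$\left\langle \phi _m , \phi _n \right\rangle _{H}$$ form the standard tridiagonal mass matrix, so the norm above is a quadratic form. A minimal sketch, where the choice $$N_h = 15$$ is an illustrative assumption:

```python
import numpy as np

def hat_mass_matrix(N_h):
    # M_{mn} = <phi_m, phi_n>_H for hat functions on an equidistant grid of
    # [0,1] with N_h interior nodes (h = 1/(N_h+1)): 2h/3 on the diagonal,
    # h/6 on the first off-diagonals.
    h = 1.0 / (N_h + 1)
    return (np.diag(np.full(N_h, 2.0 * h / 3.0))
            + np.diag(np.full(N_h - 1, h / 6.0), 1)
            + np.diag(np.full(N_h - 1, h / 6.0), -1))

def h_norm_sq(x, M):
    # ||X_h^j||_H^2 = sum_{m,n} x_m x_n <phi_m, phi_n>_H
    return float(x @ M @ x)

M = hat_mass_matrix(15)            # h = 1/16
e1 = np.eye(15)[0]
single = h_norm_sq(e1, M)          # ||phi_1||_H^2 = 2h/3
```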
In Fig. 2b the mean-square behaviour of the backward Euler scheme and the Crank–Nicolson scheme for $$\Delta t = 0.015$$ and $$\Delta t = 0.15$$ is compared. We see from Table 3 that $$\rho _{\text {CN}}$$ changes sign when the time step size is increased, which occurs for significantly larger time steps than for the forward Euler scheme. The simulation results show a substantial change in the decay behaviour of $${{\mathrm{\mathbb {E}}}}[ \Vert X_h^j \Vert _H^2]$$ for the Crank–Nicolson scheme with time step size $$\Delta t = 0.15$$ compared to $$\Delta t = 0.015$$, which no longer appears to be mean-square stable. Since the sufficient condition $$\rho _{\text {CN}} < 0$$ from Table 2 is not fulfilled for $$\Delta t = 0.15$$, the theory does not determine whether asymptotic mean-square stability holds in that case.