1 Introduction

The random periodic solution is a relatively new concept that characterizes random periodicity in the long-run behaviour of certain stochastic systems. On its first appearance in [22], the authors gave the definition of random periodic solutions of random dynamical systems and showed the existence of such solutions for a \(C^1\) perfect cocycle on a cylinder. This was followed by another seminal paper [9], where the authors not only defined random periodic solutions for semiflows but also provided a general framework for their existence. Namely, instead of following the traditional geometric method of establishing the Poincaré mapping, a new analytical method based on coupled infinite-horizon forward-backward integral equations was introduced. This pioneering study has spurred a series of works, including the existence of random periodic solutions to stochastic partial differential equations (SPDEs) [4], the existence of anticipating random periodic solutions [6, 7], periodic measures [8], etc.

Let us recall the definition of the random periodic solution for stochastic semiflows given in [9]. Let H be a separable Banach space. Denote by \((\Omega ,{{\mathcal {F}}},{\mathbb {P}},(\theta _s)_{s\in {\mathbb {R}}})\) a metric dynamical system, where \(\theta _s:\Omega \rightarrow \Omega \) is assumed to be measurably invertible for all \(s\in {\mathbb {R}}\). Denote \(\Delta :=\{(t,s)\in {\mathbb {R}}^2, s\le t\}\). Consider a stochastic semiflow \(u: \Delta \times \Omega \times H\rightarrow H\), which satisfies the following standard condition:

$$\begin{aligned} u(t,r,\omega )=u(t,s,\omega )\circ u(s,r,\omega ),\quad \text {for all } r\le s\le t,\ r,s,t\in {\mathbb {R}},\ \text {for a.e. } \omega \in \Omega . \end{aligned}$$
(1)

We do not assume the map \(u(t,s,\omega ): H\rightarrow H\) to be invertible for \((t,s)\in \Delta ,\ \omega \in \Omega \).

Definition 1.1

A random periodic path of period \(\tau >0\) of the semiflow \(u: \Delta \times \Omega \times H\rightarrow H\) is an \({{\mathcal {F}}}\)-measurable map \(y:{\mathbb {R}}\times \Omega \rightarrow H\) such that for a.e. \(\omega \in \Omega \)

$$\begin{aligned} \left\{ \begin{array}{l}u(t,s, y(s,\omega ), \omega )=y(t,\omega ),\ \ \forall t\ge s,\\ y(s+\tau ,\omega )=y(s, \theta _\tau \omega ),\ \ \forall s\in {\mathbb {R}}. \end{array} \right. \end{aligned}$$
(2)

Note that Definition 1.1 covers both deterministic periodic paths and random fixed points (cf. [1]), also known as stationary points, as special cases. To see the latter, one may assume (2) holds for every \(\tau >0\) and define \({\hat{y}}(\theta _t\omega )=y(0,\theta _t\omega )\) for \(t>0\); one can then conclude from (2) that \(u(t,0, {\hat{y}}(\omega ), \omega )={\hat{y}}(\theta _t\omega )\), which coincides with the definition of a random fixed point (also termed a stationary solution) given in [1]. A well-known example of a stationary solution is given by \(Y(\omega )=\int _{-\infty }^{0}e^s \textrm{d}W(s)\) for the one-dimensional random dynamical system \(\phi (t,\omega )x=xe^{-t}+\int _0^te^{-(t-s)}\textrm{d}W(s,\omega )\) generated by the following Ornstein–Uhlenbeck process:

$$\begin{aligned} \text {d}y(t)=-y(t)\textrm{d}t+\textrm{d}W(t), \ \ y(0)=x\in {\mathbb {R}}, \ \ t>0, \end{aligned}$$
(3)

where \(W: (t,\omega ) \mapsto W(t,\omega )\) is a one-dimensional two-sided Wiener process on \((\Omega , \mathcal {F}, {\mathbb {P}})\); as a convention, \(\omega \) is usually suppressed in the notation \(W(s,\omega )\). One can verify that \(\phi (t,\omega )Y(\omega )=Y(\theta _t \omega )\). If, in addition, we add a periodic drift term to Eq. (3) so that it reads

$$\begin{aligned} \text {d} y(t)=(-y(t)+\sin (t))\textrm{d}t+\textrm{d}W(t), \ \ y(s)=x\in {\mathbb {R}}, \ \ t>0, \end{aligned}$$
(4)

then it is not hard to see that the semiflow for (4) is given by \(\varphi (t,s,x,\omega ):=xe^{-(t-s)}+\int _s^te^{-(t-r)}\sin (r)\textrm{d}r+\int _s^te^{-(t-r)}\textrm{d}W(r)\). Now define \(Y(t,\omega )=\int _{-\infty }^t e^{-(t-s)}\sin (s)\textrm{d}s+\int _{-\infty }^t e^{-(t-s)}\textrm{d}W(s)\). One can verify that \(Y(t,\omega )=\varphi (t,s,Y(s,\omega ),\omega )\) and \(Y(t+2\pi ,\omega )=Y(t,\theta _{2\pi }\omega )\). Indeed,

$$\begin{aligned} Y(t+2\pi ,\omega )&=\int _{-\infty }^{t+2\pi } e^{-(t+2\pi -s)}\sin (s)\textrm{d}s+\int _{-\infty }^{t+2\pi } e^{-(t+2\pi -s)}\textrm{d}W(s,\omega )\\&=\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\sin ({\hat{s}}+2\pi )\textrm{d}{\hat{s}}+\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\textrm{d}W({\hat{s}}+2\pi ,\omega )\\&=\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\sin ({\hat{s}})\textrm{d}{\hat{s}}+\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\textrm{d}(W({\hat{s}}+2\pi ,\omega )-W(2\pi ,\omega ))\\&=\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\sin ({\hat{s}})\textrm{d}{\hat{s}}+\int _{-\infty }^{t} e^{-(t-{\hat{s}})}\textrm{d}W({\hat{s}},\theta _{2\pi }\omega )=Y(t,\theta _{2\pi }\omega ) \end{aligned}$$

where in the last two lines we use the definition of the Wiener shift and the measure-preserving property of the Wiener process. Therefore, Y is a random periodic path for the semiflow \(\varphi \) generated by SDE (4).
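The random periodic path Y of SDE (4) can be approximated numerically by truncating both integrals at a finite lower limit, since the kernel \(e^{-(t-s)}\) decays exponentially. The following Python sketch (not from the paper; the truncation level, step sizes and sample counts are illustrative choices) checks the deterministic part against its closed form \((\sin t-\cos t)/2\) and the stationary variance \(1/2\) of the stochastic part.

```python
import numpy as np

# Hedged numerical sketch: approximate Y(t) of Eq. (4) by truncating the
# integrals at t - T_trunc, exploiting the exponential decay of e^{-(t-s)}.
rng = np.random.default_rng(0)

def Y_approx(t, T_trunc=20.0, n_steps=200_000, rng=rng):
    """Truncated approximation of Y(t); the Wiener increments are freshly
    sampled, so only the law of Y(t) is reproduced, not a fixed path."""
    s = np.linspace(t - T_trunc, t, n_steps + 1)
    ds = s[1] - s[0]
    kernel = np.exp(-(t - s[:-1]))                 # left-endpoint kernel
    drift = np.sum(kernel * np.sin(s[:-1])) * ds   # Riemann sum
    dW = rng.normal(0.0, np.sqrt(ds), n_steps)     # Wiener increments
    noise = np.sum(kernel * dW)                    # Ito sum
    return drift, noise

t = 1.3
drift, _ = Y_approx(t)
exact_drift = (np.sin(t) - np.cos(t)) / 2          # closed form of the drift part
print(abs(drift - exact_drift))                    # small

# The stochastic part is stationary with variance 1/2:
samples = np.array([Y_approx(t, n_steps=2_000)[1] for _ in range(2000)])
print(samples.var())                               # approximately 0.5
```

The same truncation idea underlies the pull-back construction used later: starting further and further in the past changes the result less and less.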

In general, random periodic solutions cannot be computed explicitly. Even for a case as simple as Eq. (4), one relies on numerical approaches to simulate the random periodic path Y. For the dissipative system generated by an SDE with a global Lipschitz condition, the convergence of a forward Euler–Maruyama method and of a modified Milstein method to the random periodic solution has been investigated in [5]. For SDEs with a monotone drift condition, one benefits from a flexible choice of stepsize by applying an implicit method instead [20]. Each of these numerical schemes admits its own random periodic solution, which approximates the random periodic solution of the targeted SDE as the stepsize decreases. The main challenge lies in proving convergence over an infinite time horizon. In this paper, we consider approximating the random periodic trajectory of SPDEs, where we encounter the additional obstacle of simulating infinite-dimensional objects. For this, we employ the spectral Galerkin method (cf. [13]) for spatial dimension reduction, construct a discrete exponential integrator scheme based on the spatial discretization, and conclude the existence and uniqueness of the random periodic solution of the discrete scheme. To the best of our knowledge, this is the first study devoted to the numerical (Galerkin) analysis of random periodic solutions for SEEs. Galerkin-type methods have been used intensively to simulate solutions of parabolic SPDEs over finite time horizons [10,11,12, 14, 15, 17], and have recently been applied to approximate stationary distributions of SPDEs [2]. For the error analysis of both strong and weak approximations of semilinear stochastic evolution equations (SEEs) through Galerkin approximation, we refer the reader to the monograph [16].

Let \((H, (\cdot , \cdot ), \Vert \cdot \Vert )\) and \((U, (\cdot , \cdot )_U, \Vert \cdot \Vert _U)\) be two separable \({\mathbb {R}}\)-Hilbert spaces. Let \({\mathbb {P}}\) be the two-sided Wiener measure on \((\Omega , \mathcal {F})\), and denote by \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t \in {\mathbb {R}}}, {\mathbb {P}})\) a filtered probability space satisfying the usual conditions. By \((W(t))_{t \in {\mathbb {R}}}\) we denote an \((\mathcal {F}_t)_{t \in {\mathbb {R}}}\)-Wiener process on U with associated covariance operator \(Q \in \mathcal {L}(U)\), which is not necessarily of finite trace. Denote by \(\mathcal {L}_2^0=\mathcal {L}_2^0(H) =\mathcal {L}_2(Q^{\frac{1}{2}}(U),H)\) the set of all Hilbert–Schmidt operators from \(Q^{\frac{1}{2}}(U)\) to H. Let \(\theta \) be the Wiener shift operator defined by \(\left( \theta _{t} \omega \right) (s)=\omega (t+s)-\omega (t)\) for all \(s, t \in {\mathbb {R}}\) and \(\omega \in \Omega \); each \(\theta _t\) preserves \({\mathbb {P}}\), so \((\Omega , \mathcal {F}, {\mathbb {P}}, \theta )\) is a metric dynamical system. For each \(\omega \in \Omega \) and \(t \in {\mathbb {R}}\), define \(W(t, \omega )=\omega (t)\). We denote by \(L^p(\Omega ,\mathcal {F}_s,{\mathbb {P}}; H)\) the space of \(\mathcal {F}_s\)-measurable random variables X with finite p-th moment, i.e. \({\mathbb {E}}[\Vert X\Vert ^p]<\infty \).

For arbitrary \(t_0,T \in {\mathbb {R}}\) with \(t_0<T\), our goal is to study and approximate the random periodic mild solution to SEEs of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{d}{X^{t_0}_t} = \big [ -A X^{t_0}_t + f(t,X^{t_0}_t) \big ] \textrm{d}{t}+g(t,X^{t_0}_t) \textrm{d}{W(t)},&{} \quad \text {for } t \in (t_0,T],\\ X^{t_0}_{t_0} = \xi .&{} \end{array}\right. } \end{aligned}$$
(5)

Throughout the paper, we impose the following essential assumptions.

Assumption 1.1

The linear operator \(A :\text {dom}(A) \subset H \rightarrow H\) is densely defined, self-adjoint, and positive definite with compact inverse.

Assumption 1.1 implies the existence of a positive, non-decreasing sequence \((\lambda _i)_{i\in {\mathbb {N}}} \subset {\mathbb {R}}\) with \(0<\lambda _1 \le \lambda _2 \le \ldots \) and \(\lim _{i\rightarrow \infty }\lambda _i = \infty \), and of an orthonormal basis \((e_i)_{i\in {\mathbb {N}}}\) of H such that \(A e_i = \lambda _i e_i\) for every \(i \in {\mathbb {N}}\). Indeed, we have

$$\begin{aligned} \text {dom}(A):=\{x\in H:\sum _{n=1}^\infty \lambda _n^2(x,e_n)^2<\infty \}. \end{aligned}$$

In addition, it follows from Assumption 1.1 that \(-A\) is the infinitesimal generator of an analytic semigroup \((S(t))_{t \in [0,\infty )} \subset \mathcal {L}(H)\) of contractions. More precisely, the family \((S(t))_{t \in [0,\infty )}\) enjoys the properties

$$\begin{aligned} S(0)&= \textrm{Id} \in \mathcal {L}(H),\\ S(s + t)&= S(s) \circ S(t) = S(t) \circ S(s), \quad \text {for all } s,t \in [0,\infty ), \end{aligned}$$

and

$$\begin{aligned} \sup _{t \in [0,\infty )} \Vert S(t) \Vert _{\mathcal {L}(H)} \le 1. \end{aligned}$$
(6)

Further, let us introduce fractional powers of A, which are used to measure the (spatial) regularity of the mild solution (10). For any \(r\in [-1,1]\), we define the operator \(A^{\frac{r}{2}} :\text {dom}(A^{\frac{r}{2}}) = \{x\in H \,: \, \sum _{j=1}^{\infty } \lambda _j^r (x,e_j)^2 < \infty \} \subset H \rightarrow H\) by

$$\begin{aligned} A^{\frac{r}{2}} x:= \sum _{j=1}^{\infty } \lambda _j^{\frac{r}{2}} (x,e_j) e_j, \quad \text {for all } x \in \text {dom}(A^{\frac{r}{2}}). \end{aligned}$$
(7)

Then, by setting \(({\dot{H}}^r,(\cdot ,\cdot )_r, \Vert \cdot \Vert _r):= (\text {dom}(A^{\frac{r}{2}}), (A^{\frac{r}{2}} \cdot , A^{\frac{r}{2}}\cdot ), \Vert A^{\frac{r}{2}}\cdot \Vert )\), we obtain a family of separable Hilbert spaces. Clearly, for any \(0\le r_1<r_2\le 1\), we have that \(\text {dom}(A)\subset {\dot{H}}^{r_2}\subset {\dot{H}}^{r_1}\subset H\).
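As a concrete illustration of (7), consider the hypothetical example where A is the one-dimensional Dirichlet Laplacian on \((0,\pi )\), so that \(\lambda _j=j^2\). In spectral coordinates \(c_j=(x,e_j)\), the fractional power acts diagonally; the sketch below (an assumption-laden toy, not part of the paper) verifies that \(A^{r/2}\) and \(A^{-r/2}\) invert each other on smooth elements and that the \({\dot{H}}^r\) norms are ordered.

```python
import numpy as np

# Toy sketch (assumption: A is the 1-D Dirichlet Laplacian on (0, pi),
# so lambda_j = j^2); a vector of spectral coefficients c_j = (x, e_j)
# represents x, and A^{r/2} acts diagonally as in (7).
lam = np.arange(1, 101) ** 2            # eigenvalues lambda_j = j^2

def frac_power(c, r):
    """Apply A^{r/2} to the coefficient vector c."""
    return lam ** (r / 2) * c

def norm_r(c, r):
    """The dot-H^r norm ||A^{r/2} x||."""
    return np.linalg.norm(frac_power(c, r))

c = 1.0 / lam                            # a smooth element (fast coefficient decay)

# A^{r/2} and A^{-r/2} invert each other on dom(A^{r/2}):
print(np.allclose(frac_power(frac_power(c, 0.5), -0.5), c))

# Since lambda_j >= 1 here, the dot-H^r norms increase with r,
# reflecting dom(A) subset H^{r2} subset H^{r1} subset H:
print(norm_r(c, 0.2) <= norm_r(c, 0.8))
```

The embedding chain \(\text {dom}(A)\subset {\dot{H}}^{r_2}\subset {\dot{H}}^{r_1}\subset H\) corresponds, in coefficient space, to progressively weaker decay requirements on \((c_j)\).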

Assumption 1.2

The initial value \(\xi :\Omega \rightarrow H\) satisfies \(\xi \in L^{2}(\Omega , \mathcal {F}_{t_0},{\mathbb {P}}; H)\). Denote by \(C_\xi \) a constant such that \({\mathbb {E}}[\Vert \xi \Vert ^2]\le C_\xi ^2\).

Assumption 1.3

The mappings \(f :{\mathbb {R}} \times H \rightarrow H\) and \(g :{\mathbb {R}} \times H \rightarrow \mathcal {L}^2_0\) are continuous and periodic in time with period \(\tau \). Moreover, there exist \(\kappa \in (0,1]\), \(C_f, C_g, C_{f,g} \in (0,\infty )\) such that

$$\begin{aligned}&\Vert f(t,u_1) - f(t,u_2) \Vert \le C_f \Vert u_1 - u_2\Vert ,\qquad \\&\quad \Vert f(t_1,u) - f(t_2,u) \Vert \le C_f (1+\Vert u\Vert )|t_1 - t_2|^{\kappa },\\&\quad \Vert g(t,u_1)-g(t,u_2)\Vert _{\mathcal {L}^2_0}\le C_g \Vert u_1-u_2\Vert ,\\&\qquad \Vert g(t_1,u) - g(t_2,u) \Vert _{\mathcal {L}^2_0}\le C_g (1+\Vert u\Vert )|t_1 - t_2|^{\kappa }, \end{aligned}$$

for all \(u,u_1, u_2 \in H\) and \(t,t_1,t_2 \in [0,\tau )\).

Remark 1

Indeed, the condition on f can be weakened to a local Lipschitz condition for the existence and uniqueness of the random periodic solution. However, to show the continuity of the random periodic solution or to conduct the numerical analysis, one still needs Assumption 1.3.

Remark 2

One will see that the Hölder continuity in the temporal variable imposed on both the drift and diffusion terms plays an important role in the numerical analysis. To be more specific, it partly determines the order of convergence of the proposed numerical scheme.

Remark 3

Note that the assumption on g excludes the identity in \(\mathcal {L}(H)\). One may refer to [2] for techniques handling a slightly more general assumption on g, which allows g to be constant in \(\mathcal {L}(H)\).

From Assumption 1.3, we directly deduce linear growth bounds for f and g:

$$\begin{aligned} \Vert f(t,u)\Vert \le L_{f} + C_f\Vert u\Vert ,\quad \text {for all } t \in {\mathbb {R}},\, u \in H, \end{aligned}$$
(8)

and

$$\begin{aligned} \Vert g(t,u)\Vert _{\mathcal {L}^2_0}\le L_{g} +C_g\Vert u\Vert ,\quad \text {for all } t \in {\mathbb {R}},\, u \in H, \end{aligned}$$
(9)

where \(L_{f}:=\max _{s\in [0,\tau )}\Vert f(s,0)\Vert \) and \(L_{g}:=\max _{s\in [0,\tau )}\Vert g(s,0)\Vert \).

Under these assumptions, the SEE (5) admits a unique mild solution \(X_{\cdot }^{t_0} :[t_0,T] \times \Omega \rightarrow H\), which is uniquely determined by the variation-of-constants formula (cf. [3])

$$\begin{aligned} X^{t_0}_t(\xi ) = S(t-t_0) \xi + \int _{t_0}^t S(t - s) f(s,X^{t_0}_s) \textrm{d}{s} + \int _{t_0}^t S(t-s) g(s,X^{t_0}_s) \textrm{d}{W(s)}, \end{aligned}$$
(10)

which holds \({\mathbb {P}}\)-almost surely for all \(t \in [t_0,T]\).

1.1 The pull-back

To ensure the existence of the random periodic solution, we need an additional assumption relating the constants \(C_f\), \(C_g\) and the eigenvalue \(\lambda _1\):

Assumption 1.4

The constants \(C_f\) and \(C_g\) in Assumption 1.3 and the eigenvalue \(\lambda _1\) of A satisfy \(3C_f+2C_g^2 <2\lambda _1\).

Denote by \(X^{-k\tau }_t(\xi ,\omega ) \) the solution starting from time \(-k\tau \). The uniform boundedness of \(X^{-k\tau }_t(\xi ,\omega ) \) in the \(L^2\) sense can be guaranteed under Assumptions 1.1 to 1.4. Further, under Assumptions 1.1 to 1.4, one is able to show that, as \(k\rightarrow \infty \), the pull-back \(X^{-k\tau }_t(\xi )\) has a unique limit \(X^* _t\) in \(L^2(\Omega ; H)\); moreover, \(X^* _t\) is the random periodic solution of SEE (5), satisfying

$$\begin{aligned} X^*_t = \int _{-\infty }^t S(t - s) f(s,X^{*}_s) \textrm{d}{s} + \int _{-\infty }^t S(t-s) g(s,X^{*}_s) \textrm{d}{W(s)}. \end{aligned}$$
(11)

Surprisingly, the mild form (11) can be shown to be well defined in \(L^2(\Omega ;{\dot{H}}^r)\) for any \(r\in (0,1)\). More details about the proof can be found in Sect. 3. Besides, the continuity of \(X^{-k\tau }_t(\xi ,\omega ) \) is characterized in Sect. 3 for the error analysis in Sect. 4.

1.2 The Galerkin approximation

Next, we formulate the assumptions and notation for the spatial discretization. To this end, define the finite-dimensional subspaces \(H_n:=\text {span}\{e_1,\ldots ,e_n\}\) of H spanned by the first n eigenfunctions, and let \(P_n:H\rightarrow H_n\) be the orthogonal projection. Note that \(H_n \subset {\dot{H}}^r\) for any \(r\in {\mathbb {R}}\). We further introduce the notations \(A_n =P_n A\in \mathcal {L}(H_n)\), \(S_n(t)=P_nS(t)\in \mathcal {L}(H_n)\), \(f_n=P_n f:{\mathbb {R}}\times H_n \rightarrow H_n\) and \(g_n=P_ng:{\mathbb {R}}\times H_n \rightarrow \mathcal {L}_0^2(H_n)\). Then, the Galerkin approximation to (5) can be formulated as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{d}{X^{n,t_0}_t} = \big [ -A_n X^{n,t_0}_t + f_n(t,X^{n,t_0}_t) \big ] \textrm{d}{t}+g_n(t,X^{n,t_0}_t) \textrm{d}{W(t)},&{} \quad \text {for } t \in (t_0,T],\\ X^{n,t_0}_{t_0} = P_n\xi .&{} \end{array}\right. } \end{aligned}$$
(12)

Applying the spectral Galerkin method results in a system of finite-dimensional stochastic differential equations. Note that for \(x,y\in H_n\), we have that \(A_nx=Ax\), \(S_n(t)x=S(t)x\) and \(\big (x, f_n(t,y)\big ) =\big ( x,f(t,y)\big )\).
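In spectral coordinates, the projection \(P_n\), the diagonal action of A and the identities above are immediate to check. The following toy sketch (assuming \(\lambda _j=j^2\) and a hypothetical drift f; none of these concrete choices are from the paper) illustrates \(A_nx=Ax\) and \(\big (x, f_n(t,y)\big )=\big (x,f(t,y)\big )\) for \(x\in H_n\).

```python
import numpy as np

# Sketch of the spectral Galerkin projection in coefficient space
# (assumption: x in H is identified with its coefficient sequence (x, e_j),
# truncated at N modes, and lambda_j = j^2 as for the Dirichlet Laplacian).
N, n = 50, 8                            # ambient truncation and Galerkin dimension
lam = np.arange(1, N + 1) ** 2

def P_n(x):
    """Orthogonal projection onto H_n: keep the first n coefficients."""
    y = np.zeros_like(x)
    y[:n] = x[:n]
    return y

def A(x):
    """A acts diagonally: A e_j = lambda_j e_j."""
    return lam * x

# A hypothetical Nemytskii-type drift, purely for illustration:
f = lambda t, x: np.sin(t) / (1.0 + lam) * np.cos(x)

x = P_n(np.random.default_rng(1).normal(size=N))   # an element of H_n
y = np.random.default_rng(2).normal(size=N)

# For x in H_n:  A_n x = A x  and  (x, f_n(t, y)) = (x, f(t, y)).
print(np.allclose(P_n(A(x)), A(x)))
print(np.dot(x, P_n(f(0.3, y))), np.dot(x, f(0.3, y)))   # equal
```

Both identities hold because the eigenbasis diagonalizes A and the inner product of \(x\in H_n\) with any element only sees the first n coefficients.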

Remark 4

It is easy to see that there exists an isometry between \(H_n\) and \({\mathbb {R}}^n\). A simple calculation leads to the existence of a unique strong solution to (12). The uniform boundedness of \(X^{n,-k\tau }_t\) as well as the existence of the random periodic solution to (12) are simple consequences of the corresponding properties of \(X^{-k\tau }_t\).

Let us fix an equidistant partition \(\mathcal {T}^h:=\{jh,\ j\in {\mathbb {Z}} \}\) with stepsize \(h\in (0,1)\). Note that \(\mathcal {T}^h\) stretches along the whole real line because we are dealing with an infinite time horizon problem. Then, to simulate the solution to (12) starting at \(-k\tau \), the discrete exponential integrator scheme on \(\mathcal {T}^h\) is given by the recursion

$$\begin{aligned} \begin{aligned}&{\hat{X}}_{-k\tau +(j+1)h}^{n,-k\tau } \\&\quad = S_n(h)\Big ({\hat{X}}_{-k\tau +jh}^{n,-k\tau } +h f_n\big (-k\tau +jh, {\hat{X}}_{-k\tau +jh}^{n,-k\tau } \big )\\&\qquad + g_n(-k\tau +jh,{\hat{X}}_{-k\tau +jh}^{n,-k\tau } )\Delta W_{-k\tau +jh}\Big ), \end{aligned} \end{aligned}$$
(13)

for all \(j \in {\mathbb {N}}\), where the initial value \({\hat{X}}_{-k\tau }^{n,-k\tau } = P_n\xi \). Moreover, if we define

$$\begin{aligned} {\bar{X}}_t^{n,-k\tau } ={\hat{X}}_{-k\tau +jh}^{n,-k\tau }\ \ \text {and}\ \Lambda (t) =-k\tau +jh \end{aligned}$$

for \(t\in [-k\tau +jh,-k\tau +(j+1)h)\), then the continuous-time version of (13) reads

$$\begin{aligned} \begin{aligned} {\hat{X}}_{t}^{n,-k\tau }&= S_n(t+k\tau )P_n \xi +\int _{-k\tau } ^t S_n\big (t-\Lambda (s)\big ) f_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}s\\&\quad + \int _{-k\tau } ^t S_n\big (t-\Lambda (s)\big ) g_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau }\big )\textrm{d}W(s)\\&= S(t+k\tau )P_n \xi +\int _{-k\tau } ^t S\big (t-\Lambda (s)\big ) f_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}s\\&\quad + \int _{-k\tau } ^t S\big (t-\Lambda (s)\big ) g_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}W(s), \end{aligned} \end{aligned}$$
(14)

with differential form

$$\begin{aligned} \begin{aligned} \textrm{d}{\hat{X}}_{t}^{n,-k\tau }&= \big [-A {\hat{X}}_{t}^{n,-k\tau } + S\big (t-\Lambda (t)\big ) f_n\big (\Lambda (t), {\bar{X}}_{t}^{n,-k\tau } \big )\big ]\textrm{d}t\\&\quad + S\big (t-\Lambda (t)\big ) g_n\big (\Lambda (t),{\bar{X}}_{t}^{n,-k\tau } \big )\textrm{d}W(t). \end{aligned} \end{aligned}$$
(15)
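The recursion (13) can be sketched in spectral coordinates as follows. All concrete choices below, the eigenvalues \(\lambda _j=j^2\), the toy periodic drift, the additive noise amplitudes and the pull-back depth k, are illustrative assumptions and not from the paper; the point is only the structure of one step and of the pull-back.

```python
import numpy as np

# Minimal sketch of the exponential integrator (13) for the Galerkin system,
# in spectral coordinates. Toy assumptions: n modes with lambda_j = j^2,
# period tau = 2*pi, periodic drift f_n(t,x) = sin(t)*w - 0.1*x, and
# additive noise with amplitudes sigma_j (one independent increment per mode).
rng = np.random.default_rng(3)

n, h, tau = 10, 0.01, 2 * np.pi
lam = np.arange(1, n + 1) ** 2
S_h = np.exp(-lam * h)                  # S_n(h) acts diagonally
w = 1.0 / lam                           # toy drift profile
sigma = 1.0 / lam                       # toy noise amplitudes

def f_n(t, x):
    return np.sin(t) * w - 0.1 * x      # periodic in t with period tau

def step(t, x):
    """One step of (13): X_{j+1} = S(h) (X_j + h f_n + g_n dW)."""
    dW = rng.normal(0.0, np.sqrt(h), n)
    return S_h * (x + h * f_n(t, x) + sigma * dW)

def pullback(k, x0):
    """Integrate from -k*tau up to (approximately) t = 0."""
    t, x = -k * tau, x0.copy()
    while t < -1e-12:
        x = step(t, x)
        t += h
    return x

x_hat = pullback(k=3, x0=np.zeros(n))
print(x_hat[:3])                        # approximation of the random periodic path near t = 0
```

Increasing k pushes the starting time further into the past; by the contraction induced by \(S_n(h)\), the value at time 0 stabilizes, which mirrors the pull-back convergence established for the exact solution.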

In Sect. 4, we show the uniform boundedness of \({\hat{X}}_{t}^{n,-k\tau }\) by imposing further assumptions on f and g:

Assumption 1.5

(Dissipativity condition) It holds that

$$\begin{aligned} 2( f(t,u_1) - f(t,u_2),u_1-u_2 )+\Vert g(t,u_1)-g(t,u_2)\Vert ^2_{\mathcal {L}^2_0}\le -C_{f,g} \Vert u_1-u_2\Vert ^2 \end{aligned}$$

for all \(u_1, u_2 \in H\) and \(t \in [0,\tau )\).

Assumption 1.6

\(\frac{C_f}{\lambda _1}+\frac{C_g}{2\sqrt{\lambda _1}}<1\).

We conclude the random periodicity of the spatio-temporal discrete scheme (14) in Theorem 4.1 and determine a uniform strong order of approximation of \(X^{-k\tau }_{\cdot }(\xi )\) by (14) in Theorem 4.2. Compared to the convergence results for the SDE cases in [5, 20], in the SEE case the approximated trajectory is required to start from \(L^{2}(\Omega ; {\dot{H}}^r)\) with \(r\in (0,1)\) rather than from an arbitrary starting point in \(L^{2}(\Omega ; H)\); this guarantees the continuity of the path \(X^{-k\tau }_{\cdot }(\xi )\). An interesting observation is that the order of convergence depends directly on the space in which the initial point lives, i.e., \(L^{2}(\Omega ;{\dot{H}}^r)\). As the rate of convergence of \(X^{-k\tau }_{\cdot }(\xi )\) to the random periodic path \(X^{*}_{\cdot }\) depends on the initial condition, we end the paper with Corollary 4.1, which determines a strong but not optimal order for approximating \(X^{*}_{\cdot }\). Corollary 4.1 also implies that the best order of convergence that can ever be achieved is \(1/2-\epsilon \) for arbitrarily small \(\epsilon \).

2 Preliminaries

In this section, we present a few useful mathematical tools for later use.

Proposition 2.1

Under Assumption 1.1 on the infinitesimal generator \(-A\) of the semigroup \((S(t))_{t \in [0,\infty )}\), the following properties hold:

  1.

    For any \(\nu \in [0,1]\), there exists a positive constant \(C_1(\nu )\) such that

    $$\begin{aligned} \Vert A^{-\nu }(S(t)-\text {Id})\Vert _{\mathcal {L}(H)}\le C_1(\nu ) t^\nu \quad \text {for }t\ge 0, \end{aligned}$$
    (16)

    where \(\text {Id}\) is the identity map from H to H. In addition,

    $$\begin{aligned} \Vert A^{-\nu }\Vert _{\mathcal {L}(H)}\le \lambda _{1}^{-\nu }. \end{aligned}$$
    (17)
  2.

    For any \(\mu \ge 0\), there exists a positive constant \(C_2(\mu )\) such that

    $$\begin{aligned} \Vert A^{\mu }S(t)\Vert _{\mathcal {L}(H)}\le C_2(\mu ) t^{-\mu } \quad \text {for }t> 0. \end{aligned}$$
    (18)
  3.

    For any \(t\ge 0\), \( \Vert S(t)\Vert _{\mathcal {L}(H)}\le e^{-\lambda _{1} t}\). For the orthogonal projection \(P_n\), it holds that

    $$\begin{aligned} \Vert S(t)(\text {Id}-P_n)\Vert _{\mathcal {L}(H)}\le e^{-\lambda _{n+1} t}, \quad \text {for }t\ge 0. \end{aligned}$$
    (19)

Proof

The proof of the first two inequalities can be found in [19]. For the last one, note that for any \(x\in H\), we have the decomposition \(x=\sum _{i=1}^\infty (x,e_i)e_i\). Clearly, \(S(t)(\text {Id}-P_n)\) is a linear operator from H to H. Its induced norm (we consider its square for convenience) therefore satisfies

$$\begin{aligned} \Vert S(t)(\text {Id}-P_n)\Vert ^2_{\mathcal {L}(H)}&=\sup _{x\in H, \Vert x\Vert =1} \Vert S(t)(\text {Id}-P_n)x\Vert ^2=\sup _{x\in H, \Vert x\Vert =1} \sum _{i=n+1}^\infty e^{-2\lambda _i t}(x,e_i)^2 \\&\le e^{-2\lambda _{n+1} t}\sup _{x\in H, \Vert x\Vert =1} \sum _{i=1}^\infty (x,e_i)^2 \le e^{-2\lambda _{n+1} t} . \end{aligned}$$

\(\square \)

As one of the main tools, we recall the Gamma function:

$$\begin{aligned} \Gamma (\gamma ):=\int _0^\infty x^{\gamma -1}e^{-x} \textrm{d}x<\infty \ \ \text {for\ } \gamma >0. \end{aligned}$$
(20)

3 Existence and uniqueness of random periodic solution

In the following, we show the boundedness of the solution to SEE (5) and characterize its dependence on the initial condition, both of which are crucial ingredients for the existence of random periodic solutions. The proofs closely follow those of Lemma 3.1 and Lemma 3.2 in [20].

Lemma 3.1

For SEE (5) with the given initial condition \(\xi \) and under Assumptions 1.1 to 1.4, we have

$$\begin{aligned} \sup _{k\in {\mathbb {N}}}\sup _{t>-k\tau }{\mathbb {E}}[\Vert X_{t}^{-k\tau }(\xi )\Vert ^2]<\infty . \end{aligned}$$
(21)

If, in addition, \(\xi \in L^{2}(\Omega , \mathcal {F}_{-k\tau }, {\mathbb {P}}; {\dot{H}}^r)\) for some \(r\in (0,1)\), then the mild solution \(X_{t}^{-k\tau }(\xi )\) introduced in (10) is well defined in \( L^{2}(\Omega , {\mathcal {F}_{t}},{\mathbb {P}}; {\dot{H}}^r)\) for any \(k\in {\mathbb {N}}\), and \(t>-k\tau \).

Proof

The first assertion follows from Lemma 11 in [20]. Applying the Itô formula to \(e^{2\lambda _1 t}\Vert X_{t}^{-k\tau }(\xi )\Vert ^2\) and taking expectations yields

$$\begin{aligned} \begin{aligned} e^{2\lambda _1 t}{\mathbb {E}}[\Vert X_{t}^{-k\tau }(\xi )\Vert ^2]=&e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+2\lambda _1 \int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert X_{s}^{-k\tau }\Vert ^2]\textrm{d}s\\&-2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}\big ( X_{s}^{-k\tau }, AX_{s}^{-k\tau }\big )\textrm{d}s\\&+2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}\big ( X_{s}^{-k\tau }, f(s,X_{s}^{-k\tau })\big ) \textrm{d}s\\&+\int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert g(s,X_s^{-k\tau })\Vert ^2_{\mathcal {L}_0^2}]\textrm{d}s. \end{aligned} \end{aligned}$$
(22)

Note that \(2(\lambda _1I-A)\) is non-positive definite. Then, making use of the Young inequality and the linear growth bounds (8) and (9) for f and g gives

$$\begin{aligned} e^{2\lambda _1 t}{\mathbb {E}}[\Vert X_{t}^{-k\tau }(\xi )\Vert ^2]&\le e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+C_f\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert X_{s}^{-k\tau }\Vert ^2]\textrm{d}s\\&\quad +\frac{1}{C_f}\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert f(s,X_{s}^{-k\tau })\Vert ^2]\textrm{d}s\\&\quad + \int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert g(s,X_s^{-k\tau })\Vert ^2_{\mathcal {L}_0^2}]\textrm{d}s\\&\le e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+(3C_f+2C_g^2)\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert X_{s}^{-k\tau }\Vert ^2]\textrm{d}s \\&\quad +\frac{1}{2\lambda _1}\big (\frac{2}{C_f}L_f^2+2L_g^2\big )(e^{2\lambda _1 t}-e^{-2\lambda _1 k\tau }). \end{aligned}$$

Define

$$\begin{aligned} K_2:=\frac{1}{\lambda _1}\big (\frac{L_f^2}{C_f}+L_g^2\big ), \qquad K_1:= e^{-2\lambda _1 k\tau }\big ({\mathbb {E}}[\Vert \xi \Vert ^2]-K_2\big )\qquad \text {and}\qquad K_3:=3C_f+2C_g^2. \end{aligned}$$

Note that \(K_3< 2\lambda _1\) by Assumption 1.4. By the Grönwall inequality, we have

$$\begin{aligned}&e^{2\lambda _1 t}{\mathbb {E}}[\Vert X_{t}^{-k\tau }(\xi )\Vert ^2]\le K_1+K_2 e^{2\lambda _1 t}+\int _{-k\tau }^t( K_1+K_2 e^{2\lambda _1 s})K_3e^{K_3(t-s)}\textrm{d}s\\&\le K_1e^{K_3(k\tau +t)}+K_2 e^{2\lambda _1 t}+\frac{K_2K_3}{2\lambda _1-K_3}(e^{2\lambda _1 t}-e^{-2\lambda _1 k\tau })\\&\le (K_1e^{2\lambda _1k\tau }+K_2) e^{2\lambda _1 t}+\frac{K_2K_3}{2\lambda _1-K_3}e^{2\lambda _1 t}. \end{aligned}$$

Note that \(K_1e^{2\lambda _1 k\tau }+K_2={\mathbb {E}}[\Vert \xi \Vert ^2]\). By Assumption 1.2, it leads to

$$\begin{aligned} {\mathbb {E}}[\Vert X_{t}^{-k\tau }(\xi )\Vert ^2]\le {\mathbb {E}}[\Vert \xi \Vert ^2]+\frac{K_2K_3}{2\lambda _1-K_3}\le C_{\xi }^2+\frac{2K_2\lambda _1}{2\lambda _1-K_3}. \end{aligned}$$

It remains to justify the second assertion, by bounding each term of (10) in \( L^{2}(\Omega , \mathcal {F}_{t},{\mathbb {P}}; {\dot{H}}^r)\) by a constant independent of k and t. For the first term on the right-hand side of (10), we have

$$\begin{aligned} {\mathbb {E}}[\Vert A^{\frac{r}{2}} S(t+k\tau )\xi \Vert ^2] = {\mathbb {E}}[\Vert S(t+k\tau )A ^{\frac{r}{2}}\xi \Vert ^2] \le {\mathbb {E}}[\Vert A^{\frac{r}{2}} \xi \Vert ^2]. \end{aligned}$$

To bound the second term on the right-hand side of (10), we apply the linear growth of f in (8), Proposition 2.1 and (21), and take \(\theta =\frac{1}{2}\) as follows:

$$\begin{aligned}&{\mathbb {E}}\big [ \big \Vert A^{\frac{r}{2}}\int _{-k\tau }^t S(t - s) f(s,X^{-k\tau }_s) \textrm{d}{s}\big \Vert ^2\big ]\\&\quad = {\mathbb {E}}\big [ \big \Vert \int _{-k\tau }^t A^{\frac{r}{2}}S\big (\theta (t-s)\big )S\big ((1-\theta )(t-s)\big ) f(s,X^{-k\tau }_s) \textrm{d}{s}\big \Vert ^2\big ]\\&\quad \le 2L_{f,g}^2 \big (1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )\\&\quad \Big (\int _{-k\tau }^t \Vert A^{\frac{r}{2}}S\big (\theta (t-s)\big )\Vert _{\mathcal {L}(H)}\Vert S\big ((1-\theta )(t-s)\big )\Vert \textrm{d}{s}\Big )^2\\&\quad \le 2L_{f,g}^2 C_2\big (\frac{r}{2}\big )^2 \big (1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )\\&\quad \Big (\int _{-k\tau }^t \big (\theta (t-s)\big )^{-\frac{r}{2}} e^{-{\lambda _1} (1-\theta )(t-s)} \textrm{d}{s}\Big )^2\\&\quad \le 2L_{f,g}^2 C_2\big (\frac{r}{2}\big )^2 \big (1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )\lambda _1 ^{r-2}\frac{\Gamma \big (1-\frac{r}{2}\big )^2}{4}, \end{aligned}$$

where we make use of the definition of Gamma function (20) and \(\theta =\frac{1}{2}\) to get

$$\begin{aligned} \int _{-k\tau }^t \big (\theta (t-s)\big )^{-\frac{r}{2}} e^{-\lambda _1 (1-\theta )(t-s)} \textrm{d}{s}=\int _{0}^{t+k\tau } (\theta s)^{-\frac{r}{2}} e^{-\lambda _1 (1-\theta )s} \textrm{d}{s}\le \lambda _1 ^{\frac{r}{2}-1}\frac{\Gamma \big (1-\frac{r}{2}\big )}{2}. \end{aligned}$$

It remains to estimate the last term of (10). To this end, we apply the Itô isometry, the linear growth of g in (9), and Proposition 2.1, together with the Gamma-function technique above:

$$\begin{aligned}&{\mathbb {E}}\big [ \big \Vert A^{\frac{r}{2}}\int _{-k\tau }^t S(t-s) g(s,X^{-k\tau }_s) \textrm{d}{W(s)}\big \Vert ^2\big ]\\&\le 2L_{f,g}^2 \big (1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )\int _{-k\tau }^t \Vert A^{\frac{r}{2}}S(t-s)\Vert ^2 \textrm{d}s\\&\le 2L_{f,g}^2 \big (1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau } {\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )(2{\lambda _1})^{r-1}\frac{\Gamma \big (1-r\big )}{2}. \end{aligned}$$

\(\square \)

Remark 5

As in [20], it suffices to show (21) under a condition on f and g weaker than the linear growth bounds (8) and (9): there exists a constant \({\hat{L}}_{f,g}\) such that \(2(u,f(t,u))+\Vert g(t,u)\Vert ^2_{\mathcal {L}^2_0}\le {\hat{L}}_{f,g}(1+\Vert u\Vert ^2)\), for \(t\in {\mathbb {R}}\) and \(u\in H\).

Lemma 3.2

Suppose Assumptions 1.1 to 1.4 hold. Denote by \(X_t^{-k\tau }\) and \(Y_t^{-k\tau }\) two solutions of SEE (5) with different initial values \(\xi \) and \(\eta \). Then for every \(\epsilon >0\), there exists a \(t\ge -k\tau \) such that

$$\begin{aligned} {\mathbb {E}}[\Vert X_{\tilde{t}}^{-k\tau }-Y_{\tilde{t}}^{-k\tau }\Vert ^2]<\epsilon \end{aligned}$$
(23)

whenever \(\tilde{t}\ge t\).

The existence of the semiflow u for SEE (5) and its continuity with respect to the initial condition, i.e., the continuity of \(u(t,s,\cdot ,\omega ):H\rightarrow H\), are guaranteed by [18]. With Lemma 3.1, Lemma 3.2 and Assumption 1.3, the existence and uniqueness of the random periodic solution to (5) can be shown by an argument similar to the proof of Theorem 2.4 in [5].

Theorem 3.1

Under Assumptions 1.1 to 1.4, there exists a unique random periodic solution \(X^{*}(t,\cdot )\in L^2(\Omega ;H)\) such that the solution of (5) satisfies

$$\begin{aligned} \lim _{k\rightarrow \infty }{\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )-X^*_t\Vert ^2]=0. \end{aligned}$$
(24)

Moreover, it holds that the mild form of \(X^{*}\) given in (11) is well defined in \(L^2(\Omega ;{\dot{H}}^r)\) for any \(r\in (0,1)\).

Proof

For the existence and uniqueness of the random periodic solution \(X_t^*\), we only give a sketch here, as the proof closely follows [5]. Set \(u(t,r)(\xi ):=X^r_t(\xi )\). Then the map \(u(t,r)\) defines a semiflow. First, for an arbitrary \(t\in {\mathbb {R}}\), one shows that \((X_t^{-k\tau }(\xi ))_{k\in {\mathbb {N}}}\) is a Cauchy sequence in \(L^2(\Omega ; H)\) via the semiflow property, Lemma 3.1 and Lemma 3.2. The limit is denoted by \(X_t^*\). We then need to check that \(X_t^*\) satisfies the two conditions in Definition 1.1. For the first condition, through the continuity of the semiflow with respect to the initial condition as well as the convergence of the Cauchy sequence, we obtain

$$\begin{aligned} u(t,r,\omega )(X_r^*)\overset{\text {continuity}}{=}\lim _{k\rightarrow \infty } u(t,r,\omega )(X_r^{-k\tau })\overset{\text {semiflow property}}{=}\lim _{k\rightarrow \infty } X_t^{-k\tau }\overset{\text {Cauchy}}{=} X_t^*. \end{aligned}$$

Thus, the first condition of Definition 1.1 is verified. The second condition of Definition 1.1 is the random periodic property. To show it, one writes down the mild solutions of \(X^{-(k-1)\tau }_{t+\tau }(\xi )\) and \(\theta _\tau X^{-k\tau }_{t}(\xi )\) for a deterministic \(\xi \) and observes that the two mild solutions have the same form. The uniqueness of the solution implies that they coincide. Finally, using the convergence of the Cauchy sequence again, one obtains the random periodic property for \(X^*_t\).

It remains to show the second assertion. The first conclusion of Theorem 3.1 ensures that for any \(\epsilon >0\), there exists a \(K(t)\in {\mathbb {N}}\) such that \({\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )-X^*_t\Vert ^2]<\epsilon \) for any \(k \ge K(t)\). Then,

$$\begin{aligned}&\limsup _{t \in [0,\tau ]}{\mathbb {E}}[\Vert X^*_t\Vert ^2]=\limsup _{t \in [0,{\tau }]}{\mathbb {E}}[\Vert X^*_t-X^{-k\tau }_t(\xi )+X^{-k\tau }_t(\xi )\Vert ^2]\\&\le \sup _{k\in {\mathbb {N}}}\sup _{t\in [0,{\tau }]} 2{\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )\Vert ^2]+\limsup _{t\in [0,{\tau }]}\sup _{k\ge K(t)}2{\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )-X^*_t\Vert ^2]\\&\le \sup _{k\in {\mathbb {N}}}\sup _{t\in [0,{\tau }]} 2{\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )\Vert ^2]+2\epsilon . \end{aligned}$$

Since \(\epsilon \) is arbitrary, \(\limsup _{t \in [0,{\tau }]}{\mathbb {E}}[\Vert X^*_t\Vert ^2]\le \sup _{k\in {\mathbb {N}}}\sup _{t\in [0,{\tau }]} 2{\mathbb {E}}[\Vert X^{-k\tau }_t(\xi )\Vert ^2]\).

Due to the random periodicity of \(X^{*}\) and the measure-preserving property of \(\theta \), it holds that

$$\begin{aligned} \limsup _{t \in [{\tau },2{\tau }]}{\mathbb {E}}[\Vert X^*_t(\cdot )\Vert ^2]&=\limsup _{t \in [{\tau },2{\tau }]}{\mathbb {E}}[\Vert X^*_{t-{\tau }}(\theta _{\tau } \cdot )\Vert ^2]\\&=\limsup _{t \in [{\tau },2{\tau }]}{\mathbb {E}}[\Vert X^*_{t-{\tau }}( \cdot )\Vert ^2]=\limsup _{t \in [0,{\tau }]}{\mathbb {E}}[\Vert X^*_{t}( \cdot )\Vert ^2]. \end{aligned}$$

Similarly, \(\limsup _{t \in [-{\tau },0]}{\mathbb {E}}[\Vert X^*_t\Vert ^2]=\limsup _{t \in [0,{\tau }]}{\mathbb {E}}[\Vert X^*_t\Vert ^2]\). Thus by induction, \(\sup _{t \in {\mathbb {R}}}{\mathbb {E}}[\Vert X^*_t\Vert ^2]<\infty \). Then, following the same approach as in the proof of Lemma 3.1, we can deduce that the mild form of \(X^*_t\) is in \(L^2(\Omega ; {\dot{H}}^r)\) for any \(r\in (0,1)\). \(\square \)

The second conclusion of Theorem 3.1 states that \(X^*\) lives in the intersection of the spaces \(L^2(\Omega ;{\dot{H}}^r)\), \(r\in (0,1)\), which is much smaller than \(L^2(\Omega ;H)\). Note that the first conclusion of Theorem 3.1 shows that the convergence holds regardless of the initial condition \(\xi \); that is, \(X^{-k\tau }_t(\xi )\) converges to the unique random periodic solution no matter where it starts from. This observation is crucial in that one may choose a starting point with preferred properties, for instance, the continuity shown in Lemma 3.3.
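The pull-back construction behind (24) can be illustrated numerically. The following sketch is a scalar analogue only (the equation, the Euler-Maruyama discretization, and all parameter values are illustrative assumptions, not the paper's setting): it fixes one noise path, starts the solution at \(-k\tau \) for increasing k, and shows that trajectories from very different initial points agree at time 0.

```python
import numpy as np

# Scalar analogue of the pull-back in (24):
#   dX = (-lam*X + cos(2*pi*t/tau)) dt + sigma dW,  with tau-periodic drift.
rng = np.random.default_rng(0)
lam, sigma, tau, h = 1.0, 0.5, 1.0, 1e-3
K = 8                                       # largest k in X^{-k*tau}_0(xi)
n_total = int(round(K * tau / h))           # steps covering [-K*tau, 0]
dW = rng.normal(0.0, np.sqrt(h), n_total)   # one fixed noise path ("omega")

def pullback(xi, k):
    """Euler-Maruyama approximation of X^{-k*tau}_0(xi) on the shared path."""
    x, t = xi, -k * tau
    for j in range(n_total - int(round(k * tau / h)), n_total):
        x += (-lam * x + np.cos(2 * np.pi * t / tau)) * h + sigma * dW[j]
        t += h
    return x

# Two very different starting points: the pull-back values at t = 0 merge as
# k grows, illustrating that the limit does not depend on xi.
gaps = [abs(pullback(10.0, k) - pullback(-5.0, k)) for k in range(1, K + 1)]
```

Because both runs share the same noise, the gap contracts at rate roughly \(e^{-\lambda k\tau }\), mirroring the remark above that the convergence is independent of the initial condition.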

Lemma 3.3

Recall that for a fixed \(h\in (0,1)\), \(\Lambda (t):=-k\tau +jh\) when \(t\in (-k\tau +jh,-k\tau +(j+1)h]\). Consider the mild solution \(X^{-k\tau }_{\cdot }(\xi )\) of SEE (5) with given initial condition \(\xi \in L^{2}(\Omega , \mathcal {F}_{-k\tau }, {\mathbb {P}}; {\dot{H}}^r)\) for some \(r\in (0,1)\), and suppose Assumptions 1.1 to 1.4 hold. Then for any \(\nu _1 \in \big (0,r/2\big ]\), there exists a positive constant \(C_X\) depending on r and \(\nu _1\) such that

$$\begin{aligned} \sup _{k\in {\mathbb {N}}}{\sup _{t\ge -k\tau }}{\mathbb {E}}[\Vert X_t^{-k\tau }-X_{\Lambda (t)}^{-k\tau }\Vert ^2]\le C_X(\nu _1,r) h^{2\nu _1}. \end{aligned}$$

Proof

One can deduce the following expression from the mild form (10):

$$\begin{aligned} \begin{aligned}&X^{-k\tau }_t(\xi )- X^{-k\tau }_{\Lambda (t)}(\xi )\\&\quad = \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)+k\tau ) \xi \\&\qquad + \int _{\Lambda (t)}^t S(t - s) f(s,X^{-k\tau }_s) \textrm{d}{s} + \int _{-k\tau }^{\Lambda (t)} \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)-s) f(s,X^{-k\tau }_s) \textrm{d}{s}\\&\qquad + \int _{\Lambda (t)}^t S(t-s) g(s,X^{-k\tau }_s) \textrm{d}{W(s)}+\int _{-k\tau }^{\Lambda (t)} \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)-s) g(s,X^{-k\tau }_s) \textrm{d}{W(s)}. \end{aligned} \end{aligned}$$
(25)

To get the final assertion, we estimate each term on the right-hand side in \({\mathbb {E}}[\Vert \cdot \Vert ^2]\). For the first term, we have that

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)+k\tau ) \xi \big \Vert ^2\big ]\\&={\mathbb {E}}\big [\big \Vert A^{-\nu _1}\big (S(t-\Lambda (t))-\text {Id}\big )A^{-(\frac{r}{2}-\nu _1)}S(\Lambda (t)+k\tau ) A^{\frac{r}{2}}\xi \big \Vert ^2\big ]\\&\le \Vert A^{-\nu _1}\big (S(t-\Lambda (t))-\text {Id}\big )\Vert ^2_{\mathcal {L}(H)} \Vert A^{-(\frac{r}{2}-\nu _1)}\Vert ^2_{\mathcal {L}(H)}\Vert S(\Lambda (t)+k\tau )\Vert ^2_{\mathcal {L}(H)} {\mathbb {E}}[ \Vert A^{\frac{r}{2}}\xi \Vert ^2]\\&\le C_1(\nu _1)^2h^{2\nu _1}\lambda _1^{-(r-2\nu _1)}{\mathbb {E}}[ \Vert A^{\frac{r}{2}}\xi \Vert ^2], \end{aligned}$$

where Proposition 2.1 is applied for the last line. For the second term of (25), by making use of the linear growth condition on f and Hölder’s inequality, we can obtain

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{\Lambda (t)}^t S(t - s) f(s,X^{-k\tau }_s) \textrm{d}{s}\big \Vert ^2\big ]\le { 2 h^2\big (L_f^2+C_f^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau } {\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big ).} \end{aligned}$$

Similarly for the fourth term of (25), through the Itô isometry we have that

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{\Lambda (t)}^t S(t-s) g(s,X^{-k\tau }_s) \textrm{d}{W(s)}\big \Vert ^2\big ]\\&=\int _{\Lambda (t)}^t {\mathbb {E}}\big [\Vert S(t-s) g(s,X^{-k\tau }_s)\Vert ^2_{\mathcal {L}^2_0}\big ] \textrm{d}{s}\\&\le {2 h\big (L_g^2+C_g^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau } {\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big ).} \end{aligned}$$

For the third term of (25), applying Assumption 1.1 and Proposition 2.1 and taking \(\theta =1/2\) yield the following estimate

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^{\Lambda (t)} \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)-s) f(s,X^{-k\tau }_s) \textrm{d}{s}\big \Vert ^2\big ]\\&\quad ={\mathbb {E}}\big [\big \Vert \int _{-k\tau }^{\Lambda (t)} A^{-\nu _1}\big (S(t-\Lambda (t))-\text {Id}\big )A^{\nu _1}S\big (\Lambda (t)-s\big ) f(s,X^{-k\tau }_s) \textrm{d}{s}\big \Vert ^2\big ]\\&\quad \le C_1(\nu _1)^2h^{2\nu _1}\int _{-k\tau }^{\Lambda (t)}\Vert A^{\nu _1}S\big (\Lambda (t)-s\big )\Vert _{\mathcal {L}(H)}\textrm{d}{s}\int _{-k\tau }^{\Lambda (t)}\Vert A^{\nu _1}S\big (\Lambda (t)-s\big )\Vert _{\mathcal {L}(H)}{\mathbb {E}}[\Vert f(s,X^{-k\tau }_s)\Vert ^2]\textrm{d}{s}\\&\quad \le 2{\big (L_f^2+C_f^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )} C_1(\nu _1)^2h^{2\nu _1}\Big (\int _{0}^{\Lambda (t)+k\tau }\Vert A^{\nu _1}S(\theta s)S\big ((1-\theta )s\big )\Vert _{\mathcal {L}(H)}\textrm{d}{s}\Big )^2\\&\quad \le 2{\big (L_f^2+C_f^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )} C_1(\nu _1)^2h^{2\nu _1}C_2(\nu _1)^2\Big (\int _{0}^{\Lambda (t)+k\tau }(\theta s)^{-\nu _1}e^{-\lambda _1 (1-\theta )s}\textrm{d}{s}\Big )^2\\&\quad \le 2{ \big (L_f^2+C_f^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big )} C_1(\nu _1)^2h^{2\nu _1}C_2(\nu _1)^2\frac{\lambda _1^{2(\nu _1-1)} \Gamma (1-\nu _1)^2}{4}, \end{aligned} \end{aligned}$$
(26)

where we change variables to deduce the integral in the fourth line and apply the Gamma function (20) to get the last line.

For the last term of (25), using the Itô isometry, the linear growth of g in (8) and the definition of the Gamma function we have that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^{\Lambda (t)} \big (S(t-\Lambda (t))-\text {Id}\big )S(\Lambda (t)-s) g(s,X^{-k\tau }_s) \textrm{d}{W}(s)\big \Vert ^2\big ]\\&\quad =\int _{-k\tau }^{\Lambda (t)} {\mathbb {E}}\big [\Vert A^{-\nu _1}\big (S(t-\Lambda (t))-\text {Id}\big )A^{\nu _1}S(\Lambda (t)-s) g(s,X^{-k\tau }_s)\Vert ^2_{\mathcal {L}^2_0}\big ] \textrm{d}{s}\\&\quad \le {2\big (L_g^2+C_g^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big ) } C_1(\nu _1)^2h^{2\nu _1} \int _{0}^{\Lambda (t)+k\tau } \Vert A^{\nu _1}S(\theta s)S\big ((1-\theta ) s\big )\Vert ^2_{\mathcal {L}(H)}\textrm{d}{s}\\&\quad \le {2\big (L_g^2+C_g^2\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]\big ) } C_1(\nu _1)^2h^{2\nu _1} C_2(\nu _1)^2\cdot 2(2\lambda _1)^{2\nu _1-1} \Gamma (1-2\nu _1). \end{aligned} \end{aligned}$$
(27)

Combining the estimates for the five terms of (25) gives the assertion. \(\square \)

The temporal regularity of the true solution established in Lemma 3.3 plays an important role in the later analysis.

4 The random periodic solution of the Galerkin numerical approximation

This section is devoted to the existence and uniqueness of the random periodic solution for the Galerkin-type spatio-temporal discretization defined in (14), and its convergence to the random periodic solution of our underlying SPDE (5).
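To make the spatio-temporal discretization concrete, the following sketch implements one plausible scheme of this type in spectral coordinates: a Galerkin truncation to n modes combined with an exponential Euler step \(X_{m+1}=S(h)\big (X_m+h f_n(t_m,X_m)+g_n(t_m,X_m)\Delta W_m\big )\), in the spirit of (14). The Dirichlet Laplacian eigenvalues, the drift \(f_n\), and the additive noise amplitude are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

# Spectral Galerkin + exponential Euler sketch (all modelling choices assumed).
rng = np.random.default_rng(1)
n, h, tau = 16, 1e-3, 1.0
lam = (np.pi * np.arange(1, n + 1)) ** 2   # eigenvalues of A = -Laplacian on (0,1)
S_h = np.exp(-lam * h)                     # diagonal semigroup S(h)

def f_n(t, x):
    # a Lipschitz, tau-periodic drift in spectral coordinates
    return -np.tanh(x) + np.cos(2 * np.pi * t / tau) / lam

def step(t, x, dw):
    # X_{m+1} = S(h)(X_m + h f_n(t_m, X_m) + g_n dW_m), with g_n = 0.1 * Id
    return S_h * (x + h * f_n(t, x) + 0.1 * dw)

def simulate(x0, k):
    """Run the scheme from t = -k*tau up to t = 0, as in the pull-back setup."""
    x, t = np.asarray(x0, dtype=float).copy(), -k * tau
    for _ in range(int(round(k * tau / h))):
        x = step(t, x, rng.normal(0.0, np.sqrt(h), n))
        t += h
    return x

x_end = simulate(np.zeros(n), 2)   # second moments stay bounded (cf. Lemma 4.2)
```

The dissipativity of A keeps the moments of the iterates uniformly bounded, which is the discrete analogue of the moment bound established in Lemma 4.2 below.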

Lemma 4.1

Under Assumptions 1.1 to 1.3, for the continuous version of the numerical scheme defined in (14) with stepsize \(h\in (0,1)\), it holds that

$$\begin{aligned} {\mathbb {E}}[\Vert {\hat{X}}_{t}^{n,-k\tau }-{\bar{X}}_{t}^{n,-k\tau }\Vert ^2]\le { h\big (C_{n,1}+C_{n,2}{\mathbb {E}}[\Vert {\bar{X}}_{t}^{n,-k\tau }\Vert ^2]\big ),} \end{aligned}$$
(28)

where \(C_{n,1}=6(L_{f}^2+L_g^2)\) and \(C_{n,2}=3(\lambda _n^2+2C_f^2+2C_g^2)\).

Proof

From (15), we get that

$$\begin{aligned} \begin{aligned}&{\hat{X}}_{t}^{n,-k\tau } -{\bar{X}}_{t}^{n,-k\tau } \\&= \big (S\big (t-\Lambda (t)\big )-\text {Id}\big ) {\bar{X}}_{t}^{n,-k\tau } +\int _{\Lambda (t)} ^t S\big (t-\Lambda (s)\big ) f_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}s\\&\quad + \int _{\Lambda (t)} ^t S\big (t-\Lambda (s)\big ) g_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}W(s). \end{aligned} \end{aligned}$$
(29)

For the first term on the right hand side, we have that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[\Vert \big (S\big (t-\Lambda (t)\big )-\text {Id}\big ) {\bar{X}}_{t}^{n,-k\tau } \Vert ^2]\\&= {\mathbb {E}}\Big [\Big \Vert \sum _{i=1}^n \big (e^{-\lambda _i(t-\Lambda (t))}-1\big )\big (e_i, {\bar{X}}_{t}^{n,-k\tau }\big )e_i \Big \Vert ^2\Big ]\\&\le \big (e^{-\lambda _n(t-\Lambda (t))}-1\big )^2{\mathbb {E}}\big [\Vert {\bar{X}}_{t}^{n,-k\tau }\Vert ^2\big ]\le \lambda _n^2 h^2 {{\mathbb {E}}[\Vert {\bar{X}}_{t}^{n,-k\tau }\Vert ^2]}, \end{aligned} \end{aligned}$$
(30)

where we use the fact \((1-e^{-a})\le a\) for \(a> 0\) to derive the last inequality.

For the second term on the right hand side of (29), we have that

$$\begin{aligned}&{\mathbb {E}}\Big [\Big \Vert \int _{\Lambda (t)} ^t S\big (t-\Lambda (s)\big ) f_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\textrm{d}s\Big \Vert ^2\Big ]\\&\le \int _{\Lambda (t)} ^t \Vert S\big (t-\Lambda (s)\big )\Vert ^2_{\mathcal {L}(H)}\textrm{d}s \int _{\Lambda (t)} ^t {\mathbb {E}}\big [\big \Vert f_n\big (\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } \big )\big \Vert ^2\big ]\textrm{d}s\\&\le {2h^2 \big (L_f^2+C_f^2 {\mathbb {E}}[\Vert {\bar{X}}_{t}^{n,-k\tau }\Vert ^2]\big )}, \end{aligned}$$

where we apply the Hölder inequality to deduce the second line and make use of the linear growth of f to get the last line.

For the last term on the right hand side of (29), through the Itô isometry, Assumption 1.1 and the linear growth of g we have that

$$\begin{aligned}&{\mathbb {E}}\Big [\Big \Vert \int _{\Lambda (t)} ^t S\big (t-\Lambda (s)\big ) g_n\big (\Lambda (s),{\bar{X}}_{s}^{n,-k\tau }\big )\textrm{d}W(s)\Big \Vert ^2\Big ]\\&= \int _{\Lambda (t)} ^t {\mathbb {E}}\big [\Vert S\big (t-\Lambda (s)\big )g_n\big (\Lambda (s),{\bar{X}}_{s}^{n,-k\tau }\big ) \big \Vert ^2_{\mathcal {L}^2_0}\big ]\textrm{d}s\\&\le {2h \big (L_g^2+C_g^2 {\mathbb {E}}\big [\big \Vert {\bar{X}}_{\Lambda (t)}^{n,-k\tau }\big \Vert ^2\big ]\big ).} \end{aligned}$$

Combining the three estimates above yields (28). \(\square \)

Lemma 4.2

Under Assumptions 1.1 to 1.3 and Assumptions 1.5 to 1.6, let \(X^{-k\tau }_{\cdot }\) be the solution of SEE (5) with the initial condition \(\xi \) and let \({\hat{X}}^{n,-k\tau }_{\cdot }\) from (14) be its numerical approximation with the stepsize h satisfying

$$\begin{aligned} {({2(C_f+1) \lambda _n \sqrt{h}(1+C_{n,2} h)}+C_f{(1+C_{n,2})})} \sqrt{h}\le 2C_{f,g}. \end{aligned}$$
(31)

Then, it holds that

$$\begin{aligned} { \sup _{k\in {\mathbb {N}}}\sup _{t>-k\tau }{\mathbb {E}}[\Vert {\hat{X}}_{t}^{n,-k\tau }(\xi )\Vert ^2]<\infty .} \end{aligned}$$
(32)

If, in addition, \(\xi \in L^{2}(\Omega , \mathcal {F}_{-k\tau }, {\mathbb {P}}; {\dot{H}}^r)\) for \(r\in (0,1)\), the numerical solution introduced in (14) is well defined in \( L^{2}(\Omega , {\mathcal {F}_{t}},{\mathbb {P}}; {\dot{H}}^r)\) for any \(k\in {\mathbb {N}}\), and \(t>-k\tau \).

Proof

Applying the Itô formula to \(e^{2\lambda t}\Vert {\hat{X}}_{t}^{n,-k\tau }(\xi )\Vert ^2\), where we consider the differential form (15), and taking the expectation yield

$$\begin{aligned} \begin{aligned} e^{2\lambda t}{\mathbb {E}}[\Vert {\hat{X}}_{t}^{n,-k\tau }(\xi )\Vert ^2]=&e^{-2\lambda k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+2\lambda \int _{-k\tau }^t e^{2\lambda s}{\mathbb {E}}[\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s\\&-2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big ( {\hat{X}}_{s}^{n,-k\tau }, A{\hat{X}}_{s}^{n,-k\tau }\big )\textrm{d}s\\&+2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big ( {\hat{X}}_{s}^{n,-k\tau }, S\big (s-\Lambda (s)\big ) f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )\big ) \textrm{d}s\\&+\int _{-k\tau }^t e^{2\lambda s}{\mathbb {E}}\big [\big \Vert S\big (s-\Lambda (s)\big ) g_n\big (\Lambda (s),{\bar{X}}_{s}^{n,-k\tau } \big )\big \Vert ^2_{\mathcal {L}^2_0}\big ]\textrm{d}s. \end{aligned} \end{aligned}$$
(33)

Note that the inner product in the second to last term can be further decomposed into several inner products as follows:

$$\begin{aligned} \begin{aligned}&\big ( {\hat{X}}_{s}^{n,-k\tau }, S\big (s-\Lambda (s)\big ) f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )\big ) \\&= \big ( {\hat{X}}_{s}^{n,-k\tau }, \big (S\big (s-\Lambda (s)\big ) -\text {Id}\big ) f_n(\Lambda (s), {\hat{X}}_{s}^{n,-k\tau } )\big ) \\&\quad + \big ( {\hat{X}}_{s}^{n,-k\tau }-{\bar{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )-f_n(\Lambda (s), 0)\big ) \\&\quad + \big ( {\bar{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )-f_n(\Lambda (s), 0 )\big ) \\&\quad + \big ( {\hat{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), 0)\big )=:\sum _{i=1}^4 V_i. \end{aligned} \end{aligned}$$
(34)

For \(V_1\), we have that

$$\begin{aligned}&2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}[V_1] \textrm{d}s\\&\quad = 2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big ( {\hat{X}}_{s}^{n,-k\tau }, \big (S\big (s-\Lambda (s)\big ) -\text {Id}\big ) f_n(\Lambda (s), {\hat{X}}_{s}^{n,-k\tau } )\big ) \textrm{d}s\\&\quad \le 2\lambda _n h\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert {(C_f\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert +L_f)}\big ]\textrm{d}s\\&\quad \le {L^2_{f}\lambda _n h\int _{-k\tau }^te^{2\lambda s}\textrm{d}s+(2C_f+1)\lambda _n h \int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s}\\&\quad \le {\frac{\lambda _n h\big (L_{f}^2+2(2C_f+1)C_{n,1} h\big )}{2\lambda }}(e^{2\lambda t}-e^{-2\lambda k\tau })+{2(C_f+1) \lambda _n h(1+C_{n,2} h)}\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s, \end{aligned}$$

where to deduce the second inequality we make use of the linear growth of f and an estimate similar to (30), and to get the last line we bound \({\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\) in terms of \({\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\) via the following consequence of Lemma 4.1:

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2\big ]&\le 2{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]+2{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }-{\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\\&\le {2(1+C_{n,2}h){\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]+2C_{n,1}h.} \end{aligned} \end{aligned}$$
(35)

For \(V_2\), by the Lipschitz condition on f, the Hölder inequality and Lemma 4.1, we have that

$$\begin{aligned}&2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}[V_2] \textrm{d}s\\&\quad = 2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}} \big ( {\hat{X}}_{s}^{n,-k\tau }-{\bar{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )-f_n(\Lambda (s), 0)\big ) \textrm{d}s\\&\quad \le 2C_f\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }-{\bar{X}}_{s}^{n,-k\tau }\Vert \Vert {\bar{X}}_{s}^{n,-k\tau }\Vert \big ]\textrm{d}s\\&\quad \le 2C_f\int _{-k\tau }^te^{2\lambda s}\sqrt{{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }-{\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]}\textrm{d}s\\&\quad \le 2C_f{\sqrt{h}\int _{-k\tau }^te^{2\lambda s}\sqrt{\big (C_{n,2}{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]+C_{n,1}\big ){\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]}}\textrm{d}s\\&\quad \le \frac{C_f C_{n,1} \sqrt{h}}{2\lambda }(e^{2\lambda t}-e^{-2\lambda k\tau })+C_f{(1+C_{n,2})} \sqrt{h}\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s. \end{aligned}$$

For \(V_3\), together with the last term in (33), we are able to make use of the dissipativity condition in Assumption 1.5 so that

$$\begin{aligned}&2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}[V_3] \textrm{d}s+ \int _{-k\tau }^t e^{2\lambda s}{\mathbb {E}}\big [\big \Vert S\big (s-\Lambda (s)\big ) g_n\big (\Lambda (s),{\bar{X}}_{s}^{n,-k\tau } \big )\big \Vert ^2_{\mathcal {L}^2_0}\big ]\textrm{d}s \\&\le 2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\big ( {\bar{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), {\bar{X}}_{s}^{n,-k\tau } )-f_n(\Lambda (s), 0 )\big )+ \big \Vert g_n\big (\Lambda (s),{\bar{X}}_{s}^{n,-k\tau } \big )-g_n\big (\Lambda (s),0\big )\big \Vert ^2_{\mathcal {L}^2_0} \big ]\textrm{d}s\\&\quad + 2\int _{-k\tau }^te^{2\lambda s}\big \Vert g_n\big (\Lambda (s),0 \big )\big \Vert ^2_{\mathcal {L}^2_0} \textrm{d}s\\&\le -2C_{f,g}\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s+{\frac{L_{g}^2}{\lambda }(e^{2\lambda t}-e^{-2\lambda k\tau })}, \end{aligned}$$

where we also apply the linear growth of g in (8) to deduce the last line.

For \(V_4\), we have that by the linear growth of f and the Young inequality

$$\begin{aligned}&2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}[V_4] \textrm{d}s= 2\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big ( {\hat{X}}_{s}^{n,-k\tau }, f_n(\Lambda (s), 0)\big )\textrm{d}s\\&\le {2L_{f}}\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert \big ]\textrm{d}s\\&\le {\frac{L_{f}^2}{\epsilon \lambda }(e^{2\lambda t}-e^{-2\lambda k\tau }) +\epsilon \int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s}, \end{aligned}$$

where \(\epsilon \) is a constant such that \(0<\epsilon <\lambda _1\). Here we fix \(\epsilon =\lambda _1/2\).

Now take \(\lambda = \lambda _1-\epsilon =\lambda _1/2>0\). In summary,

$$\begin{aligned}&e^{2\lambda t}{\mathbb {E}}[\Vert {\hat{X}}_{t}^{n,-k\tau }(\xi )\Vert ^2]\\&\quad \le e^{-2\lambda k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+(2\lambda +{\epsilon }-2\lambda _1) \int _{-k\tau }^t e^{2\lambda s}{\mathbb {E}}[\Vert {\hat{X}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s\\&\qquad -\big (2C_{f,g}-({2(C_f+1) \lambda _n \sqrt{h}(1+C_{n,2} h)}+C_f{(1+C_{n,2})}) \sqrt{h}\big )\int _{-k\tau }^te^{2\lambda s}{\mathbb {E}}\big [\Vert {\bar{X}}_{s}^{n,-k\tau }\Vert ^2\big ]\textrm{d}s\\&\qquad +{\frac{\lambda _n h\big (L_{f}^2+2(2C_f+1)C_{n,1} h\big )+C_fC_{n,1}\sqrt{ h}+2L_{g}^2+L_{f}^2/\epsilon }{2\lambda }}(e^{2\lambda t}-e^{-2\lambda k\tau })\\&\quad \le e^{-2\lambda k\tau }{\mathbb {E}}[\Vert \xi \Vert ^2]+{\frac{\lambda _n h\big (L_{f}^2+2(2C_f+1)C_{n,1} h\big )+C_fC_{n,1}\sqrt{ h}+2L_{g}^2+2L_{f}^2/\lambda _1}{\lambda _1}}(e^{2\lambda t}-e^{-2\lambda k\tau }), \end{aligned}$$

where, to deduce the last line, we make use of the choice for h in (31). This leads to

$$\begin{aligned}&{\mathbb {E}}[\Vert {\hat{X}}_{t}^{n,-k\tau }(\xi )\Vert ^2]\\&\quad \le {\mathbb {E}}[\Vert \xi \Vert ^2]+\frac{\lambda _n h\big (L_{f}^2+2(2C_f+1)C_{n,1} h\big )+C_fC_{n,1}\sqrt{ h}+2L_{g}^2+2L_{f}^2/\lambda _1}{\lambda _1}(1-e^{-2\lambda (t+k\tau )})<\infty . \end{aligned}$$

The second assertion follows by the same argument as in the proof of Lemma 3.1. \(\square \)

Lemma 4.3

Under Assumptions 1.1 to 1.3, denote by \({\hat{X}}_t^{n,-k\tau }\) and \({\hat{Y}}_t^{n,-k\tau }\) two Galerkin numerical approximations from (14) of SEE (5) with the same stepsize \(h\in (0,1)\) but different initial values \(\xi \) and \(\eta \). Define \({\hat{E}}_t^{n,-k\tau }:={\hat{X}}_t^{n,-k\tau }-{\hat{Y}}_t^{n,-k\tau }\) and similarly \({\bar{E}}_t^{n,-k\tau }:={\bar{X}}_t^{n,-k\tau }-{\bar{Y}}_t^{n,-k\tau }\). Then,

$$\begin{aligned} {\mathbb {E}}[\Vert {\hat{E}}_{t}^{n,-k\tau }-{\bar{E}}_{t}^{n,-k\tau }\Vert ^2]\le c_n h{{\mathbb {E}}[\Vert {\bar{E}}_{t}^{n,-k\tau }\Vert ^2],} \end{aligned}$$
(36)

where \(c_n=3(\lambda ^2_n+C_f^2+C_g^2)\).

Proof

From (15), we have that

$$\begin{aligned} \begin{aligned} \textrm{d}{\hat{E}}_t^{n,-k\tau }&= \Big (-A {\hat{E}}_t^{n,-k\tau } + S\big (t-\Lambda (t)\big ) \big (f_n(\Lambda (t), {\bar{X}}_{t}^{n,-k\tau } )-f_n(\Lambda (t), {\bar{Y}}_{t}^{n,-k\tau } )\big )\Big )\textrm{d}t\\&\quad + S\big (t-\Lambda (t)\big ) \big (g_n(\Lambda (t), {\bar{X}}_{t}^{n,-k\tau } )-g_n(\Lambda (t), {\bar{Y}}_{t}^{n,-k\tau } )\big )\textrm{d}W(t). \end{aligned} \end{aligned}$$
(37)

The rest of the proof is similar to the proof of Lemma 4.1. \(\square \)

Lemma 4.4

Under the same assumptions as in Lemma 4.3 together with Assumption 1.5, denote by \({\hat{X}}_t^{n,-k\tau }\) and \({\hat{Y}}_t^{n,-k\tau }\) two approximations of SEE (5) with different initial values \(\xi \) and \(\eta \) under the same stepsize h satisfying \(2C_f(\sqrt{h}\lambda _n+\sqrt{c_n})\sqrt{h}\le C_{f,g}\). Then

$$\begin{aligned} {\mathbb {E}}[\Vert {\hat{X}}_t^{n,-k\tau }-{\hat{Y}}_t^{n,-k\tau }\Vert ^2]\le e^{-2\lambda _1(t+k\tau )}{\mathbb {E}}[\Vert \xi -\eta \Vert ^2]. \end{aligned}$$

Proof

Similarly to the proof of Lemma 4.2, we apply the Itô formula to \(e^{2\lambda _1 t}\Vert {\hat{E}}_t^{n,-k\tau }\Vert ^2\), take its expectation, make use of the Itô isometry, and get

$$\begin{aligned} \begin{aligned}&e^{2\lambda _1 t}{\mathbb {E}}[\Vert {\hat{E}}_t^{n,-k\tau }\Vert ^2]=e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi -\eta \Vert ^2]+2\lambda _1 \int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert {\hat{E}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s\\&\quad -2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}\big ( {\hat{E}}_{s}^{n,-k\tau }, A{\hat{E}}_{s}^{n,-k\tau }\big )\textrm{d}s\\&\quad +2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}\Big ({\hat{E}}_s^{n,-k\tau }, S\big (s-\Lambda (s)\big )\big ({f_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{f_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\Big )\textrm{d}s\\&\quad +\int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}\big [\Vert S\big (s-\Lambda (s)\big )\big ({g_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{g_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\Vert ^2_{\mathcal {L}^2_0}\big ]\textrm{d}s. \end{aligned} \end{aligned}$$
(38)

In order to make use of the dissipative condition in Assumption 1.5, we further decompose the following term into three terms

$$\begin{aligned}&\Big ({\hat{E}}_s^{n,-k\tau }, S\big (s-\Lambda (s)\big )\big ({f_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{f_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\Big )\\&=\big ({\bar{E}}_s^{n,-k\tau }, {f_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{f_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\\&\quad +\Big ({\bar{E}}_s^{n,-k\tau }, \big (S\big (s-\Lambda (s)\big )-\text {Id}\big )\big ({f_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{f_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\Big )\\&\quad + \Big ({\hat{E}}_s^{n,-k\tau }-{\bar{E}}_s^{n,-k\tau }, S\big (s-\Lambda (s)\big )\big ({f_n}(\Lambda (s),{\bar{X}}_{s}^{n,-k\tau })-{f_n}(\Lambda (s),{\bar{Y}}_{s}^{n,-k\tau })\big )\Big )\\&=:U_1+U_2+U_3. \end{aligned}$$

Substituting the right hand side into Eq. (38) and applying the dissipative condition in Assumption 1.5 give that

$$\begin{aligned}&e^{2\lambda _1 t}{\mathbb {E}}[\Vert {\hat{E}}_t^{n,-k\tau }\Vert ^2]\le e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi -\eta \Vert ^2]+2(\lambda _1-\lambda _1)\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert {\hat{E}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s\\&-C_{f,g}\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert {\bar{E}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s+2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[U_2]\textrm{d}s+2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[U_3]\textrm{d}s. \end{aligned}$$

For the term involving \(U_2\), we have that

$$\begin{aligned} 2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[U_2]\textrm{d}s\le 2C_f \lambda _n h \int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[\Vert {{\bar{E}}_{s}^{n,-k\tau }}\Vert ^2]\textrm{d}s, \end{aligned}$$

where we bound \(\Vert \big (S\big (s-\Lambda (s)\big )-\text {Id}\big ) \cdot \Vert \le \lambda _n h \Vert \cdot \Vert \) as in the derivation of (30) in Lemma 4.1.

For the term involving \(U_3\), we have that

$$\begin{aligned} 2\int _{-k\tau }^te^{2\lambda _1 s}{\mathbb {E}}[U_3]\textrm{d}s&\le 2C_f \int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert {\hat{E}}_s^{n,-k\tau }-{\bar{E}}_s^{n,-k\tau }\Vert \Vert {\bar{E}}_s^{n,-k\tau }\Vert ]\textrm{d}s\\&\le 2C_f\sqrt{c_n}\sqrt{h} \int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert {\bar{E}}_s^{n,-k\tau }\Vert ^2]\textrm{d}s, \end{aligned}$$

where we apply Lemma 4.3 to deduce the last line.

In summary, we have that

$$\begin{aligned}&e^{2\lambda _1 t}{\mathbb {E}}[\Vert {\hat{E}}_t^{n,-k\tau }\Vert ^2]\le e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi -\eta \Vert ^2] \\&\qquad -\big (C_{f,g}-2C_f(\sqrt{h}\lambda _n+\sqrt{c_n})\sqrt{h}\big )\int _{-k\tau }^t e^{2\lambda _1 s}{\mathbb {E}}[\Vert {\bar{E}}_{s}^{n,-k\tau }\Vert ^2]\textrm{d}s\\&\quad \le e^{-2\lambda _1 k\tau }{\mathbb {E}}[\Vert \xi -\eta \Vert ^2] \end{aligned}$$

because of the choice of stepsize h. Then, the assertion follows. \(\square \)
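The contraction in Lemma 4.4 can be checked numerically for the toy spectral scheme sketched earlier (the drift \(f(x)=-\tanh (x)\), the noise amplitude, and the discretization parameters are assumptions for illustration): two runs driven by the same noise but started at \(\xi \) and \(\eta \) merge at a rate no slower than roughly \(e^{-(\lambda _1-L_f)t}\).

```python
import numpy as np

# Toy check of the Lemma 4.4 contraction (all modelling choices assumed).
rng = np.random.default_rng(2)
n, h = 16, 1e-3
lam = (np.pi * np.arange(1, n + 1)) ** 2   # eigenvalues of A
S_h = np.exp(-lam * h)                     # diagonal semigroup S(h)

def step(x, dw):
    # exponential Euler with a dissipative, 1-Lipschitz drift f(x) = -tanh(x)
    return S_h * (x + h * (-np.tanh(x)) + 0.1 * dw)

x = np.full(n, 5.0)        # initial value xi
y = np.full(n, -3.0)       # initial value eta
gap0 = np.linalg.norm(x - y)
steps = 500                # time span steps*h = 0.5
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(h), n)    # SAME noise for both runs
    x, y = step(x, dw), step(y, dw)

gap = np.linalg.norm(x - y)
bound = gap0 * np.exp(-(lam[0] - 1.0) * steps * h)   # crude contraction bound
```

Since the noise is additive and shared, it cancels in the difference, and the decay of the gap is driven purely by the semigroup and the one-sided Lipschitz drift, mirroring the exponential estimate in Lemma 4.4.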

Theorem 4.1

Under Assumptions 1.1 to 1.6, for any \(h\in (0,1)\) satisfying

$$\begin{aligned}&2C_f(\sqrt{h}\lambda _n+\sqrt{c_n})\sqrt{h}\le C_{f,g} \text { and } \nonumber \\&\quad {({2(C_f+1) \lambda _n \sqrt{h}(1+C_{n,2} h)} +C_f{(1+C_{n,2})})} \sqrt{h}\le 2C_{f,g}, \end{aligned}$$
(39)

the Galerkin numerical approximation (14) admits a unique random periodic solution \({\hat{X}}^{n,*}_{t}\in L^2(\Omega ;H)\) such that

$$\begin{aligned} \lim _{k\rightarrow \infty }{\mathbb {E}}[\Vert {\hat{X}}^{n,-k\tau }_t(\xi )-{\hat{X}}^{n,*}_t\Vert ^2]=0. \end{aligned}$$
(40)

With Lemma 4.2 and Lemma 4.4, the proof is similar to the proof of Theorem 3.4 in [5].

4.1 The convergence

Theorem 4.2

Under Assumptions 1.1 to 1.3 and Assumptions 1.5 to 1.6, let \(X^{-k\tau }_{\cdot }\) be the solution of SEE (5) with the initial condition \(\xi \in L^{2}(\Omega , \mathcal {F}_{-k\tau }, {\mathbb {P}}; {\dot{H}}^r)\) for some \(r\in (0,1)\), and let \({\hat{X}}^{n,-k\tau }_{\cdot }\) be its numerical approximation defined by (14) with the stepsize h satisfying (31). Then for any \(\nu _1 \in (0,r/2]\), there exists a constant C, which depends on \(\xi \), A, f, g, r, \(\nu _1\) and the uniform bounds of both \(X^{-k\tau }_{\cdot }\) and \({\hat{X}}^{n,-k\tau }_{\cdot }\), such that

$$\begin{aligned} \sup _{k\in {\mathbb {N}}}{\sup _{t\ge -k\tau }}\big ({\mathbb {E}}[\Vert X_t^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]\big )^{1/2}\le C\big (h^{{\nu _1\wedge \kappa }}+\frac{1}{\sqrt{\lambda _n^r}}\big ), \end{aligned}$$
(41)

where \(\nu _1\wedge \kappa \) denotes the minimum of \(\nu _1\) and \(\kappa \).

Proof

From the mild form (10) and the continuous version (14) for the Galerkin numerical approximation we derive that

$$\begin{aligned} \begin{aligned}&X_t^{-k\tau }-{\hat{X}}_t^{n,-k\tau }=S(t+k\tau )(\text {Id}-P_n)\xi + \int _{-k\tau }^t S(t-s)(\text {Id}-P_n)f(s,X_s^{-k\tau })\textrm{d}s\\&\quad +\int _{-k\tau }^t S(t-s)\big (f_n(s,X_s^{-k\tau })-f_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}s\\&\quad +\int _{-k\tau }^t S(t-s)\big (f_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )-f_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\big )\textrm{d}s\\&\quad {-}\int _{-k\tau }^t S(t-s)(S(s-\Lambda (s))-\text {Id})f_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\textrm{d}s\\&\quad +\int _{-k\tau }^t S(t-s)(\text {Id}-P_n)g(s,X_s^{-k\tau })\textrm{d}W(s)\\&\quad +\int _{-k\tau }^t S(t-s)\big (g_n(s,X_s^{-k\tau })-g_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}W(s)\\&\quad +\int _{-k\tau }^t S(t-s)\big (g_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )-g_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\big )\textrm{d}W(s)\\&\quad {-}\int _{-k\tau }^t S(t-s)(S(s-\Lambda (s))-\text {Id})g_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\textrm{d}W(s)=:\sum _{i=1}^9 J_i. \end{aligned} \end{aligned}$$
(42)

It remains to estimate each of \(\{J_i\}_{i=1}^9\) in \({\mathbb {E}}[\Vert \cdot \Vert ^2]\) with a finite bound that is independent of k and t. For \(J_1\), we can get the following estimate based on Assumption 1.1 and the condition on \(\xi \):

$$\begin{aligned}&{\mathbb {E}}[\Vert S(t+k\tau )(\text {Id}-P_n)\xi \Vert ^2]\\ {}&={\mathbb {E}}\big [ \sum _{i=n+1}^\infty e^{-2(t+k\tau )\lambda _i} (e_i,\xi )^2\big ]={\mathbb {E}}\big [ \sum _{i=n+1}^\infty \frac{e^{-2(t+k\tau )\lambda _i}}{\lambda _i^r} \lambda ^r_i (e_i,\xi )^2\big ]\\&\le \frac{1}{\lambda _n^r}{\mathbb {E}}\big [ \sum _{i=1}^\infty \lambda ^r_i (e_i,\xi )^2\big ]=\frac{1}{\lambda _n^r}{\mathbb {E}}[\Vert A^{\frac{r}{2}}\xi \Vert ^2]. \end{aligned}$$

For \(J_2\), by using the same decomposition for \(x\in H\) as in \(J_1\), the linear growth of f, and the uniform boundedness of \(X^{-k\tau }_t\) (see Lemma 3.1), one can see that

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)(\text {Id}-P_n)f(s,X_s^{-k\tau })\textrm{d}s\big \Vert ^2\big ]\\&={\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t \sum _{i=n+1}^\infty e^{-(t-s)\lambda _i} (e_i,f(s,X_s^{-k\tau }))e_i\textrm{d}s\big \Vert ^2\big ]\\&\le {\mathbb {E}}\Big [\Big (\int _{-k\tau }^t e^{-(t-s)\lambda _{n+1}} \Big \Vert \sum _{i=n+1}^\infty (e_i,f(s,X_s^{-k\tau }))e_i\Big \Vert \textrm{d}s\Big )^2\Big ]\\&\le \int _{-k\tau }^t e^{-\lambda _{n+1}(t-s)} \textrm{d}s \int _{-k\tau }^t e^{-\lambda _{n+1}(t-s)} {\mathbb {E}}[\Vert f(s,X_s^{-k\tau })\Vert ^2]\textrm{d}s\\&\le 2\big ({L_{f}^2+C_f^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_t\Vert ^2]\big )\frac{(1-e^{-\lambda _{n+1}(t+k\tau )})^2}{\lambda _{n+1}^2}\\&\le \frac{2}{\lambda _{n+1}^2} \big ({L_{f}^2+C_f^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_t\Vert ^2]\big ). \end{aligned}$$

To get the upper bound for \(J_3\), one shall apply the Hölder inequality and Assumption 1.3, and then make use of Lemma 3.3,

$$\begin{aligned}&\big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (f_n(s,X_s^{-k\tau })-f_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}s\big \Vert ^2\big ]\big )^{1/2}\\&\quad \le \big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (f_n(s,X_s^{-k\tau })-f_n\big (\Lambda (s),X_s^{-k\tau }\big )\big )\textrm{d}s\big \Vert ^2\big ]\big )^{1/2}\\&\qquad +\big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (f_n(\Lambda (s),X_s^{-k\tau })-f_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}s\big \Vert ^2\big ]\big )^{1/2}\\&\quad \le 2C_f \Big (\sqrt{1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]}{h^\kappa }+\sqrt{C_X(\nu _1,r)}h^{\nu _1}\Big )\int _{-k\tau }^t \Vert S(t-s)\Vert _{\mathcal {L}(H)}\textrm{d}s\\&\quad \le \frac{1}{{\lambda _1}}2C_f \Big (\sqrt{1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]}h^{{\kappa }}+\sqrt{C_X(\nu _1,r)}{h^{\nu _1}}\Big ), \end{aligned}$$

where, to obtain the last line, we use the following estimate based on Proposition 2.1:

$$\begin{aligned} \int _{-k\tau }^t \Vert S(t-s)\Vert _{\mathcal {L}(H)}\textrm{d}s\le \int _{-k\tau }^t e^{-{\lambda _1}(t-s)}\textrm{d}s\le \frac{1}{\lambda _1}. \end{aligned}$$
(43)

Regarding the term \(J_4\), by Assumption 1.3 and the estimate (43) one has that

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (f_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )-f_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\big )\textrm{d}s\big \Vert ^2\big ]\\&\le \frac{C_f^2}{{\lambda _1}^2} \sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X_{t}^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]. \end{aligned}$$

For the term \(J_5\), following the estimate (26), one obtains

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^tS(t-s)(S(s-\Lambda (s))-\text {Id})f_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\textrm{d}s\big \Vert ^2\big ]\\&\le 2 \big ({L_{f}^2+C_f^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert {\hat{X}}^{n,-k\tau }_t\Vert ^2]\big ) C_1(\nu _1)^2C_2(\nu _1)^2\frac{{\lambda _1}^{2(\nu _1-1)} \Gamma (1-\nu _1)^2}{4} h^{2\nu _1}. \end{aligned}$$

Directly applying the Itô isometry and the estimate (19), we have the bound for \(J_6\):

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^tS(t-s)(\text {Id}-P_n)g(s,X_s^{-k\tau })\textrm{d}W(s)\big \Vert ^2\big ]\\&\le 2\big ({L_{g}^2+C_g^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_t\Vert ^2]\big )\int _{-k\tau }^t \Vert S(t-s)(\text {Id}-P_n)\Vert ^2_{\mathcal {L}(H)}\textrm{d}s\\&\le \frac{2}{\lambda _{n+1}^2}\big ({L_{g}^2+C_g^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_t\Vert ^2]\big ). \end{aligned}$$

Through the Itô isometry, Assumption 1.1, Assumption 1.3, Lemma 3.3 and an estimate similar to (43), one can derive the bound for \(J_7\) as follows:

$$\begin{aligned}&\big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (g_n(s,X_s^{-k\tau })-g_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}W(s)\big \Vert ^2\big ]\big )^{1/2}\\&\quad \le \big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (g_n(s,X_s^{-k\tau })-g_n\big (\Lambda (s),X_{s}^{-k\tau }\big )\big )\textrm{d}W(s)\big \Vert ^2\big ]\big )^{1/2}\\&\qquad +\big ({\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (g_n(\Lambda (s),X_s^{-k\tau })-g_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )\big )\textrm{d}W(s)\big \Vert ^2\big ]\big )^{1/2}\\&\quad \le 2C_g\Big (\sqrt{1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]}{h^\kappa }+\sqrt{C_X(\nu _1,r)}h^{\nu _1}\Big )\\&\quad \big (\int _{-k\tau }^t \Vert S(t-s)\Vert ^2_{\mathcal {L}(H)}\textrm{d}s\big )^{1/2}\\&\quad \le \frac{2C_g}{\sqrt{2{\lambda _1}}} \Big (\sqrt{1+\sup _{k\in {\mathbb {N}}}\sup _{s\ge -k\tau }{\mathbb {E}}[\Vert X^{-k\tau }_s\Vert ^2]}h^{\kappa }+\sqrt{C_X(\nu _1,r)}{h^{\nu _1}}\Big ). \end{aligned}$$

Regarding the term \(J_8\), by the Itô isometry, Assumption 1.3 and the estimate (43) one has that

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^t S(t-s)\big (g_n\big (\Lambda (s),X_{\Lambda (s)}^{-k\tau }\big )-g_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\big )\textrm{d}W(s)\big \Vert ^2\big ]\\&\le \frac{C_g^2}{2{\lambda _1}} \sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert X_{t}^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]. \end{aligned}$$
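The factor \(1/\sqrt{2\lambda _1}\) appearing in the bounds for \(J_7\) and \(J_8\) comes from the squared analogue of (43), which follows from Proposition 2.1 in the same way:

$$\begin{aligned} \int _{-k\tau }^t \Vert S(t-s)\Vert ^2_{\mathcal {L}(H)}\textrm{d}s\le \int _{-k\tau }^t e^{-2\lambda _1(t-s)}\textrm{d}s=\frac{1-e^{-2\lambda _1(t+k\tau )}}{2\lambda _1}\le \frac{1}{2\lambda _1}. \end{aligned}$$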

Finally, applying the Itô isometry and the linear growth of g in Assumption 1.3, and following the estimate (27), we obtain the bound for \(J_9\):

$$\begin{aligned}&{\mathbb {E}}\big [\big \Vert \int _{-k\tau }^tS(t-s)(S(s-\Lambda (s))-\text {Id})g_n(\Lambda (s),{\bar{X}}_s^{n,-k\tau })\textrm{d}W(s)\big \Vert ^2\big ]\\&\quad \le 2\big ({L_{g}^2+C_g^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert {\hat{X}}^{n,-k\tau }_t\Vert ^2]\big )\int _{-k\tau }^t \Vert A^{\nu _1}S(t-s)A^{-\nu _1}(S(s-\Lambda (s))-\text {Id})\Vert ^2_{\mathcal {L}(H)}\textrm{d}s\\&\quad \le 2\big ({L_{g}^2+C_g^2}\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }{\mathbb {E}}[\Vert {\hat{X}}^{n,-k\tau }_t\Vert ^2]\big )C_1(\nu _1)^2 h^{2\nu _1}C_2(\nu _1)^2\frac{(2{\lambda _1})^{2\nu _1-1} \Gamma (1-2\nu _1)^2}{2}. \end{aligned}$$

In total, we have that

$$\begin{aligned}&\sup _{k\in {\mathbb {N}}}{\sup _{t\ge -k\tau }}\big ({\mathbb {E}}[\Vert X_t^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]\big )^{1/2}\le \sum _{i=1}^9 \sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }({\mathbb {E}}[\Vert J_i\Vert ^2])^{1/2} \\&\quad \le C\Big (h^{\nu _1}+{h^{\kappa }}+\frac{1}{\sqrt{\lambda _n^r}}+\frac{1}{\lambda _n}+\frac{1}{\lambda _{n+1}}\Big )+\Big (\frac{C_f}{{\lambda _1}}+\frac{C_g}{\sqrt{{2{\lambda _1}}}}\Big )\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }\big ({\mathbb {E}}[\Vert X_t^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]\big )^{1/2}. \end{aligned}$$

Since \(\frac{C_f}{{\lambda _1}}+\frac{C_g}{\sqrt{{{2}\lambda _1}}}<1\) by Assumption 1.6, we can conclude the final assertion. \(\square \)
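To spell out the final absorption step of the proof: writing, for this remark only, \(D:=\sup _{k\in {\mathbb {N}}}\sup _{t\ge -k\tau }\big ({\mathbb {E}}[\Vert X_t^{-k\tau }-{\hat{X}}_t^{n,-k\tau }\Vert ^2]\big )^{1/2}\) and \(q:=\frac{C_f}{\lambda _1}+\frac{C_g}{\sqrt{2\lambda _1}}\), the estimate above takes the form \(D\le C\big (h^{\nu _1}+h^{\kappa }+\frac{1}{\sqrt{\lambda _n^r}}+\frac{1}{\lambda _n}+\frac{1}{\lambda _{n+1}}\big )+qD\) with \(D<\infty \). Since \(q<1\), the last term can be absorbed into the left-hand side, yielding

$$\begin{aligned} D\le \frac{C}{1-q}\Big (h^{\nu _1}+h^{\kappa }+\frac{1}{\sqrt{\lambda _n^r}}+\frac{1}{\lambda _n}+\frac{1}{\lambda _{n+1}}\Big ). \end{aligned}$$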

Note that in Theorem 4.2 one can take \(\nu _1=r/2\) to achieve the fastest rate of convergence.

Corollary 4.1

Assume Assumptions 1.1 to 1.6 hold. Let \(X^{*}_{t}\) be the random periodic solution of SEE (5) and \({\hat{X}}^{n,*}_{t}\) be the random periodic solution of the Galerkin numerical approximation with stepsize h satisfying (39). Consider approximating \({\hat{X}}^{n,*}_{\cdot }\) through the sequence \(\{{\hat{X}}^{n,-k\tau }_{\cdot }(\xi )\}_{k}\) with \(\xi \in L^{2}(\Omega , \mathcal {F}_{-k\tau }, {\mathbb {P}}; {\dot{H}}^r)\) for \(r\in (0,1)\). Then there exists a constant C, depending on A, f, g and r, such that

$$\begin{aligned} \sup _{t} \big ({\mathbb {E}}[\Vert X^*_t-{\hat{X}}^{n,*}_t\Vert ^2]\big )^{1/2}\le C \big (h^{{\frac{r}{2}\wedge \kappa }}+\frac{1}{\sqrt{\lambda _n^r}}\big ). \end{aligned}$$
(44)
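As an illustration (not part of the statement), suppose A is the Dirichlet Laplacian on \((0,1)\), for which \(\lambda _n=\pi ^2 n^2\); then \(\lambda _n^{-r/2}\asymp n^{-r}\), and balancing the two error terms in (44) when \(\kappa \ge r/2\) suggests coupling the Galerkin dimension to the stepsize via

$$\begin{aligned} n\gtrsim h^{-1/2}\quad \Longrightarrow \quad \sup _{t} \big ({\mathbb {E}}[\Vert X^*_t-{\hat{X}}^{n,*}_t\Vert ^2]\big )^{1/2}\le C h^{r/2}, \end{aligned}$$

so that neither the temporal nor the spectral discretization dominates the total error.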

Corollary 4.1 implies that the best order of convergence that can be achieved is \(1/2-\epsilon \) for arbitrarily small \(\epsilon >0\), provided \(\kappa \ge 1/2\). Moreover, since the mild form of \(X^*_t\) is well defined in \(\bigcap _{r\in (0,1)}L^2(\Omega ;{\dot{H}}^r)\), as shown in Theorem 3.1, and \({\dot{H}}^{r_1}\subset {\dot{H}}^{r_2}\) for \(r_1\ge r_2\), it is not surprising that the order of convergence becomes higher if we adopt the approximation sequence with the initial condition in \(L^{2}(\Omega ; {\dot{H}}^r)\) for a larger value of r.