Splitting up method for the 2D stochastic Navier–Stokes equations

  • H. Bessaih
  • Z. Brzeźniak
  • A. Millet


In this paper, we deal with the convergence of an iterative scheme for the 2D stochastic Navier–Stokes equations on the torus suggested by the Lie–Trotter product formulas for stochastic differential equations of parabolic type. The stochastic system is split into two problems which are simpler for numerical computations. An estimate of the approximation error is given for periodic boundary conditions. In particular, we prove that the strong speed of convergence in probability is almost \(1/2\). This is shown by means of an \(L^2(\Omega ,\mathbb {P})\) convergence localized on a set of arbitrarily large probability. The assumptions on the diffusion coefficient depend on whether or not some multiple of the Laplace operator is present with the multiplicative stochastic term. Note that if one of the splitting steps only contains the stochastic integral, then the diffusion coefficient may not contain any gradient of the solution.


Keywords: Splitting up methods · Navier–Stokes equations · Hydrodynamical models · Stochastic PDEs · Strong convergence · Speed of convergence in probability

Mathematics Subject Classification

Primary: 60H15 · 60F10 · 60H30; Secondary: 76D06 · 76M35

1 Introduction

Assume that \(D\) is a regular bounded open domain of \(\mathbb {R}^{2}\) with boundary \(\partial {D}\) and let \(\mathbf {n}\) denote the external normal field to the boundary. Let us consider the Navier–Stokes equations with, for concreteness, the Dirichlet boundary conditions:
$$\begin{aligned} \frac{\partial u(t,x)}{\partial t}-\nu \Delta u(t,x)+(u(t,x)\cdot \nabla ) u(t,x)+\nabla p(t,x)=G(t,u(t,x))\dot{W}(t,x),\nonumber \\ \end{aligned}$$
\(t\in [0,T]\), \(x\in D\), with the incompressibility condition
$$\begin{aligned} \nabla \cdot u(t,x)=0,\quad t\in [0,T], \quad x\in D, \end{aligned}$$
the boundary condition
$$\begin{aligned} u=0 \text{ on } \partial D \end{aligned}$$
and the initial condition
$$\begin{aligned} u(0,x)=u_{0}(x) \quad x\in D. \end{aligned}$$
Here, \(u\) is the velocity, \(p\) is the pressure, \(\nu \) is the viscosity coefficient, \(W\) is a cylindrical Brownian motion and \(G\) is an operator valued function acting on divergence free vector fields. Details will be given in the next sections.

Our main results from Sects. 4 and 5 are valid only for the stochastic Navier–Stokes equations with the periodic boundary conditions. Our results from Sect. 3 are valid for the stochastic Navier–Stokes equations with both the periodic and Dirichlet boundary conditions, see Sect. 4 for details.

The well-posedness of the system (1.1)–(1.2) has been extensively investigated. Under very general conditions on the operator \(G\), we refer to [13] for stationary martingale solutions; uniqueness in the \(2\)-dimensional case has been investigated in [20]. For strong solutions and more general hydrodynamical models we refer to [9]. For a comprehensive setting and higher dimensions, we refer to [12] and the references therein.

In this paper, we deal with the convergence of some iterative schemes suggested by the Lie–Trotter product formulas for stochastic differential equations of parabolic type. The stochastic system is split into two problems which are simpler for numerical computations. Other numerical schemes have been used in the literature. The splitting method has also been used for the purely theoretical purpose of proving the existence of solutions of the stochastic geometric heat equation in [14]. Let us mention two recent papers on the topic of numerical approximations to the 2D stochastic Navier–Stokes equations.

Carelli and Prohl [7] studied the strong speed of convergence of some fully and semi-implicit semi-discrete Euler time discretizations and finite element based space-time discretization schemes. As in [19] for the stochastic Burgers equation, they proved the \(L^2(\Omega , {\mathbb P})\) convergence of the error term localized by a set (depending on the discretization), whose probability converges to \(1\) as the time mesh decreases to \(0\). They assume that the initial condition satisfies \(u_0 \in L^8(\Omega ,V)\), and that the diffusion coefficient of the multiplicative noise may contain some gradient of the solution. These authors studied the speed of convergence in probability (a notion introduced by Printems in [19]) of the fully implicit scheme to the solution in two different spaces, \(C([0,T];H)\cap L^2(0,T;V)\) and \(C([0,T];V')\cap L^2(0,T;H)\) (here \(V\) and \(H\) are the usual functional spaces used for the Navier–Stokes equations; they will be defined later on). They showed that these speeds are equal respectively to \(1/4 - \alpha \) and \(1/2-\alpha \) for some positive \(\alpha \). Finally, they showed that the difference between the fully and the semi-implicit schemes converges to \(0\) in probability in \(C([0,T];V')\cap L^2(0,T;H)\) with speed \(1/2-\alpha \). The speed of convergence of some space-time discretizations was also investigated. Note that in [7], no projection on divergence-free fields is made and the pressure term is part of the discretized process.

Using semi-group and cubature techniques, Dörsek [10] studied the weak speed of convergence of a certain Strang time-splitting scheme combined with a Galerkin approximation in the space variable for the SNSEs with an additive noise. Two equations are solved alternately on each time interval of the subdivision to approximate the Galerkin approximation: a deterministic equation which only contains the bilinear term, and the stochastic heat equation. The weak speed of convergence of this approximation is given in terms of the largest eigenvalue \(N\) of the Stokes operator on the space of eigenvectors onto which the projection is done: if a function \(\varphi \) belongs to \({\mathcal C}^6(L^2(D))\) with some exponential control of its derivatives of order up to \(6\), then the error term between the Strang time splitting \(v^N_n\) (with time mesh \(T/n\)) of the Galerkin approximation \(u^N\) and \(u^N\) itself, that is \(\big | \mathbb {E}[\varphi (u^N(t))] - \mathbb {E}[\varphi (u^N_n(t))]\big |\), is estimated from above in terms of \(N\) and \(n\). Finally, let us note that in that paper the noise is both additive and finite-dimensional and that the initial condition again belongs to the space \(V\).

One of the aims of the current paper is to show that a certain splitting up method for the stochastic Navier–Stokes equations with multiplicative noise is convergent. Our splitting method is implemented using two consecutive steps on each time interval. In the first step, the deterministic Navier–Stokes equations (with modified viscosity) are solved. The corresponding solution is denoted by \(u^n\). In the second step, the linear parabolic Stokes equation (again with a modified viscosity) with a stochastic perturbation is solved. The corresponding solution is denoted by \(y^n\). The goal of the paper is twofold. On one hand, we establish the “strong” speed of convergence in \(L^2(D)\) uniformly on the time grid after the two steps have been performed, that is for the difference \(|y^n(t_k^-)-u(t_k)|\). We furthermore prove that the same speed of convergence holds in \( L^2(0,T; V)\) for \(u^n-u\) and \(y^n-u\). The regularity assumptions on the diffusion coefficient and on the initial condition are similar to those in [7]. For instance, we assume \(\mathbb {E}\Vert u_0\Vert _{V}^8<\infty \), but either the topology we use is sharper for a similar speed of convergence, or the speed of convergence is doubled in the same topological space. When the first step consists in solving the deterministic Navier–Stokes equations and the next step consists in only computing the stochastic integral (with no smoothing Stokes operator), no gradient is allowed in the diffusion coefficient; in that case the regularity of the corresponding process \(y^n\) is weaker than that of the terminal process. Following the definition introduced in [19], we deduce that the speed of convergence in probability (in the above functional spaces) of this splitting scheme is almost \(1/2\). 
When the second step of the splitting contains some part of the viscosity together with the stochastic integral, due to the smoothing effect of the Stokes operator, the diffusion coefficient may contain some gradient terms provided that the stochastic parabolicity condition holds. In fact the convergence of the scheme requires a stronger control of the gradient part in the diffusion coefficient. The main result states an \(L^2(\Omega , \mathbb {P})\) convergence of the error term localized on a set which depends on the solution and whose probability can be made as close to 1 as required. Furthermore, for technical reasons, the speed of convergence is obtained for \(z^n-u\) in \(L^\infty (0,T;H) \cap L^2(0,T;V)\), where \(z^n\) is a theoretical process which is defined in terms of \(u^n\) and \(y^n\) and such that \(z^n(t_k)=y^n(t_k^-)=u^n(t_{k+1}^+)\).

In Sect. 7 of the paper [5] by the second named author with Carelli and Prohl, the convergence of the finite element approximation for the \(2\)-D stochastic Navier–Stokes equations was proved by using the local monotonicity trick of Barbu. The use of the Barbu trick allowed the authors to identify the strong solution without using a compactness method. In this respect that paper is similar to ours. However, the time-splitting approximation we use in the current paper, as well as the proofs, are completely different.

The strong convergence of the splitting method has already been studied in a series of papers by Gyöngy and Krylov [15, 16], but for parabolic stochastic PDEs with some degenerate stochastic parabolicity condition. However, the linear setting used in these papers does not cover the hydrodynamical models used in the present one. The splitting method was also studied in [6] for SPDEs including the stochastic linear Schrödinger equations, and in [17] for parabolic nonlinear SPDEs with monotone operators.

Section 2 describes the properties of the hydrodynamical models, the noise, and the assumptions on the diffusion coefficients ensuring well-posedness in various functional spaces. It also describes both splitting schemes, depending on whether or not some multiple of the Stokes operator is kept with the stochastic integral. Section 3 proves several a priori bounds, such as the control of the norms of the approximating processes independently of the time mesh, as well as the control of the difference of both processes introduced in the steps of the algorithm in terms of the time mesh. Note that the abstract properties of the operators needed to prove the results in Sects. 2 and 3 are satisfied not only by the stochastic 2D Navier–Stokes equations, but also by various hydrodynamical models, such as the Boussinesq, magneto-hydrodynamical and Bénard models; the framework also contains the shell and dyadic models of turbulence (see e.g. [9]). Sections 4 and 5 focus on the 2D Navier–Stokes equations on the torus; they require some further properties involving the Stokes operator and the nonlinearity, more regularity on the initial condition, and additional properties on the “diffusion” coefficient \(G\). Section 4 provides further a priori bounds “shifting” the spatial regularity. Section 5 proves the \(L^2(\Omega , {\mathbb P}) \) convergence of a localized version of both algorithms with some explicit control on the constant used in the localization. This enables us to deduce the speed of convergence in probability.

As usual, we denote by \(C\) [resp. \(C(T)\)] a positive constant (resp. a positive constant depending on \(T\)) which may change from line to line, but does not depend on \(n\).

2 Preliminaries and assumptions

2.1 Functional setting

We first describe the functional setting and the operators of the 2D Navier–Stokes equations. As in [23], we denote by \(\mathcal {V}\) the space of infinitely differentiable, divergence free and compactly supported vector fields \(u\) on \( D\).

Let the Hilbert space \((H, |\cdot |) \) be the closure of \({\mathcal V}\) in the \(\mathbb {L}^{2}=L^2(D,\mathbb {R}^2)\) space. Let also \( E\) be the Hilbert space consisting of all \( u\in \mathbb {L}^2\) such that \(\mathrm{div}\,u \in \,L^2(D)\). It is known, see [23, Theorem I.1.2], that there exists a unique bounded linear map \(\gamma _{\mathbf {n}}: E\rightarrow H^{-\frac{1}{2}}(\partial {D})\), where \(H^{-\frac{1}{2}}(\partial {D})\) is the dual space of \(H^{1/2}(\partial {D})\) [and equal to the image in \(L^2(\partial {D})\) of the trace operator \(\gamma _0:H^{1,2}(D) \rightarrow L^2(\partial {D})\)] such that
$$\begin{aligned} \gamma _{\mathbf {n}}(u)= \text{ the } \text{ restriction } \text{ of } u\cdot \mathbf {n} \text{ to } \partial {D}, \;\; \text{ if } u\in \mathcal {C}^\infty (\overline{D}). \end{aligned}$$
Then, it is known that \(H\) is equal to the space of all vector fields \(u \in E\) such that \(\gamma _{\mathbf {n}}(u)=0\). Let us denote by \(V\) the separable Hilbert space which is equal to the closure in the space \(\mathbb {H}^{1,2}=H^{1,2}(D,\mathbb {R}^2)\) of the space \(\mathcal {V}\) equipped with the inner product inherited from \(\mathbb {H}^{1,2}\) and with corresponding norm denoted by \(\Vert ~\cdot ~\Vert \). Identifying \(H\) with its dual space \(H^\prime \), and \(H^\prime \) with the corresponding natural subspace of the dual space \(V^\prime \), we have the standard Gelfand triple \(V\subset H\subset V^\prime \) with continuous dense embeddings. Let us note that the duality pairing between \(V\) and \(V^\prime \) agrees with the inner product of \(H\).

Moreover, we set \(D(A)=\mathbb {H}^{2,2}\cap V\) and we define the linear operator \(A: D(A)\subset H\longrightarrow H\) as \(Au=-P \Delta u\), where \(P: \mathbb {L}^2\rightarrow H\) is the orthogonal projection called the Leray–Helmholtz projection. It is known, see [8], that \(A\) is self-adjoint, positive and has a compact inverse.

The fractional powers of the operator \(A\) will be denoted by \(A^\alpha \), \(\alpha \in \mathbb {R}\). It is known that \(V\) coincides with \(D(A^{1/2})\) with equivalent norms, and in what follows we use on \(V\) the norm \(\Vert u\Vert = |A^{1/2} u| =\sqrt{\int _D \vert \nabla u(x) \vert ^2\, dx}\).

Let \(b(\cdot ,\cdot ,\cdot ): V\times V\times V\longrightarrow \mathbb {R}\) be the continuous trilinear form defined by
$$\begin{aligned} b(u,v,z)=\int \limits _{D}(u\cdot \nabla v)\cdot z, \end{aligned}$$
which, by the incompressibility condition, satisfies
$$\begin{aligned} b(u,v,v)=0, \;\; u,v\in V. \end{aligned}$$
By the Riesz Lemma there exists a continuous bilinear map \(B: V\times V\longrightarrow V^\prime \) such that
$$\begin{aligned} \langle B(u,v),z\rangle =b(u,v,z),\quad \mathrm{for}\ \mathrm{all}\ u,v,z\in V, \end{aligned}$$
which also satisfies
$$\begin{aligned} \langle B(u,v),z \rangle =-\langle B(u,z),v \rangle \quad \mathrm{and}\quad \langle B(u,v),v \rangle =0\quad u, v, z\in V. \end{aligned}$$
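For the reader's convenience, let us sketch the standard computation behind these identities (an integration by parts using the divergence-free condition; this is classical and not specific to the present paper): for smooth, compactly supported, divergence free \(u, v, z\),
$$\begin{aligned} b(u,v,z)=\sum _{i,j}\int \limits _{D} u_i\, (\partial _i v_j)\, z_j\,dx =-\sum _{i,j}\int \limits _{D} v_j\, \partial _i (u_i z_j)\,dx =-\sum _{i,j}\int \limits _{D} u_i\, (\partial _i z_j)\, v_j\,dx = -b(u,z,v), \end{aligned}$$
since \(\sum _i \partial _i u_i=0\) and the boundary terms vanish; taking \(z=v\) yields \(b(u,v,v)=-b(u,v,v)=0\). The identities extend to \(u,v,z\in V\) by density.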
Moreover, as has been pointed out by V. Barbu [1] and proved in [21, Proposition 2.2], the following Assumption is satisfied with the space \(\mathrm {X}=\mathbb {L}^4(D)\).

Assumption 2.1

  1. (1)
    There exists a Banach space \(\mathrm {X}\) such that \(V\subset \mathrm {X}\subset H\) continuously and densely and there exists a constant \(C>0\) such that
    $$\begin{aligned} \vert u\vert ^{2}_{\mathrm {X}}\le C|u|\Vert u\Vert , \quad u\in V. \end{aligned}$$
  2. (2)
    There exists a constant \(C>0\) such that for all \(u_i \in V\), \(i=1,2,3\),
    $$\begin{aligned} |\langle B(u_{1}, u_{2}), u_{3}\rangle |&\le C \vert u_1\vert _\mathrm{X} \vert u_2\vert _\mathrm{X} \Vert u_3\Vert . \end{aligned}$$

Remark 2.2

It is easy to see that Assumption 2.1 (2) implies that for any \(\eta >0\) there exists \(C_{\eta }>0\) such that for all \(u_i \in V\), \(i=1,2,3\),
$$\begin{aligned} |\langle B(u_{1}, u_{2}), u_{3}\rangle |&\le \eta \Vert u_{3}\Vert ^{2} +C_{\eta }\vert u_{1}\vert ^{2}_{\mathrm {X}}\vert u_{2}\vert ^{2}_{\mathrm {X}}. \end{aligned}$$
Moreover, the last property together with (2.3) implies that for all \(u_1, u_2 \in V\),
$$\begin{aligned} |\langle B(u_{1}, u_{1})- B(u_{2}, u_{2}), u_{1}-u_{2}\rangle |\le \eta \Vert u_{1}-u_{2}\Vert ^{2} +C_{\eta }|u_{1}-u_{2}|^{2}\vert u_{1}\vert ^{4}_{\mathrm {X}}.\qquad \end{aligned}$$
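Let us sketch how the last inequality follows (a standard computation under Assumption 2.1, included here for the reader's convenience). Setting \(w=u_{1}-u_{2}\), bilinearity and (2.3) give
$$\begin{aligned} \langle B(u_{1}, u_{1})- B(u_{2}, u_{2}), w\rangle = \langle B(w, u_{1}), w\rangle + \langle B(u_{2}, w), w\rangle = \langle B(w, u_{1}), w\rangle . \end{aligned}$$
Hence, by Assumption 2.1 and the interpolation inequality \(\vert w\vert ^{2}_{\mathrm {X}}\le C|w|\Vert w\Vert \),
$$\begin{aligned} |\langle B(w, u_{1}), w\rangle | \le C \vert w\vert _{\mathrm {X}}\vert u_{1}\vert _{\mathrm {X}}\Vert w\Vert \le C \vert u_{1}\vert _{\mathrm {X}}\, |w|^{1/2}\, \Vert w\Vert ^{3/2} \le \eta \Vert w\Vert ^{2} + C_{\eta }|w|^{2}\vert u_{1}\vert ^{4}_{\mathrm {X}}, \end{aligned}$$
where the last step is the Young inequality with exponents \(4/3\) and \(4\).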
Note that this abstract setting (where one only assumes that the operators \(A\) and \(B\) satisfy the conditions (2.3)–(2.7)) includes several classical hydrodynamical settings subject to random perturbations, such as the 2D Navier–Stokes equations, the 2D Boussinesq model for Bénard convection, the 2D Magneto-Hydrodynamics equations, the 2D magnetic Bénard problem, the 3D Leray \(\alpha \)-model for Navier–Stokes equations, the GOY and Sabra shell models of turbulence, and dyadic models; see [9] for more details. To be precise, we assume that:
  1. (i)

    \((H,|.|)\) is a Hilbert space,

  2. (ii)

    \(A\) is a linear positive unbounded operator in \(H\),

  3. (iii)

    \(V= D(A^{1/2})\) endowed with the norm \(\Vert \cdot \Vert \) is a Hilbert space,

  4. (iv)

    a bilinear map \(B:V\times V \rightarrow V^\prime \) satisfies (2.3),

  5. (v)

    Assumption 2.1 holds for some Banach space \(\mathrm {X}\).

In the sequel, we will work in this abstract framework containing the above example of the 2D Navier–Stokes equations.

We assume that \(K\) is a separable Hilbert space, \((\Omega , \mathcal {F}, ({\mathcal F}_t)_{t\ge 0}, \mathbb {P})\) is a filtered probability space and \(W=(W(t))_{t\ge 0}\) is a \(K\)-cylindrical Wiener process on that probability space.

Hence, the stochastic hydrodynamical systems (including the stochastic 2D Navier–Stokes equations) are rewritten in the abstract form
$$\begin{aligned} \left\{ \begin{array}{ll} du+(Au+B(u,u)-f)dt=G(u)dW,\\ u(0)=u_{0}. \end{array} \right. \end{aligned}$$
For simplicity we will assume that \(f=0\).
We also introduce a Coriolis-type term \(R: [0,T]\times H\longrightarrow H\), for example \(R(t, (u_1,u_2))=c_0 (-u_2,u_1)\) (precise assumptions are given below), and set for \(t\in [0,T]\) and \(u\in V\):
$$\begin{aligned} F(t,u)=Au+B(u,u)+R(t,u) \end{aligned}$$
and consider the evolution equation
$$\begin{aligned} d u(t) + F(t,u(t)) dt = G(u(t)) \,dW(t), \quad u(0)=u_0. \end{aligned}$$

2.2 Assumptions and results on the stochastic NSEs in \(H\)

Throughout the paper, we will assume that \(u_{0}\in H\) (or \(u_{0}\in L^{2}(\Omega ,\mathcal {F}_{0}, H)\)). Let us denote by \({\mathcal T}_{2}(K,H)\) the space of Hilbert–Schmidt operators from \(K\) to \(H\).

Assumption (G1): Let us assume that \(G\) is a continuous mapping \(G:[0,T]\times V\longrightarrow {\mathcal T}_{2}(K,H)\) (resp. \(G:[0,T]\times H\longrightarrow {\mathcal T}_{2}(K,H)\) for \(\varepsilon =0\)) and that there exist positive constants \((K_{i})_{i=0}^3\) and \((L_i)_{i=1}^2\) such that for any \(t\in [0,T]\), \(u,u_1,u_2\in V\),
$$\begin{aligned} |G(t,u)|^{2}_{{\mathcal T}_{2}(K,H)}&\le K_{0}+K_{1}|u|^{2} + \varepsilon \; K_2 \Vert u\Vert ^2, \end{aligned}$$
$$\begin{aligned} |G(t,u_{2})-G(t,u_{1})|^{2}_{{\mathcal T}_{2}(K,H)}&\le L_{1}|u_{2}-u_{1}|^{2} + \varepsilon \; L_2 \Vert u_2 - u_1\Vert ^2. \end{aligned}$$
Assumption (R1): Let us assume that \(R\) is a continuous mapping \(R: [0,T]\times H\longrightarrow H\) such that for some positive constants \(R_{0}\) and \(R_{1}\)
$$\begin{aligned} |R(t,0)|\le R_{0},\quad |R(t,u)-R(t,v)|\le R_{1}|u-v|,\quad u, v\in H. \end{aligned}$$
There is a huge literature on the well-posedness of the 2D stochastic Navier–Stokes equations. The following result is similar to that proved in [7] for the 2D Navier–Stokes equations. We refer to [9] for the result given below. Indeed, the a priori estimates in Proposition A.2 of [9] imply that the sequence \(u_{n,h}\) of Galerkin approximations satisfies \(\sup _n \sup _{h\in {\mathcal A}_M } \mathbb {E}\big ( \sup _{t\in [0,T]} |u_{n,h}(t)|^{2p}\big ) <\infty \), and the subsequence used in Step 1 of the proof of Theorem 2.4 in [9] can be supposed to be weak-star convergent to \(u_h\) in \(L^{2p}(\Omega , L^\infty (0,T;H))\). In our case, there is no random control, that is \(h=0\).

Theorem 2.3

Let us assume that assumptions (G1) and (R1) are satisfied with \( K_2 \le L_2 <2\). Then, for any \(T>0\) and any \({\mathcal F}_0\)-measurable \(H\)-valued random variable such that \(\mathbb {E}|u_0|^4<\infty \), there exists a unique adapted process \(u\) such that
$$\begin{aligned} u\in C([0,T]; H)\cap L^2(0,T;V) \cap L^{4}(0,T ; \mathrm {X})\quad \mathrm{a.s.} \end{aligned}$$
and \(\mathbb {P}\)-a.s. \(u\) is a solution of equation (2.8), that is, in the weak formulation, for all \(t\in [0,T]\) and \(\phi \in D(A)\):
$$\begin{aligned}&\langle u(t), \phi \rangle +\int \limits _{0}^{t}\langle u(s), A\phi \rangle \,ds + \int \limits _{0}^{t}\langle B(u(s), u(s)) , \phi \rangle \,ds + \int \limits _{0}^{t}\langle R(s,u(s)), \phi \rangle \,ds \nonumber \\&\qquad =\langle u_{0}, \phi \rangle + \int \limits _{0}^{t}\langle G(s,u(s))dW(s), \phi \rangle , \end{aligned}$$
Moreover, if \(q\in [2,1+\frac{2}{K_2})\), then there exists a positive constant \(C=C_q(T)\) such that if \(\mathbb {E}|u_0|^{q}<\infty \),
$$\begin{aligned} \mathbb {E}\left( \sup _{t\in [0,T] }|u(t)|^{q} +\int \limits _{0}^{T}\! \Vert u(s)\Vert ^{2}(1+|u(s)|^{q-2})\,ds \right) \le C(1+\mathbb {E} |u_{0}|^{q}). \end{aligned}$$
Finally, if \(K_2<\frac{2}{3}\), then
$$\begin{aligned} \mathbb {E} \int \limits _{0}^{T}\! \vert u(s)\vert _{\mathrm {X}}^{4}\,ds \le C(1+\mathbb {E} |u_{0}|^{4}). \end{aligned}$$
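Let us indicate how the last bound relates to the previous estimate (a sketch of the link, not taken verbatim from the proof): by the interpolation inequality of Assumption 2.1 (1),
$$\begin{aligned} \mathbb {E} \int \limits _{0}^{T}\! \vert u(s)\vert _{\mathrm {X}}^{4}\,ds \le C\, \mathbb {E} \int \limits _{0}^{T}\! \Vert u(s)\Vert ^{2}\, |u(s)|^{2}\,ds \le C(1+\mathbb {E} |u_{0}|^{4}), \end{aligned}$$
where the second inequality is the moment estimate above with \(q=4\); the condition \(K_2<\frac{2}{3}\) is exactly what allows the choice \(q=4<1+\frac{2}{K_2}\).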

2.3 Description of the scheme

Let \(\Pi =\left\{ 0=t_{0}<t_{1}<\dots <t_{n}=T\right\} \) be a finite partition of a given interval \([0,T]\) with constant mesh \(h=T/n\). We will consider the following splitting scheme, similar to the one introduced in [3] for the deterministic NSEs. Let \( \varepsilon \; \in [0,1)\) and let \(F_\varepsilon : [0,T]\times V \rightarrow V^\prime \) be defined by:
$$\begin{aligned} F_\varepsilon (t,u) = (1-\varepsilon ) Au + B(u,u) + R(t,u). \end{aligned}$$
Note that \(F_0=F\).

Set \(t_{-1}=-\frac{T}{n}\). For \(t \in [ t_{-1},0)\) set \(y^n(t) = u^n(t) = u_0\) and \({\mathcal F}_t={\mathcal F}_0\).

The scheme \((y^n, u^n) \) is defined by induction as follows. Let \(i=0, \cdots , n-1\) and suppose we have defined processes \(u^n(t)\) and \(y^n(t)\) for \(t\in [t_{i-1}, t_{i})\) such that \(y^n(t_{i}^{-})\) is an \(H\)-valued \(\mathcal {F}_{t_{i}}\)-measurable function. This clearly holds for \(i=0\). Then we define \(u^n(t)\), \(t\in [t_{i}, t_{i+1})\), as the unique solution of the (deterministic) problem with positive viscosity \(1-\varepsilon \) and with “initial” condition \( y^n(t_{i}^{-})\) at time \(t_i\), that is:
$$\begin{aligned} \frac{d}{dt}u^n(t) + F_{\varepsilon }(t, u^n(t))=0, \; t\in [t_i, t_{i+1}), \quad u^n(t_i) = u^n(t_i^+)=y^n(t_{i}^{-}).\qquad \end{aligned}$$
Note that \(u^n(t_{i+1}^{-})\) is a well defined \(H\)-valued \(\mathcal {F}_{t_{i}}\)-measurable random variable. Thus we can define \(y^n(t)\), \(t\in [t_{i}, t_{i+1})\), as the unique solution of the (random) problem with “initial” condition the \({\mathcal F}_{t_i}\)-measurable random variable \(u^n(t_{i+1}^-)\) at time \(t_i\):
$$\begin{aligned} dy^n(t)+ \varepsilon \; A y^n(t)dt=G(t,y^n(t))\,dW(t), \; t\in [t_{i},t_{i+1}), \nonumber \\ y^n(t_i)=y^n(t_i^+)=u^n(t_{i+1}^{-}). \end{aligned}$$
Note that both processes \(u^n\) and \(y^n\) are a.s. right-continuous. Indeed, if \( \varepsilon \; >0\), it is classical that, provided that the stochastic parabolicity condition holds (that is, \( \varepsilon \; K_2\) and \( \varepsilon \; L_2\) are small enough), there exists a unique right-continuous weak solution to (2.17), see e.g. [17]. If \( \varepsilon \; =0\), the smoothing effect of \(A\) does not act anymore, but the coefficient \(G\) satisfies the usual growth and Lipschitz conditions for the \(H\)-norm. Therefore, in both cases \(y^n(t_{i+1}^{-})\) is a well defined \(H\)-valued \(\mathcal {F}_{t_{i+1}}\)-measurable random variable.

Finally, let \( u^n(T^+)=y^n(T^-)\).
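As an illustration only, the two alternating steps above can be mimicked on a finite-dimensional toy system. The sketch below is not the Galerkin discretization analyzed in this paper: the operators are stand-ins (\(A=\mathrm{Id}\), a quadratic term chosen so that \(\langle B(u,u),u\rangle =0\), and a linear multiplicative noise \(G(u)=\sigma u\) driven by a one-dimensional Brownian motion), and each sub-problem is solved by a single explicit Euler (resp. Euler–Maruyama) step.

```python
import numpy as np

def splitting_scheme(u0, T=1.0, n=100, nu=1.0, eps=0.0, sigma=0.1, seed=0):
    """One Lie-Trotter pass: on each [t_i, t_{i+1}) first solve the
    deterministic step du/dt + (1-eps)*nu*A u + B(u,u) = 0 (explicit Euler),
    then the stochastic step dy = -eps*nu*A y dt + sigma*y dW (Euler-Maruyama).
    Here A = Id and B(u,u) = (u1*u2, -u1**2), so that <B(u,u), u> = 0."""
    rng = np.random.default_rng(seed)
    h = T / n
    u = np.asarray(u0, dtype=float)
    for _ in range(n):
        # step 1: deterministic "Navier-Stokes" step with viscosity (1-eps)*nu
        Bu = np.array([u[0] * u[1], -u[0] ** 2])
        u = u - h * ((1.0 - eps) * nu * u + Bu)
        # step 2: stochastic step with Brownian increment dW ~ N(0, h)
        dW = rng.normal(0.0, np.sqrt(h))
        u = u - h * eps * nu * u + sigma * u * dW
    return u
```

With \(\sigma =0\) and \(\varepsilon =0\) the scheme reduces to explicit Euler for the deterministic system; since the quadratic stand-in is antisymmetric, the \(H\)-norm then decays essentially like \(\mathrm{e}^{-\nu t}\).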

Remark 2.4

As in [3], we notice that the processes \(u^n\) and \(y^n\) are not continuous.

In order to prove the convergence of the above scheme, we will need to establish a priori estimates on \(u^n\) and \(y^n\). They will be different depending on whether \( \varepsilon \;=0\) or \( \varepsilon \; \in (0,1)\). We first introduce some notation. Recall that \(\Pi =\left\{ 0=t_{0}<t_{1}<\dots <t_{n}=T\right\} \). Set
$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l@{\quad }l} d_{n}(t):=t_{i}, &{} d^{*}_{n}(t):=t_{i+1}&{} \mathrm{for }~t\in [t_{i},t_{i+1}), \; i=0, 1,\dots , n-2, \\ d_{n}(t):=t_{n-1}&{}d^{*}_{n}(t):=t_{n}&{} \mathrm{for }~t\in [t_{n-1},t_{n}]. \end{array} \right. \end{aligned}$$
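In code, the grid functions \(d_n\) and \(d^*_n\) are simple floor operations clamped at the last interval; a minimal sketch (the function names and signatures are ours, chosen for illustration):

```python
def d_n(t, T, n):
    """Left grid point t_i for t in [t_i, t_{i+1}), with the convention
    that the last interval [t_{n-1}, t_n] is closed on the right."""
    h = T / n
    i = min(int(t // h), n - 1)  # clamp so that d_n(T) = t_{n-1}
    return i * h

def d_n_star(t, T, n):
    """Right grid point t_{i+1}; always equals d_n(t) + T/n."""
    return d_n(t, T, n) + T / n
```

For instance, with \(T=1\) and \(n=4\) one has \(d_n(0.3)=0.25\), \(d^*_n(0.3)=0.5\), and, by the clamping convention, \(d_n(1)=0.75\), \(d^*_n(1)=1\).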
With the above notation, the processes \(u^n\) and \(y^n\) can be rewritten as follows: for every \(t\in [0,T]\) we have:
$$\begin{aligned} u^n(t)&= u_{0}- \int \limits _{0}^{t}F_{\varepsilon }(s,u^n(s))ds +\int \limits _{0}^{d_{n}(t)}\left[ - \varepsilon \; A y^n(s)ds+G(s,y^n(s))dW(s)\right] \!,\nonumber \\ \end{aligned}$$
$$\begin{aligned} y^n(t)&= u_{0}- \int \limits _{0}^{d_{n}^{*}(t)}F_\varepsilon (s,u^n(s))ds+\int \limits _{0}^{t} \left[ - \varepsilon \; A y^n(s)ds+G(s,y^n(s))dW(s)\right] \!.\nonumber \\ \end{aligned}$$

3 A priori estimates for the initial data in \(H\) and general stochastic hydrodynamical systems

We first suppose that conditions (G1) and (R1) are satisfied.

Lemma 3.1

Let \(u_0\) be \({\mathcal F}_0\)-measurable and such that \(\mathbb {E}|u_0|^2<\infty \), and fix \( \varepsilon \; \in [0,1)\). Let Assumptions (G1) and (R1) be satisfied with \( K_2 \le L_2 <2 \). Then there exists a positive constant \(C\), depending on \(\mathbb {E}|u_0|^2 , \, \varepsilon \;,\, T\) and the constants \( R_i\) and \(K_i \) such that for every integer \(n\ge 1\):
$$\begin{aligned} \sup _{ t\in [0, T]} \mathbb {E}\left( |y^n(t)|^{2} + \sup _{s\in [d_n(t), d^*_n(t)) }|u^n(s)|^{2} \right) + \mathbb {E} \int \limits _{0}^{T}\Vert u^n(s)\Vert ^{2}ds \le C. \end{aligned}$$
Furthermore, if \( \varepsilon \;\in (0,1)\) we can choose the constant \(C\) such that:
$$\begin{aligned} \sup _{n\in \mathbb {N}} \mathbb {E}\int \limits _{0}^{T}\Vert y^n(s)\Vert ^{2}ds\le C. \end{aligned}$$


Let \(\alpha =2R_{0}+K_{0}\) and \(a=2(R_{0}+R_{1})+K_{1}\). We use an induction argument to prove that for \(l=-1, \cdots , n-1\),
$$\begin{aligned} \mathbb {E} \left( \sup _{t\in [t_l, t_{l+1})} |u^n(t)|^2 \right) + \sup _{t\in [t_l, t_{l+1})} \mathbb {E}|y^n(t)|^2 \le \mathbb {E} |u_{0}|^{2}\mathrm{e}^{\frac{(l+1)aT}{n}}+ \frac{\alpha T}{n}\sum _{j=1}^{l+1}\mathrm{e}^{\frac{j aT}{n}},\nonumber \\ \end{aligned}$$
where we use the convention \(\sum _{j=p}^{q}\mathrm{e}^{\frac{jaT}{n} }=0\) if \(p>q\). Note that (3.3) clearly holds for \(l=-1\). Assume that (3.3) holds for \(l=-1, 0, \cdots , i-1\). Take the scalar product of (2.16) by \(u^n\) and integrate over \((t_i,t]\) for \(t\in [t_i, t_{i+1})\); the anti-symmetry property of \(B\) and Assumption (R1) yield
$$\begin{aligned}&|u^n(t)|^{2} + 2(1-\varepsilon )\int \limits _{t_{i}}^{t} \Vert u^n(s)\Vert ^2 \,ds =|y^n(t_{i}^{-})|^{2} \nonumber \\&\qquad -\,2 \int \limits _{t_{i}}^{t}\langle B(u^n(s),u^n(s)),u^n(s)\rangle \,ds +2 \int \limits _{t_{i}}^{t}\langle R(s,u^n(s)),u^n(s)\rangle \,ds\nonumber \\&\quad \le |y^n(t_{i}^{-})|^{2} + 2 R_0 \frac{T}{n} + 2 \int \limits _{t_i}^t (R_0+R_1) |u^n(s)|^2\,ds. \end{aligned}$$
The Itô Lemma yields for \(t\in [t_i, t_{i+1})\):
$$\begin{aligned}&\mathbb {E}|y^n(t)|^{2}+ 2 \;\varepsilon \; \mathbb {E}\int \limits _{t_{i}}^{t} \Vert y^n(s)\Vert ^2 \,ds =\mathbb {E}|u^n(t_{i+1}^{-})|^{2} +\mathbb {E}\int \limits _{t_{i}}^{t} |G(s,y^n(s))|^{2}_{{\mathcal T}_{2}(K,H)}ds\nonumber \\&\qquad \le \mathbb {E}|u^n(t_{i+1}^{-})|^{2}+\frac{K_{0}T}{n}+K_{1}\int \limits _{t_{i}}^{t} \mathbb {E}|y^n(s)|^{2}ds + \varepsilon \; K_2 \mathbb {E}\int \limits _{t_{i}}^{t} \Vert y^n(s)\Vert ^2 \,ds. \end{aligned}$$
Since \(K_2<2\) we may neglect the integrals of the \(V\)-norm in both inequalities (3.4) and (3.5). Taking expected values in (3.4), using the Gronwall Lemma and the induction hypothesis, we obtain:
$$\begin{aligned}&\mathbb {E}\left( \sup _{t_{i}\le t <t_{i+1}} |u^n(t)|^{2}\right) \le \left( \mathbb {E}|y^n(t_{i}^{-})|^{2}+\frac{2R_{0}T}{n}\right) \mathrm{e}^{\frac{2(R_{0}+R_{1})T}{n}}\\&\qquad \le \mathbb {E} |u_{0}|^{2}\mathrm{e}^{\frac{aiT}{n}+ \frac{2(R_{0}+R_{1})T}{n}} +\frac{2R_{0}T}{n}\mathrm{e}^{\frac{2(R_{0}+R_{1})T}{n}} +\frac{\alpha T}{n}\sum _{j=1}^{i}\mathrm{e}^{\frac{j aT}{n}+\frac{2(R_{0}+R_{1})T}{n} } , \end{aligned}$$
$$\begin{aligned}&\sup _{t_{i}\le t <t_{i+1}}\mathbb {E}|y^n(t)|^{2}\le \big (\mathbb {E}|u^n(t_{i+1}^{-})|^{2}+\frac{K_{0}T}{n}\big ) \mathrm{e}^{\frac{K_{1}T}{n}}\\&\qquad \le \mathbb {E} |u_{0}|^{2}\mathrm{e}^{\frac{iaT+ aT}{n} }+ \frac{2R_0 T}{n} e^{\frac{aT}{n}} + \frac{K_{0}T}{n}\mathrm{e}^{\frac{K_{1}T}{n}} +\frac{\alpha T}{n} e^{\frac{aT}{n}} \sum _{j=1}^{i}\mathrm{e}^{\frac{j aT}{n}}. \end{aligned}$$
Since \(\alpha \ge 2R_{0}\) and \(a\ge 2(R_{0}+R_{1})\), we deduce that the induction hypothesis (3.3) holds true for \(l=i\), which completes the induction. Hence we deduce that
$$\begin{aligned}&\sup _{0\le t< T} \mathbb {E}\left( \sup _{d_n(t) \le s < d^*_n(t)} |u^n(s)|^{2}\right) \vee \left( \sup _{0\le t<T}\mathbb {E}|y^n(t)|^{2}\right) \le \mathbb {E}|u_{0}|^{2}\mathrm{e}^{aT}+ \frac{\alpha T}{n}\sum _{j=1}^{n}\mathrm{e}^{\frac{j aT}{n}}\\&\qquad \le \mathbb {E} |u_{0}|^{2}\mathrm{e}^{aT}+ \frac{\alpha T}{n}\mathrm{e}^{\frac{aT}{n}} \frac{\mathrm{e}^{a T}-1}{\mathrm{e}^{\frac{a T}{n}}-1}\, \le \, \mathbb {E}|u_{0}|^{2}\mathrm{e}^{a T}+ \frac{\alpha }{a}\mathrm{e}^{2a T}. \end{aligned}$$
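For the reader's convenience, the last inequality in the display above follows from the elementary bound \(\mathrm{e}^{x}-1\ge x\) applied with \(x=\frac{aT}{n}\):
$$\begin{aligned} \frac{\alpha T}{n}\,\mathrm{e}^{\frac{aT}{n}}\, \frac{\mathrm{e}^{a T}-1}{\mathrm{e}^{\frac{a T}{n}}-1} \le \frac{\alpha T}{n}\,\mathrm{e}^{\frac{aT}{n}}\, \frac{n}{aT}\,\big (\mathrm{e}^{a T}-1\big ) = \frac{\alpha }{a}\,\mathrm{e}^{\frac{aT}{n}}\big (\mathrm{e}^{a T}-1\big ) \le \frac{\alpha }{a}\,\mathrm{e}^{2a T}, \end{aligned}$$
since \(\mathrm{e}^{\frac{aT}{n}}\le \mathrm{e}^{aT}\) for \(n\ge 1\).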
This proves part of (3.1). Using this last inequality in (3.5), and in (3.4) after taking expected values, yields for every \(i=0,\cdots , n-1\):
$$\begin{aligned}&\mathbb {E}| u^n(t_{i+1}^-)|^2 + 2(1-\varepsilon ) \mathbb {E}\int \limits _{t_i}^{t_{i+1}} \Vert u^n(s)\Vert ^2\,ds \le \mathbb {E}|y^n(t_{i}^{-})|^{2} +\frac{CT}{n}, \\&\mathbb {E}|y^n(t_{i+1}^-)|^{2}+ (2-K_2) \varepsilon \; \mathbb {E}\int \limits _{t_{i}}^{t_{i+1}}\Vert y^n(s)\Vert ^2 \,ds \le \mathbb {E}|u^n(t_{i+1}^{-})|^{2}+\frac{CT}{n}. \end{aligned}$$
Adding all these inequalities from \(i=0\) to \(i=n-1\) concludes the proof of (3.1) and proves (3.2) when \( \varepsilon \; >0\).\(\square \)

As usual, we can use higher moments of the \(H\)-norms and deduce the following.

Lemma 3.2

Fix \( \varepsilon \; \in [0,1)\) and let Assumptions (G1) and (R1) be satisfied with \(L_2<2\) and \(K_2<\frac{2}{2p-1}\) for some real number \(p\ge 2\). Then there exists a positive constant \(C:=C( \varepsilon \;,T)\) such that for every integer \(n\ge 1\):
$$\begin{aligned}&\sup _{t\in [0,T]}\left[ \mathbb {E}\big (\sup _{s\in [d_n(t),d^*_n(t))} |u^n(s)|^{2p}\big ) +\mathbb {E}|y^n(t)|^{2p}\right] \nonumber \\&\quad +\,\mathbb {E}\int \limits _{0}^{T}\Vert u^n(s)\Vert ^{2}|u^n(s)|^{2(p-1)}ds \le C. \end{aligned}$$
Furthermore, if \( \varepsilon \;\in (0,1)\) then \(C\) can be chosen such that:
$$\begin{aligned} \sup _{n\ge 1} \mathbb {E}\int \limits _{0}^{T}\Vert y^n(s)\Vert ^{2}|y^n(s)|^{2(p-1)}ds \le C. \end{aligned}$$


We repeat an argument similar to that used in the proof of Lemma 3.1. Using the identity from [23, Lemma 1.2], that is:
$$\begin{aligned} \frac{d}{dt}|u^n(t)|^{2}=2\langle \big (u^n(t)\big )', u^n(t)\rangle , \end{aligned}$$
and using the chain rule, we have that for \(t\in [t_i, t_{i+1})\), \(i=0, \cdots , n-1\):
$$\begin{aligned} \frac{d}{dt}|u^n(t)|^{2p}= p\big (|u^n(t)|^{2}\big )^{p-1}\frac{d}{dt} |u^n(t)|^{2} = 2p |u^n(t)|^{2(p-1)}\langle \big (u^n(t)\big )', u^n(t)\rangle . \end{aligned}$$
Hence, we get that for \(i=0,\dots ,n-1\) and for all \(t\in [t_{i},t_{i+1})\)
$$\begin{aligned}&|u^n(t)|^{2p}+ 2p (1-\varepsilon ) \int \limits _{t_i}^t |u^n(s)|^{2(p-1)} \Vert u^n(s)\Vert ^2\,ds =| y^n(t_{i}^{-})|^{2p}\nonumber \\&\qquad -\, 2p\ \int \limits _{t_i}^t\big [ \langle B(u^n(s),u^n(s)),u^n(s)\rangle + \langle R(s,u^n(s)),u^n(s)\rangle \big ] |u^n(s)|^{2(p-1)}\,ds\nonumber \\&\quad \le |y^n(t_{i}^{-})|^{2p} +2p\frac{R_{0}T}{n} +2p(R_{0}+R_{1})\int \limits _{t_{i}}^{t} |u^n(s)|^{2p}ds. \end{aligned}$$
Thus, the Gronwall Lemma implies
$$\begin{aligned} \mathbb {E}\sup _{t_{i}\le t<t_{i+1}} |u^n(t)|^{2p}\le \left[ \mathbb {E}|y^n(t_{i}^{-})|^{2p}+2p\frac{R_{0}T}{n}\right] \mathrm{e}^{2p(R_{0}+R_{1})\frac{T}{n}}. \end{aligned}$$
On the other hand, for fixed \(i=0,\dots ,n-1\), the Itô Lemma yields for \(t\in [t_{i},t_{i+1})\)
$$\begin{aligned}&|y^n(t)|^{2p} + 2p \;\varepsilon \;\int \limits _{t_{i}}^{t}|y^n(s)|^{2(p-1)}\Vert y^n(s)\Vert ^{2}\,ds =|u^n(t_{i+1}^{-})|^{2p} \\&\quad +2p \int \limits _{t_{i}}^{t}\langle G(s,y^n(s))dW(s),y^n(s)\rangle |y^n(s)|^{2(p-1)}\\&\quad +p \int \limits _{t_{i}}^{t} \! \big ( |G(s,y^n(s))|^{2}_{{\mathcal T}_{2}(K,H)} |y^n(s)|^{2(p-1)} \! \\&\qquad \qquad \qquad +\,2(p-1) |G^{*}(s,y^n(s)) y^n(s) |^{2}_{K} |y^n(s)|^{2(p-2)} \big )\,ds. \end{aligned}$$
Using assumption (G1), we deduce:
$$\begin{aligned}&\mathbb {E}|y^n (t)|^{2p}+2p \;\varepsilon \;\int \limits _{t_{i}}^{t}\mathbb {E}\Vert y^n(s)\Vert ^2 |y^n(s)|^{2(p-1)}\,ds \le \mathbb {E}|u^n(t_{i+1}^{-})|^{2p}\nonumber \\&\quad +\,p(2p-1) \left[ \frac{K_{0}T}{n} +(K_0+K_1) \int \limits _{t_{i}}^{t}\mathbb {E}|y^n(s)|^{2p} ds \right. \nonumber \\&\qquad \qquad \qquad \qquad \left. +\, K_2 \;\varepsilon \int \limits _{t_{i}}^{t}\mathbb {E}|y^n(s)|^{2(p-1)} \Vert y^n(s)\Vert ^{2}\,ds\right] . \end{aligned}$$
Since \(K_2 < \frac{2}{2p-1}\), the last integral in (3.9) can be absorbed into the corresponding term on the left-hand side; neglecting the remaining integrals containing the \(V\)-norms of \(y^n\) and applying the Gronwall Lemma, we then infer that
$$\begin{aligned} \sup _{t_{i}\le t<t_{i+1}}\mathbb {E}|y^n(t)|^{2p}\le \big (\mathbb {E}|u^n(t_{i+1}^{-})|^{2p}+\frac{2p(p-1)K_{0}T}{n}\big ) \mathrm{e}^{\frac{2p(p-1)(K_{0}+K_{1})T}{n}}. \end{aligned}$$
Let us put
$$\begin{aligned} b:=2p (R_{0}+R_{1})+p(2p-1) (K_{0}+K_{1})\; \mathrm{and}\; \beta :=2p R_{0}+p(2p-1)K_{1}; \end{aligned}$$
then by a mathematical induction argument (see the proof of Lemma 3.1 for a similar one), we infer that for \(i=0,\dots ,n-1\),
$$\begin{aligned} \mathbb {E} \Big ( \sup _{t_{i}\le t<t_{i+1}} |u^n(t)|^{2p}\Big ) \vee \Big ( \sup _{t_{i}\le t<t_{i+1}}\mathbb {E}|y^n(t)|^{2p}\Big ) \le \mathbb {E} |u_{0}|^{2p}\mathrm{e}^{\frac{(i+1)bT}{n}}\!+\!\frac{\beta T}{n}\sum _{j=1}^{i+1}\mathrm{e}^{\frac{jbT}{n}}. \end{aligned}$$
Hence we deduce:
$$\begin{aligned} \Big ( \sup _{0\le t< T} \mathbb {E}\Big [ \sup _{s\in [d_n(t),d^*_n(t))} |u^n(s)|^{2p}\Big ] \Big ) \vee \Big ( \sup _{0\le t< T} \mathbb {E}|y^n(t)|^{2p}\Big )\le \mathbb {E} |u_{0}|^{2p}\mathrm{e}^{bT}\!+\!\frac{\beta }{b}\mathrm{e}^{2bT}.\nonumber \\ \end{aligned}$$
Adding (3.8) and (3.9) for \(i=1,\dots ,n-1\) and using (3.10), we deduce that
$$\begin{aligned}&\mathbb {E}|y^n(T^{-})|^{2p} +p[2-(2p-1)K_2] \, \varepsilon \;\int \limits _{0}^{T}\! \! \mathbb {E}|y^n(s)|^{2(p-1)}\Vert y^n(s)\Vert ^{2}ds \\&\quad +\, 2p(1-\varepsilon )\int \limits _{0}^{T}\!\! \mathbb {E}|u^n(s)|^{2(p-1)}\Vert u^n(s)\Vert ^{2}ds \le \mathbb {E} |u_{0}|^{2p} (1+bTe^{bT}) \\&\quad +\,\beta T(1+e^{2bT}). \end{aligned}$$
This completes the proof, using once more the fact that \((2p-1)K_2<2\) when \( \varepsilon \; >0\).\(\square \)

Finally, we prove an upper estimate of the \(H\)-norm of the difference of the processes \(u^n\) and \(y^n\).

Proposition 3.3

Let us assume that \(u_0\) is \({\mathcal F}_0\)-measurable such that \({\mathbb E }|u_0|^4 <\infty \) and that the Assumptions (G1) and (R1) hold with \(K_2<\frac{2}{3}\) and \(L_2<2\). Then for any \( \varepsilon \;\in [0,1)\), there exists a positive constant \(C\) such that for any \(n\in \mathbb {N}\)
$$\begin{aligned} \mathbb {E}\int \limits _{0}^{T}|u^{n}(t)-y^{n}(t)|^{2}dt\le \frac{CT}{n}. \end{aligned}$$


Case 1: Let \( \varepsilon \; =0\); then (2.17) and Assumption (G1) imply that for any \(t\in [0,T)\),
$$\begin{aligned} \mathbb {E}|y^n(t)-u^n(d^*_n(t))|^2&= \mathbb {E}\int \limits _{d_n(t)}^t \Vert G(s,y^n(s))\Vert _{\mathcal {T}_2(K,H)}^2\,ds \\&\le \mathbb {E}\int \limits _{d_n(t)}^t [K_0 + K_1 |y^n(s)|^2]\,ds. \end{aligned}$$
Therefore, Fubini’s theorem and (3.1) yield
$$\begin{aligned} \mathbb {E}\int \limits _0^T |y^n(t)-u^n(d^*_n(t))|^2 dt \le C\mathbb {E}\int \limits _0^T\,ds [1+|y^n(s)|^2] \int \limits _s^{d^*_n(s)} dt \le C \frac{T}{n}.\qquad \quad \end{aligned}$$
We next prove that for any \( \varepsilon \; \in [0,1)\), we have
$$\begin{aligned} \mathbb {E}\int \limits _{0}^{T}|u^{n}(d_{n}^{*}(t)^{-})-u^{n}(t)|^{2}dt\le \frac{CT}{n}. \end{aligned}$$
This estimate together with (3.12) concludes the proof of (3.11) when \( \varepsilon \; =0\). The evolution Eq. (2.18) shows that
$$\begin{aligned} |u^{n}(d_{n}^{*}(t)^{-})-u^{n}(t)|^{2} =2\int \limits _{t}^{d_{n}^{*}(t)}\langle u^{n}(s)-u^{n}(t), du^{n}(s)\rangle = \sum _{i=1}^{3}T_{i}(t), \end{aligned}$$
where
$$\begin{aligned} T_{1}(t)&= -2(1-\varepsilon )\int \limits _{t}^{d_{n}^{*}(t)}\langle Au^{n}(s), u^{n}(s)-u^{n}(t)\rangle \,ds, \\ T_{2}(t)&= -2\int \limits _{t}^{d_{n}^{*}(t)}\langle B(u^{n}(s),u^{n}(s)) ,u^{n}(s)-u^{n}(t)\rangle \,ds,\\ T_{3}(t)&= -2\int \limits _{t}^{d^*_{n}(t)}\langle R(s,u^{n}(s)), u^{n}(s)-u^{n}(t)\rangle \,ds. \end{aligned}$$
Using Lemma 3.1 and the Young inequality, we deduce that
$$\begin{aligned} \big | \mathbb {E} \int \limits _{0}^{T}T_{1}(t)dt\big |&= \big | (1-\varepsilon )\mathbb {E} \int \limits _{0}^{T}dt \int \limits _{t}^{d_{n}^{*}(t)} \big [-2\Vert u^{n}(s)\Vert ^{2}+2\Vert u^{n}(s)\Vert \Vert u^{n}(t)\Vert \big ]ds \big |\\&\le \big | (1-\varepsilon )\mathbb {E} \int \limits _{0}^{T}dt \int \limits _{t}^{d_{n}^{*}(t)} \left[ -2\Vert u^{n}(s)\Vert ^{2}+2\Vert u^{n}(s)\Vert ^{2}+ \frac{1}{2}\Vert u^{n}(t)\Vert ^{2} \right] ds \big |\\&\le \frac{1-\varepsilon }{2} \mathbb {E} \int \limits _{0}^{T}dt \Vert u^{n}(t)\Vert ^{2}\int \limits _{t}^{d_{n}^{*}(t)}ds \; \le \; \frac{CT}{n}. \end{aligned}$$
Furthermore, using the upper estimate (2.5), the Cauchy–Schwarz inequality, the Fubini Theorem, Lemmas 3.1 and 3.2, and (2.4), we deduce
$$\begin{aligned} \Big | \mathbb {E} \int \limits _{0}^{T}T_{2}(t)dt\Big |&\le 2{\mathbb {E}}\int \limits _{0}^{T}dt \int \limits _{t}^{d_{n}^{*}(t)} | u^{n}(s)|_{\mathrm {X}}^{2}\Vert u^{n}(t)\Vert \,ds\\&\le 2\left( \mathbb {E}\int \limits _{0}^{T}\Vert u^{n}(t)\Vert ^{2}dt\right) ^{1/2} \left( \mathbb {E}\int \limits _{0}^{T}dt \left[ \int \limits _{t}^{d^*_{n}(t)}\vert u^{n}(s)\vert ^{2}_{\mathrm {X}}ds\right] ^{2}\right) ^{1/2}\\&\le C\left( \mathbb {E}\int \limits _{0}^{T}dt \frac{T}{n}\int \limits _{t}^{d_{n}^{*}(t)}\vert u^{n}(s)\vert ^{4}_{\mathrm {X}}ds\right) ^{1/2}\\&\le C \left( \frac{T}{n} \; \mathbb {E}\int \limits _{0}^{T}\,ds \vert u^{n}(s)\vert ^{4}_{\mathrm {X}}\int \limits _{d_n(s)}^s dt \right) ^{1/2} \le \frac{CT}{n}. \end{aligned}$$
Using Assumption (R1), the Cauchy–Schwarz inequality, the Fubini Theorem and (3.1), we have:
$$\begin{aligned}&\Big | \mathbb {E} \int \limits _{0}^{T}T_{3}(t)dt\Big | \le C\mathbb {E}\int \limits _{0}^{T}dt \int \limits _{t}^{d_{n}^{*}(t)} \left[ 1+|u^{n}(s)|^{2}\right] ds\\&\qquad +\,C\left( \mathbb {E}\int \limits _{0}^{T}|u^{n}(t)|^{2}dt\right) ^{1/2} \left( \mathbb {E}\int \limits _{0}^{T}dt\left[ \int \limits _{t}^{d_{n}^{*}(t)}(1+|u^{n}(s)|)ds\right] ^{2}\right) ^{1/2}\\&\quad \le \mathbb {E}\int \limits _{0}^{T}\,ds[1+ |u^{n}(s)|^{2}] \int \limits _{d_n(s)}^s dt +C \left( \mathbb {E}\int \limits _{0}^{T}dt \frac{T}{n}\int \limits _{t}^{d_{n}^{*}(t)}(1+|u^{n}(s)|)^{2}ds\right) ^{1/2}\\&\quad \le \frac{CT}{n}+C \left( \frac{T}{n}\mathbb {E}\int \limits _{0}^{T}\,ds (1+|u^{n}(s)|)^{2} \int \limits _{d_n(s)}^s dt \right) ^{1/2} \; \le \; \frac{CT}{n}. \end{aligned}$$
This concludes the proof of (3.13) and hence of (3.11) when \( \varepsilon \; =0\).
Case 2: Suppose that \( \varepsilon \; \in (0,1)\). Then for \(t\in (0,T]\) we have
$$\begin{aligned} y^n(t) - u^n(t) =- \int \limits _t^{d^*_n(t)} \!\! F_\varepsilon (s,u^n(s))\,ds - \varepsilon \; \int \limits _{d_n(t)}^t \!A y^n(s)\,ds + \int \limits _{d_n(t)}^t \!G(s,y^n(s)) dW(s). \end{aligned}$$
Therefore, the Itô Lemma implies that
$$\begin{aligned} \mathbb {E}\int \limits _0^T |y^n(t) - u^n(t)|^2 dt = \sum _{i=1}^5 \bar{T}_i, \end{aligned}$$
where
$$\begin{aligned} \bar{T}_1&= 2 (1-\varepsilon ) \mathbb {E}\int \limits _0^T \!\!dt \int \limits _t^{d^*_n(t)} \!\! \langle A u^n(s) , y^n(s)- u^n(s)\rangle \,ds,\\ \bar{T}_2&= 2 \mathbb {E}\int \limits _0^T \!\!dt \int \limits _t^{d^*_n(t)} \!\! \langle B(u^n(s),u^n(s)), y^n(s)-u^n(s)\rangle \,ds,\\ \bar{T}_3&= 2 \mathbb {E}\int \limits _0^T \!\!dt \int \limits _t^{d^*_n(t)} \!\! \langle R(s, u^n(s)), y^n(s)-u^n(s)\rangle \,ds, \\ \bar{T}_4&= -2 \varepsilon \; \mathbb {E} \int \limits _0^T \!\!dt \int \limits _{d_n(t)}^t \! \langle A y^n(s), y^n(s)-u^n(s)\rangle \,ds,\\ \bar{T}_5&= \mathbb {E}\int \limits _0^T \!\!dt \int \limits _{d_n(t)}^t \Vert G(s,y^n(s))\Vert _{\mathcal {T}_2(K,H)}^2\,ds. \end{aligned}$$
Let us note that since \(A=A^*\) is non-negative, we have, for all \(y,u \in D(A)\):
$$\begin{aligned} \langle A u , y- u\rangle&= \langle A (u-y) , y- u\rangle +\langle A y , y- u\rangle \le \langle A y , y- u\rangle , \end{aligned}$$
$$\begin{aligned} 2 \langle A u , y- u\rangle&\le \langle A u , y- u\rangle +\langle A y , y- u\rangle =\langle A (y+u) , y- u\rangle \nonumber \\&= \langle A y , y\rangle -\langle A u , u\rangle \le \langle A y , y\rangle = \Vert y \Vert ^2. \end{aligned}$$
Therefore, the Fubini Theorem, the Cauchy–Schwarz inequality, and the estimate (3.2) yield
$$\begin{aligned} \bar{T}_1 \le (1-\varepsilon ) \mathbb {E}\int \limits _0^T ds \,\, \Vert y^n(s)\Vert ^2 \int \limits _{d_n(s)}^s \,dt \le C\frac{1-\varepsilon }{n}. \end{aligned}$$
Similarly, the Cauchy–Schwarz inequality and the upper estimates (3.1) and (3.2) yield
$$\begin{aligned} \bar{T}_4 \le 2\;\varepsilon \; {\mathbb E} \int \limits _0^T\,ds \, \big [ \,2\,\Vert y^n(s)\Vert ^2 + \Vert u^n(s)\Vert ^2\big ] \int \limits _s^{d^*_n(s)} dt \le C\frac{\varepsilon }{n}. \end{aligned}$$
The Fubini Theorem and the upper estimates (2.6) with \(\eta =1\), (3.1), (3.2), (3.6), (2.5) and (2.4) yield
$$\begin{aligned} \bar{T}_2\le C\mathbb {E}\int \limits _0^T \,ds \big [\,\Vert y^n(s) - u^n(s)\Vert ^2 + C_1 \vert u^n(s)\vert _\mathrm {X}^4\big ] \int \limits _{d_n(s)}^s dt \le C\frac{T}{n}. \end{aligned}$$
Using Assumption (R1), the Fubini Theorem and (3.1), we deduce that
$$\begin{aligned} \bar{T}_3\le C\mathbb {E} \int \limits _0^T ds \,\, [1+|u^n(s)|^2] \int \limits _{d_n(s)}^s \,dt \le C \frac{T}{n}. \end{aligned}$$
Finally, Assumption (G1), (3.2) and the Fubini Theorem yield
$$\begin{aligned} \bar{T}_5\le \mathbb {E}\int \limits _0^T ds \, [K_0+K_1|y^n(s)|^2 + \;\varepsilon \; K_2 \Vert y^n(s)\Vert ^2] \int \limits _s^{d^*_n(s)} \, dt \le C\frac{T}{n}. \end{aligned}$$
This concludes the proof of (3.11) when \( \varepsilon \; \in (0,1)\).\(\square \)

In Sect. 5 we will prove that the scheme \(u^n\) converges to the true solution \(u\) of equation (2.12) in probability in \(H\), with rate of convergence almost equal to \(1/2\). However, in order to prove this convergence, we have to obtain estimates similar to those proved in Sect. 3, shifting regularity by one level; this will be done in Sect. 4. Note that the results proved so far hold for general hydrodynamical models with more general boundary conditions, while the results obtained in Sects. 4 and 5 require more smoothness of the coefficients and the initial condition, as well as periodic boundary conditions on some specific domain \(D\).
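The alternating structure behind the scheme — on each subinterval, a deterministic evolution step followed by a purely stochastic step — can be illustrated on a toy problem. The following sketch (an illustration only, not the scheme analysed in this paper; the parameters are arbitrary) applies a Lie–Trotter splitting to the scalar linear SDE \(du=-\nu u\,dt+\sigma u\,dW\), solving each sub-step exactly. Since the two sub-flows commute in this scalar case, the composition reproduces the exact solution along the sampled Brownian path, up to floating-point error.

```python
import math
import random

# Toy Lie-Trotter splitting for the scalar linear SDE
#   du = -nu * u dt + sigma * u dW,  u(0) = u0,
# split into a deterministic sub-step (du = -nu * u dt, solved exactly)
# and a stochastic sub-step (du = sigma * u dW, solved exactly).
# In this commuting scalar case the splitting is exact path-wise.
random.seed(0)
nu, sigma, u0, T, n = 1.0, 0.3, 1.0, 1.0, 64
h = T / n
dW = [random.gauss(0.0, math.sqrt(h)) for _ in range(n)]

u = u0
for dw in dW:
    u *= math.exp(-nu * h)                          # deterministic sub-step
    u *= math.exp(sigma * dw - 0.5 * sigma**2 * h)  # stochastic sub-step

# exact solution of the SDE along the same Brownian path
u_exact = u0 * math.exp(-(nu + 0.5 * sigma**2) * T + sigma * sum(dW))
err = abs(u - u_exact)
```

For the Navier–Stokes nonlinearity the sub-flows do not commute, which is precisely why the splitting error must be estimated as in Sects. 3–5.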

4 Stochastic Navier–Stokes equations with periodic boundary conditions

In this section we will discuss the stochastic NSEs with periodic (in space) boundary conditions, that is, as it is usually called, on a \(2\)-D torus. The difference between the periodic boundary conditions and the Dirichlet ones investigated in the previous section is that, in addition to the properties satisfied in both cases, the periodic case enjoys one additional identity (4.1), see below. This identity allows one not only to solve the problem with the initial datum \(u_0\) belonging to \(V\), but also to get a priori bounds similar to that of (2.13) for the solution, and to those of Lemmas 3.1 and 3.2 for the scheme, shifting regularity by one level.

All that we have discussed throughout the paper until now applies to the case when the Dirichlet boundary conditions are replaced by periodic boundary conditions. In the latter case it is customary to study the problem on the \(2\)-dimensional torus \(\mathbb {T}^2\) (of fixed dimensions \(L\times L\)) instead of a regular bounded domain \(D\). All the mathematical background can be found in the small book [22] by Temam. In particular, the space \(\mathrm { H}\) is equal to
$$\begin{aligned} \mathrm { H}=\{ u\in \mathbb {L}_0^2 : \mathrm{div}\,(u)=0 \text{ and } \gamma _{\mathbf {n}}(u)_{\vert \Gamma _{j+2}}=-\gamma _{\mathbf {n}}(u)_{\vert \Gamma _{j}}, \; j=1,2 \} , \end{aligned}$$
where \(\gamma _{\mathbf {n}}\) is defined in (2.1) and \(\mathbb {L}_0^2=L_0^2(\mathbb {T}^2,\mathbb {R}^2)\) is the Hilbert space consisting of those \(u\in L^2(\mathbb {T}^2,\mathbb {R}^2)\) which satisfy \(\int \limits _{\mathbb {T}^2} u(x)\, dx=0\), and \(\Gamma _j\), \(j=1,\cdots ,4\), are the four (not disjoint) parts of the boundary \(\partial (\mathbb {T}^2)\) defined by
$$\begin{aligned} \Gamma _j&= \{ x=(x_1,x_2) \in [0,L]^2: x_j=0\},\;\Gamma _{j+2}=\{ x=(x_1,x_2) \in [0,L]^2: x_j=L\},\\&j=1,2. \end{aligned}$$
Similarly, the space \(\mathrm { V}\) is equal to
$$\begin{aligned} \mathrm { V}=\{ u\in \mathbb {L}_0^2\cap H^{1,2}(\mathbb {T}^2,\mathbb {R}^2): \mathrm{div}\,u=0 \text{ and } u_{\vert \partial (\mathbb {T}^2)}=0 \}. \end{aligned}$$
The Stokes operator \(\mathrm {A}\) can be defined in a natural way and it satisfies all the properties known in the bounded domain case. In particular \(A\) is positive and the following identity involving the Stokes operator \(A\) and the nonlinear term \(B\) holds:
$$\begin{aligned} \left<\mathrm {A}u,B(u,u)\right>_\mathrm { H}=0, \;\; u\in D(\mathrm {A}); \end{aligned}$$
see [22, Lemma 3.1] for a proof.

We will also need to strengthen the assumptions on the initial condition \(u_0\) and on the coefficients \(G\) and \(R\) to obtain a uniform control of the \(V\)-norm of the solution. This is done in the following section.

4.1 A priori estimates for the initial data in \(V\) for the Stochastic NSEs on a torus

Given \(u\in V\), recall that we define \(\mathrm{curl} \, u = \partial _{x_1}u_{2}-\partial _{x_2}u_{1}\). The following results are classical; see e.g. [24], or [4] where they are used in a stochastic framework.
$$\begin{aligned} |\Delta u|^{2}&= |\nabla \mathrm{curl } \, u|^{2}, \quad \hbox { for } u\in Dom(A), \end{aligned}$$
$$\begin{aligned} |\nabla u|_{L^2}&\le C |\hbox { curl } u|_{L^2} , \quad \hbox { for } u\in V, \end{aligned}$$
$$\begin{aligned} \langle \hbox { curl } B(u,u),v \rangle&= \langle B(u ,\hbox { curl } u ), v\rangle ,\quad \hbox { for } u,v\in Dom(A). \end{aligned}$$
To ease notation, we denote by \(|\mathrm{curl}\, u |\) the \(L^2\)-norm of the scalar function \(\mathrm{curl}\, u \).
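As a sanity check of this vorticity formulation (an illustration of our own; the grid, the stream function \(\psi =\sin x_1\sin x_2\) and the resolution are arbitrary choices), one can verify numerically that for a mean-zero, divergence-free periodic field the \(L^2\)-norms of \(\nabla u\) and \(\mathrm{curl}\, u\) coincide, consistent with the bound \(|\nabla u|_{L^2}\le C |\mathrm{curl}\, u|_{L^2}\) recalled above:

```python
import math

# Divergence-free field on the torus [0, 2*pi]^2 from the stream
# function psi = sin(x) sin(y):  u = (-d_y psi, d_x psi), so that
# div u = 0 and curl u = d_x u2 - d_y u1 = Laplace(psi).
# We compare the Riemann sums of |grad u|^2 and |curl u|^2.
N = 64
h = 2 * math.pi / N
grad_sq = curl_sq = 0.0
for i in range(N):
    for j in range(N):
        x, y = i * h, j * h
        du1dx = -math.cos(x) * math.cos(y)   # d_x(-sin x cos y)
        du1dy = math.sin(x) * math.sin(y)    # d_y(-sin x cos y)
        du2dx = -math.sin(x) * math.sin(y)   # d_x(cos x sin y)
        du2dy = math.cos(x) * math.cos(y)    # d_y(cos x sin y)
        grad_sq += (du1dx**2 + du1dy**2 + du2dx**2 + du2dy**2) * h * h
        curl_sq += (du2dx - du1dy) ** 2 * h * h
```

For this trigonometric field both quadratures are exact, and `grad_sq` and `curl_sq` agree to machine precision (both equal \(4\pi ^2\)), illustrating that on the torus the vorticity controls the full gradient of a divergence-free field.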

Suppose that the coefficients \(G\) and \(R\) satisfy the following assumptions:

Assumption (G2): \(G : [0,T]\times D(A)\rightarrow \mathcal {T}_2(K, V)\) (resp. \(G : [0,T]\times V\rightarrow \mathcal {T}_2(K, H)\) if \( \varepsilon \; =0\)), and there exist positive constants \(K_i, i=0,1,2\) and \(L_i, i=1,2\) such that for every \(t\in [0,T]\), and \(u,v\in D(A)\) (resp. \(u,v\in V\)):
$$\begin{aligned} | \hbox { curl } G(t,u)|^2_{\mathcal {T}_2(K,V)}&\le K_0+ K_1 | \mathrm{curl}\, u|^2 + \;\varepsilon \; K_2 |A u|^2, \end{aligned}$$
$$\begin{aligned} |\hbox { curl } G(t,u) - \hbox { curl } G(t,v)|^2_{\mathcal {T}_2(K,V)}&\le L_1 |\mathrm{curl}\, (u-v) |^2 \!+\! \varepsilon L_2 |Au\!-\!Av|^2.\qquad \quad \end{aligned}$$
Assumption (R2): Let us assume that \(R\) is a measurable mapping \(R: [0,T]\times V\longrightarrow V\) such that for some positive constants \(R_{0}\) and \(R_{1}\)
$$\begin{aligned} |\hbox { curl R} (t,u)|_{L^2}\le R_{0},\; |\hbox { curl } [R(t,u)\!-\!R (t,v)]|_{L^2} \le R_{1}|\hbox { curl }(u\!-\!v)|_{L^2}, \; u, v\!\in \! V.\nonumber \\ \end{aligned}$$
Let us put \(\xi (t)=\mathrm{curl}\, u(t)\), where \(u\) is the solution to (2.8). Then \(\xi \) solves the following equation on \([0,T]\) with initial condition \(\xi (0)=\mathrm{curl}\, u_0\):
$$\begin{aligned} d\xi (t) + \big [A \xi (t) + \mathrm{curl}\, B(u(t),u(t)) + \mathrm{curl}\, R(t,u(t))\big ] dt = \mathrm{curl}\, G(t,u(t)) \,dW(t). \end{aligned}$$
An easy modification of the arguments in the proof of Proposition 2.2 in [4] proves the following.

Theorem 4.1

Let us assume that \(u_{0}\) is a \(V\)-valued, \({\mathcal F}_0\)-measurable random variable with \(\mathbb {E}\Vert u_0\Vert ^{2p} <\infty \) for some real number \(p\ge 2\). Assume that assumptions (G1) and (G2) are satisfied with \(K_2<\frac{2}{2p-1}\) and \(L_2<2\), and that the assumptions (R1) and (R2) hold true. Then the process \(u\) solution of (2.8) is such that \(u\in C([0,T]; V)\bigcap L^{2}(0,T; D(A))\) a.s. Moreover, there exists a positive constant \(C\) such that
$$\begin{aligned} \mathbb {E}\left( \sup _{t\in [0,T]}\Vert u(t)\Vert ^{2p}+\int \limits _{0}^{T}|Au(s)|^{2}\left( 1+\Vert u(s)\Vert ^{2(p-1)} \right) \,ds \right) \le C(1+\mathbb {E} \Vert u_{0}\Vert ^{2p}).\nonumber \\ \end{aligned}$$

In the rest of this section we suppose that the coefficients \(G\) (resp. \(R\)) satisfy both Assumptions (G1), (G2) (resp. (R1) and (R2)). This will enable us to upper estimate the \(V\)-norm of the difference \(y^n-u^n\), and hence to strengthen the inequality (3.11).

For every \(t\in [0,T)\) let \(\xi ^n(t)=\mathrm{curl } \, u^n(t)\) and \(\eta ^n(t)= \mathrm{curl }\, y^n(t)\). Equations (2.16) and (2.17) imply that \(\xi ^n(t)=\eta ^n(t)= \mathrm{curl }\, u_0\) for \(t\in [t_{-1},t_0)\) and for \(i=0, \cdots , n-1\) and \(t\in [t_i, t_{i+1})\), we have
$$\begin{aligned}&d \xi ^n(t) + \big [ (1-\varepsilon ) A\xi ^n(t) + \mathrm{curl }\, B(u^n(t),u^n(t)) + \mathrm{curl }\, R(t,u^n(t))\big ] dt =0, \nonumber \\&\xi ^n(t_i^+) = \eta ^n(t_i^-), \end{aligned}$$
$$\begin{aligned}&d \eta ^n(t) + \;\varepsilon \; A\eta ^n(t) dt = \mathrm{curl }\, G(t,y^n(t)) \,dW(t), \nonumber \\&\eta ^n(t_i^+)= \xi ^n(t_{i+1}^-). \end{aligned}$$
The processes \((\xi ^n(t), t_i\le t<t_{i+1})\) (resp. \((\eta ^n(t), t_i\le t<t_{i+1})\)) are well-defined processes which are \({\mathcal F}_{t_i}\)-measurable (resp. \(({\mathcal F}_t)\)-adapted). Furthermore, these equations can be reformulated as follows for \(t\in [0,T]\):
$$\begin{aligned} \xi ^n(t)&= \mathrm{curl }\, u_{0}- \!\int \limits _{0}^{t} \!\!\mathrm{curl }\, F_{\varepsilon }(s,u^n(s))ds \nonumber \\&\quad +\int \limits _{0}^{d_{n}(t)}\!\!\left[ - \varepsilon \; A \eta ^n(s)ds+\mathrm{curl }\, G(s,y^n(s))dW(s)\right] ,\end{aligned}$$
$$\begin{aligned} \eta ^n(t)&= \mathrm{curl }\,u_{0}- \!\int \limits _{0}^{d_{n}^{*}(t)} \!\!\!\mathrm{curl }\, F_\varepsilon (s,u^n(s))ds \nonumber \\&\quad +\int \limits _{0}^{t}\!\! \left[ - \varepsilon \; A \eta ^n(s)ds+\mathrm{curl }\, G(s,y^n(s))dW(s)\right] . \end{aligned}$$
We first prove the following analog of Lemmas 3.1 and 3.2.

Lemma 4.2

Assume that \(p\) is an integer such that \(p\ge 2\). Assume that \(G\) satisfies Assumptions (G1) and (G2) with \(K_2<\frac{2}{2p-1}\) and \(L_2<2\), and that \(R\) satisfies Assumptions (R1) and (R2). Let \(u_0\) be an \({\mathcal F}_0\)-measurable, \(V\)-valued random variable such that \(\mathbb {E}\Vert u_0\Vert ^{2p}<\infty \). Then for \( \varepsilon \; \in [0,1)\), there exists a positive constant \(C\) such that for every integer \(n\ge 1\)
$$\begin{aligned} \sup _{t\in [0,T]}\mathbb {E} \big ( \Vert u^n(t)\Vert ^{2p} + \Vert y^n(t)\Vert ^{2p} \big ) + \mathbb {E}\int \limits _0^T \big ( 1+ \Vert u^n(t)\Vert ^{2(p-1)} \big ) |A u^n(t)|^{2} dt \le C.\nonumber \\ \end{aligned}$$
Furthermore, if \( \varepsilon \; \in (0,1)\), there exists a positive constant \(C\) such that for every integer \(n\ge 1\)
$$\begin{aligned} \mathbb {E}\int \limits _0^T \big ( 1+ \Vert y^n(t)\Vert ^{2(p-1)}\big ) |A y^n(t)|^2 dt \le C. \end{aligned}$$


We briefly sketch the proof, which is similar to that of Lemmas 3.1 and 3.2. Let us fix \(n\in \mathbb {N}\). First note that, using Lemmas 3.1 and 3.2, (4.2) and (4.3), it is easy to see that the upper estimates (4.13) and (4.14) can be deduced from similar upper estimates where \(\Vert u^n\Vert \), \(\Vert y^n\Vert \), \(| A u^n |\) and \(| A y^n |\) are replaced by \(|\xi ^n|\), \(|\eta ^n|\), \(\Vert \xi ^n\Vert \) and \(\Vert \eta ^n\Vert \) respectively. We prove by induction that for \(l\in \{ -1, 0, \cdots , n-1\}\),
$$\begin{aligned}&\left[ \mathbb {E}\left( \sup _{t\in [t_l,t_{l+1})} |\xi ^n(t)|^2\right) \vee \left( \sup _{t\in [t_l,t_{l+1})} \mathbb {E}|\eta ^n(t)|^2\right) \right] \nonumber \\&\quad \le \mathbb {E}|\mathrm{curl }\, u_0 |^2 e^{\frac{(l+1)aT}{n}} + \frac{\alpha T}{n} \sum _{j=1}^{l+1} e^{\frac{jaT}{n}},\end{aligned}$$
$$\begin{aligned}&\left[ \mathbb {E}\left( \sup _{t\in [t_l,t_{l+1})} |\xi ^n(t)|^{2p}\right) \vee \left( \sup _{t\in [t_l,t_{l+1})} \mathbb {E}|\eta ^n(t)|^{2p}\right) \right] \nonumber \\&\quad \le \mathbb {E}|\mathrm{curl}\, u_{0}|^{2p}\mathrm{e}^{\frac{(l+1)bT}{n}}+\frac{\beta T}{n}\sum _{j=1}^{l+1}\mathrm{e}^{\frac{jbT}{n}}, \end{aligned}$$
where \(a:=2(R_{0}+R_{1})+K_{1}\), \(\alpha :=2R_{0}+K_{0}\), \(b:=2p(R_{0}+R_{1})+p(2p-1)(K_{0}+K_{1})\) and \(\beta :=2pR_{0}+p(2p-1)K_{1}\). Indeed, these inequalities hold for \(l=-1\). Suppose that they hold for \(l\le i-1\), \(i<n-1\); we prove them for \(l=i\).
We first prove (4.15) for \(l=i\). The identities (4.9), (4.4) and Assumption (R2) imply that for \(t\in [t_i, t_{i+1})\), we have
$$\begin{aligned} |\xi ^n(t)|^2 + 2(1-\varepsilon ) \int \limits _{t_i}^t \Vert \xi ^n(s)\Vert ^2\,ds&= |\eta ^n(t_i^-)|^2 -2 \int \limits _{t_i}^t \langle \mathrm{curl} \, R(s,u^n(s)), \xi ^n(s)\rangle \,ds\\&\le |\eta ^n(t_i^-)|^2 + 2 \frac{R_0 T}{n} + 2 (R_0+R_1)\int \limits _{t_i}^t |\xi ^n(s)|^2\,ds, \end{aligned}$$
while the identities (4.10) and (4.2), the Itô Lemma and Assumption (G2) imply
$$\begin{aligned}&\mathbb {E}|\eta ^n(t)|^2 + 2 \varepsilon \; \mathbb {E}\int \limits _{t_i}^t \Vert \eta ^n(s)\Vert ^2\,ds \\&\quad = \mathbb {E}|\xi ^n(t_{i+1}^-)|^2 +\, \mathbb {E}\int \limits _{t_i}^t \Vert \mathrm{curl} \, G(s,y^n(s))\Vert _{\mathcal {T}_2(K,H)}^2 \,ds\\&\quad \le \mathbb {E}|\xi ^n(t_{i+1}^-)|^2 + \frac{K_0 T}{n} + K_1 \mathbb {E}\int \limits _{t_i}^t |\eta ^n(s)|^2 \,ds + \varepsilon \; K_2 \mathbb {E}\int \limits _{t_i}^t \Vert \eta ^n(s)\Vert ^2\,ds. \end{aligned}$$
Then arguments similar to those used in the proof of Lemma 3.1 yield (4.15); hence, for every \( \varepsilon \; \in [0,1)\) there exists \(C\) independent of \(n\) such that
$$\begin{aligned} \sup _{t\in [0,T)}\mathbb {E} \big ( \Vert u^n(t)\Vert ^2 + \Vert y^n(t)\Vert ^2\big ) + \sup _{n\in \mathbb {N}} \mathbb {E}\int \limits _0^T |A u^n(t)|^2 dt \le C. \end{aligned}$$
Furthermore, for \( \varepsilon \; \in (0,1)\) we have
$$\begin{aligned} \mathbb {E}\int \limits _0^T |A y^n(t)|^2 dt \le C. \end{aligned}$$
We then prove (4.16) for \(l=i\). Using once more (4.4) and Assumption (R2), we deduce that for \(t\in [t_i, t_{i+1})\), we have
$$\begin{aligned}&|\xi ^n(t)|^{2p} + 2p (1-\varepsilon )\! \int \limits _{t_i}^t \! \Vert \xi ^n(s)\Vert ^2 |\xi ^n(s)|^{2(p-1)}\,ds\\&\quad = |\eta ^n(t_i^-)|^{2p} -2p\! \int \limits _{t_i}^t \! \langle \mathrm{curl} \, R(s,u^n(s)), \xi ^n(s)\rangle |\xi ^n(s)|^{2(p-1)} \,ds\\&\quad \le |\eta ^n(t_i^-)|^{2p} + 2p \frac{R_0T}{n} + 2p (R_0+R_1) \int \limits _{t_i}^t |\xi ^n(s)|^{2p}\,ds. \end{aligned}$$
Similarly, (4.10), the Itô Lemma and Assumption (G2) imply that for \(t\in [t_i, t_{i+1})\), we have
$$\begin{aligned}&\mathbb {E}|\eta ^n(t)|^{2p} + 2p \varepsilon \; \mathbb {E}\int \limits _{t_i}^t \Vert \eta ^n(s)\Vert ^2 |\eta ^n(s)|^{2(p-1)} \,ds \\&\quad = \mathbb {E}|\xi ^n(t_{i+1}^-)|^{2p} + 2p(p-1) \mathbb {E}\int \limits _{t_i}^t \Vert \mathrm{curl} \, G(s,y^n(s))\Vert _{\mathcal {T}_2(K,H)}^2 |\eta ^n(s)|^{2(p-1)}\,ds\\&\quad \le \mathbb {E}|\xi ^n(t_{i+1}^-)|^{2p} + 2p(p-1) \frac{K_0 T}{n} + 2p(p-1)(K_0+K_1) \mathbb {E}\int \limits _{t_i}^t \!\! |\eta ^n(s)|^{2p}\,ds\\&\qquad +\, 2p(p-1) \varepsilon \; K_2 \mathbb {E}\int \limits _{t_i}^t \!\! \Vert \eta ^n(s)\Vert ^2 |\eta ^n(s)|^{2(p-1)}\,ds. \end{aligned}$$
Then arguments similar to those used in Lemmas 3.1 and 3.2 yield (4.16), and then (4.13) and (4.14). This concludes the proof of the lemma, since \((2p-1)K_2<2\) when \( \varepsilon \; >0\).\(\square \)

We finally prove an upper estimate of the \(V\)-norm of the difference between \(u^n\) and \(y^n\). This estimate will be used to obtain the speed of convergence of the scheme.

Proposition 4.3

Assume that \(u_0\) is \({\mathcal F}_0\)-measurable with \(\mathbb {E}\Vert u_0\Vert ^4 <\infty \); let \(T>0\) and \( \varepsilon \; \in [0,1)\). Assume that Assumptions (G1) and (G2) are satisfied with \(K_2 < 2/3\) and \(L_2<2\), and that Assumptions (R1) and (R2) hold. Then there exists a positive constant \(C:=C(T)\), such that for every integer \(n\ge 1\)
$$\begin{aligned} \mathbb {E}\int \limits _0^T \Vert y^n(t) - u^n(t)\Vert ^2 dt \le \frac{C}{n}. \end{aligned}$$


Using (3.11) and (4.3), we see that the proof of (4.17) reduces to checking that
$$\begin{aligned} \mathbb {E}\int \limits _0^T |\eta ^n(t) - \xi ^n(t)|^2 dt \le \frac{C}{n}. \end{aligned}$$
We only prove this inequality when \( \varepsilon \; \in (0,1)\). The proof in the case \( \varepsilon \; =0\) can be done by adapting the arguments in the proof of Proposition 3.3. So let us assume that \( \varepsilon \; \in (0,1)\). Then the Itô Lemma, (4.13) and (4.14) imply that for any \(t\in (0,T)\):
$$\begin{aligned}&\mathbb {E}|\eta ^n(t)-\xi ^n(t)|^2= -2 \; \mathbb {E} \int \limits _t^{d^*_n(t)} \!\! \Big [ (1-\varepsilon ) \langle A\xi ^n(s),\eta ^n(s)-\xi ^n(s)\rangle \\&\qquad +\, \langle B(u^n(s),\xi ^n(s)), \eta ^n(s) - \xi ^n(s)\rangle + \langle \mathrm{curl }\, R(s, u^n(s)) , \eta ^n(s)-\xi ^n(s)\rangle \Big ]\,ds \\&\qquad -2 \varepsilon \; \mathbb {E} \int \limits _{d_n(t)}^t \langle A \eta ^n(s), \eta ^n(s)-\xi ^n(s)\rangle +\mathbb {E} \int \limits _{d_n(t)}^t \Vert \mathrm{curl }\, G(s,y^n(s))\Vert _{\mathcal {T}_2(K,H)}^2\,ds \end{aligned}$$
Integrating over \([0,T]\), using (2.6) with \(\eta =1\), Assumptions (R2) and (G2), the Fubini Theorem and Lemma 4.2, we obtain
$$\begin{aligned}&\mathbb {E}\! \int \limits _0^T \!\! \!|\eta ^n(t)-\xi ^n(t)|^2 dt \\&\quad \le 2 \mathbb {E}\! \int \limits _0^T\!\! \!\,ds \Big [ (1-\varepsilon ) |A u^n(s)| |A (y^n(s)-u^n(s))| + |A(u^n(s)- y^n(s))|^2 \\&\qquad +\, C_1\vert u^n(s)\vert _\mathrm {X}^2 \vert \xi ^n(s)\vert _\mathrm {X}^2 \Big ] \int \limits _{d_n(s)}^s \!\! dt + 2 \varepsilon \; \mathbb {E}\int \limits _0^T \!\! \,ds (|A u^n(s)|^2 + |Ay^n(s)|^2) \int \limits _s^{d^*_n(s)} \!\! dt \\&\qquad +\, \mathbb {E}\int \limits _0^T \!\,ds [K_0+K_1 |\eta ^n(s)|^2 + K_2 \varepsilon \; |A y^n(s)|^2] \int \limits _s^{d^*_n(s)} \!\! dt\\&\quad \le \frac{CT}{n} \mathbb {E}\int \limits _0^T \!\!\big ( |A u^n(s)|^2 \!+\! |A y^n(s)|^2 \!+\! \Vert u^n(s)\Vert ^4 + |u^n(s)|^4 + |\xi ^n(s)|^2 |Au^n(s)|^2\big )\,ds \\&\quad \le \frac{CT}{n}, \end{aligned}$$
where the last inequality can be deduced from Lemmas 3.2 and 4.2. This concludes the proof.\(\square \)

4.2 A priori estimates for the process \(z^n\)

For technical reasons, let us consider the process \((z^n(t), t\in [0,T])\) defined by
$$\begin{aligned} z^n(t)=u_0 - \int \limits _0^t F_\varepsilon (s,u^n(s))\,ds - \varepsilon \; \int \limits _0^{d_n(t)} \! A y^n(s)\,ds + \int \limits _{0}^{t} G(s,y^n(s))dW(s).\nonumber \\ \end{aligned}$$
Note that for any \( \varepsilon \in [ 0,1)\) the process \(z^n(t)\) coincides with \(u^n(t^+)\) and \(y^n(t^-)\) on the time grid, i.e.,
$$\begin{aligned} z^n(t_k)=y^n(t_k^-)=u^n(t_k^+) \quad \hbox { for} \; k=0,1, \cdots , n. \end{aligned}$$
The following lemma gives upper estimates of the differences \(z^n-u^n\) and \(z^n-y^n\) in various topologies.

Lemma 4.4

Let \( \varepsilon \; \in [0,1)\) and let \(u_0\) be \({\mathcal F}_0\)-measurable.
  1. (i)
    Assume that \(u_0\) is \(H\)-valued with \({\mathbb E} |u_0|^{2p}<\infty \) for some integer \(p\ge 2\). Suppose that Assumption (G1) holds with \(K_2<\frac{2}{2p-1}\) and \(L_2<2\) and that Assumption (R1) is satisfied. Then there exists a positive constant \(C:=C(T, \varepsilon )\) such that for every integer \(n\ge 1\),
    $$\begin{aligned} \sup _{t\in [0,T]} \mathbb {E}|z^n(t)- u^n(t)|^{2p} \le \frac{C}{n^p}. \end{aligned}$$
  2. (ii)
    Assume that \(u_0\) is \(V\)-valued with \({\mathbb E} \Vert u_0\Vert ^{4}<\infty \), let Assumptions (G1) and (G2) hold with \(K_2<\frac{2}{3}\) and \(L_2<2\) and let Assumptions (R1) and (R2) be satisfied. Then there exists a positive constant \(C:=C(T, \varepsilon )\) such that for every integer \(n\ge 1\),
    $$\begin{aligned} \mathbb {E}\int \limits _0^T \big ( \Vert z^n(t)-u^n(t)\Vert ^2 + \Vert z^n(t)-y^n(t)\Vert ^2\big ) dt \le \frac{C}{n}. \end{aligned}$$


For \(t\in [0,T]\), we have
$$\begin{aligned} z^n(t)-u^n(t)=\int \limits _{d_n(t)}^t G(s,y^n(s)) dW(s). \end{aligned}$$
Let the assumptions of (i) be satisfied. Since \(\mathbb {E}|u_0|^{2p}<\infty \) and \(K_2<\frac{2}{2p-1}\), the Burkholder–Davis–Gundy and Hölder inequalities together with Assumption (G1) and Lemma 3.2 imply that for any \(t\in [0,T]\),
$$\begin{aligned}&\mathbb {E}|z^n(t)-u^n(t)|^{2p} \le C_p \mathbb {E}\Big | \int \limits _{d_n(t)}^t \Vert G(s,y^n(s))\Vert ^2_{\mathcal {T}_2(K,H)}\,ds \Big |^p\\&\quad \le C_p \left( \frac{T}{n}\right) ^{p-1} \mathbb {E} \int \limits _{d_n(t)}^t \Big | K_0 + K_1 |y^n(s)|^2 + \varepsilon \; K_2 \Vert y^n(s)\Vert ^2 \Big |^p\,ds\\&\quad \le \frac{C_p(T)}{n^{p-1}} \left[ \! K_0^p \!+\! K_1^p \sup _{t\in [0,T]} \mathbb {E}|y^n(t)|^{2p} \!+\! \varepsilon ^p \; K_2^p \sup _{t\in [0,T]} \mathbb {E}\Vert y^n(t)\Vert ^{2p}\!\right] \frac{T}{n} \le \frac{C_p(T)}{n^p}. \end{aligned}$$
Let the assumptions of (ii) hold true and let \(\zeta ^n(t)=\mathrm{curl } \, z^n(t)\); then
$$\begin{aligned} \zeta ^n(t)\;&= \; \mathrm{curl } \, u_0 -\int \limits _0^t \big [ (1-\varepsilon ) A\zeta ^n(s) + \mathrm{curl } \,B(u^n(s),u^n(s)) + \mathrm{curl } \,R(s,u^n(s))\big ]\,ds\\&-\, \varepsilon \; \int \limits _0^{d_n(t)} A\eta ^n(s)\,ds + \int \limits _0^t \mathrm{curl } \, G(s,y^n(s)) dW(s). \end{aligned}$$
Therefore, \(\zeta ^n(t)-\xi ^n(t)=\int _{d_n(t)}^t \mathrm{curl } \, G(s,y^n(s)) dW(s)\); the Itô Lemma, the Fubini Theorem and Assumption (G2) imply that
$$\begin{aligned} \int \limits _0^T \mathbb {E}|\zeta ^n(t)-\xi ^n(t)|^2\,dt&= \int \limits _0^T \mathbb {E}\int \limits _{d_n(t)}^t \big | \mathrm{curl } \, G(s,y^n(s))\big |_{\mathcal {T}_2(K,H)}^2\,ds\, dt\\&\le \mathbb {E}\int \limits _0^T \left[ K_0+K_1 |\eta ^n(s)|^2 + \varepsilon \; K_2 |A y^n(s)|^2\right] \left( \int \limits _s^{d^*_n(s)} dt\right) \,ds\\&\le \frac{T}{n} \left[ K_0 T + \mathbb {E}\int \limits _0^T \left( K_1 |\eta ^n(t)|^2 + \varepsilon \; K_2 |A y^n(t)|^2\right) dt\right] . \end{aligned}$$
Hence, Lemma 4.2 and (4.21) applied with \(p=1\) yield
$$\begin{aligned} \int \limits _0^T \mathbb {E}\Vert z^n(t)-u^n(t)\Vert ^2 dt \le \frac{C(T)}{n}. \end{aligned}$$
This upper estimate and Proposition 4.3 conclude the proof of (4.22).\(\square \)

5 Speed of convergence for the scheme in the case of the Stochastic NSEs on a \(2\)D torus

In this entire section we consider the Stochastic NSEs on a \(2\)D torus. Thus, we suppose that \(u_0\) is \({\mathcal F}_0\)-measurable such that \(\mathbb {E}\Vert u_0\Vert ^8<\infty \) and fix some \(T>0\). We also assume that Assumptions (G1) and (G2) are satisfied with \( K_2 < \frac{2}{7}\) and \( L_2<2\), and that the Assumptions (R1) and (R2) hold true. Our aim is to prove the convergence of the scheme in the \(H\) norm uniformly on the time grid and in \(L^2(0,T;V)\).

The nonlinearity of the Navier–Stokes equations requires us to impose a localization in order to obtain an \(L^2(\Omega , {\mathbb P})\) convergence. However, the probability of the localization set converges to 1, which yields the order of convergence in probability as defined by Printems in [19]: the order of convergence in probability of \(u^n\) to \(u\) in \(H\) is \(\gamma >0\) if
$$\begin{aligned} \lim _{C\rightarrow \infty } P\left( \sup _{k=0, \cdots , n} |u^n(t_k)-u(t_k)| \ge \frac{C}{n^\gamma }\right) =0. \end{aligned}$$
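This definition can be probed numerically on a toy example where the exact solution is available in closed form. The hypothetical Python sketch below (not the paper's scheme; all names are illustrative) uses Euler–Maruyama for geometric Brownian motion and estimates \(P\big (\sup _k |X^n(t_k)-X(t_k)| \ge C/n^{1/2}\big )\) for increasing \(C\).

```python
import numpy as np

# Euler-Maruyama for dX = mu X dt + sigma X dW; exact solution
# X(t) = X(0) exp((mu - sigma^2/2) t + sigma W(t)) on the same Brownian path.
rng = np.random.default_rng(1)
mu, sigma, T, n_paths = 0.1, 0.4, 1.0, 500

def sup_error(n):
    """Pathwise sup over the grid of |Euler - exact| for n time steps."""
    dW = rng.normal(0.0, np.sqrt(T / n), size=(n_paths, n))
    W = np.cumsum(dW, axis=1)
    t = np.linspace(T / n, T, n)
    exact = np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
    x = np.ones(n_paths)
    err = np.zeros(n_paths)
    for k in range(n):
        x = x * (1.0 + mu * T / n + sigma * dW[:, k])
        err = np.maximum(err, np.abs(x - exact[:, k]))
    return err

n, gamma = 256, 0.5
err = sup_error(n)
probs = [np.mean(err >= C / n**gamma) for C in (0.5, 2.0, 8.0)]
print(probs)    # non-increasing in C
```

By construction these empirical probabilities are non-increasing in \(C\); the content of the definition is that they vanish in the limit \(C\rightarrow \infty \) uniformly over the discretizations.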
First, we prove that when properly localized, the schemes \(u^n\) and \(y^n\) converge to \(u\) in \(L^2(\Omega )\) for various topologies. Thus, for every \(M>0\), \(t\in [0,T]\) and any integer \(n\ge 1\), let
$$\begin{aligned} \Omega _{M}^n(t):=\left\{ \omega \in \Omega \; : \; \int \limits _{0}^{t}\left( \vert u(s,\omega )\vert _{\mathrm {X}}^{4} + \vert u^n(s,\omega )\vert _{\mathrm {X}}^{4}\right) \,ds\le M\right\} . \end{aligned}$$
This definition shows that \(\Omega _M^n(t)\subset \Omega _M^n(s)\) for \(s\le t\) and that \(\Omega _M^n(t)\in {\mathcal F}_t\) for any \(t\in [0,T]\). Furthermore, Lemma 3.1 shows that \(\sup _{n\ge 1} P(\Omega _M^n(T)^c)\rightarrow 0\) as \(M\rightarrow \infty \). Let \(\tau ^n_M=\inf \{ t\ge 0\; : \; \int \limits _{0}^{t}\big ( \vert u(s,\omega )\vert _{\mathrm {X}}^{4} + \vert u^n(s,\omega )\vert _{\mathrm {X}}^{4}\big )\,ds\ge M\}\wedge T\); we clearly have \(\tau _M^n=T\) on the set \(\Omega _M^n(T)\).
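In a simulation, a discrete analogue of the stopping time \(\tau ^n_M\) is simply the first grid time at which the running quadrature of \(\vert u\vert _{\mathrm X}^4+\vert u^n\vert _{\mathrm X}^4\) reaches \(M\), capped at \(T\). A minimal sketch, with illustrative names and a left-endpoint quadrature:

```python
import numpy as np

def tau_M(u_norms, un_norms, dt, M, T):
    """First grid time at which the running integral of |u|^4 + |u^n|^4
    reaches M, capped at T (discrete analogue of the stopping time)."""
    integrand = u_norms**4 + un_norms**4
    cum = np.cumsum(integrand) * dt          # left-endpoint quadrature
    hit = np.flatnonzero(cum >= M)
    return T if hit.size == 0 else min((hit[0] + 1) * dt, T)

# constant norms: integrand == 2, so tau ~ M / 2 (up to grid resolution)
dt, T, M = 0.01, 1.0, 0.5
grid = np.ones(int(T / dt))
tau = tau_M(grid, grid, dt, M, T)
print(tau)    # ~0.25
```

On the set where the running integral never reaches \(M\) the function returns \(T\), mirroring \(\tau _M^n=T\) on \(\Omega _M^n(T)\).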

The following proposition proves that, localized on the set \(\Omega _M^n(T)\), the strong speed of convergence of \(z^n\) to \(u\) (resp. of \(u^n\) and \(y^n\) to \(u\)) in \(L^\infty (0,T;H)\) (resp. in \( L^2(0,T;V)\)) is \(1/2\).

Proposition 5.1

Let \( \varepsilon \in [0,1)\) and let \(u_0\) be \({\mathcal F}_0\)-measurable with \(\mathbb {E}\Vert u_0\Vert ^8<\infty \). Suppose that Assumptions (G1), (G2), (R1) and (R2) hold with \(K_2<\frac{2}{7}\) and \(L_2<2\), and that \( \varepsilon \; L_2\) is strictly smaller than \(2(1-\varepsilon )\). Then there exist positive constants \(C(T)\) and \(\tilde{C}_2\) such that for every \(M>0\) and \(n\in \mathbb {N}\), we have:
$$\begin{aligned}&\mathbb {E}\left( \sup _{t\in [0, \tau ^n_M]} \! |z^n(t)-u(t)|^2 + \int \limits _0^{\tau ^n_M} \!\! \left[ \Vert u^n(t)-u(t)\Vert ^2 + \Vert y^n(t)-u(t)\Vert ^2\right] dt\right) \nonumber \\&\quad \le \frac{K(M,T)}{n}, \end{aligned}$$
where \(z^n\) is defined by (4.19), \(u\) is the solution to (2.8) and
$$\begin{aligned} K(M,T)=C(T) \exp \left( C(T) e^{\tilde{C}_2 M}\right) . \end{aligned}$$


First of all, let us observe that in view of Proposition 4.3 we only have to prove inequality (5.2) for the first two terms on the left-hand side. To this end, fix \(M>0\) and an integer \(n\ge 1\). To ease notation we put \(\tau :=\tau _M^n\). Then we have
$$\begin{aligned} z^n(t\wedge \tau )\!-\!u(t\wedge \tau )&= -\int \limits _0^{t\wedge \tau } [F_\varepsilon (s,u^n(s)) \!-\! F(s,u(s))]\,ds \!-\! \varepsilon \; \int \limits _0^{d_n(t\wedge \tau )} \!\! A y^n(s)\,ds \nonumber \\&+ \int \limits _0^{t\wedge \tau } [ G(s,y^n(s))\! -\! G(s,u(s)) ] dW(s), \text{ for } \text{ any } t\in [0,T].\nonumber \\ \end{aligned}$$
The Itô Lemma yields that
$$\begin{aligned} |z^n(t\wedge \tau )-u(t\wedge \tau )|^2 = \sum _{i=1}^4 T_i(t) + I(t),\;\; \text{ for } t\in [0,T], \end{aligned}$$
where, also for \(t\in [0,T]\),
$$\begin{aligned} I(t)\;&= \; 2\int \limits _0^{t\wedge \tau } \langle [G(s,y^n(s)) - G(s,u(s))] \,dW(s)\, , \, z^n(s)-u(s)\rangle , \end{aligned}$$
$$\begin{aligned} T_1(t)\;&= \; -2\int \limits _0^{t\wedge \tau } \langle F_{\varepsilon } (s,u^n(s)) - F_\varepsilon (s,u(s)) \, , \, z^n(s)-u(s)\rangle \,ds, \nonumber \\ T_2(t)\;&= \; -2 \varepsilon \; \int \limits _0^{d_n(t\wedge \tau )} \langle Ay^n(s) - A u(s) \, , \, z^n(s)- u(s)\rangle \,\,ds, \nonumber \\ T_3(t)\;&= \; -2 \varepsilon \; \int \limits _{d_n(t\wedge \tau )}^{t\wedge \tau } \langle A u(s) \, , \, z^n(s)-u(s)\rangle \,ds, \nonumber \\ T_4(t)\;&= \; \int \limits _0^{t\wedge \tau } |G(s, y^n(s)) - G(s,u(s)) |_{\mathcal {T}_2(K,H)}^2\,ds. \end{aligned}$$
We first estimate the term \(T_1(t)\) from above as follows: \(T_1(t)\le \sum _{i=1}^5 T_{1,i}(t)\), where
$$\begin{aligned} T_{1,1}(t) \,&= \, -2(1-\varepsilon ) \int \limits _0^{t\wedge \tau } \langle A u^n(s) - Au(s) \, , \, u^n(s)-u(s)\rangle \,ds, \nonumber \\ T_{1,2}(t) \,&= \, -2(1-\varepsilon ) \int \limits _0^{t\wedge \tau } \langle A u^n(s) - Au(s) \, , \, z^n(s)-u^n(s)\rangle \,ds, \nonumber \\ T_{1,3}(t)\,&= \, -2\int \limits _0^{t\wedge \tau } \langle B(u^n(s),u^n(s)) - B(u(s),u(s)) \, , \, u^n(s)-u(s)\rangle \,ds, \nonumber \\ T_{1,4}(t)\,&= \,-2\! \int \limits _0^{t\wedge \tau } \!\! \! \Big \langle B(u^n(s)\!-\! u(s) ,u^n(s)) \!+\! B(u(s),u^n(s)\!-\! u(s)) \, , \, z^n(s)\!-\!u^n(s)\Big \rangle \,ds, \nonumber \\ T_{1,5}(t)\,&= \,-2(1-\varepsilon ) \int \limits _0^{t\wedge \tau } \langle R(s, u^n(s)) - R(s,u(s)) \, , \, z^n(s)- u(s) \rangle \,ds. \end{aligned}$$
The definition of the \(V\)-norm implies that
$$\begin{aligned} T_{1,1}(t)=-2(1-\varepsilon ) \int \limits _0^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert ^2\,ds , \end{aligned}$$
while the Cauchy–Schwarz and Young inequalities yield that for any \(\eta >0\):
$$\begin{aligned} T_{1,2}(t)&\le 2(1-\varepsilon ) \int \limits _0^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert \; \Vert z^n(s)-u^n(s) \Vert \,\,ds \nonumber \\&\le \eta (1- \varepsilon \; ) \int \limits _0^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert ^2\,ds + \frac{1-\varepsilon }{\eta } \int \limits _0^{t\wedge \tau } \Vert z^n(s)-u^n(s)\Vert ^2\,ds. \nonumber \\ \end{aligned}$$
Using (2.7) we deduce that for any \(\eta >0\) we have:
$$\begin{aligned} T_{1,3}(t)&\le 2\eta \int \limits _0^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert ^2\,ds + 2 C_\eta \int \limits _0^{t\wedge \tau } |u^n(s)-u(s)|^2 \vert u(s)\vert _\mathrm {X}^4\,ds \nonumber \\&\le 2\eta \int \limits _0^{t\wedge \tau }\Vert u^n(s)-u(s)\Vert ^2\,ds + 4 C_\eta \int \limits _0^{t\wedge \tau } |z^n(s)-u(s)|^2 \vert u(s)\vert _\mathrm {X}^4\,ds\nonumber \\&\quad +\,4 C_\eta \int \limits _0^{t\wedge \tau } |z^n(s)-u^n(s)|^2 \vert u(s)\vert _\mathrm {X}^4\,ds. \end{aligned}$$
Furthermore, condition (2.6) on \(B\), property (2.4) of the space \(\mathrm {X}\) and the Cauchy–Schwarz and Young inequalities yield for any \(\eta >0\):
$$\begin{aligned} T_{1,4}(t)&\le 2 \int \limits _0^{t\wedge \tau } \!\! \left[ 2\eta \Vert z^n(s)-u^n(s) \Vert ^2 + C_\eta |u^n(s)-u(s)|_{ X}^2 \left( |u^n(s)|_{ X}^2 + | u(s)|_{X}^2\right) \right] ds \nonumber \\&\le 8\eta \int \limits _0^{t\wedge \tau }\Vert u^n(s)-u(s)\Vert ^2 ds + 8\eta \int \limits _0^{t\wedge \tau }\Vert z^n(s)-u(s)\Vert ^2 ds \nonumber \\&+ \,2C C_\eta \left( \int \limits _0^{t\wedge \tau }\Vert u^n(s)-u(s)\Vert ^2 ds \right) ^{1/2} \left( \int \limits _0^{t\wedge \tau }|u^n(s)-z^n(s)|^4 ds \right) ^{1/4} \nonumber \\&\times \left( \int \limits _0^{t\wedge \tau } \left[ |u^n(s)|_{ X}^8 + |u(s)|_{ X}^8 \right] ds \right) ^{1/4} \nonumber \\&+\, 2 C C_\eta \left( \int \limits _0^{t\wedge \tau }\!\!\! \Vert u^n(s)-u(s)\Vert ^2 ds \right) ^{1/2} \left( \int \limits _0^{t\wedge \tau } \!\! \! |z^n(s)-u(s)|^2 \left[ |u(s)|_{ X}^4 \right. \right. \nonumber \\&\left. \left. +\, |u^n(s)|_{ X}^4 \right] ds \right) ^{1/2}\nonumber \\&\le 10\eta \int \limits _0^{t\wedge \tau }\Vert u^n(s)-u(s)\Vert ^2 ds + 8\eta \int \limits _0^{t\wedge \tau }\Vert z^n(s)-u(s)\Vert ^2 ds \nonumber \\&+\, C \frac{C_\eta }{\eta } \left( \int \limits _0^{t\wedge \tau }\!\!\! |u^n(s)-z^n(s)|^4 ds \right) ^{1/2} \left( \int \limits _0^{t\wedge \tau }\!\!\! \left[ |u(s)|_{ X}^8 + |u^n(s)|_{X}^8\right] ds \right) ^{1/2} \nonumber \\&+\, C \frac{C_\eta }{\eta } \int \limits _0^{t\wedge \tau }\!\!\! |z^n(s)-u(s)|^2 \left[ |u(s)|_{ X}^4 + |u^n(s)|_{ X}^4\right] ds. \end{aligned}$$
Finally, Assumption (R1) and the triangle and Cauchy–Schwarz inequalities imply
$$\begin{aligned} T_{1,5}(t)&\le 2R_1 \int \limits _0^{t\wedge \tau }|u^n(s)-u(s)|\, |z^n(s)-u(s)|\,ds\nonumber \\&\le 3 R_1 \int \limits _0^{t\wedge \tau }|z^n(s)-u(s)|^2\,ds + R_1 \int \limits _0^{t\wedge \tau }|z^n(s)-u^n(s)|^2\,ds. \end{aligned}$$
First note that \(T_2(t)=-2\varepsilon \int _0^{d_n(t\wedge \tau )} \langle \nabla (y^n(s)-u(s))\, , \, \nabla (z^n(s)-u(s)) \rangle \,ds\). Replacing \(u\) by \(u^n\), and using the Cauchy–Schwarz and Young inequalities, we deduce
$$\begin{aligned} T_2(t)&\le - 2 \varepsilon \! \int \limits _0^{d_n(t\wedge \tau )} \!\! \Vert u^n(s)-u(s)\Vert ^2\,ds \nonumber \\&+\, 2 \varepsilon \! \int \limits _0^{d_n(t\wedge \tau )}\!\! \Vert y^n(s)-u^n(s)\Vert \Vert z^n(s)-u^n(s)\Vert \,ds \nonumber \\&+\, 2 \varepsilon \; \int \limits _0^{d_n(t\wedge \tau )} \Vert u^n(s)-u(s)\Vert \left[ \Vert y^n(s)-u^n(s)\Vert +\Vert z^n(s)-u^n(s)\Vert \right] \,ds \nonumber \\&\le \; 2 \varepsilon \; \int \limits _0^{d_n(t\wedge \tau )} \Vert y^n(s)-u^n(s)\Vert \Vert z^n(s)-u^n(s)\Vert \,ds. \end{aligned}$$
The Cauchy–Schwarz and Young inequalities imply that for every \(\eta >0\), we have
$$\begin{aligned} T_3(t)&\le 2 \varepsilon \; \int \limits _{d_n(t\wedge \tau )}^{t\wedge \tau } \Vert u(s)\Vert \left( \Vert z^n(s)-u^n(s)\Vert + \Vert u^n(s)-u(s)\Vert \right) \,ds \nonumber \\&\le \varepsilon \; \eta \int \limits _{d_n(t\wedge \tau )}^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert ^2\,ds \nonumber \\&+\, \frac{\varepsilon }{\eta } \left( \int \limits _{d_n(t\wedge \tau )}^{t\wedge \tau } \Vert u(s)\Vert ^2\,ds \right) ^{1/2} \left( \int \limits _{d_n(t\wedge \tau )}^{t\wedge \tau } \Vert z^n(s)\!-\!u^n(s)\Vert ^2\,ds \right) ^{1/2}.\qquad \qquad \end{aligned}$$
Finally, since \( \varepsilon \; L_2 < 2(1-\varepsilon ) \) we can choose \(\alpha >0\) such that \( \varepsilon \; L_2 < 2(1-\varepsilon ) -(2+L_2)\alpha \); thus the Assumption (G1) yields
$$\begin{aligned} T_4(t)&\le L_1 \int \limits _0^{t\wedge \tau } \!\! \big [ |y^n(s)-u(s)|^2 + \varepsilon \; L_2 \Vert y^n(s)-u(s)\Vert ^2\big ]\,ds \nonumber \\&\le 2 L_1 \int \limits _0^{t\wedge \tau } \!\! |z^n(s)-u(s)|^2\,ds + \varepsilon \; L_2(1+\alpha ) \int \limits _0^{t\wedge \tau } \!\! \Vert u^n(s)-u(s)\Vert ^2\,ds\nonumber \\&+\,2 L_1 \int \limits _0^{t\wedge \tau } \!\! |y^n(s)-z^n(s)|^2\,ds + \varepsilon \; L_2 \Big (1+\frac{1}{\alpha }\Big ) \int \limits _0^{t\wedge \tau } \!\! \Vert y^n(s)-u^n(s)\Vert ^2\,ds.\nonumber \\ \end{aligned}$$
Fix \(\eta >0\) such that \(13 \eta <\alpha \). Put
$$\begin{aligned} X(t):=\sup _{s\in [0,t\wedge \tau ]} |z^n(s)-u(s)|^2\, , \;\; Y(t):=\int \limits _0^{t\wedge \tau } \Vert u^n(s)-u(s)\Vert ^2\,ds, \text{ for } t\in [0,T]. \end{aligned}$$
Then inequalities (5.3)–(5.14) imply that
$$\begin{aligned} X(t)+ \alpha Y(t)\le \int \limits _0^{t\wedge \tau }\!\! \varphi (s) X(s)\,ds + \sup _{s\in [0,t\wedge \tau ]} |I(s)| + Z(t), \text{ for } t\in [0,T]\text{, } \end{aligned}$$
where \(I(t)\) is defined by (5.4), while the processes \(\varphi \) and \(Z\) are defined as follows:
$$\begin{aligned} \varphi (s)&= 4C_\eta \vert u(s)\vert _\mathrm {X}^4 + C \frac{C_\eta }{\eta } \left( \vert u(s)\vert _\mathrm {X}^4 +\vert u^n(s)\vert _\mathrm {X}^4 \right) +3R_1+2L_1, \\ Z(t)&= \int \limits _0^{t\wedge \tau } \left[ \left( \frac{1-\varepsilon }{\eta } + 2 \varepsilon +4 L_{1}\; \right) \Vert z^n(s)-u^n(s)\Vert ^2 \right. \\&\quad +\, |z^n(s)-u^n(s)|^2 \left( R_1+ 4 C_\eta \vert u(s)\vert _\mathrm {X}^4\right) \\&\left. \quad + \left[ \varepsilon \; L_2 \left( 1+ \frac{1}{\alpha }\right) + 2 \varepsilon +4L_{1}\; \right] \Vert y^n(s)-u^n(s)\Vert ^2 \right] \,ds \\&\quad +\, \frac{C_\eta }{\eta } C \left( \int \limits _0^{t\wedge \tau } |u^n(s)-z^n(s)|^4\,ds\right) ^{1/2} \left( \int \limits _0^{t\wedge \tau } [\vert u^n(s)\vert _\mathrm {X}^8 + \vert u(s)\vert _\mathrm {X}^8]\,ds \right) ^{1/2} \\&\quad + \,\frac{\varepsilon }{\eta } \left( \int \limits _{d_n(t \wedge \tau )}^{t\wedge \tau } \Vert u(s)\Vert ^2\,ds\right) ^{1/2} \left( \int \limits _{d_n(t \wedge \tau )}^{t\wedge \tau } \Vert z^n(s)-u^n(s)\Vert ^2\,ds\right) ^{1/2}. \end{aligned}$$
The definition of the stopping time \(\tau \) implies the existence of constants \(C_1\) and \(C_2\) larger than 1 and independent of \(n\) such that:
$$\begin{aligned} \int \limits _0^\tau \varphi (s)\,ds \le (3R_1+2L_1)T + M\big (4C_\eta + C \frac{C_\eta }{\eta }\big )=C_1T+C_2M:= C(\varphi ). \end{aligned}$$
Since \(V\subset \mathrm {X}\), using Propositions 3.3, 4.3 and Lemma 4.4, we deduce the existence of a constant \(C(T)\) depending on \(T\) (and not on \(n\)) such that
$$\begin{aligned} \mathbb {E}[Z(T)]&\le \left[ \frac{1-\varepsilon }{\eta } + R_1+4 \varepsilon \; + 8L_1 + \varepsilon \; L_2 \left( 1+\frac{1}{\alpha }\right) \right] \frac{C}{n} \nonumber \\&\quad +\, 4 C_\eta \left( \mathbb {E}\int \limits _0^T |z^n(s)-u^n(s)|^4\,ds\right) ^{1/2} \left( {\mathbb E} \int \limits _0^T \Vert u(s)\Vert ^8\,ds\right) ^{1/2} \nonumber \\&\quad +\, C \frac{C_\eta }{\eta } \left( \mathbb {E}\int \limits _0^{T\wedge \tau } |u^n(s)-z^n(s)|^4\,ds\right) ^{1/2}\nonumber \\&\quad \times \left( \mathbb {E}\int \limits _0^{T\wedge \tau } \left( \Vert u^n(s)\Vert ^8 + \Vert u(s)\Vert ^8 \right) \,ds\right) ^{1/2} \nonumber \\&\quad +\, \frac{\varepsilon }{\eta }\left( \frac{T}{n} \sup _{s\in [0,T]} \mathbb {E}\Vert u(s)\Vert ^2\right) ^{1/2} \left( \!\mathbb {E}\int \limits _0^T \Vert z^n(s)\!-\!u^n(s)\Vert ^2ds\!\right) ^{1/2} \le \frac{C(T)}{n}.\nonumber \\ \end{aligned}$$
Furthermore, Assumption (G1) and the Burkholder–Davis–Gundy, Cauchy–Schwarz and Young inequalities imply that for any \(\beta >0\):
$$\begin{aligned} \mathbb {E}\left( \sup _{s\le t \wedge \tau }|I(s)|\right)&\le 12 \mathbb {E}\left( \int \limits _0^{t\wedge \tau } \Vert G(s,y^n(s))\!-\! G(s,u(s))\Vert _{\mathcal {T}_2(K,H)}^2 |z^n(s)\!-\!u(s)|^2\,ds \right) ^{1/2}\nonumber \\&\le \beta \mathbb {E}\left( \sup _{s\le t\wedge \tau } |z^n(s)-u(s)|^2\right) \nonumber \\&+\, \frac{36}{\beta } \mathbb {E}\int \limits _0^{t\wedge \tau } \!\!\! \left[ L_1|y^n(s)-u(s)|^2 + \varepsilon \; L_2 \Vert y^n(s)-u(s)\Vert ^2\right] \,ds \nonumber \\&\le \beta \mathbb {E}X(t) + \frac{36 L_1 (1+\delta _1)}{\beta } \int \limits _0^t \mathbb {E} X(s)\,ds \nonumber \\&+ \,\frac{36 \varepsilon \; L_2 (1+\delta _2)}{\beta }\; \mathbb {E} Y(t) + \frac{\tilde{C}}{n}, \end{aligned}$$
where \(\tilde{C}=C \left( \frac{36 L_1}{\beta (1+\delta _1)} + \frac{36 \varepsilon \; L_2}{\beta (1+\delta _2)}\right) \). Choose \(\beta >0\) such that \(2\beta \left( 1+C(\varphi ) e^{C(\varphi )}\right) =1\). Then suppose that \( \varepsilon \; L_2\) is small enough to ensure that
$$\begin{aligned} 72 (1+\delta _2) \varepsilon \; \frac{1}{\beta } L_2\big (1+ C(\varphi ) e^{C(\varphi )}\big ) \le \alpha . \end{aligned}$$
Using an argument similar to that used in the proof of Lemma 3.9 in [11], we deduce that
$$\begin{aligned} X(t)+\alpha Y(t)\le \left[ Z(t)+\sup _{s\le t\wedge \tau } | I(s) | \right] \left( 1+C(\varphi ) e^{C(\varphi )} \right) . \end{aligned}$$
Then taking expectation, using (5.16), the Gronwall lemma and (5.15), we deduce
$$\begin{aligned}&\mathbb {E} X(T) + {\alpha } \mathbb {E}Y(T) \\&\quad \le 2\left( 1 + C(\varphi )e^{C(\varphi )}\right) \left[ \mathbb {E} (Z) + \frac{\tilde{C}}{n} \right] \exp \left[ {144} L_1 (1+\delta _1) \left( 1+C(\varphi ) e^{C(\varphi )} \right) ^2\right] \\&\quad \le \frac{C(T)+\tilde{C}}{n} \exp \left[ C(T) e^{3 C_2 M}\right] \end{aligned}$$
for some positive constant \(C(T)\) which does not depend on \(n\). This completes the proof of (5.2).\(\square \)

The following theorem proves that, properly localized, the sequences \(u^n\) and \(y^n\) converge strongly to \(u\) in \(L^2(\Omega , {\mathbb P})\) for various topologies, and that the “localized” speed of convergence of these processes is \(1/2\).

Theorem 5.2

Let \( \varepsilon \; \in [0,1)\); suppose that Assumptions (R1), (R2), (G1) and (G2) hold with \(L_2\) small enough and \(K_2<\frac{2}{7}\), and let \(u_0\) be \({\mathcal F}_0\)-measurable with \(\mathbb {E}\Vert u_0\Vert ^8<\infty \). Then the processes \(u^n\) and \(y^n\) defined in Sect. 2.3 converge to the solution \(u\) to the stochastic Navier–Stokes equations (2.8) on a \(2\)-D torus. More precisely, there exist positive constants \(C(T)\) and \(\tilde{C}_2\), which do not depend on \(n\) and \(M\), such that for every \(M>0\) and every integer \(n\ge 1\) we have
$$\begin{aligned}&\mathbb {E}\left[ 1_{\Omega _M^n( T )} \sup _{k=0, \cdots , n-1} \left( \sup _{s\in [t_k,t_{k+1})} \left( |u^n(s^+) - u(s)|^2 + |y^n(s^+)-u(s)|^2 \right) \right) \right] \nonumber \\&\quad \le \frac{K(M,T)}{n} \end{aligned}$$
$$\begin{aligned}&\mathbb {E}\Big [ 1_{\Omega _M^n(t)} \int \limits _0^t \!\!\big [ \Vert u^n(s) - u(s)\Vert ^2 + \Vert y^n(s)-u(s)\Vert ^2\big ]ds \Big ] \le \frac{K(M,T)}{n}, \end{aligned}$$
where \(K(M,T):=C(T) \exp \big [ C(T) e^{\tilde{C}_2 M}\big ]\), and
$$\begin{aligned} \Omega _M^n(t)= \Big \{ \omega : \int \limits _0^t \big ( \vert u(s)(\omega )\vert _\mathrm {X}^4 + \vert u^n(s)(\omega )\vert ^4_\mathrm {X}\big )\,ds \le M \Big \} \; \hbox { for } t\in [0,T]. \end{aligned}$$


First note that on \(\Omega _M^n(t)\) we have \(\tau ^n_M\ge t\). Hence, using Proposition 5.1, we deduce that (5.18) holds true. Furthermore, the Cauchy–Schwarz inequality and (4.14) prove that
$$\begin{aligned} \mathbb {E}\Big ( \sup _{k=1, \cdots , n} |z^n(t_k)-y^n(t_k^-)|^2 \Big )&= \mathbb {E}\Big ( \sup _{k=1, \cdots , n} \varepsilon ^2 \Big | \int \limits _{t_k}^{t_{k+1}} A y^n(s)\,ds \Big |^2 \Big ) \\&\le \varepsilon ^2 \frac{T}{n} \mathbb {E}\Big ( \sup _{k=1, \cdots , n} \int \limits _{t_k}^{t_{k+1}} \big |A y^n(s)\big |^2\,ds \Big ) \le \frac{C(T) \varepsilon ^2}{n}. \end{aligned}$$
Therefore, since \( z^n(t_k) =u^n(t_k^+)=y^n(t_k^-)\), Proposition 5.1 yields
$$\begin{aligned} \mathbb {E} \left[ 1_{\Omega _M^n(T )} \sup _{k=0, \cdots , n} \left( |u^n(t_k^+) - u(t_k)|^2 + |y^n(t_k^-)-u(t_k)|^2 \right) \right] \le \frac{K(M,T)}{n}.\nonumber \\ \end{aligned}$$
Finally, using Assumption (G1), Young’s inequality and Lemma 4.2, we obtain that for any \(k=0, \cdots , n-1\):
$$\begin{aligned}&\mathbb {E}\left( \sup _{t\in (t_k,t_{k+1})} |y^n(t)-y^n(t_k^+)|^2 \right) \\&\quad \le \mathbb {E}\left[ \int \limits _{t_k}^{t_{k+1}}\!\! \left( \frac{\varepsilon }{2} \Vert y^n(t_k^-)\Vert ^2 + \left[ K_0+K_1 |y^n(s)|^2 + \varepsilon \; K_2 \Vert y^n(s)\Vert ^2\right] \right) \,ds \right. \\&\left. \qquad + \left( \int \limits _{t_k}^{t_{k+1}} |y^n(s)-y^n(t_k^+)|^2 \left[ K_0 + K_1 |y^n(s)|^2 + K_2 \; \varepsilon \; \Vert y^n(s)\Vert ^2\right] \,ds \right) ^{1/2}\right] \\&\quad \le \frac{1}{2} \mathbb {E}\left( \sup _{t\in (t_k,t_{k+1})} |y^n(t)-y^n(t_k^+)|^2 \right) + C \frac{T}{n} \sup _{s\in [0,T]} \mathbb {E}\Vert y^n(s)\Vert ^2, \end{aligned}$$
so that
$$\begin{aligned} \mathbb {E}\Big ( \sup _{t\in (t_k,t_{k+1})} |y^n(t)-y^n(t_k^+)|^2 \Big ) \le \frac{C(T)}{n}. \end{aligned}$$
Using the Itô Lemma, inequalities (2.5) and (4.8), the Cauchy–Schwarz and Young inequalities, Assumption (G1) and the Burkholder–Davis–Gundy inequality, we deduce that
$$\begin{aligned}&\mathbb {E}\left( \sup _{t\in (t_k, t_{k+1}) }|u(t)-u(t_k^+)|^2\right) \\&\quad \le \mathbb {E}\left( \sup _{t\in (t_k, t_{k+1}) } \int \limits _{t_k}^t \left[ - 2 \Vert u(s)\Vert ^2 + 2 \Vert u(s) \Vert \Vert u(t_k^+)\Vert \right. \right. \\&\qquad + \,\vert u(s)\vert _\mathrm {X}^2 \Vert u(s)-u(t_k^+)\Vert ^2 +(R_0+R_1 |u(s)|)\, |u(s)-u(t_k^+)| \\&\left. \left. \qquad + \left( K_0+K_1 |u(s)|^2 + \varepsilon \; K_2 \Vert u(s) \Vert ^2\right) \right] \,ds \right) \\&\qquad +\, \mathbb {E} \left( \int \limits _{t_k}^{t_{k+1}} |u(s)-u(t_k^+)|^2 \left[ K_0+K_1 |u(s)|^2 + \varepsilon \; K_2 \Vert u(s)\Vert ^2 \right] \,ds \right) ^{1/2}\\&\quad \le \frac{1}{2} \mathbb {E}\left( \sup _{t\in (t_k, t_{k+1}) }|u(t)-u(t_k^+)|^2\right) +\frac{C}{n} \left( 1+ \sup _{t\in [0,T]} \mathbb {E} \Vert u(t)\Vert ^2 \right) , \end{aligned}$$
and hence
$$\begin{aligned} \mathbb {E}\left( \sup _{t\in (t_k, t_{k+1}) }|u(t)-u(t_k^+)|^2\right) \le \frac{C(T)}{n}. \end{aligned}$$
A similar but simpler argument using inequalities (2.5) and (4.13) yields
$$\begin{aligned}&\mathbb {E}\left( \sup _{t\in (t_k, t_{k+1}) }|u^n(t)-u^n(t_k^+)|^2\right) \nonumber \\&\quad \le \frac{C}{n} \left( 1+ \sup _{t\in [0,T]} {\mathbb E} [ \Vert u^n(t) \Vert ^4 + \Vert y^n(t)\Vert ^4 ] \right) \le \frac{C(T)}{n}. \end{aligned}$$
The inequalities (5.20)–(5.22) and (5.19) conclude the proof of (5.17).\(\square \)

Corollary 5.3

Let \( \varepsilon \in [0,1)\); assume that the assumptions of Theorem 5.2 are satisfied. For any integer \(n\ge 1\) let \(\tilde{e}_n(T)\) denote the error term defined by
$$\begin{aligned} \tilde{e}_n(T)&= \sup _{k=1,\cdots ,n } \big [ |u^n(t_k^+) - u(t_k)| + |y^n(t_k^+) - u(t_k)| \big ] \!+\! \left( \int \limits _0^T \!\! \Vert u^n(s) - u(s)\Vert ^2 \,ds\! \right) ^{\!\!1/2} \\&\quad + \left( \int \limits _0^T \!\! \Vert y^n(s)-u(s)\Vert ^2\,ds \right) ^{1/2}. \end{aligned}$$
Then \(\tilde{e}_n(T)\) converges to 0 in probability with speed almost \(1/2\). To be precise, for any sequence \(\big (z(n)\big )_{n=1}^\infty \) converging to \(\infty \),
$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}\left( \tilde{e}_n(T)\ge \frac{z(n)}{\sqrt{n}}\right) = 0. \end{aligned}$$
Therefore, since the schemes \(u^n\) and \(y^n\) are right-continuous, they converge to \(u\) in probability in \(H\) with rate almost \(1/2\).


Let \(z(n)\rightarrow \infty \) and set \(M(n):=\ln (\ln ( \ln (z(n))))\); then \(M(n)\rightarrow \infty \). Using (2.13) and (4.13) with \(p=2\) and the Markov inequality, we deduce that \({\mathbb P} ( \Omega \setminus \Omega _{M(n)}^n(T) ) \rightarrow 0\). Moreover, \(C(T) \big (\ln (\ln z(n))\big )^{\tilde{C}_2} - 2 \ln \big ( z(n)\big ) \rightarrow -\infty \) as \(n\rightarrow \infty \) for any positive constant \(C(T)\). Therefore, using inequalities (5.17) and (5.18), the explicit form of \({K}(M(n),T)\), the choice of \(M(n)\) and the Markov inequality, we deduce that
$$\begin{aligned} \mathbb {P}\left( \tilde{e}_n(T)\ge \frac{z(n)}{\sqrt{n}}\right)&\le \mathbb {P}\left( \Omega \setminus \Omega ^n_{M(n)}(T)\right) + \frac{n}{z(n)^2} \mathbb {E}\left( 1_{\Omega ^n_{M(n)}(T) } \tilde{e}_n(T)^2\right) \\&\le \mathbb {P}\left( \Omega \setminus \Omega ^n_{M(n)}(T)\right) + C(T) \; \frac{n}{z(n)^2}\; \frac{1}{n} \exp [ C(T) (\ln (\ln (z(n))))^{\tilde{C}_2}] \rightarrow 0 \end{aligned}$$
as \(n\rightarrow \infty \); this concludes the proof.\(\square \)
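The key limit in the proof can be checked numerically. Writing \(y:=\ln (\ln (z(n)))\), the exponent governing \(K(M(n),T)/z(n)^2\) behaves like \(C\,y^{\tilde{C}_2}-2e^{y}\), which tends to \(-\infty \) because the exponential dominates any power of \(y\). The constants below are placeholders, not values from the paper:

```python
import math

# h(y) = C * y**C2 - 2 * exp(y) with y = ln(ln(z(n))); placeholder constants.
C, C2 = 10.0, 3.0
h = lambda y: C * y**C2 - 2.0 * math.exp(y)
vals = [h(y) for y in (5.0, 10.0, 20.0, 40.0)]
print(vals)    # eventually negative and decreasing to -infinity
```

Note that in terms of \(z(n)\) the crossover can occur only at astronomically large values, which is harmless since only the limit \(n\rightarrow \infty \) matters.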



This paper was partially written while H. Bessaih was an invited professor at the University of Paris 1 Panthéon-Sorbonne. Parts of this paper were also written while the three authors were visiting the Bernoulli Center in Lausanne. They would like to thank the Bernoulli Center for the financial support, the very good working conditions and the friendly atmosphere.

Finally, the authors would like to thank the anonymous referees for their careful reading and valuable comments.


  1. Barbu, V.: private communication (October 2011, Innsbruck, Austria)
  2. Bensoussan, A.: Some existence results for stochastic partial differential equations (Trento, 1990). Pitman Res. Notes Math. Ser. 268, pp. 37–53. Longman Sci. Tech., Harlow
  3. Bensoussan, A., Glowinski, R., Rãscanu, A.: Approximation of some stochastic differential equations by the splitting up method. Applied Mathematics and Optimization 25, 81–106 (1992)
  4. Bessaih, H., Millet, A.: Large deviations and the zero viscosity limit for 2D stochastic Navier–Stokes equations with free boundary. SIAM J. Math. Anal. 44(3), 1861–1893 (2012)
  5. Brzeźniak, Z., Carelli, E., Prohl, A.: Finite element based discretizations of the incompressible Navier–Stokes equations with multiplicative random forcing. IMA J. Numer. Anal. 33(3), 771–824 (2013)
  6. Brzeźniak, Z., Millet, A.: On the splitting method for some complex-valued quasilinear evolution equations. Springer Proceedings in Mathematics and Statistics, Vol. 22, pp. 57–90. Springer-Verlag (2012)
  7. Carelli, E., Prohl, A.: Rates of convergence for discretizations of the stochastic incompressible Navier–Stokes equations. SIAM J. Numer. Anal. 50(5), 2467–2496 (2012)
  8. Cattabriga, L.: Su un problema al contorno relativo al sistema di equazioni di Stokes. Rend. Sem. Mat. Univ. Padova 31, 308–340 (1961)
  9. Chueshov, I., Millet, A.: Stochastic 2D hydrodynamical type systems: well posedness and large deviations. Appl. Math. Optim. 61(3), 379–420 (2010)
  10. Dörsek, P.: Semigroup splitting and cubature approximations for the stochastic Navier–Stokes equations. SIAM J. Numer. Anal. 50(2), 729–746 (2012)
  11. Duan, J., Millet, A.: Large deviations for the Boussinesq equations under random influences. Stochastic Processes and their Applications 119(6), 2052–2081 (2009)
  12. Flandoli, F.: An introduction to 3D stochastic fluid dynamics. In: SPDE in Hydrodynamic: Recent Progress and Prospects, Lecture Notes in Mathematics 1942, pp. 51–150. Springer-Verlag, Berlin; Fondazione C.I.M.E., Florence (2008)
  13. Flandoli, F., Gatarek, D.: Martingale and stationary solutions for stochastic Navier–Stokes equations. Probability Theory and Related Fields 102, 367–391 (1995)
  14. Funaki, T.: A stochastic partial differential equation with values in a manifold. J. Functional Analysis 109, 257–288 (1992)
  15. Gyöngy, I., Krylov, N.V.: On the splitting-up method and stochastic partial differential equations. The Annals of Probability 31(2), 564–591 (2003)
  16. Gyöngy, I., Krylov, N.V.: An accelerated splitting-up method for parabolic equations. SIAM J. Math. Anal. 37(4), 1070–1097 (2005)
  17. Krylov, N., Rozovskii, B.: Stochastic evolution equations. J. Soviet Mathematics 16, 1233–1277 (1981)
  18. Nagase, N.: Remarks on nonlinear stochastic partial differential equations: an application of the splitting-up method. SIAM J. Control and Optimization 33(6), 1716–1730 (1995)
  19. Printems, J.: On the discretization in time of parabolic stochastic partial differential equations. M2AN Math. Model. Numer. Anal. 35, 1055–1078 (2001)
  20. Schmalfuss, B.: Qualitative properties for the stochastic Navier–Stokes equation. Nonlin. Anal. 28, 1545–1563 (1997)
  21. Sritharan, S.S., Sundar, P.: Large deviations for the two-dimensional Navier–Stokes equations with multiplicative noise. Stochastic Process. Appl. 116(11), 1636–1659 (2006)
  22. Temam, R.: Navier–Stokes Equations and Nonlinear Functional Analysis. CBMS-NSF Regional Conference Series in Applied Mathematics 41, SIAM, Philadelphia, PA (1983)
  23. Temam, R.: Navier–Stokes Equations. Theory and Numerical Analysis. Studies in Mathematics and its Applications 2, North-Holland, Amsterdam–New York (1979)
  24. Yudovich, V.I.: Uniqueness theorem for the basic nonstationary problem in the dynamics of an ideal incompressible fluid. Mathematical Research Letters 2, 27–38 (1995)

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Mathematics, Dept. 3036, 1000 East University Avenue, University of Wyoming, Laramie, USA
  2. Department of Mathematics, University of York, York, UK
  3. SAMM, EA 4543, Université Paris 1 Panthéon-Sorbonne, Paris Cedex, France
  4. Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6–Paris 7, Paris Cedex 05, France
