1 Introduction and main results

We consider the following stochastic generalized Camassa–Holm (CH) equation on \(\mathbb R\):

$$\begin{aligned} u_t-u_{xxt}+(k+2)u^ku_x-(1-\partial ^2_{x})h(t,u)\dot{\mathcal W}=(k+1)u^{k-1}u_xu_{xx}+u^ku_{xxx},\quad k\in \mathbb N_{>0}. \end{aligned}$$
(1.1)

In (1.1), \(\mathcal W\) is a cylindrical Wiener process.

For \(h= 0\) and \(k=1\), Eq. (1.1) reduces to the deterministic CH equation given by

$$\begin{aligned} u_{t}-u_{xxt}+3uu_{x}=2u_{x}u_{xx}+uu_{xxx}. \end{aligned}$$
(1.2)

Equation (1.2) was introduced by Fokas and Fuchssteiner [21] in the study of completely integrable generalizations of the Korteweg–de Vries equation with a bi-Hamiltonian structure. In [10], Camassa and Holm showed that (1.2) models the unidirectional propagation of shallow water waves over a flat bottom. Since then, (1.2) has been studied intensively, and we only mention a few related results here. The CH equation exhibits both soliton interaction (peaked soliton solutions, or peakons) and wave breaking (the solution remains bounded while its slope becomes unbounded in finite time [16]).
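As a concrete instance of the peaked solitons (a standard fact, cf. [10]), for every wave speed \(c\in \mathbb R\) the CH equation (1.2) admits the peakon

$$\begin{aligned} u(t,x)=c\,\textrm{e}^{-|x-ct|}, \end{aligned}$$

which solves (1.2) in the distributional sense: the solution stays bounded, but it has a corner at \(x=ct\), so \(u_x\) is discontinuous along the crest.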

When \(h= 0\) and \(k=2\), Eq. (1.1) becomes the so-called Novikov equation

$$\begin{aligned} u_t-u_{xxt}+4u^2u_x=3uu_xu_{xx}+u^2u_{xxx}, \end{aligned}$$
(1.3)

which was derived in [44]. Equation (1.3) also possesses a bi-Hamiltonian structure with an infinite sequence of conserved quantities, and it admits peaked solutions [24], as well as multipeakon solutions with explicit formulas [34]. For the study of other deterministic instances of (1.1), we refer to [28, 60].
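For illustration, the peaked solutions of (1.3) obtained in [24] travel with speed \(c>0\) and take the form

$$\begin{aligned} u(t,x)=\pm \sqrt{c}\,\textrm{e}^{-|x-ct|},\qquad c>0, \end{aligned}$$

again understood in the weak (distributional) sense.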

When additional noise is included, as in [46], the noise term can be used to account for the randomness arising from energy exchange mechanisms. Indeed, in [40, 59], the weakly dissipative term \((1-\partial ^2_{x})(\lambda u)\) with \(\lambda >0\) was added to the governing equations. In [46], such a weakly dissipative term is allowed to be time-dependent, nonlinear in u and random. Accordingly, \((1-\partial ^2_{x})h(t,u)\dot{\mathcal W} \) is proposed to describe random energy exchange mechanisms.

In this work, we consider the Cauchy problem for (1.1) on the whole space \(\mathbb R\). Applying the operator \((1-\partial _{x}^2)^{-1}\) to (1.1), we reformulate the equation as

$$\begin{aligned} \left\{ \begin{aligned} \textrm{d}u+\left[ u^k\partial _xu+F(u)\right] \!\, \textrm{d}t&=h(t,u)\, \textrm{d}\mathcal W,\quad x\in \mathbb R, \ t>0,\ k\in \mathbb N_{>0},\\ u(\omega ,0,x)&=u_0(\omega ,x),\quad x\in \mathbb R, \end{aligned} \right. \end{aligned}$$
(1.4)

with

$$\begin{aligned} F(u):=F_1(u)+F_2(u)+F_3(u)\quad \text {and}\quad \left\{ \begin{aligned}&F_1(u):=(1-\partial _{x}^2)^{-1}\partial _x\left( u^{k+1}\right) ,\\&F_2(u):=\frac{2k-1}{2}(1-\partial _{x}^2)^{-1}\partial _x\left( u^{k-1}u_x^2\right) ,\\&F_3(u):=\frac{k-1}{2}(1-\partial _{x}^2)^{-1}\left( u^{k-2}u_x^3\right) . \end{aligned} \right. \end{aligned}$$
(1.5)

Here we remark that \(F_3(u)\) in (1.5) vanishes in the CH case (i.e., when \(k=1\)). The operator \((1-\partial _{x}^2)^{-1}\) in \(F(\cdot )\) is understood as

$$\begin{aligned} \left[ (1-\partial _{x}^2)^{-1}f\right] (x)=\Big [\frac{1}{2} \textrm{e}^{-|\cdot |}\star f\Big ](x), \end{aligned}$$

where \(\star \) stands for the convolution.
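As a consistency check, for \(k=1\) (so that \(F_3\equiv 0\)) the definition (1.5) reduces to the familiar nonlocal form of the CH equation:

$$\begin{aligned} F(u)=(1-\partial _{x}^2)^{-1}\partial _x\Big (u^2+\frac{1}{2}u_x^2\Big )=\partial _x\Big [\frac{1}{2} \textrm{e}^{-|\cdot |}\star \Big (u^2+\frac{1}{2}u_x^2\Big )\Big ]. \end{aligned}$$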

In this paper, regarding (1.4), we focus on the following issues:

  • Local well-posedness of (1.4) in the sense of Hadamard (existence, uniqueness and continuous dependence on initial data), and a blow-up criterion.

  • Understanding the dependence on initial data, and in particular how continuous the solution map \(u_0\mapsto u\) is.

  • Analyzing the effect of noise on blow-up, in comparison with the deterministic counterpart of (1.4).

For the first and second issues, we refer to Theorems 1.1 and 1.2, respectively. Extended remarks, explanations of difficulties, and a review of the literature are given in Remarks 1.1, 1.2, 1.3 and 1.4.

The third question in our targets is on the impact of noise, which is one of the central questions in the study of stochastic partial differential equations (SPDEs). Regularization effects of noise have been observed for many different models. For example, it is known that the well-posedness of linear stochastic transport equations with noise can be established under weaker hypotheses than its deterministic counterpart, cf. [20]. Particularly, for the impact of linear noise in different models, we refer to [2, 14, 15, 26, 38, 47, 54].

Notably, the existing results on regularization by noise are largely restricted to linear equations or linear noise. Hence we are particularly interested in the case of nonlinear noise. Finding such noise is important as it helps us to understand the stabilizing mechanisms of noise, and it is a first step toward characterizing the noise that provides regularization effects for CH-type equations. To present our ideas in a simple way, we only consider noise given by a 1-D Brownian motion in the current setting. That is, we consider the case \(h(t,u) \, \textrm{d}\mathcal W=q(t,u) \, \textrm{d}W\), where W is a standard 1-D Brownian motion and \(q:[0,\infty )\times H^s\rightarrow H^s\) is a nonlinear function. Here we use the notation q rather than h because h needs to be a Hilbert–Schmidt operator [see (1.8)] to define the stochastic integral with respect to a cylindrical Wiener process \(\mathcal W\). Then we will focus on

$$\begin{aligned} \left\{ \begin{aligned}&\textrm{d}u+\left[ u^ku_x+F(u)\right] \! \, \textrm{d}t = q(t,u) \, \textrm{d}W,\quad x\in \mathbb R, \ t>0,\ k\in \mathbb N_{>0},\\&u(\omega ,0,x)=u_{0}(\omega ,x), \quad x\in \mathbb R. \end{aligned} \right. \end{aligned}$$
(1.6)

In Theorem 1.3, we provide a sufficient condition on q such that global existence can be guaranteed. We refer to Remark 1.5 for further remarks on Theorem 1.3.

Before we introduce the notations, definitions and assumptions, we recall some recent results on stochastic CH-type equations. For stochastic CH-type equations with multiplicative noise, we refer to [46,47,48], where global existence and wave breaking were studied in the periodic case, i.e., \(x\in \mathbb T\). In particular, when the noise is of transport type, we refer to [1, 4, 22, 32, 33]. We also refer to [12, 13, 45] for more results on stochastic CH-type equations.

1.1 Notations

We begin by introducing some notations. Let \(({\Omega },\{\mathcal F_t\}_{t\ge 0}, \mathbb P)\) be a complete probability space endowed with a right-continuous filtration \(\{\mathcal F_t\}_{t\ge 0}\). We consider a separable Hilbert space \(\mathfrak U\) and let \(\{e_n\}\) be a complete orthonormal basis of \(\mathfrak U\). Let \(\{W_n\}_{n\ge 1}\) be a sequence of mutually independent standard 1-D Brownian motions on \(({\Omega },\{\mathcal {F}_t\}_{t\ge 0},\mathbb P)\). Then we define the cylindrical Wiener process \(\mathcal W\) as

$$\begin{aligned} \mathcal W:=\sum _{n=1}^\infty W_ne_n. \end{aligned}$$
(1.7)

Let \(\mathcal X\) be a separable Hilbert space and let \(\mathcal L_2(\mathfrak U; \mathcal X)\) stand for the space of Hilbert–Schmidt operators from \(\mathfrak U\) to \(\mathcal X\). If \(Z\in L^2 ({\Omega }; L^2_\textrm{loc}([0,\infty );\mathcal L_2(\mathfrak U; \mathcal X)))\) is progressively measurable, then the integral

$$\begin{aligned} \int _0^t Z \, \textrm{d}\mathcal W:=\sum _{n=1}^\infty \int _0^t Z e_n \, \textrm{d}W_n \end{aligned}$$
(1.8)

is a well-defined \(\mathcal X\)-valued continuous square-integrable martingale (see, e.g., [5, 23]). Throughout the paper, when a stopping time is defined, we set \(\inf \emptyset :=\infty \) by convention.
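We note in passing (a standard fact, cf. [5, 23]) that the series (1.7) does not converge in \(\mathfrak U\) itself; it converges in any larger Hilbert space \(\mathfrak U_0\supset \mathfrak U\) such that the embedding \(\mathfrak U\hookrightarrow \mathfrak U_0\) is Hilbert–Schmidt, since then

$$\begin{aligned} \mathbb E\Big \Vert \sum _{n=1}^\infty W_n(t)e_n\Big \Vert ^2_{\mathfrak U_0}=t\sum _{n=1}^\infty \Vert e_n\Vert ^2_{\mathfrak U_0}<\infty . \end{aligned}$$

The value of the stochastic integral (1.8) does not depend on the choice of \(\mathfrak U_0\).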

For \(s\in \mathbb R\), the differential operator \(D^s:=(1-\partial _{x}^2)^{s/2}\) is defined by \(\widehat{D^sf}(\xi )=(1+\xi ^2)^{s/2}\widehat{f}(\xi )\), where \(\widehat{f}\) denotes the Fourier transform of f. The Sobolev space \(H^s(\mathbb R)\) is defined as

$$\begin{aligned} H^s(\mathbb R):=\left\{ f\in L^2(\mathbb R):\Vert f\Vert _{H^s(\mathbb R)}^2:=\int _{\mathbb R}(1+|\xi |^2)^s|\widehat{f}(\xi )|^2 \, \textrm{d}\xi <+\infty \right\} , \end{aligned}$$

and the inner product on \(H^s(\mathbb R)\) is \((f,g)_{H^s}:=(D^sf,D^sg)_{L^2}.\) In the sequel, for simplicity, we will drop \(\mathbb R\) if there is no ambiguity. We will use \(\lesssim \) to denote estimates that hold up to some universal deterministic constant which may change from line to line but whose meaning is clear from the context. For linear operators A and B, \([A,B]:=AB-BA\) is the commutator of A and B.
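For orientation, commutators of this type typically enter the energy estimates through the classical Kato–Ponce inequality (a standard tool in the CH literature; the constants appearing in this paper come from the lemmas in the Appendix): for \(s>0\),

$$\begin{aligned} \Vert [D^s,f]g\Vert _{L^2}\lesssim \Vert \partial _xf\Vert _{L^\infty }\Vert g\Vert _{H^{s-1}}+\Vert f\Vert _{H^s}\Vert g\Vert _{L^\infty }, \end{aligned}$$

which controls the loss of one derivative in the transport term \(u^k\partial _xu\).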

1.2 Definitions and assumptions

We first make precise the notion of a solution to (1.4).

Definition 1.1

Let \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) be fixed in advance. Let \(s>3/2\), \(k\in \mathbb N_{>0}\) and let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable.

  1.

    A local solution to (1.4) is a pair \((u,\tau )\), where \(\tau \) is a stopping time satisfying \(\mathbb P\{\tau >0\}=1\) and \(u:{\Omega }\times [0,\infty )\rightarrow H^s\) is an \(\mathcal {F}_t\)-predictable \(H^s\)-valued process satisfying

    $$\begin{aligned} u(\cdot \wedge \tau )\in C([0,\infty );H^s)\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$

    and for all \(t>0\),

    $$\begin{aligned} u(t\wedge \tau )-u(0)+\int _0^{t\wedge \tau } \left[ u^k\partial _xu+F(u)\right] \, \textrm{d}t' =\int _0^{t\wedge \tau }h(t',u) \, \textrm{d}\mathcal W\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
  2.

    The local solutions are said to be unique, if given any two pairs of local solutions \((u_1,\tau _1)\) and \((u_2,\tau _2)\) with \(\mathbb P\left\{ u_1(0)=u_2(0)\right\} =1,\) we have

    $$\begin{aligned} \mathbb P\left\{ u_1(t,x)=u_2(t,x),\ (t,x)\in [0,\tau _1\wedge \tau _2]\times \mathbb R\right\} =1. \end{aligned}$$
  3.

    Additionally, \((u,\tau ^*)\) is called a maximal solution to (1.4) if \(\tau ^*>0\) almost surely and if there is an increasing sequence \(\tau _n\rightarrow \tau ^*\) such that for any \(n\in \mathbb N\), \((u,\tau _n)\) is a solution to (1.4) and on the set \(\{\tau ^*<\infty \}\), we have

    $$\begin{aligned} \sup _{t\in [0,\tau _n]}\Vert u\Vert _{H^s}\ge n. \end{aligned}$$
  4.

    If \((u,\tau ^*)\) is a maximal solution and \(\tau ^*=\infty \) almost surely, then we say that the solution exists globally.

Motivated by [46, 49], we introduce the concept of stability of the exiting time in Sobolev spaces. The exiting time, as its name suggests, is defined as the time when the solution leaves a certain range.

Definition 1.2

(Stability of exiting time) Let \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) be fixed, \(s>3/2\) and \(k\in \mathbb N_{>0}\). Let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\mathbb E\Vert u_0\Vert _{H^s}^2<\infty \). Assume that \(\{u_{0,n}\}\) is a sequence of \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables satisfying \(\mathbb E\Vert u_{0,n}\Vert _{H^s}^2<\infty \). For each n, let u and \(u_n\) be the unique solutions to (1.4), as in Definition 1.1, with initial values \(u_0\) and \(u_{0,n}\), respectively. For any \(R>0\), define the R-exiting times

$$\begin{aligned} \tau _n^R:=\inf \{t\ge 0:\Vert u_n\Vert _{H^s}>R\},\ \ \tau ^R:=\inf \{t\ge 0:\Vert u\Vert _{H^s}>R\}. \end{aligned}$$

Now we define the following properties on stability:

  1.

    If \(u_{0,n}\rightarrow u_0\) in \(H^{s}\) \({\mathbb P}\text {-}\mathrm{a.s.}\) implies that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\tau ^R_{n}=\tau ^R\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$
    (1.9)

    then the R-exiting time of u is said to be stable.

  2.

    If \(u_{0,n}\rightarrow u_0\) in \(H^{s'}\) for all \(s'<s\) almost surely implies that (1.9) holds true, the R-exiting time of u is said to be strongly stable.

Our main results rely on the following assumptions concerning the noise coefficient \(h(t,u)\) in (1.1).

Hypothesis \({\textbf {H}}_{\textbf {1}}\)  For \(s>1/2\), we assume that \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in \mathcal L_2(\mathfrak U; H^s)\) is measurable and satisfies the following conditions:

\(H_1(1)\):

There is a non-decreasing function \(f(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\), we have the following growth condition

$$\begin{aligned} \sup _{t\ge 0}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^s)}\le f(\Vert u\Vert _{W^{1,\infty }}) (1+\Vert u\Vert _{H^s}). \end{aligned}$$
\(H_1(2)\):

There is a non-decreasing function \(g_1(\cdot ):[0,\infty )\rightarrow [0,\infty )\) such that for all \(N\ge 1\),

$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^s},\,\Vert v\Vert _{H^s}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^s)}}{\Vert u-v\Vert _{H^s}}\right\} \le g_1(N),\ \ s>3/2. \end{aligned}$$
\(H_1(3)\):

There is a non-decreasing function \(g_2(\cdot ):[0,\infty )\rightarrow [0,\infty )\) such that for all \(N\ge 1\) and \(3/2\ge s>1/2\),

$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{s+1}},\,\Vert v\Vert _{H^{s+1}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^s)}}{\Vert u-v\Vert _{H^s}}\right\} \le g_2(N). \end{aligned}$$

Here we point out that \(H_1(2)\) is the classical local Lipschitz condition. \(H_1(3)\) is needed to prove uniqueness in Lemma 3.1. Indeed, given two solutions \(u,v\in H^s\) to (1.4), one can only estimate \(u-v\) in \(H^{s'}\) for \(s'\le s-1\) because the term \(u^ku_x\) loses one derivative. We refer to Remark 1.1 for more details.
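For illustration only (the hypotheses are designed to cover genuinely nonlinear noise), a simple example satisfying \(H_1(1)\)–\(H_1(3)\) is the linear diagonal noise \(h(t,u)e_n:=c_nu\) with \(\sum _{n\ge 1}c_n^2<\infty \). Indeed,

$$\begin{aligned} \Vert h(t,u)\Vert ^2_{\mathcal L_2(\mathfrak U;H^s)}=\sum _{n=1}^\infty c_n^2\Vert u\Vert ^2_{H^s},\qquad \Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U;H^s)}=\Big (\sum _{n=1}^\infty c_n^2\Big )^{1/2}\Vert u-v\Vert _{H^s}, \end{aligned}$$

so all three conditions hold with constant functions \(f\), \(g_1\) and \(g_2\).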

Hypothesis \({\textbf {H}}_{\textbf {2}}\) When we consider (1.4) in Sect. 4, we assume that there is a real number \(\rho _0\in (1/2,1)\) such that for \(s\ge \rho _0\), \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in \mathcal L_2(\mathfrak U; H^s)\) is measurable. Besides, we suppose the following:

\(H_2(1)\):

There exists a non-decreasing function \(l(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\),

$$\begin{aligned} \sup _{t\ge 0}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^s)}\le l(\Vert u\Vert _{W^{1,\infty }})\Vert u\Vert _{H^s}, \end{aligned}$$

and \(H_1(2)\) holds.

\(H_2(2)\):

There is a non-decreasing function \(g_3(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for all \(N\ge 1\),

$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^s}\le N}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^{\rho _0})}\le g_3(N) \textrm{e}^{-\frac{1}{\Vert u\Vert _{H^{\rho _0}}}},\ \ s>3/2, \end{aligned}$$
(1.10)

and

$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{\rho _0}},\,\Vert v\Vert _{H^{\rho _0}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^{\rho _0})}}{\Vert u-v\Vert _{H^{\rho _0}}}\right\} \le g_3(N). \end{aligned}$$

We remark here that (1.10) means the following: there is a \(\rho _0\in (1/2,1)\) such that, if \(u_n\) is bounded in \(H^s\) and \(u_n\) tends to zero in the topology of \(H^{\rho _0}\) as \(n\rightarrow \infty \), then \(\Vert h(t,u_n)\Vert _{\mathcal L_2(\mathfrak U; H^{\rho _0})}\) tends to zero exponentially fast as \(n\rightarrow \infty \). Examples of such a noise structure can be found in Sect. 4.4.

As for the regularization effect of noise, we impose the following condition on q in (1.6):

Hypothesis \({\textbf {H}}_{\textbf {3}}\)  We assume that when \(s>3/2\), \(q:[0,\infty )\times H^s\ni (t,u)\mapsto q(t,u)\in H^s\) is measurable. Define the set \(\mathcal {V}\subset C^2([0,\infty );[0,\infty ))\) by

$$\begin{aligned} \mathcal {V}:=\left\{ V\in C^2([0,\infty );[0,\infty )):\ V(0)=0,\ V'(x)>0,\ V''(x)\le 0\ \text {for all}\ x\ge 0,\ \lim _{x\rightarrow \infty } V(x)=\infty \right\} . \end{aligned}$$

Then we assume the following:

\(H_3(1)\):

There is a non-decreasing function \(g_4(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\), we have the following growth condition

$$\begin{aligned} \sup _{t\ge 0}\Vert q(t,u)\Vert _{H^s}\le g_4(\Vert u\Vert _{W^{1,\infty }}) (1+\Vert u\Vert _{H^s}). \end{aligned}$$
\(H_3(2)\):

\(q(\cdot ,u)\) is bounded for all \(u\in H^s\) and there is a non-decreasing function \(g_4(\cdot ):[0,\infty )\rightarrow [0,\infty )\), such that

$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{s}},\,\Vert v\Vert _{H^{s}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert q(t,u)-q(t,v)\Vert _{H^s}}{\Vert u-v\Vert _{H^s}}\right\} \le g_4(N),\ \ N\ge 1,\ s>3/2. \end{aligned}$$
\(H_3(3)\):

There is a \(V\in \mathcal {V}\) and constants \(N_1, N_2>0\) such that for all \((t,u)\in [0,\infty )\times H^s\) with \(s>3/2\),

$$\begin{aligned} \mathcal {H}_{s}(t,u) \le N_1 -N_2 \frac{\left\{ V'(\Vert u\Vert ^2_{H^s})\left| \left( q(t,u), u\right) _{H^s}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^s})}, \end{aligned}$$

where

$$\begin{aligned} \mathcal {H}_{s}(t,u):=\,&V'(\Vert u\Vert ^2_{H^{s}})\Big \{ 2\lambda _s \Vert u\Vert ^k_{W^{1,\infty }}\Vert u\Vert ^2_{H^{s}}+\Vert q(t,u)\Vert ^2_{H^{s}} \Big \}\\&+ 2V''(\Vert u\Vert ^2_{H^{s}})\left| \left( q(t,u), u\right) _{H^{s}}\right| ^2 \end{aligned}$$

and \(\lambda _s>0\) is the constant given in Lemma A.6 below.

Examples of the noise structure satisfying Hypothesis \(H_3\) can be found in Sect. 5.2.
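As a simple illustration (not necessarily the choice made in Sect. 5.2), the logarithm \(V(x)=\log (1+x)\) belongs to \(\mathcal {V}\):

$$\begin{aligned} V(0)=0,\quad V'(x)=\frac{1}{1+x}>0,\quad V''(x)=-\frac{1}{(1+x)^2}\le 0,\quad \lim _{x\rightarrow \infty }\log (1+x)=\infty . \end{aligned}$$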

1.3 Main results and remarks

Now we summarize our main contributions; the proofs are given in the remainder of the paper.

Theorem 1.1

Let \(s>3/2\), \(k\ge 1\) and let h(tu) satisfy Hypothesis \(H_1\). Assume that \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable satisfying \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). Then

  (i)

    (Existence and uniqueness) There is a unique local solution \((u,\tau )\) to (1.4) in the sense of Definition 1.1 with

    $$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)\Vert ^2_{H^s}<\infty . \end{aligned}$$
    (1.11)
  (ii)

    (Blow-up criterion) The local solution \((u,\tau )\) can be extended to a unique maximal solution \((u,\tau ^*)\) with

    $$\begin{aligned} {\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{H^{s}}=\infty \right\} }={\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{W^{1,\infty }}=\infty \right\} }\ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
    (1.12)
  (iii)

    (Stability for almost surely bounded initial data) Assume additionally that \(u_0\in L^\infty ({\Omega };H^s)\). Let \(v_0\in L^\infty ({\Omega };H^s)\) be another \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable. For any \(T>0\) and any \(\epsilon >0\), there is a \(\delta =\delta (\epsilon ,u_0,T)>0\) such that if

    $$\begin{aligned} \Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}<\delta , \end{aligned}$$
    (1.13)

    then there is a stopping time \(\tau \) with \(\tau \in (0,T]\) \({\mathbb P}\text {-}\mathrm{a.s.}\) such that

    $$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}<\epsilon , \end{aligned}$$
    (1.14)

    where u and v are the solutions to (1.4) with initial data \(u_0\) and \(v_0\), respectively.

Remark 1.1

Existence and uniqueness have been studied for a wide range of SPDEs. In many works, the authors did not address the continuous dependence on initial data. In this work, our Theorem 1.1 provides a local well-posedness result in the sense of Hadamard, including the continuous dependence on initial data. Moreover, a blow-up criterion is also obtained. We refer to [11, 19, 42] for the study of the dependence on the initial data in cases where solutions to the target problems exist globally. However, it is necessary to point out that almost nothing is known about the dependence on initial data for SPDEs whose solutions may blow up in finite time.

The key difficulty in such a case is as follows: on the one hand, if solutions to a nonlinear SPDE blow up in finite time, it is usually very difficult to obtain lifespan estimates. On the other hand, we have to find a positive time \(\tau \) to obtain an inequality like (1.14). In addition, the target problem (1.4) is more difficult because the classical Itô formulae are not applicable. Indeed, for \(u_0\in H^s\), we only know \(u\in H^s\) because this is a transport-type equation, and hence \(u^ku_x\in H^{s-1}\). However, the inner product \( \left( u^ku_x, u\right) _{H^s}\) appears if one uses the Itô formula in a Hilbert space (cf. [23, Theorem 2.10]), and the dual product \({}_{H^{s-1}}\langle u^ku_x, u\rangle _{H^{s+1}}\) appears in the Itô formula under a Gelfand triplet (cf. [39, Theorem I.3.1]). Since we only have \(u\in H^s\) and \(u^ku_x\in H^{s-1}\), neither of these is well-defined. Likewise, when we consider the \(H^s\)-norm of the difference between two solutions \(u,v\in H^s\) to (1.4), we have to handle \((u^ku_x-v^kv_x,u-v)_{H^s}\), which requires control of either \(\Vert u\Vert _{H^{s+1}}\) or \(\Vert v\Vert _{H^{s+1}}\).

Remark 1.2

Now we list some technical remarks on the statements of Theorem 1.1.

  (1).

    Our proof of (i) in Theorem 1.1 is motivated by the recent results in [55]. For the convenience of the reader, we also give a brief comparison between our approach and the framework employed in many previous works.

    • We first briefly review the martingale approach used to prove existence for nonlinear SPDEs. Roughly speaking, in searching for a solution to a nonlinear SPDE in some space \(\mathcal X\), the martingale approach, as its name would suggest, consists of obtaining a martingale solution first and then establishing pathwise uniqueness to obtain the pathwise solution. To begin with, one needs to approximate the equation and establish uniform estimates. For nonlinear problems, one may have to add a cut-off function to cut the nonlinear parts growing in some space \(\mathcal {Z}\) with \(\mathcal X\hookrightarrow \mathcal {Z}\) (the choice of \(\mathcal {Z}\) depends on the concrete problem). As far as we know, the cut-off technique first appears in [17] for the stochastic Schrödinger equation. This cut-off enables us to split the expectation of the nonlinear terms, and then the \(L^2({\Omega }; \mathcal X)\) estimate can be closed. For example, for (1.4), the estimate for \(\mathbb E\Vert u\Vert ^2_{H^s}\) gives rise to \(\mathbb E\left( \Vert u\Vert _{W^{1,\infty }}\Vert u\Vert ^2_{H^s}\right) \), and hence we need to add a function to cut \(\Vert \cdot \Vert _{W^{1,\infty }}\). With this additional cut-off, we first consider the cut-off version of the problem and then remove the cut-off. The first main step in the martingale approach is finding a martingale solution. Usually, this is done by first obtaining tightness of the measures defined by the approximative solutions in some space \(\mathcal Y\), and then using Prokhorov's Theorem and Skorokhod's Theorem to obtain convergence in \(\mathcal Y\). Since \(\mathcal X\) is usually infinite-dimensional (typically a Sobolev space), to obtain tightness it is required that \(\mathcal X\) is compactly embedded into \(\mathcal Y\), i.e., \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\). This brings another requirement on \(\mathcal {Z}\), namely \(\mathcal Y\hookrightarrow \mathcal {Z}\). Otherwise, taking limits will not bring us back to the cut-off problem due to the additional cut-off term \(\Vert \cdot \Vert _{\mathcal {Z}}\) (in some cases, the choice of \(\mathcal {Z}\) may only give rise to a semi-norm, and we use the notation \(\Vert \cdot \Vert _{\mathcal {Z}}\) only for simplicity). Usually, in bounded domains, it is not difficult to pick \(\mathcal Y\) and \(\mathcal {Z}\) such that \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\hookrightarrow \mathcal {Z}\) (Sobolev spaces enjoy compact embeddings in bounded domains); see for example [4, 9, 18, 26, 48]. In unbounded domains, the difficulty lies precisely in this choice of \(\mathcal Y\) and \(\mathcal {Z}\). We refer to [7, 8] for fluid models with certain cancellation properties (for example, the divergence-free condition) and linearly growing noise. However, it is difficult to achieve this for SPDEs with general nonlinear terms and nonlinear noise. For instance, the cut-off in our case has to involve \(\Vert \cdot \Vert _{\mathcal {Z}}=\Vert \cdot \Vert _{W^{1,\infty }}\) [see \(H_1(1)\) and (2.3)]. Even though we can obtain convergence in \(H_\textrm{loc}^{s'}\) for some \(\frac{3}{2}<s'<s\), it is still not clear whether the convergence holds true in \(W^{1,\infty }\), because local convergence cannot control the global quantity \(\Vert \cdot \Vert _{W^{1,\infty }}\). Therefore, technically speaking, nonlinear SPDEs are more non-local than their deterministic counterparts.

    • Due to the above unsolved technical issue, the martingale approach is difficult to apply to our problem, and we instead prove convergence directly, motivated by [41, 55] (see also [49, 53, 54] for recent developments). Generally speaking, we analyze the difference between two approximative solutions and directly find a space \(\mathcal Y\) such that \(\mathcal X\hookrightarrow \mathcal Y\hookrightarrow \mathcal {Z}\) and convergence (up to a subsequence) holds true in \(\mathcal Y\). The difficult part is establishing convergence in \(\mathcal Y\) without the compactness \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\) (in the martingale approach, tightness comes from the compact embedding \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\)). In this paper, the target path space is \(C([0,T];H^{s})=\mathcal X\), and we are able to prove convergence (up to a subsequence) in \(C([0,T];H^{s-\frac{3}{2}})=\mathcal Y\) directly. After taking limits to obtain a solution, one can improve the regularity to \(H^s\) again; the technical difficulty in this step is to prove the time continuity of the solution, because the classical Itô formula is not applicable (see Remark 1.1). To overcome this difficulty, we apply a mollifier \(J_\varepsilon \) to the equation and estimate \(\mathbb E\Vert J_\varepsilon u\Vert ^2_{H^s}\) first [see (2.11)]. We also remark that the techniques for removing the cut-off have been used in [5, 25, 54]. Here we formulate such a technical result in Lemma A.7 in an abstract way.

  (2).

    Now we give a remark on (iii) in Theorem 1.1. Regarding the question of dependence on initial data, there are some delicate differences between the stochastic and the deterministic cases. In the deterministic counterpart of (1.4), due to the lifespan estimate [see (4.10) for instance], for given \(u_0\in H^s\) it can be shown that if \(\Vert u_0-v_0\Vert _{H^s}\) is small enough, then there is a \(T>0\) depending on \(u_0\) such that \(\sup _{t\in [0,T]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) is also small. In the stochastic setting, since existence and uniqueness are obtained in the framework of \(L^2({\Omega };H^s)\), it is therefore very natural to expect that, for given \(u_0\in L^2({\Omega };H^s)\), if \(\mathbb E\Vert u_0-v_0\Vert ^2_{H^s}\) is small enough, then for some almost surely positive \(\tau \) depending on \(u_0\), \(\mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) is also small. However, so far we have only proved this under the smallness of \(\Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}\). Since \(L^\infty ({\Omega };H^s)\) can be viewed as being less random than \(L^2({\Omega };H^s)\), one may roughly conclude that what the solution map needs in order to be continuous/stable (the initial data and its perturbation lie in \(L^\infty ({\Omega };H^s)\)) is more restrictive than what the existence of such a solution map requires (existence and uniqueness guarantee that a solution map can be defined). For the technical difficulties involved, we have the following explanations:

    • As mentioned in Remark 1.1, when we estimate the \(H^s\)-norm of the difference between two solutions u and v, the \(H^{s+1}\)-norm will appear. Hence, we have to use smooth approximations to make the analysis valid. More precisely, we approximate u and v by smooth processes \(u_\varepsilon \) and \(v_\varepsilon \) and consider

      $$\begin{aligned} \Vert u-v\Vert _{H^s}\le \Vert u-u_{\varepsilon }\Vert _{H^s}+\Vert u_{\varepsilon }-v_{\varepsilon }\Vert _{H^s}+\Vert v_{\varepsilon }-v\Vert _{H^s}. \end{aligned}$$
      (1.15)

      Then all terms can be estimated because \(\Vert u_\varepsilon \Vert _{H^{s+1}}\) and \(\Vert v_\varepsilon \Vert _{H^{s+1}}\) make sense. Here we refer to Remark 3.2 for more details on the construction of such an approximation.

    • In dealing with the above three terms in the stochastic case, two sequences of stopping times (exiting times) are needed to control \(\Vert u_\varepsilon \Vert _{H^s}\) and \(\Vert v_\varepsilon \Vert _{H^s}\) [see (3.20) below]. Since we aim at obtaining \(\tau >0\) almost surely in (1.14) (otherwise the difference between two solutions on the set \(\{\tau =0\}\) cannot be measured), we have to guarantee that the stopping times used in bounding \(\Vert u_\varepsilon \Vert _{H^s}\) and \(\Vert v_\varepsilon \Vert _{H^s}\) have positive lower bounds almost surely. Up to now, we have only achieved this for initial values belonging to \(L^\infty ({\Omega };H^s)\). We also remark that this is different from the proof of existence. There, \(u_\varepsilon \) exists on a common interval [0, T] for all \(\varepsilon \) and enjoys a uniform-in-\(\varepsilon \) estimate (2.4), hence we can get rid of stopping times in the convergence [from (2.8) to (2.9)]. Here we do not have such a common existence interval due to the lack of a lifespan estimate, which is a significant difference between the stochastic and the deterministic cases. Indeed, one can easily find the lifespan estimate for the deterministic counterpart of (1.4) [see (4.10) below].

    • Moreover, even if the above issue can be handled, in dealing with the three terms in (1.15) we are confronted with \(\frac{1}{\varepsilon ^2}\mathbb E\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\) for some suitably chosen \(s'\) [cf. (3.27)]. After \(\varepsilon \) is fixed, the smallness of \(\mathbb E\Vert u_0-v_0\Vert ^2_{H^s}\) is not enough to control \(\frac{1}{\varepsilon ^2}\mathbb E\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\) either. We use the \(L^\infty ({\Omega };H^s)\) condition to take \(\Vert u_0\Vert ^2_{H^{s}}\) out of the expectation. In the deterministic case, where no expectation is involved, \(\frac{1}{\varepsilon ^2}\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\) can be controlled by \(\Vert u_0- v_0\Vert ^2_{H^{s}}\).

Roughly speaking, (iii) in Theorem 1.1 means that for any fixed \(u_0\in L^\infty ({\Omega };H^s)\) and any \(T>0\), if \(\Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}\rightarrow 0\), then

$$\begin{aligned} \exists \, \tau \in (0,T] \ {\mathbb P}\text {-}\mathrm{a.s.}\ \text {such that}\ \mathbb E\Vert u(\cdot \wedge \tau )-v(\cdot \wedge \tau )\Vert ^2_{C([0,T];H^s)}\rightarrow 0, \end{aligned}$$

where u and v are the solutions corresponding to \(u_0\) and \(v_0\), respectively. Below we study this issue quantitatively. The next result provides an at least partially negative answer.

Theorem 1.2

(Weak instability) Let \(s>5/2\) and \(k\ge 1\). If h satisfies Hypothesis \(H_2\), then at least one of the following properties holds true:

  (i)

    For any \(R\gg 1\), the R-exiting time is not strongly stable for the zero solution to (1.4) in the sense of Definition 1.2;

  (ii)

    There is a \(T>0\) such that the solution map \(u_0\mapsto u\) defined by (1.4) is not uniformly continuous as a map from \(L^\infty ({\Omega };H^s)\) into \(L^1({\Omega };C([0,T];H^s))\). More precisely, there exist two sequences of solutions \(u^{1,n}\) and \(u^{2,n}\), and two sequences of stopping times \(\tau _{1,n}\) and \(\tau _{2,n}\), such that

    • For \(i=1,2\), \(\mathbb P\{\tau _{i,n}>0\}=1\) for each \(n>1\). Besides,

      $$\begin{aligned} \lim _{n\rightarrow \infty }\tau _{1,n}=\lim _{n\rightarrow \infty }\tau _{2,n}=\infty \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
      (1.16)
    • For \(i=1,2\), \(u^{i,n}\in C([0,\tau _{i,n}];H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\), and

      $$\begin{aligned} \mathbb E\left( \sup _{t\in [0,\tau _{1,n}]}\Vert u^{1,n}(t)\Vert _{H^s}+\sup _{t\in [0,\tau _{2,n}]}\Vert u^{2,n}(t)\Vert _{H^s}\right) \lesssim 1. \end{aligned}$$
      (1.17)
    • At initial time \(t=0\), for any \(p\in [1,\infty ]\),

      $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert u^{1,n}(0)-u^{2,n}(0)\Vert _{L^p({\Omega };H^s)}=0. \end{aligned}$$
      (1.18)
    • When \(t>0\),

      $$\begin{aligned} \liminf _{n\rightarrow \infty }\mathbb E\sup _{t\in [0,T\wedge \tau _{1,n}\wedge \tau _{2,n}]} \Vert u^{1,n}(t)-u^{2,n}(t)\Vert _{H^s} \gtrsim \left\{ \begin{aligned}&\sup _{t\in [0,T]}|\sin (t)|,\ \text {if}\ k\ \text {is odd},\\&\sup _{t\in [0,T]}\big |\sin \big (\frac{t}{2}\big )\big |,\ \text {if}\ k\ \text {is even}. \end{aligned}\right. \end{aligned}$$
      (1.19)

Remark 1.3

We first briefly outline the main difficulties encountered in the proof of Theorem 1.2 and the main strategies we use.

  1. (1).

    Because we cannot obtain an explicit expression for the solution to (1.4), to prove (1.19) we will construct two sequences of approximate solutions \(\{u_{m,n}\}\) (\(m\in \{1,2\}\)) such that the actual solutions \(\{u^{m,n}\}\) with \(u^{m,n}(0)=u_{m,n}(0)\) satisfy

    $$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb E\sup _{[0,\tau _{m,n}]}\Vert u^{m,n}-u_{m,n}\Vert _{H^s} =0, \end{aligned}$$
    (1.20)

    where \(u^{m,n}\) exists at least on \([0,\tau _{m,n}]\). Then, one can establish (1.19) by estimating \(\{u_{m,n}\}\) rather than \(\{u^{m,n}\}\). We also remark that the construction of the approximate solutions \(u_{m,n}\) for \(x\in \mathbb R\) is more difficult than for \(x\in \mathbb T\) (see [46]), since the approximate solution involves both high- and low-frequency parts (the high-frequency part alone suffices for the case \(x\in \mathbb T\), cf. [46, 55]). The key point is that we need to guarantee \(\inf _{n}\tau _{m,n}>0\) almost surely when dealing with (1.20). Hence we are confronted with a common difficulty in SPDEs again, namely the lack of a lifespan estimate. In deterministic cases, one can easily obtain a lifespan estimate, which provides a common interval [0, T] on which all actual solutions exist (see for example Lemma 4.1). In the stochastic case, so far we have not been able to prove this.

  2. (2).

    To resolve the above difficulty, we observe that the bound \(\inf _{n}\tau _{m,n}>0\) can be connected to the stability property of the exiting time (see Definition 1.2). The condition that the \(R_0\)-exiting time is strongly stable at the zero solution will be used to provide a common existence time \(T>0\) such that, for all n, \(u^{m,n}\) exists up to T (see Lemma 4.4 below). Therefore, to prove Theorem 1.2, we will show that, if the \(R_0\)-exiting time is strongly stable at the zero solution for some \(R_0\gg 1\), then the solution map \(u_0\mapsto u\) defined by (1.4) cannot be uniformly continuous. To obtain (1.20), we estimate the error in \(H^{2s-\rho _0}\) and in \(H^{\rho _0}\), where \(\rho _0\) is given in \(H_2\); (1.20) then follows by interpolation. We remark that (1.18) holds because the approximate solutions are constructed deterministically.
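The interpolation step just mentioned can be made explicit. For \(\rho _0<s\), s is the midpoint of \(\rho _0\) and \(2s-\rho _0\), so the standard Sobolev interpolation inequality gives

$$\begin{aligned} \Vert v\Vert _{H^{s}}\le \Vert v\Vert ^{\frac{1}{2}}_{H^{\rho _0}}\Vert v\Vert ^{\frac{1}{2}}_{H^{2s-\rho _0}}. \end{aligned}$$

Combining this with the Cauchy–Schwarz inequality in \(\omega \), we obtain

$$\begin{aligned} \mathbb E\sup _{[0,\tau _{m,n}]}\Vert u^{m,n}-u_{m,n}\Vert _{H^s} \le \left( \mathbb E\sup _{[0,\tau _{m,n}]}\Vert u^{m,n}-u_{m,n}\Vert ^2_{H^{\rho _0}}\right) ^{\frac{1}{2}} \left( \mathbb E\sup _{[0,\tau _{m,n}]}\Vert u^{m,n}-u_{m,n}\Vert ^2_{H^{2s-\rho _0}}\right) ^{\frac{1}{2}},
\end{aligned}$$

so (1.20) follows from smallness of the error in \(H^{\rho _0}\) together with boundedness of the error in \(H^{2s-\rho _0}\).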

Remark 1.4

With regard to similar results in the literature and further hypotheses, we give some more remarks on Theorem 1.2.

  1. (1).

    In deterministic cases, the issue of the (optimal) initial-data dependence of solutions has been extensively investigated for various nonlinear dispersive and integrable equations. We refer to [35] for the inviscid Burgers equation and to [37] for the Benjamin–Ono equation. For the CH equation we refer the readers to [29, 30] concerning the non-uniform dependence on initial data in Sobolev spaces \(H^s\) with \(s>3/2\). For the first results of this type in Besov spaces, we refer to [50, 56]. In particular, non-uniform dependence on initial data in critical Besov spaces first appeared in [51, 52]. In this work, Theorem 1.2 and (iii) in Theorem 1.1 demonstrate that the continuity of the solution map \(u_{0}\mapsto u\) is almost optimal in the sense that, when the growth of the noise coefficient satisfies certain conditions (cf. Hypothesis \(H_2\)), the map \(u_{0}\mapsto u\) is continuous, but one cannot improve the stability of the exiting time and simultaneously the continuity of the map \(u_{0}\mapsto u\). To the best of our knowledge, results of this type for SPDEs first appeared in [46, 49]. We also refer to [3, 43, 55] for recent developments.

  2. (2).

    It is worthwhile mentioning that, as noted in (1) of Remark 1.3, the strong stability of exiting times is used as a technical “assumption” to handle the lower bound of a sequence of stopping times. So far we have not been able to verify the non-emptiness of this strong stability assumption for the current model. However, if the transport noise \(u_x\circ \, \textrm{d}W\) is considered (W is a standard 1-D Brownian motion and \(\circ \, \textrm{d}W\) denotes the Stratonovich stochastic differential), we might conjecture that either the notion of strong stability of exiting times can be captured, or the solution map \(u_0\mapsto u\) can become more regular than merely continuous. Indeed, if \(h(t,u) \, \textrm{d}\mathcal W\) is replaced by \(u_x\circ \, \textrm{d}W\) in (1.4), one can rewrite the equation in Itô’s form with an additional viscous term \(-\frac{1}{2}u_{xx}\) on the left-hand side of the equation. Therefore, it is reasonable to expect that in this case either the strong stability of exiting times or the continuity of the solution map \(u_0\mapsto u\) can be improved. We refer to [31] and [27] for deterministic examples concerning the continuity of the solution map.

Theorem 1.3

(Noise prevents blow-up) Let \(s>5/2\), \(k\ge 1\) and \(u_0\in H^s\) be an \(\mathcal {F}_0\)-measurable random variable with \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). If Hypothesis \(H_3\) holds true, then the corresponding maximal solution \((u,\tau ^*)\) to (1.6) satisfies

$$\begin{aligned} \mathbb P\left\{ \tau ^*=\infty \right\} =1. \end{aligned}$$

Remark 1.5

We notice that many of the existing results on regularization effects of noise are essentially restricted to linear equations or linearly growing noise. In Theorem 1.3, both the drift and the diffusion terms are nonlinear. We also remark that blow-up can actually occur in the deterministic counterpart of (1.6). For example, when \(k=1\), blow-up (in the form of wave breaking) of solutions to the CH equation can be found in [16]. Therefore, Theorem 1.3 demonstrates that large enough noise can prevent singularities. Indeed, \(H_3(3)\) means that the growth of \(u^ku_x+F(u)\) can be controlled provided that the noise grows fast enough in terms of a Lyapunov-type function V. In contrast to \(H_1(2)\) and \(H_1(3)\), we require \(s>3/2\) in both \(H_3(2)\) and \(H_3(3)\). As stated in Hypothesis \(H_1\), \(H_3(2)\) implies that uniqueness holds true for solutions in \(H^s\) with \(s>5/2\). It seems that one could require \(s>1/2\) in \(H_3(2)\) to guarantee uniqueness in \(H^\rho \) with \(\rho >3/2\), but at present we can only construct examples for the case that \(s>3/2\) is required in both \(H_3(2)\) and \(H_3(3)\).

We outline the remainder of the paper. In Sect. 2, we study a cut-off version of (1.4); in Sect. 3, we remove the cut-off and prove Theorem 1.1. We prove Theorem 1.2 in Sect. 4. Concerning the interplay between noise and blow-up, we prove Theorem 1.3 in Sect. 5.

2 Cut-off version: regular solutions

We first consider a cut-off version of (1.4). To this end, for any \(R>1\), we let \(\chi _R:[0,\infty )\rightarrow [0,1]\) be a \(C^{\infty }\)-function such that \(\chi _R(x)=1\) for \(x\in [0,R]\) and \(\chi _R(x)=0\) for \(x>2R\). Then we consider the following cut-off problem:

$$\begin{aligned} \left\{ \begin{aligned}&\, \textrm{d}u+\chi _R(\Vert u\Vert _{W^{1,\infty }})\left[ u^k\partial _xu+F(u)\right] \, \textrm{d}t=\chi _R(\Vert u\Vert _{W^{1,\infty }})h(t,u) \, \textrm{d}\mathcal W,\\&u(\omega ,0,x)=u_0(\omega ,x)\in H^{s}. \end{aligned} \right. \end{aligned}$$
(2.1)
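For intuition, such a \(\chi _R\) can be constructed explicitly from the standard smooth transition function. The following is a minimal numerical sketch of one admissible choice (an illustration only, not the particular cut-off used in the paper; all function names are ours):

```python
import math

def _bump(t: float) -> float:
    # Smooth on all of R, vanishing to all orders as t -> 0+.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def _transition(t: float) -> float:
    # C^infinity and monotone: equals 0 for t <= 0 and 1 for t >= 1.
    return _bump(t) / (_bump(t) + _bump(1.0 - t))

def chi_R(x: float, R: float) -> float:
    # C^infinity cut-off: chi_R = 1 on [0, R], chi_R = 0 on [2R, infinity),
    # with values in [0, 1] in between.
    return 1.0 - _transition((x - R) / R)
```

On \([0,R]\) the factor \(\chi _R(\Vert u\Vert _{W^{1,\infty }})\) is identically 1, so (2.1) coincides with (1.4) as long as \(\Vert u\Vert _{W^{1,\infty }}\le R\); this is the mechanism exploited in Sect. 3.1 when the cut-off is removed.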

In this section, we aim at proving the following result:

Proposition 2.1

Let \(s>3\), \(k\ge 1\), \(R>1\) and Hypothesis \(H_1\) be satisfied. Assume that \(u_0\in L^2({\Omega };H^{s})\) is an \(H^{s}\)-valued \(\mathcal {F}_0\)-measurable random variable. Then, for any \(T>0\), (2.1) has a solution \(u \in L^2\left( {\Omega }; C\left( [0,T];H^{s}\right) \right) \). More precisely, there is a constant \(C(R,T,u_0)>0\) such that

$$\begin{aligned} \mathbb E\sup _{t\in [0,T]}\Vert u\Vert ^2_{H^{s}}\le C(R,T,u_0). \end{aligned}$$
(2.2)

The proof for Proposition 2.1 is given in the following subsections.

2.1 The approximation scheme

The first step is to construct a suitable approximation scheme. From Lemma A.5, we see that the nonlinear term F(u) preserves the \(H^s\)-regularity of \(u\in H^s\) for any \(s>3/2\). However, to apply the theory of SDEs in Hilbert spaces to (2.1), we have to mollify the transport term \(u^k\partial _xu\), since this product loses one derivative. To this end, we consider the following approximation scheme:

$$\begin{aligned} \left\{ \begin{aligned} \, \textrm{d}u+H_{1,\varepsilon }(u)\, \textrm{d}t&=H_{2}(t,u) \, \textrm{d}\mathcal W,\\ H_{1,\varepsilon }(u)&=\chi _R(\Vert u\Vert _{W^{1,\infty }})\left[ J_{\varepsilon } \left( (J_{\varepsilon }u)^k\partial _xJ_{\varepsilon }u\right) +F(u)\right] ,\\ H_{2}(t,u)&=\chi _R(\Vert u\Vert _{W^{1,\infty }})h(t,u),\\ u(0,x)&=u_0(x)\in H^{s}, \end{aligned} \right. \end{aligned}$$
(2.3)

where \(J_{\varepsilon }\) is the Friedrichs mollifier defined in Appendix A. After mollifying the transport term \(u^k\partial _xu\), it follows from \(H_1(2)\) and Lemmas A.1 and A.5 that for any \(\varepsilon \in (0,1)\), \(H_{1,\varepsilon }(\cdot )\) and \(H_{2}(t,\cdot )\) are locally Lipschitz continuous in \(H^s\) with \(s>\frac{3}{2}\). Besides, we notice that the cut-off function \(\chi _R(\Vert \cdot \Vert _{W^{1,\infty }})\) guarantees the linear growth condition (cf. Lemma A.5 and \(H_1(1)\)). Thus, for fixed \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) and for \(u_0\in L^2({ \Omega };H^s)\) with \(s>3/2\), the existence theory for SDEs in Hilbert spaces (see for example [23]) implies that (2.3) admits a unique solution \(u_{\varepsilon }\in C([0,\infty );H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\).
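To illustrate how a Friedrichs-type mollification acts (the precise \(J_\varepsilon \) is defined in Appendix A; the discrete convolution below is only a grid analogue, with illustrative names):

```python
import numpy as np

def mollifier_kernel(eps: float, dx: float) -> np.ndarray:
    # Standard bump rho(x) ~ exp(-1/(1 - x^2)) on (-1, 1), rescaled to
    # width eps and normalized so that its discrete integral equals 1.
    x = np.arange(-eps, eps + dx, dx) / eps
    inside = np.abs(x) < 1.0
    rho = np.where(inside, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-300)), 0.0)
    return rho / (rho.sum() * dx)

def mollify(u: np.ndarray, eps: float, dx: float) -> np.ndarray:
    # Discrete analogue of J_eps u = rho_eps * u.
    return np.convolve(u, mollifier_kernel(eps, dx), mode="same") * dx
```

For each fixed \(\varepsilon \), \(u\mapsto J_\varepsilon u\) gains smoothness, which is what restores the local Lipschitz continuity of the mollified transport term, while \(J_\varepsilon u\rightarrow u\) as \(\varepsilon \rightarrow 0\).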

2.2 Uniform estimates

Now we establish some uniform-in-\(\varepsilon \) estimates for \(u_\varepsilon \).

Lemma 2.1

Let \(k\ge 1\), \(s>3/2\), \(R>1\) and \(\varepsilon \in (0,1)\). Assume that h satisfies Hypothesis \(H_1\) and \(u_0\in L^2({\Omega };H^s)\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable. Let \(u_{\varepsilon }\in C([0,\infty );H^{s})\) be the unique solution to (2.3). Then for any \(T>0\), there is a constant \(C=C(R,T,u_0)>0\) such that

$$\begin{aligned} \sup _{\varepsilon >0}\mathbb E\sup _{t\in [0,T]}\Vert u_{\varepsilon }(t)\Vert ^2_{H^s} \le C. \end{aligned}$$
(2.4)

Proof

Using the Itô formula for \(\Vert u_\varepsilon \Vert ^2_{H^s}\), we have that for any \(t>0\),

$$\begin{aligned} \, \textrm{d}\Vert u_\varepsilon (t)\Vert ^2_{H^s}&=\, 2\chi _R\left( \Vert u_\varepsilon \Vert _{W^{1,\infty }}\right) \left( h(t,u_\varepsilon ) \, \textrm{d}\mathcal W,u_\varepsilon \right) _{H^s}\nonumber \\&\quad -2\chi _R\left( \Vert u_\varepsilon \Vert _{W^{1,\infty }}\right) \left( D^sJ_{\varepsilon }\left[ (J_{\varepsilon }u_\varepsilon )^k\partial _xJ_{\varepsilon }u_\varepsilon \right] ,D^s u_\varepsilon \right) _{L^2}\, \textrm{d}t\nonumber \\&\quad -2\chi _R\left( \Vert u_\varepsilon \Vert _{W^{1,\infty }}\right) \left( D^s F(u_\varepsilon ),D^s u_\varepsilon \right) _{L^2}\, \textrm{d}t\nonumber \\&\quad + \chi ^2_R(\Vert u_\varepsilon \Vert _{W^{1,\infty }})\Vert h(t,u_\varepsilon )\Vert _{\mathcal L_2(\mathfrak U; H^s)}^2\, \textrm{d}t.\nonumber \end{aligned}$$

On account of Lemmas A.1 and A.3, we derive

$$\begin{aligned}&\,\left| \left( D^sJ_{\varepsilon }\left[ (J_{\varepsilon }u_\varepsilon )^k\partial _xJ_{\varepsilon }u_\varepsilon \right] , D^s u_\varepsilon \right) _{L^2}\right| \le C\Vert u_\varepsilon \Vert ^{k}_{W^{1,\infty }}\Vert u_\varepsilon \Vert ^2_{H^s}. \end{aligned}$$

Therefore, one can infer from the BDG inequality, \(H_1(1)\), Lemma A.5 and the above estimate that

$$\begin{aligned} \mathbb E\sup _{t\in [0,T]}\Vert u_\varepsilon (t)\Vert ^2_{H^s}-\mathbb E\Vert u_0\Vert ^2_{H^s} \le \,&\frac{1}{2}\mathbb E\sup _{t\in [0,T]}\Vert u_\varepsilon \Vert _{H^s}^2+ C_R\mathbb E\int _0^{T}\left( 1+\Vert u_\varepsilon \Vert ^2_{H^s}\right) \, \textrm{d}t, \end{aligned}$$

which implies

$$\begin{aligned} \mathbb E\sup _{t\in [0,T]}\Vert u_\varepsilon (t)\Vert ^2_{H^s} \le 2\mathbb E\Vert u_0\Vert ^2_{H^s}+ C_R\int _0^{T} \left( 1+\mathbb E\sup _{t'\in [0,t]}\Vert u_\varepsilon (t')\Vert _{H^s}^2\right) \, \textrm{d}t. \end{aligned}$$
(2.5)

Using Grönwall’s inequality in (2.5) implies (2.4). \(\square \)
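For completeness, the Grönwall step can be made explicit: setting \(f(t):=\mathbb E\sup _{t'\in [0,t]}\Vert u_\varepsilon (t')\Vert ^2_{H^s}\), (2.5) reads

$$\begin{aligned} f(T)\le 2\mathbb E\Vert u_0\Vert ^2_{H^s}+C_RT+C_R\int _0^{T}f(t)\, \textrm{d}t, \end{aligned}$$

and Grönwall’s inequality yields \(f(T)\le \left( 2\mathbb E\Vert u_0\Vert ^2_{H^s}+C_RT\right) e^{C_RT}\), a bound independent of \(\varepsilon \in (0,1)\), which is exactly (2.4).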

2.3 Convergence of the approximate solutions

Now we are going to show that the family \(\{u_\varepsilon \}\) contains a convergent subsequence. For two different approximation parameters \(\varepsilon \) and \(\eta \), we see that \(v_{\varepsilon ,\eta }:=u_\varepsilon -u_\eta \) satisfies the following problem:

$$\begin{aligned} \textrm{d}v_{\varepsilon ,\eta }+\Big (\sum _{i=1}^{8}q_i\Big ) \, \textrm{d}t=\,\Big (\sum _{i=9}^{10}q_i\Big ) \, \textrm{d}\mathcal W,\ \ v_{\varepsilon ,\eta }(0,x)=0, \end{aligned}$$
(2.6)

where

$$\begin{aligned} q_1:=\,&\left[ \chi _R(\Vert u_{\varepsilon }\Vert _{W^{1,\infty }})-\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) \right] J_\varepsilon [(J_\varepsilon u_\varepsilon )^k\partial _xJ_\varepsilon u_\varepsilon ],\\ q_2:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) (J_\varepsilon -J_\eta )[(J_\varepsilon u_\varepsilon )^k\partial _xJ_\varepsilon u_\varepsilon ],\\ q_3:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) J_\eta [((J_\varepsilon u_\varepsilon )^k-(J_\eta u_\varepsilon )^k)\partial _xJ_\varepsilon u_\varepsilon ],\\ q_4:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) J_\eta [((J_\eta u_\varepsilon )^k-(J_\eta u_\eta )^k)\partial _xJ_\varepsilon u_\varepsilon ],\\ q_5:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) J_\eta [(J_\eta u_{\eta })^k\partial _x(J_\varepsilon -J_\eta ) u_\varepsilon ],\\ q_6:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) J_\eta [(J_\eta u_{\eta })^k\partial _xJ_\eta (u_\varepsilon -u_\eta )],\\ q_7:=\,&\left[ \chi _R(\Vert u_{\varepsilon }\Vert _{W^{1,\infty }})-\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) \right] F(u_\varepsilon ),\\ q_8:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) [F(u_\varepsilon )-F(u_\eta )],\\ q_9:=\,&\left[ \chi _R(\Vert u_{\varepsilon }\Vert _{W^{1,\infty }})-\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) \right] h(t,u_\varepsilon ),\\ q_{10}:=\,&\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) [h(t,u_\varepsilon )-h(t,u_\eta )]. \end{aligned}$$

Lemma 2.2

Let \(s>3\), \(k\ge 1\), and set \(\mathcal {G}(x):=x^{2k+2}+1\). Then there is a constant \(C>0\) such that for any \(\varepsilon ,\eta \in (0,1)\),

$$\begin{aligned} \sum _{i=1}^{8}\left| (q_i, v_{\varepsilon ,\eta })_{H^{s-\frac{3}{2}}}\right| \le \,&C\mathcal {G}\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) \left( \Vert v_{\varepsilon ,\eta }\Vert ^2_{H^{s-\frac{3}{2}}}+\max \{\varepsilon ,\eta \}\right) . \end{aligned}$$

Proof

Using Lemmas A.1, A.3 and A.5, the mean value theorem for \(\chi _R(\cdot )\), and the embedding \(H^{s-\frac{3}{2}}\hookrightarrow W^{1,\infty }\), we have that for some \(C>0\),

$$\begin{aligned} \left\| D^{s-\frac{3}{2}}q_1\right\| _{L^2},\ \left\| D^{s-\frac{3}{2}}q_7\right\| _{L^2} \le \,&C\Vert v_{\varepsilon ,\eta }\Vert _{H^{s-\frac{3}{2}}}\Vert u_\varepsilon \Vert ^{k+1}_{H^{s}}, \end{aligned}$$

and

$$\begin{aligned} \left\| D^{s-\frac{3}{2}}q_8\right\| _{L^2} \le C\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) ^{k}\Vert v_{\varepsilon ,\eta }\Vert _{H^{s-\frac{3}{2}}}. \end{aligned}$$

Using Lemma A.1, we see that

$$\begin{aligned} \left\| D^{s-\frac{3}{2}}q_i\right\| _{L^2}&\quad \le C\max \{\varepsilon ^{1/2},\eta ^{1/2}\}\Vert u_\varepsilon \Vert ^{k+1}_{H^{s}},\ i=2,3,\\ \left\| D^{s-\frac{3}{2}}q_4\right\| _{L^2}&\quad \le C\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) ^{k-1}\Vert v_{\varepsilon ,\eta }\Vert _{H^{s-\frac{3}{2}}}\Vert u_\varepsilon \Vert _{H^{s}},\\ \left\| D^{s-\frac{3}{2}}q_5\right\| _{L^2}&\quad \le C\max \{\varepsilon ^{1/2},\eta ^{1/2}\}\Vert u_\varepsilon \Vert _{H^{s}}\Vert u_\eta \Vert ^k_{H^{s}}. \end{aligned}$$

For \(q_6\), using Lemma A.1 and then integrating by parts, we have

$$\begin{aligned}&\left( D^{s-\frac{3}{2}}q_6,D^{s-\frac{3}{2}}v_{\varepsilon ,\eta }\right) _{L^2}\\&\quad =\, \chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) \int _\mathbb R[D^{s-\frac{3}{2}},(J_\eta u_{\eta })^k]\partial _xJ_\eta v_{\varepsilon ,\eta }\cdot D^{s-\frac{3}{2}}J_\eta v_{\varepsilon ,\eta }\, \textrm{d}x\\&\qquad -\frac{1}{2}\chi _R\left( \Vert u_{\eta }\Vert _{W^{1,\infty }}\right) \int _\mathbb R\partial _x (J_\eta u_{\eta })^k(D^{s-\frac{3}{2}}J_\eta v_{\varepsilon ,\eta })^2\, \textrm{d}x. \end{aligned}$$

Via the embedding \(H^{s-\frac{3}{2}}\hookrightarrow W^{1,\infty }\) and Lemmas A.1 and A.3, we obtain

$$\begin{aligned} \left| \left( D^{s-\frac{3}{2}}q_6,D^{s-\frac{3}{2}}v_{\varepsilon ,\eta }\right) _{L^2}\right| \lesssim \,&\Vert u_\eta \Vert ^k_{H^{s}}\Vert v_{\varepsilon ,\eta }\Vert ^{2}_{H^{s-\frac{3}{2}}}. \end{aligned}$$

Putting these estimates together, we find

$$\begin{aligned} \,\sum _{i=1}^{8}\left| (q_i, v_{\varepsilon ,\eta })_{H^{s-\frac{3}{2}}}\right|&\le \, C\left( (\Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}})^{k+1}+1\right) \Vert v_{\varepsilon ,\eta }\Vert ^2_{H^{s-\frac{3}{2}}}\\&\quad +C(\Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}})^{2k+2}\max \{\varepsilon ,\eta \}, \end{aligned}$$

which gives rise to the desired estimate. \(\square \)

Lemma 2.3

Let \(s>3\), \(R>1\) and \(\varepsilon \in (0,1)\). For any \(T>0\) and \(K>1\), we define

$$\begin{aligned} \tau ^{T}_{\varepsilon ,K}:=\inf \left\{ t\ge 0:\Vert u_\varepsilon (t)\Vert _{H^{s}}\ge K\right\} \wedge T,\ \ \tau ^T_{\varepsilon ,\eta ,K}:=\tau ^{T}_{\varepsilon ,K}\wedge \tau ^{T}_{\eta ,K}. \end{aligned}$$
(2.7)

Then we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon } \mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert u_\varepsilon -u_\eta \Vert _{H^{s-\frac{3}{2}}}=0. \end{aligned}$$
(2.8)

Proof

By applying the BDG inequality to (2.6), we obtain, for some constant \(C>0\),

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert v_{\varepsilon ,\eta }(t)\Vert ^{2}_{H^{s-\frac{3}{2}}}\\&\quad \le \, \frac{1}{2}\mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert v_{\varepsilon ,\eta }\Vert ^{2}_{H^{s-\frac{3}{2}}} +C\mathbb E\int _0^{\tau ^T_{\varepsilon ,\eta ,K}}\sum _{i=1}^{8}\left| (q_i, v_{\varepsilon ,\eta })_{H^{s-\frac{3}{2}}}\right| \, \textrm{d}t\\&\qquad +C\mathbb E\int _0^{\tau ^T_{\varepsilon ,\eta ,K}}\sum _{i=9}^{10} \Vert q_i\Vert _{\mathcal L_2\big (\mathfrak U;H^{s-\frac{3}{2}}\big )}^2\, \textrm{d}t. \end{aligned}$$

For \(q_9\) and \(q_{10}\), we use (2.7), the mean value theorem for \(\chi _R(\cdot )\), \(H_1(1)\) and \(H_1(2)\) to find a constant \(C=C(K)>0\) such that

$$\begin{aligned} \mathbb E\int _0^{\tau ^T_{\varepsilon ,\eta ,K}}\sum _{i=9}^{10} \Vert q_i\Vert _{\mathcal L_2\big (\mathfrak U;H^{s-\frac{3}{2}}\big )}^2\, \textrm{d}t \le \,&C(K)\int _0^T\mathbb E\sup _{t'\in [0,\tau ^t_{\varepsilon ,\eta ,K}]}\Vert v_{\varepsilon ,\eta }(t')\Vert ^{2}_{H^{s-\frac{3}{2}}}\, \textrm{d}t. \end{aligned}$$

On account of Lemma 2.2 and the above estimate, we find

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert v_{\varepsilon ,\eta }(t)\Vert ^{2}_{H^{s-\frac{3}{2}}}\\&\le \, C(K)\int _0^{T}\mathbb E\sup _{t'\in [0,\tau ^t_{\varepsilon ,\eta ,K}]}\Vert v_{\varepsilon ,\eta }(t')\Vert ^{2}_{H^{s-\frac{3}{2}}} \, \textrm{d}t+C(K)T\max \{\varepsilon ,\eta \}. \end{aligned}$$

Therefore, (2.8) holds true. \(\square \)

Lemma 2.4

For any fixed \(s>3\) and \(T>0\), there is an \(\{\mathcal {F}_t\}_{t\ge 0}\)-progressively measurable \(H^{s-3/2}\)-valued process u and a subsequence of \(\{u_\varepsilon \}\) (still denoted by \(\{u_\varepsilon \})\) such that

$$\begin{aligned} u_\varepsilon \xrightarrow []{\varepsilon \rightarrow 0}u \ \textrm{in}\ C\left( [0,T];H^{s-\frac{3}{2}}\right) \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(2.9)

Proof

We first restrict \(\varepsilon \) to a discrete set, i.e., \(\varepsilon =\varepsilon _n\) \((n\ge 1)\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \). In this way, all the \(u_{\varepsilon _n}\) can be defined on the same set \(\widetilde{{\Omega }}\) with \(\mathbb P\{\widetilde{{\Omega }}\}=1\). For brevity, \(u_{\varepsilon _n}\) is still denoted by \(u_\varepsilon \). For any \(\epsilon >0\), by using (2.7), Lemma 2.1 and Chebyshev’s inequality, we see that

$$\begin{aligned}&\, \mathbb P\left\{ \sup _{t\in [0,T]}\Vert u_\varepsilon -u_\eta \Vert _{H^{s-\frac{3}{2}}}>\epsilon \right\} \\&\quad \le \mathbb P\left\{ \tau ^T_{\varepsilon ,K}<T\right\} +\mathbb P\left\{ \tau ^T_{\eta ,K}<T\right\} +\mathbb P\left\{ \sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert u_\varepsilon -u_\eta \Vert _{H^{s-\frac{3}{2}}}>\epsilon \right\} \\&\quad \le \frac{2C(R,T,u_0)}{K^2} +\mathbb P\left\{ \sup _{t\in [0,\tau ^T_{\varepsilon ,\eta ,K}]}\Vert u_\varepsilon -u_\eta \Vert _{H^{s-\frac{3}{2}}}> \epsilon \right\} . \end{aligned}$$

Now (2.8) clearly forces

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon }\mathbb P\left\{ \sup _{t\in [0,T]}\Vert u_\varepsilon -u_\eta \Vert _{H^{s-\frac{3}{2}}}>\epsilon \right\} \le \frac{2C(R,T,u_0)}{K^2}. \end{aligned}$$

Letting \(K\rightarrow \infty \), we see that \(\{u_\varepsilon \}\) is Cauchy in probability in \(C\left( [0,T];H^{s-\frac{3}{2}}\right) \) and hence converges in probability. Therefore, up to a further subsequence, (2.9) holds. \(\square \)
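The passage from convergence in probability to the almost sure convergence in (2.9) is the standard subsequence argument: one picks \(\varepsilon _{n_j}\downarrow 0\) such that

$$\begin{aligned} \mathbb P\left\{ \sup _{t\in [0,T]}\Vert u_{\varepsilon _{n_{j+1}}}-u_{\varepsilon _{n_j}}\Vert _{H^{s-\frac{3}{2}}}>2^{-j}\right\} \le 2^{-j},\ \ j\ge 1, \end{aligned}$$

so that, by the Borel–Cantelli lemma, the telescoping series \(u_{\varepsilon _{n_1}}+\sum _{j\ge 1}\big (u_{\varepsilon _{n_{j+1}}}-u_{\varepsilon _{n_j}}\big )\) converges uniformly on [0, T] outside a null set.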

2.4 Proof for Proposition 2.1

Since each \(u_\varepsilon \) is \(\{\mathcal {F}_t\}_{t\ge 0}\)-progressively measurable, (2.9) implies that so is u. Notice that \(H^{s-3/2}\hookrightarrow W^{1,\infty }\). Then one can send \(\varepsilon \rightarrow 0\) in (2.3) to prove that u solves (2.1). Furthermore, it follows from Lemma 2.1 and Fatou’s lemma that

$$\begin{aligned} \mathbb E\sup _{t\in [0,T]}\Vert u(t)\Vert ^2_{H^s} <C(R,u_0,T). \end{aligned}$$
(2.10)

With (2.10), to prove (2.2), we only need to prove \(u\in C([0,T];H^{s})\), \({\mathbb P}\text {-}\mathrm{a.s.}\) Due to Lemma 2.4 and (1.11), \(u\in C([0,T];H^{s-3/2})\cap L^\infty \left( 0,T;H^s\right) \) almost surely. Since \(H^s\) is dense in \(H^{s-3/2}\), we see that \(u\in C_w\left( [0,T];H^s\right) \) (\(C_w\left( [0,T];H^s\right) \) is the set of weakly continuous functions with values in \(H^s\)). Therefore, we only need to prove the continuity of \([0,T]\ni t\mapsto \Vert u(t)\Vert _{H^s}\). As is mentioned in Remark 1.1, we first consider the following mollified version with \(J_\varepsilon \) being defined in (A.1):

$$\begin{aligned} \, \textrm{d}\Vert J_\varepsilon u(t)\Vert ^2_{H^s}&=\, 2\chi _R(\Vert u\Vert _{W^{1,\infty }}) \left( J_\varepsilon h(t,u) \, \textrm{d}\mathcal W,J_\varepsilon u\right) _{H^s}\nonumber \\&\quad -2\chi _R(\Vert u\Vert _{W^{1,\infty }}) \left( J_{\varepsilon } \left[ u^ku_x+F(u)\right] , J_\varepsilon u\right) _{H^s}\, \textrm{d}t\nonumber \\&\quad +\chi ^2_R(\Vert u\Vert _{W^{1,\infty }})\Vert J_\varepsilon h(t,u)\Vert _{\mathcal L_2(\mathfrak U;H^s)}^2\, \textrm{d}t. \end{aligned}$$
(2.11)

By (2.10),

$$\begin{aligned} \tau _N:=\inf \{t\ge 0:\Vert u(t)\Vert _{H^s}>N\}\rightarrow \infty \ \text {as}\ N\rightarrow \infty \ \ {\mathbb P}\text {-}\mathrm{a.s.} \end{aligned}$$
(2.12)

Then we only need to prove the continuity up to time \(\tau _N\wedge T\) for each \(N\ge 1\). Let \([t_2,t_1] \subset [0,T]\) with \(t_1-t_2<1\). We use Lemma A.6, the BDG inequality, Hypothesis \(H_1\) and (2.12) to find

$$\begin{aligned} \mathbb E\left[ \left( \Vert J_\varepsilon u(t_1\wedge \tau _N)\Vert ^2_{H^s}-\Vert J_\varepsilon u(t_2\wedge \tau _N)\Vert ^2_{H^s}\right) ^4\right] \le \,&C(N,T)|t_1-t_2|^{2}. \end{aligned}$$

We notice that for any \(T>0\), \(J_\varepsilon u\) tends to u in \(C\left( [0,T];H^{s}\right) \) as \(\varepsilon \rightarrow 0\). This, together with Fatou’s lemma, implies

$$\begin{aligned} \mathbb E\left[ \left( \Vert u(t_1\wedge \tau _N)\Vert ^2_{H^s}-\Vert u(t_2\wedge \tau _N)\Vert ^2_{H^s}\right) ^4\right] \le \,&C(N,T)|t_1-t_2|^{2}. \end{aligned}$$

This and Kolmogorov’s continuity theorem ensure the continuity of \(t\mapsto \Vert u(t\wedge \tau _N)\Vert _{H^{s}}\).
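For the reader’s convenience, we recall the form of Kolmogorov’s continuity theorem used above: if a real-valued process X satisfies

$$\begin{aligned} \mathbb E\left[ |X_{t_1}-X_{t_2}|^{p}\right] \le C|t_1-t_2|^{1+\beta }\ \ \text {for some}\ p,\beta >0, \end{aligned}$$

then X admits a continuous modification, which is moreover Hölder continuous of any order \(\alpha <\beta /p\). Here \(X_t=\Vert u(t\wedge \tau _N)\Vert ^2_{H^s}\), \(p=4\) and \(\beta =1\). The resulting continuity of the norm, combined with \(u\in C_w([0,T];H^s)\), upgrades weak continuity to continuity in \(H^s\), since \(H^s\) is a Hilbert space.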

3 Proof for Theorem 1.1

Now we can prove Theorem 1.1. For the sake of clarity, we provide the proof in several subsections.

3.1 Proof for (i) in Theorem 1.1: Existence and uniqueness

3.1.1 Uniqueness

Before we prove the existence of a solution in \(H^s\) with \(s>3/2\), we first prove uniqueness since some estimates here will be used later.

Lemma 3.1

Let \(s>3/2\), \(k\ge 1\), and Hypothesis \(H_1\) hold. Suppose that \(u_0\) and \(v_0\) are two \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables satisfying \( u_0,v_0 \in L^2({\Omega };H^s)\). Let \((u,\tau _1)\) and \((v,\tau _2)\) be two local solutions to (1.4) in the sense of Definition 1.1 such that \(u(0)=u_0\), \(v(0)=v_0\) almost surely. For any \(N>0\) and \(T>0\), we denote

$$\begin{aligned} \tau _{u}:=\inf \left\{ t\ge 0: \Vert u(t)\Vert _{H^s}>N\right\} ,\ \ \tau _{v}:=\inf \left\{ t\ge 0: \Vert v(t)\Vert _{H^s}>N\right\} , \end{aligned}$$

and \(\tau ^T_{u,v}:=\tau _{u}\wedge \tau _{v}\wedge T\). Then for \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \), we have that

$$\begin{aligned} \displaystyle \mathbb E\sup _{t\in [0,\tau ^T_{u,v}]}\Vert u(t)-v(t)\Vert ^2_{H^{s'}}\le C(N,T)\mathbb E\Vert u_0-v_0\Vert ^2_{H^{s'}}. \end{aligned}$$
(3.1)

Proof

Let \(w(t)=u(t)-v(t)\) for \(t\in [0,\tau _1\wedge \tau _2]\). We have

$$\begin{aligned} \, \textrm{d}w+ \frac{1}{k+1}\partial _x\left[ u^{k+1}-v^{k+1}\right] \, \textrm{d}t+\left[ F(u)-F(v)\right] \, \textrm{d}t= \left[ h(t,u)-h(t,v)\right] \, \textrm{d}\mathcal W. \end{aligned}$$

Then we use the Itô formula for \(\Vert w\Vert ^2_{H^{s'}}\) with \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \) to find that

$$\begin{aligned} \, \textrm{d}\Vert w\Vert ^2_{H^{s'}}&=\, 2 \left( \left[ h(t,u)- h(t,v)\right] \, \textrm{d}\mathcal W, w\right) _{H^{s'}} -\frac{2}{k+1}\left( \partial _x(P_k w), w\right) _{H^{s'}}\, \textrm{d}t\nonumber \\&\quad -2\left( \left[ F(u)-F(v)\right] ,w\right) _{H^{s'}}\, \textrm{d}t + \Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U; H^{s'})}^2\, \textrm{d}t\nonumber \\&:=\, R_{1}+\sum _{i=2}^4R_i\, \textrm{d}t, \end{aligned}$$

where \(P_k=u^{k}+u^{k-1}v+\cdots +u v^{k-1}+v^k\). Taking the supremum over \(t\in [0,\tau ^T_{u,v}]\) and using the BDG inequality, \(H_1(3)\) and the Cauchy–Schwarz inequality yield

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^T_{u,v}]}\Vert w(t)\Vert ^2_{H^{s'}} -\mathbb E\Vert w(0)\Vert ^2_{H^{s'}} \nonumber \\&\quad \le \frac{1}{2}\mathbb E\sup _{t\in [0,\tau ^T_{u,v}]}\Vert w\Vert _{H^{s'}}^2 +Cg_2(N)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^t_{u,v}]}\Vert w(t')\Vert ^2_{H^{s'}}\, \textrm{d}t\\&\qquad +\sum _{i=2}^4\mathbb E\int _0^{\tau ^T_{u,v}}|R_i| \, \textrm{d}t. \end{aligned}$$

Using Lemma A.4, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\), we have

$$\begin{aligned} |R_2|\lesssim \,&\left| \left( [D^{s'}\partial _x,P_k]w,D^{s'}w\right) _{L^2}\right| +\left| \left( P_kD^{s'}\partial _xw,D^{s'}w\right) _{L^2}\right| \\ \lesssim \,&\Vert w\Vert ^2_{H^{s'}}\left( \Vert u\Vert _{H^s}+\Vert v\Vert _{H^s}\right) ^k. \end{aligned}$$

Therefore, for some constant \(C(N)>0\), we have that

$$\begin{aligned} \mathbb E\int _0^{\tau ^T_{u,v}}|R_2| \, \textrm{d}t \le C(N)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^t_{u,v}]}\Vert w(t')\Vert ^2_{H^{s'}}\, \textrm{d}t. \end{aligned}$$

Similarly, Lemma A.5 and \(H_1(3)\) yield

$$\begin{aligned} \sum _{i=3}^4\mathbb E\int _0^{\tau ^T_{u,v}}|R_i| \, \textrm{d}t \le C(N)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^t_{u,v}]}\Vert w(t')\Vert ^2_{H^{s'}}\, \textrm{d}t. \end{aligned}$$

Therefore, we combine the above estimates to find

$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ^T_{u,v}]}\Vert w(t)\Vert ^2_{H^{s'}} \le 2\mathbb E\Vert w(0)\Vert ^2_{H^{s'}}+ C(N)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^t_{u,v}]}\Vert w(t')\Vert ^2_{H^{s'}} \, \textrm{d}t. \end{aligned}$$

Using the Grönwall inequality in the above estimate leads to (3.1). \(\square \)

Similarly, one can obtain the following uniqueness result for the original problem (1.4), and we omit the details for simplicity.

Lemma 3.2

Let \(s>3/2\), and let Hypothesis \(H_1\) be true. Let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(u_0\in L^2({\Omega };H^s)\). If \((u_1,\tau _1)\) and \((u_2,\tau _2)\) are two local solutions to (1.4) satisfying \(u_i(\cdot \wedge \tau _i)\in L^2\left( {\Omega };C([0,\infty );H^s)\right) \) for \(i=1,2\) and \(\mathbb P\{u_1(0)=u_2(0)=u_0(x)\}=1\), then

$$\begin{aligned} \mathbb P\left\{ u_1(t,x)=u_2(t,x), \ \ (t,x)\in [0,\tau _1\wedge \tau _2]\times \mathbb R\right\} =1. \end{aligned}$$

3.1.2 The case \(s>3\)

To begin with, we state the following existence and uniqueness result in \(H^s\) with \(s>3\) for the Cauchy problem (1.4):

Proposition 3.1

Let \(s>3\), \(k\ge 1\), and let \(h(t,u)\) satisfy Hypothesis \(H_1\). If \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable satisfying \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \), then there is a unique local solution \((u,\tau )\) to (1.4) in the sense of Definition 1.1 with

$$\begin{aligned} u(\cdot \wedge \tau )\in L^2\left( {\Omega }; C\left( [0,\infty );H^s\right) \right) . \end{aligned}$$
(3.2)

Proof

Since uniqueness has been established in Lemma 3.2, by Proposition 2.1 we only need to remove the cut-off function. For \(u_0(\omega ,x)\in L^2({\Omega }; H^s)\), we let

$$\begin{aligned} {\Omega }_m:=\{m-1\le \Vert u_0\Vert _{H^s}<m\},\ m\ge 1. \end{aligned}$$

Let \(u_{0,m}:=u_0{\textbf {1}}_{{\Omega }_m}\). For any \(R>0\), on account of Proposition 2.1, we let \(u_{m,R}\) be the global solution to the cut-off problem (2.1) with initial value \(u_{0,m}\) and cut-off function \(\chi _R(\cdot )\). Define

$$\begin{aligned} \tau _{m,R}:=\inf \left\{ t>0:\sup _{t'\in [0,t]}\Vert u_{m,R}(t')\Vert ^2_{H^s}>\Vert u_{0,m}\Vert ^2_{H^s}+2\right\} . \end{aligned}$$

Then for any \(R>0\) and \(m\in \mathbb N\), it follows from the time continuity of the solution that \(\mathbb P\{\tau _{m,R}>0\}=1\). In particular, for any \(m\in \mathbb N\), we choose \(R=R_m\) such that \(R^2_m>c^2m^2+2c^2\), where \(c>0\) is the embedding constant such that \(\Vert \cdot \Vert _{W^{1,\infty }} \le c \Vert \cdot \Vert _{H^s}\) for \(s>3\). For simplicity, we denote \((u_m,\tau _m):=(u_{m,R_m},\tau _{m,R_m})\). Then we have

$$\begin{aligned} \mathbb P\left\{ \Vert u_m\Vert ^2_{W^{1,\infty }}\le c^2\Vert u_m\Vert ^2_{H^{s}}\le c^2\Vert u_{0,m}\Vert ^2_{H^{s}}+2c^2<R^2_m,\ t\in [0,\tau _{m}],\ m\ge 1\right\} =1, \end{aligned}$$

which means \(\mathbb P\left\{ \chi _{R_m}(\Vert u_m\Vert _{W^{1,\infty }})=1,\ t\in [0,\tau _m],\ m\ge 1\right\} =1.\) Therefore, \((u_m,\tau _m)\) is the solution to (1.4) with initial value \(u_{0,m}\). Since \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \), the condition (A.5) is satisfied with \(I=\mathbb N^+\). Applying Lemma A.7 shows that

$$\begin{aligned} \left( u=\sum _{m\ge 1}{} {\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}u_m,\ \ \tau =\sum _{m\ge 1}{} {\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}\tau _m\right) \end{aligned}$$

is a solution to (1.4) corresponding to the initial condition \(u_0\). Besides,

$$\begin{aligned} \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}^2 =\,&\sum _{m\ge 1}{} {\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}\sup _{t\in [0,\tau _m]}\Vert u_m\Vert _{H^s}^2 \le 2\Vert u_{0}\Vert ^2_{H^s}+4\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$

Taking expectation gives rise to (3.2). \(\square \)

3.1.3 The case \(s>3/2\)

When \(s>3/2\), we first consider the following problem

$$\begin{aligned} \left\{ \begin{aligned}&\, \textrm{d}u+\left[ u^k\partial _xu+F(u)\right] \, \textrm{d}t=h(t,u) \, \textrm{d}\mathcal W,\ \ k\ge 1,\ \ x\in \mathbb R,\ t>0,\\&u(\omega ,0,x)=J_{\varepsilon } u_0(\omega ,x)\in H^{\infty },\ \ x\in \mathbb R,\ \varepsilon \in (0,1), \end{aligned} \right. \end{aligned}$$
(3.3)

where \(J_\varepsilon \) is the mollifier defined in (A.1). Proposition 3.1 implies that for each \(\varepsilon \in (0,1)\), (3.3) has a local pathwise solution \((u_\varepsilon ,\tau _\varepsilon )\) such that \(u_\varepsilon \in L^2\left( {\Omega }; C\left( [0,\tau _\varepsilon ];H^{s}\right) \right) \).

Lemma 3.3

Assume \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\Vert u_0\Vert _{H^s}\le M\) for some \(M>0\). For any \(T>0\) and \(s>3/2\), we define

$$\begin{aligned} \tau ^T_{\varepsilon }:=\inf \left\{ t\ge 0:\Vert u_\varepsilon \Vert _{H^s}\ge \Vert J_{\varepsilon } u_0\Vert _{H^s}+2\right\} \wedge T, \ \ \tau ^{T}_{\varepsilon ,\eta }:=\tau ^T_{\varepsilon }\wedge \tau ^T_{\eta },\ \ \varepsilon ,\eta \in (0,1).\nonumber \\ \end{aligned}$$
(3.4)

Let \(K\ge 2M+5\) be fixed and let \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \). Then, there is a constant \(C(K,T)>0\) such that \(w_{\varepsilon ,\eta }=u_\varepsilon -u_\eta \) satisfies

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^s}\nonumber \\&\quad \le \, C(K,T)\mathbb E\left\{ \Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^s}+\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (0)\Vert ^2_{H^{s+1}}\right\} \nonumber \\&\qquad +C(K,T)\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}. \end{aligned}$$
(3.5)

Proof

To start with, we notice that Lemma A.1 implies

$$\begin{aligned} \Vert J_{\varepsilon } u_0\Vert _{H^s}\le M,\ \ \varepsilon \in (0,1)\ \ {\mathbb P}\text {-}\mathrm{a.s.} \end{aligned}$$
(3.6)

Since (3.4) and (3.6) will be used frequently in what follows, we invoke them without further notice. Let

$$\begin{aligned} P_{l}=P_{l}(u_\varepsilon ,u_\eta ):= {\left\{ \begin{array}{ll} u_{\varepsilon }^{l}+u_{\varepsilon }^{l-1}u_{\eta }+\cdots +u_{\varepsilon }u_{\eta }^{l-1}+u_{\eta }^{l},\ \ \text {if}\ \ l\ge 1,\\ 1,\ \ \text {if}\ \ l=0. \end{array}\right. } \end{aligned}$$
(3.7)
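The role of \(P_{l}\) is to telescope differences of powers: since \(a^{l}-b^{l}=(a-b)P_{l-1}(a,b)\) for \(l\ge 1\), the difference of the transport terms splits as

$$\begin{aligned} u_\varepsilon ^{k}\partial _x u_{\varepsilon }-u_\eta ^{k}\partial _x u_{\eta } =\left( u_\varepsilon ^{k}-u_\eta ^{k}\right) \partial _x u_{\varepsilon }+u_\eta ^{k}\partial _x w_{\varepsilon ,\eta } =w_{\varepsilon ,\eta }P_{k-1}\partial _x u_{\varepsilon }+u_\eta ^{k}\partial _x w_{\varepsilon ,\eta }. \end{aligned}$$

This is precisely the splitting behind the terms \(Q_{2,s}\) and \(Q_{3,s}\) below.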

Applying the Itô formula to \(\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\) gives rise to

$$\begin{aligned} \, \textrm{d}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}&=\, 2 \left( \left[ h(t,u_\varepsilon )-h(t,u_\eta )\right] \, \textrm{d}\mathcal W, w_{\varepsilon ,\eta }\right) _{H^s} -2\left( w_{\varepsilon ,\eta } P_{k-1}\partial _x u_{\varepsilon },w_{\varepsilon ,\eta }\right) _{H^s}\, \textrm{d}t \nonumber \\&\quad -2\left( u^k_{\eta }\partial _x w_{\varepsilon ,\eta },w_{\varepsilon ,\eta }\right) _{H^s}\, \textrm{d}t -2\left( \left[ F(u_\varepsilon )-F(u_\eta )\right] ,w_{\varepsilon ,\eta }\right) _{H^s}\, \textrm{d}t\nonumber \\&\quad +\left\| h(t,u_\varepsilon )-h(t,u_\eta )\right\| _{\mathcal L_2(\mathfrak U;H^s)}^2\, \textrm{d}t :=\, Q_{1,s}+\sum _{i=2}^{5}Q_{i,s}\, \textrm{d}t. \end{aligned}$$
(3.8)

Since \(H^{s'}\hookrightarrow L^\infty \) and \(H^{s}\hookrightarrow W^{1,\infty }\), we can use Lemmas A.3 and A.5 to find

$$\begin{aligned} \left| Q_{2,s}\right|&\lesssim \, \left( \Vert w_{\varepsilon ,\eta }P_{k-1}\Vert _{H^s}\Vert \partial _x u_\varepsilon \Vert _{L^\infty } +\Vert w_{\varepsilon ,\eta }P_{k-1}\Vert _{L^\infty }\Vert \partial _x u_\varepsilon \Vert _{H^s}\right) \Vert w_{\varepsilon ,\eta }\Vert _{H^s}\\&\lesssim \, \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) ^{k} +\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}} \Vert u_\varepsilon \Vert ^2_{H^{s+1}}\\&\quad +\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) ^{2k-2} \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\\ \left| Q_{3,s}\right|&\lesssim \, \left| \left( [D^s,u^k_\eta ]\partial _x w_{\varepsilon ,\eta },D^s w_{\varepsilon ,\eta }\right) _{L^2}\right| +\left| \left( u^k_\eta \partial _x D^s w_{\varepsilon ,\eta },D^s w_{\varepsilon ,\eta }\right) _{L^2}\right| \\&\lesssim \,\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\Vert u_\eta \Vert _{H^{s}}^k, \end{aligned}$$

and

$$\begin{aligned} \left| Q_{4,s}\right| \lesssim \,&\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\left( \Vert u_\varepsilon \Vert _{H^{s}}+\Vert u_\eta \Vert _{H^{s}}\right) ^{k}. \end{aligned}$$

The above estimates and \(H_1(2)\) imply that there is a constant \(C(K)>0\) such that

$$\begin{aligned}&\,\sum _{i=2}^5\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}|Q_{i,s}| \, \textrm{d}t \\&\quad \lesssim \, \mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }} \left[ \left( \left( \Vert u_\varepsilon \Vert _{H^s}+\Vert u_\eta \Vert _{H^s}\right) ^{2k}+1\right) \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s} +\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}\right] \, \textrm{d}t\nonumber \\&\quad \quad +\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }} g_1^2(K)\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s} \, \textrm{d}t\nonumber \\&\quad \le \, C(K)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t')\Vert ^2_{H^s}\, \textrm{d}t +C(K)T\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t)\Vert ^2_{H^{s+1}}. \end{aligned}$$

For \(Q_{1,s}\), applying the BDG inequality and \(H_1(2)\), we derive

$$\begin{aligned}&\, \mathbb E\left( \sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\int _0^{t}\Big (\left[ h(t,u_\varepsilon )-h(t,u_\eta )\right] \, \textrm{d}\mathcal W, w_{\varepsilon ,\eta }\Big )_{H^s}\right) \\&\quad \le \, \frac{1}{2}\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }\Vert _{H^s}^2 +Cg_1^2(K)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t')\Vert ^2_{H^s}\, \textrm{d}t. \end{aligned}$$

Summarizing the above estimates and then using Grönwall’s inequality, we find some constant \(C=C(K,T)>0\) such that

$$\begin{aligned} \,&\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^s} \nonumber \\&\quad \le \, C\left( \mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^s} +\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t)\Vert ^2_{H^{s+1}}\right) . \end{aligned}$$
(3.9)
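Schematically, the Grönwall step runs as follows: set \(\phi (T'):=\mathbb E\sup _{t\in [0,\tau ^{T'}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^s}\) for \(T'\in [0,T]\). After moving the \(\frac{1}{2}\)-term from the BDG estimate to the left-hand side, the bounds above combine into an inequality of the form

$$\begin{aligned} \phi (T')\le A+C(K)\int _0^{T'}\phi (t)\, \textrm{d}t,\ \ A:=C(K,T)\,\mathbb E\left\{ \Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^s}+\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t)\Vert ^2_{H^{s+1}}\right\} , \end{aligned}$$

so Grönwall's inequality yields \(\phi (T)\le A e^{C(K)T}\), which is (3.9).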

Now we estimate \(\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t)\Vert ^2_{H^{s+1}}\). To this end, we first recall (1.7) and then apply the Itô formula to deduce that for any \(\rho >0\),

$$\begin{aligned} \, \textrm{d}\Vert u_\varepsilon \Vert ^2_{H^{\rho }} =\,&2\sum _{l=1}^{\infty }\left( h(t,u_\varepsilon )e_l,u_\varepsilon \right) _{H^{\rho }} \, \textrm{d}W_l-2\left( D^\rho \left[ (u_\varepsilon )^k\partial _x u_\varepsilon \right] ,D^\rho u_\varepsilon \right) _{L^2}\, \textrm{d}t\nonumber \\&-2\left( D^\rho F(u_\varepsilon ),D^\rho u_\varepsilon \right) _{L^2}\, \textrm{d}t +\Vert h(t,u_\varepsilon )\Vert _{\mathcal L_2(\mathfrak U;H^\rho )}^2\, \textrm{d}t\nonumber \\ :=\,&\sum _{l=1}^{\infty }Z_{1,\rho ,l} \, \textrm{d}W_l+\sum _{i=2}^4Z_{i,\rho }\, \textrm{d}t. \end{aligned}$$
(3.10)

In the same way, we also rewrite \(Q_{1,s}\) in (3.8) as

$$\begin{aligned} Q_{1,s}=2\sum _{j=1}^{\infty } \left( \left[ h(t,u_\varepsilon )- h(t,u_\eta )\right] e_j,w_{\varepsilon ,\eta }\right) _{H^s} \, \textrm{d}W_j:=\sum _{j=1}^{\infty }Q_{1,s,j} \, \textrm{d}W_j. \end{aligned}$$
(3.11)

With the summation form (3.11) at hand, applying the Itô product rule to (3.8) and (3.10), we derive

$$\begin{aligned} \, \textrm{d}\left( \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}\right)&=\, \sum _{j=1}^{\infty }\left( \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}Z_{1,s+1,j} +\Vert u_\varepsilon \Vert ^2_{H^{s+1}}Q_{1,s',j}\right) \, \textrm{d}W_j\nonumber \\&\quad +\sum _{i=2}^4\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}Z_{i,s+1}\, \textrm{d}t +\sum _{i=2}^5\Vert u_\varepsilon \Vert ^2_{H^{s+1}}Q_{i,s'}\, \textrm{d}t \\&\quad +\sum _{j=1}^{\infty }Q_{1,s',j}Z_{1,s+1,j}\, \textrm{d}t. \end{aligned}$$

We first notice that

$$\begin{aligned} Q_{2,s'}+Q_{3,s'}=-\frac{2}{k+1}\left( \partial _x (P_{k}w_{\varepsilon ,\eta }),w_{\varepsilon ,\eta }\right) _{H^{s'}}, \end{aligned}$$

where \(P_{k}\) is defined by (3.7). As a result, Lemma A.4, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\) give rise to

$$\begin{aligned} |Q_{2,s'}+Q_{3,s'}|\lesssim \,&\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\left( \Vert u_\varepsilon \Vert _{H^s}+\Vert u_\eta \Vert _{H^s}\right) ^k. \end{aligned}$$

Using Lemma A.3, Hypothesis \(H_1\), Lemma A.5 as well as the embedding \(H^s\hookrightarrow W^{1,\infty }\) for \(s>3/2\), we obtain that for some \(C(K)>0\),

$$\begin{aligned}&\sum _{i=2}^4\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}|Z_{i,s+1}| \lesssim \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}} \left[ \Vert u_\varepsilon \Vert ^k_{H^s}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}+f^2(\Vert u_\varepsilon \Vert _{H^s})(1+\Vert u_\varepsilon \Vert ^2_{H^{s+1}}) \right] ,\\&\sum _{i=4}^5\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}|Q_{i,s'}| \, \textrm{d}t \le \, C(K)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert u_\varepsilon (t')\Vert ^2_{H^{s+1}}\Vert w_{\varepsilon ,\eta }(t')\Vert ^2_{H^{s'}} \, \textrm{d}t. \end{aligned}$$

Then one can infer from the above three inequalities, the BDG inequality and Hypothesis \(H_1\) that for some constant \(C(K)>0\),

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}} -\mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (0)\Vert ^2_{H^{s+1}}\nonumber \\&\quad \lesssim \, \mathbb E\left( \int _0^{\tau ^{T}_{\varepsilon ,\eta }} \Vert w_{\varepsilon ,\eta }\Vert ^4_{H^{s'}}\Vert h(t,u_\varepsilon )\Vert ^2_{\mathcal L_2(\mathfrak U; H^{s+1})}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}\, \textrm{d}t\right) ^{\frac{1}{2}}\nonumber \\&\quad \quad +\mathbb E\left( \int _0^{\tau ^{T}_{\varepsilon ,\eta }} \Vert u_\varepsilon \Vert ^4_{H^{s+1}}\Vert h(t,u_\varepsilon )-h(t,u_\eta )\Vert ^2_{\mathcal L_2(\mathfrak U; H^{s'})}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}} \, \textrm{d}t\right) ^{\frac{1}{2}}\nonumber \\&\quad \quad +\sum _{i=2}^4\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}|Z_{i,s+1}| \, \textrm{d}t+\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}|Q_{2,s'}+Q_{3,s'}| \, \textrm{d}t\nonumber \\&\quad \quad +\sum _{i=4}^5\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}|Q_{i,s'}| \, \textrm{d}t+\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\sum _{j=1}^{\infty }|Q_{1,s',j}Z_{1,s+1,j}| \, \textrm{d}t\nonumber \\&\quad \le \frac{1}{2}\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]} \Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}\nonumber \\&\quad \quad +C(K)\int _0^{T} \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t')\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t')\Vert ^2_{H^{s+1}} \, \textrm{d}t\nonumber \\&\quad \quad +C(K)T\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}} +\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\sum _{j=1}^{\infty }|Q_{1,s',j}Z_{1,s+1,j}| \, \textrm{d}t. 
\end{aligned}$$
(3.12)

For the last term, we proceed as follows:

$$\begin{aligned}&\,\mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\sum _{j=1}^{\infty }\left| Q_{1,s',j}Z_{1,s+1,j}\right| \, \text {d}t\\ {}&\quad \lesssim \, \mathbb E\int _0^{\tau ^{T}_{\varepsilon ,\eta }}\Vert h(t,u_\varepsilon )-h(t,u_\eta )\Vert _{\mathcal L_2(\mathfrak U;H^{s'})}\Vert w_{\varepsilon ,\eta }\Vert _{H^{s'}} \Vert h(t,u_\varepsilon )\Vert _{\mathcal L_2(\mathfrak U;H^{s+1})}\Vert u_\varepsilon \Vert _{H^{s+1}}\, \text {d}t\nonumber \\ {}&\quad \le \, C(K)T \mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\\ {}&\quad \quad +C(K)\int _0^T \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t') \Vert ^2_{H^{s'}}\Vert u_\varepsilon (t')\Vert ^2_{H^{s+1}}\, \text {d}t. \end{aligned}$$

Consequently, (3.12) reduces to

$$\begin{aligned}&\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}-2\mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (0)\Vert ^2_{H^{s+1}}\\ {}&\quad \le \, C(K)T \mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\\ {}&\quad \quad +C(K)\int _0^T \mathbb E\sup _{t'\in [0,\tau ^{t}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t') \Vert ^2_{H^{s'}}\Vert u_\varepsilon (t')\Vert ^2_{H^{s+1}}\, \text {d}t, \end{aligned}$$

which means that for some \(C(K,T)>0\),

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^{s'}}\Vert u_\varepsilon \Vert ^2_{H^{s+1}}\nonumber \\&\quad \le C\left( \mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (0)\Vert ^2_{H^{s+1}} +\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\right) . \end{aligned}$$
(3.13)

Combining (3.9) and (3.13), we obtain (3.5). \(\square \)

To proceed further, we state the following lemma from [25] in a form convenient for our purposes.

Lemma 3.4

(Lemma 5.1, [25]) Let all the conditions in Lemma 3.3 hold true. Assume

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon } \mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta }]}\Vert u_\varepsilon -u_\eta \Vert _{H^s}=0 \end{aligned}$$
(3.14)

and

$$\begin{aligned} \lim _{a\rightarrow 0}\sup _{\varepsilon \in (0,1)} \mathbb P\left\{ \sup _{t\in [0,\tau ^T_{\varepsilon }\wedge a]}\Vert u_\varepsilon \Vert _{H^s}\ge \Vert J_{\varepsilon }u_0\Vert _{H^s}+1\right\} =0 \end{aligned}$$
(3.15)

hold true. Then we have:

  (a)

    There exist a countable sequence \(\{\varepsilon _n\}\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \), corresponding stopping times \(\xi _{\varepsilon _n}\), and a stopping time \(\tau \) such that

    $$\begin{aligned} \xi _{\varepsilon _n}\le \tau ^T_{\varepsilon _n},\ \ \lim _{n\rightarrow \infty }\xi _{\varepsilon _n}=\tau \in (0,T] \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
  (b)

    There is a process \(u\in C([0,\tau ];H^s)\) such that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}-u\Vert _{H^s}=0,\ \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
  (c)

    There is a sequence of sets \({\Omega }_n \uparrow {\Omega }\) such that for any \(p\in [1,\infty )\),

    $$\begin{aligned} \textbf{1}_{{\Omega }_n }\sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \ \text {and}\ \sup _{n}\mathbb E\left( \textbf{1}_{{\Omega }_n } \sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}\Vert ^p_{H^s}\right) <\infty . \end{aligned}$$

Remark 3.1

In the original form of [25, Lemma 5.1], the authors only emphasize the existence of a stopping time \(\tau \in (0,T]\) such that (b) and (c) in Lemma 3.4 hold true. However, we point out here that they obtained such a \(\tau \) by constructing the stopping times \(\xi _{\varepsilon _n}\). We refer to (5.2), (5.12), (5.15), (5.20) and (5.24) in [25] for the details. The properties (a) and (c) in Lemma 3.4 will be used in the proof for (iii) in Theorem 1.1.

Proposition 3.2

Let Hypothesis \(H_1\) hold. Assume that \(s>3/2\), \(k\ge 1\), and let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\Vert u_0\Vert _{H^s}\le M\) for some \(M>0\). Then (1.4) has a unique pathwise solution \((u,\tau )\) in the sense of Definition 1.1 such that

$$\begin{aligned} \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$

Proof

We first prove that \(\{u_\varepsilon \}\) satisfies the estimates (3.14) and (3.15).

(i) (3.14) is satisfied. Lemma A.1 tells us that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon }\mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^s}=0. \end{aligned}$$
(3.16)

Since \(\Vert J_\varepsilon u_0\Vert _{H^s}\le M\), as in Lemma 3.1, we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon } \mathbb E\sup _{t\in [0,\tau ^T_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}} \le C(M,T)\lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon } \mathbb E\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s}}=0. \end{aligned}$$
(3.17)

Moreover, it follows from Lemma A.1 that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon }\Vert w_{\varepsilon ,\eta }(0)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (0)\Vert ^2_{H^{s+1}} \lesssim \lim _{\varepsilon \rightarrow 0}\sup _{\eta \le \varepsilon }o\left( \varepsilon ^{2s-2s'}\right) O\left( \frac{1}{\varepsilon ^2}\right) =0. \end{aligned}$$
(3.18)
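The two rates used in (3.18) are the standard mollifier estimates from Lemma A.1: for \(0<\eta \le \varepsilon <1\),

$$\begin{aligned} \Vert J_{\varepsilon } u_0-J_{\eta } u_0\Vert _{H^{s'}}=o\left( \varepsilon ^{s-s'}\right) ,\ \ \Vert J_{\varepsilon } u_0\Vert _{H^{s+1}}=O\left( \varepsilon ^{-1}\right) \Vert u_0\Vert _{H^{s}}, \end{aligned}$$

and since \(s'<s-1\), the product of the squares is \(o\left( \varepsilon ^{2(s-s'-1)}\right) \rightarrow 0\) as \(\varepsilon \rightarrow 0\).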

Combining (3.16), (3.17) and (3.18) with Lemma 3.3, we conclude that (3.14) holds true.

(ii) (3.15) is satisfied. Recall (3.10) and let \(a>0\). We have

$$\begin{aligned} \sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\Vert u_{\varepsilon }(t)\Vert ^2_{H^s} \le \,&\Vert J_{\varepsilon }u_0\Vert ^2_{H^s} +\sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\left| \int _0^{t}\sum _{j=1}^{\infty }Z_{1,s,j} \, \textrm{d}W_j\right| \\&+\sum _{i=2}^4\int _0^{\tau _\varepsilon ^T\wedge a}|Z_{i,s}| \, \textrm{d}t, \end{aligned}$$

which implies that

$$\begin{aligned}&\mathbb P\left\{ \sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\Vert u_{\varepsilon }(t)\Vert ^2_{H^s}>\Vert J_{\varepsilon }u_0\Vert ^2_{H^s}+1\right\} \nonumber \\&\quad \le \mathbb P\left\{ \sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\left| \int _0^{t}\sum _{j=1}^{\infty }Z_{1,s,j} \, \textrm{d}W_j\right|>\frac{1}{2}\right\} +\mathbb P\left\{ \sum _{i=2}^4\int _0^{\tau _\varepsilon ^T\wedge a}|Z_{i,s}| \, \textrm{d}t>\frac{1}{2}\right\} . \end{aligned}$$

Due to the Chebyshev inequality, Lemmas A.3 and A.5, Hypothesis \(H_1\), the embedding \(H^s\hookrightarrow W^{1,\infty }\) for \(s>3/2\), (3.4) and (3.6), we have

$$\begin{aligned} \mathbb P\left\{ \sum _{i=2}^4\int _0^{\tau _\varepsilon ^T\wedge a}|Z_{i,s}| \, \textrm{d}t>\frac{1}{2}\right\}&\le C\sum _{i=2}^4\mathbb E\int _0^{\tau _\varepsilon ^T\wedge a}|Z_{i,s}| \, \textrm{d}t\nonumber \\&\le C\mathbb E\int _0^{\tau _\varepsilon ^T\wedge a} \left[ \Vert u_\varepsilon \Vert ^{k+2}_{H^{s}}\!+\!f^2(\Vert u_\varepsilon \Vert _{H^s})(1\!+\!\Vert u_\varepsilon \Vert ^2_{H^{s}})\right] \, \textrm{d}t\nonumber \\&\le C\mathbb E\int _0^{\tau _\varepsilon ^T\wedge a}C(M,T)\, \textrm{d}t\le C(M,T)a. \end{aligned}$$

Then we can infer from Doob’s maximal inequality and the Itô isometry that

$$\begin{aligned}&\mathbb P\left\{ \sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\left| \int _0^{t} \sum _{j=1}^{\infty }Z_{1,s,j} \, \textrm{d}W_j\right| >\frac{1}{2}\right\} \\&\quad \le C\mathbb E\left( \int _0^{\tau _\varepsilon ^T\wedge a}\sum _{j=1}^{\infty }Z_{1,s,j} \, \textrm{d}W_j\right) ^2\nonumber \\&\quad \le C\mathbb E\int _0^{\tau _\varepsilon ^T\wedge a} \left[ f^2(\Vert u_\varepsilon \Vert _{W^{1,\infty }})(1+\Vert u_\varepsilon \Vert _{H^{s}})^2\Vert u_\varepsilon \Vert ^2_{H^{s}}\right] \, \textrm{d}t\nonumber \\&\quad \le C\mathbb E\int _0^{\tau _\varepsilon ^T\wedge a}C(M,T)\, \textrm{d}t\le C(M,T)a. \end{aligned}$$

Hence we have

$$\begin{aligned} \mathbb P\left\{ \sup _{t\in [0,\tau _\varepsilon ^T\wedge a]}\Vert u_{\varepsilon }(t)\Vert ^2_{H^s}>\Vert J_{\varepsilon }u_0\Vert ^2_{H^s}+1\right\} \le C(M,T)a, \end{aligned}$$

which gives (3.15).

(iii) Applying Lemma 3.4. By Lemma 3.4, we can pass to the limit along a subsequence of \(\{u_{\varepsilon _n}\}\) to build a solution u to (1.4) such that \(u\in C([0,\tau ];H^s)\) and \(\sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2.\) Uniqueness is a direct corollary of Lemma 3.2. \(\square \)

Now we can finish the proof for (i) in Theorem 1.1.

Proof for (i) in Theorem 1.1. As in Proposition 3.1, we let

$$\begin{aligned} u_0(\omega ,x)&:=\sum _{m\ge 1}u_{0,m}(\omega ,x) :=\sum _{m\ge 1}u_0(\omega ,x){\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$

For each \(m\ge 1\), we can infer from Proposition 3.2 that (1.4) has a unique solution \((u_m,\tau _m)\) with \(u_m(0)=u_{0,m}\) almost surely. Furthermore, \(\sup _{t\in [0,\tau _m]}\Vert u_m\Vert _{H^s}\le \Vert u_{0,m}\Vert _{H^s}+2\) \({\mathbb P}\text {-}\mathrm{a.s.}\) Using Lemma A.7 in a similar way as in Proposition 3.2, we find that

$$\begin{aligned} \left( u=\sum _{m\ge 1}{} {\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}u_m,\ \ \tau =\sum _{m\ge 1}{} {\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}\tau _m\right) \end{aligned}$$

is a solution to (1.4) satisfying (1.11) and \(u(0)=u_0\) almost surely. Uniqueness is given by Lemma 3.2. \(\square \)

3.2 Proof for (ii) in Theorem 1.1: Blow-up criterion

With a local solution \((u,\tau )\) at hand, one may pass from \((u,\tau )\) to the maximal solution \((u,\tau ^*)\) as in [5, 26]. In the periodic setting, i.e., \(x\in \mathbb T=\mathbb R/2\pi \mathbb Z\), the blow-up criterion (1.12) for a maximal solution has been proved in [46] by using energy estimates and stopping-time techniques. When \(x\in \mathbb R\), (1.12) can also be obtained in the same way, and we omit the details for brevity.

3.3 Proof for (iii) in Theorem 1.1: Stability

Let \(u_0, v_0\in L^\infty ({\Omega };H^s)\) be two \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables. Let u and v be the corresponding solutions with initial conditions \(u_0\) and \(v_0\). To prove (iii) in Theorem 1.1, for any \(\epsilon >0\) and \(T>0\), we need to find a \(\delta =\delta (\epsilon ,u_0,T)>0\) and a \(\tau \in (0,T]\) \({\mathbb P}\text {-}\mathrm{a.s.}\) such that (1.14) holds true as long as (1.13) is satisfied. Without loss of generality, by (1.13), we can first assume

$$\begin{aligned} \Vert v_0\Vert _{L^\infty ({\Omega };H^s)}\le \Vert u_0\Vert _{L^\infty ({\Omega };H^s)}+1. \end{aligned}$$
(3.19)

From now on, \(\epsilon >0\) and \(T>0\) are fixed.

However, as is mentioned in Remark 1.1, the term \(u^ku_x\) loses one derivative, and the estimate for \(\mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) will involve \(\Vert u\Vert _{H^{s+1}}\) or \(\Vert v\Vert _{H^{s+1}}\), which might be infinite since we only know \(u,v\in H^s\). To overcome this difficulty, we will consider (3.3). Let \(\varepsilon \in (0, 1)\). By (i) in Theorem 1.1, there is a unique solution \(u_{\varepsilon }\) (resp. \(v_{\varepsilon }\)) to the problem (3.3) with initial data \(J_\varepsilon u_0\) (resp. \(J_\varepsilon v_0\)). Then the \(H^{s+1}\)-norms of the smooth solutions \(u_\varepsilon \) and \(v_\varepsilon \) are well-defined. Similarly to (3.4), for any \(T>0\), we define

$$\begin{aligned} \tau ^{f,T}_{\varepsilon }:=\inf \left\{ t\ge 0:\Vert f_\varepsilon \Vert _{H^s}\ge \Vert J_{\varepsilon } f_0\Vert _{H^s}+2\right\} \wedge T,\ \ f\in \{u,v\}. \end{aligned}$$
(3.20)

Recalling the analysis in Lemma 3.3 and Proposition 3.2 (for the case \(f=v\), we notice (3.19)), we can use Lemma 3.4 to find that there exists a common subsequence \(\{\varepsilon _n\}\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \) such that for \(f\in \{u,v\}\), there is a sequence of stopping times \(\xi ^f_{\varepsilon _n}\) and a stopping time \(\tau ^f\) satisfying

$$\begin{aligned} \xi ^f_{\varepsilon _n}\le \tau ^{f,T}_{\varepsilon _n},\ n\ge 1\ \ \text {and}\ \ \lim _{n\rightarrow \infty }\xi ^f_{\varepsilon _n}=\tau ^f\in (0,T] \ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$
(3.21)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ^f]}\Vert f-f_{\varepsilon _n}\Vert _{H^s}=0, \ \sup _{t\in [0,\tau ^f]}\Vert f\Vert _{H^s}\le \Vert f_0\Vert _{H^s}+2 \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(3.22)

Moreover, for \(f\in \{u,v\}\), there exists \({\Omega }^f_n\uparrow {\Omega }\) such that

$$\begin{aligned} \textbf{1}_{{\Omega }^f_n }\sup _{t\in [0,\tau ^f]}\Vert f_{\varepsilon _n}\Vert _{H^s}\le \Vert f_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(3.23)

Next, we let \({\Omega }_n:={\Omega }^u_n\, \cap \, {\Omega }^v_n.\) Then \({\Omega }_n\uparrow {\Omega }\). This, (3.22), (3.23) and Lebesgue’s dominated convergence theorem yield

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb E\sup _{t\in [0,\tau ^f]}\Vert f-\textbf{1}_{{\Omega }_n}f_{\varepsilon _n}\Vert ^2_{H^s} =\,&0,\ \ f\in \{u,v\}. \end{aligned}$$

Therefore, when \(n\) is large enough, we have

$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ^f]}\Vert f-\textbf{1}_{{\Omega }_n}f_{\varepsilon _n}\Vert ^2_{H^s} < \frac{\epsilon }{9},\ \ f\in \{u,v\}. \end{aligned}$$
(3.24)

Now we consider \(\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}\). It follows from (3.21) that for all \(n\ge 1\),

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}\nonumber \\&\quad \le \, \mathbb E\sup _{t\in [0,\xi ^u_{\varepsilon _n}\wedge \xi ^v_{\varepsilon _n}\wedge \tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}\nonumber \\&\quad \quad +\mathbb E\sup _{t\in [\xi ^u_{\varepsilon _n}\wedge \xi ^v_{\varepsilon _n}\wedge \tau ^u\wedge \tau ^v,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}\nonumber \\&\quad \le \, \mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^s}\nonumber \\&\quad \quad +\mathbb E\sup _{t\in [\xi ^u_{\varepsilon _n}\wedge \xi ^v_{\varepsilon _n}\wedge \tau ^u\wedge \tau ^v,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}. \end{aligned}$$
(3.25)

By (3.23),

$$\begin{aligned} \sup _{t\in [\xi ^u_{\varepsilon _n}\wedge \xi ^v_{\varepsilon _n}\wedge \tau ^u\wedge \tau ^v,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s} \le \, 32\left( \Vert u_0\Vert _{H^s}^2+\Vert v_0\Vert _{H^s}^2+1\right) . \end{aligned}$$

Consequently, by Lebesgue’s dominated convergence theorem and (3.21), we have for \(n\gg 1\) that

$$\begin{aligned} \mathbb E\sup _{t\in [\xi ^u_{\varepsilon _n}\wedge \xi ^v_{\varepsilon _n}\wedge \tau ^u\wedge \tau ^v,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s} < \frac{\epsilon }{18}. \end{aligned}$$
(3.26)

Now we estimate \(\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^s}\). Similar to (3.5), by using (3.19), one can show that for \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \),

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^s}\nonumber \\&\quad \le \, C \mathbb E\left\{ \Vert J_{\varepsilon _n} u_0-J_{\varepsilon _n} v_0\Vert ^2_{H^s} +\Vert J_{\varepsilon _n} u_0-J_{\varepsilon _n} v_0\Vert ^2_{H^{s'}}\Vert J_{\varepsilon _n} u_0\Vert ^2_{H^{s+1}}\right\} \nonumber \\&\quad \quad +C\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^{s'}}\nonumber \\&\quad \le \, C \mathbb E\Big \{\Vert u_0-v_0\Vert ^2_{H^s} +\frac{1}{\varepsilon _n^2}\Vert u_0-v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\Big \}\nonumber \\&\quad \quad +C\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^{s'}}, \end{aligned}$$
(3.27)

where \(C=C\left( \Vert u_0\Vert _{L^\infty ({\Omega };H^s)},T\right) \) and Lemma A.1 is used in the last step. Since \(u_0\in L^\infty ({\Omega };H^s)\), by Lemmas 3.1 and A.1 again, we have

$$\begin{aligned}&\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^s}\nonumber \\&\quad \le C\mathbb E\left\{ \Vert u_0-v_0\Vert ^2_{H^s} +\frac{1}{\varepsilon _n^2}\Vert u_0-v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\right\} +C\mathbb E\Vert J_{\varepsilon _n} u_0-J_{\varepsilon _n} v_0\Vert ^2_{H^{s'}}\nonumber \\&\quad \le C\mathbb E\Vert u_0-v_0\Vert ^2_{H^s} +C\frac{1}{\varepsilon _n^2}\mathbb E\Vert u_0-v_0\Vert ^2_{H^{s'}} +C\mathbb E\Vert u_0-v_0\Vert ^2_{H^{s'}}, \end{aligned}$$
(3.28)

where \(C=C\left( \Vert u_0\Vert _{L^\infty ({\Omega };H^s)},T\right) \) as before. Fix \(n=n_0\gg 1\) such that (3.24) and (3.26) are satisfied, i.e.,

$$\begin{aligned} \left\{ \begin{aligned}&\mathbb E\sup _{t\in [0,\tau ^f]}\Vert f-\textbf{1}_{{\Omega }_{n_0}}f_{\varepsilon _{n_0}}\Vert ^2_{H^s}<\frac{\epsilon }{9},\ \ f\in \{u,v\},\\&\mathbb E\sup _{t\in [\xi ^u_{\varepsilon _{n_0}}\wedge \xi ^v_{\varepsilon _{n_0}}\wedge \tau ^u\wedge \tau ^v,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_{n_0}}u_{\varepsilon _{n_0}}-\textbf{1}_{{\Omega }_{n_0}}v_{\varepsilon _{n_0}}\Vert ^2_{H^s} < \frac{\epsilon }{18}. \end{aligned}\right. \end{aligned}$$
(3.29)

Then, for (3.28) with \(n=n_0\), we can find a \(\delta =\delta (\epsilon ,u_0,T)\in (0,1)\) such that (3.19) is satisfied and

$$\begin{aligned}&\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _{n_0}}\wedge \tau ^{v,T}_{\varepsilon _{n_0}}]}\Vert u_{\varepsilon _{n_0}}(t)-v_{\varepsilon _{n_0}}(t)\Vert ^2_{H^s}<\frac{\epsilon }{18},\ \ \text {if}\ \ \Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}<\delta . \end{aligned}$$
(3.30)

As a result, for (3.25) with fixed \(n=n_0\), we use (3.29)\(_2\) and (3.30) to derive that

$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]}\Vert \textbf{1}_{{\Omega }_{n_0}}u_{\varepsilon _{n_0}}-\textbf{1}_{{\Omega }_{n_0}}v_{\varepsilon _{n_0}}\Vert ^2_{H^s}&\quad \le \frac{\epsilon }{18}+\frac{\epsilon }{18}=\ \frac{\epsilon }{9},\ \ \text {if}\ \ \Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}<\delta . \end{aligned}$$

This inequality and (3.29)\(_1\) yield that

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]}\Vert u-v\Vert ^2_{H^s}\\&\quad \le 3\sum _{f\in \{u,v\}}\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]}\Vert f-\textbf{1}_{{\Omega }_{n_0}}f_{\varepsilon _{n_0}}\Vert ^2_{H^s}\\&\qquad +3\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]}\Vert \textbf{1}_{{\Omega }_{n_0}}u_{\varepsilon _{n_0}}-\textbf{1}_{{\Omega }_{n_0}}v_{\varepsilon _{n_0}}\Vert ^2_{H^s}\\&\quad \le \frac{\epsilon }{3}+\frac{\epsilon }{3}+\frac{\epsilon }{3}=\epsilon ,\ \ \text {if}\ \ \Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}<\delta . \end{aligned}$$

Hence we obtain (1.14) with \(\tau =\tau ^u\wedge \tau ^v\). Due to (3.21), \(\tau \in (0,T]\) almost surely.

Remark 3.2

Here we remark that the restriction \(\textbf{1}_{{\Omega }_{n}}\) is needed to estimate

$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ^f]}\Vert f-\textbf{1}_{{\Omega }_{n}}f_{\varepsilon _{n}}\Vert ^2_{H^s} \end{aligned}$$

for \(f\in \{u,v\}\). This is because we only have \(\lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ^f]}\Vert f-f_{\varepsilon _n}\Vert _{H^s}=0\) \({\mathbb P}\text {-}\mathrm{a.s.}\) (cf. (b) in Lemma 3.4), and we need to interchange limit and expectation. By (c) in Lemma 3.4,

$$\begin{aligned} \sup _{t\in [0,\tau ^f]}\Vert f-\textbf{1}_{{\Omega }_{n}}f_{\varepsilon _{n}}\Vert ^2_{H^s}\le 2\sup _{t\in [0,\tau ^f]}\Vert f\Vert ^2_{H^s}+2\textbf{1}_{{\Omega }_{n}}\sup _{t\in [0,\tau ^f]}\Vert f_{\varepsilon _{n}}\Vert ^2_{H^s}\le 4\Vert f_0\Vert ^2_{H^s}+16. \end{aligned}$$

Hence Lebesgue’s dominated convergence theorem can be used. In the deterministic case, one can directly consider \(\Vert f-f_{\varepsilon _{n}}\Vert ^2_{H^s}.\)

4 Weak instability

Now we prove Theorem 1.2. As mentioned in Remark 1.3, since we cannot obtain an explicit expression for the solution to (1.4), we start by constructing approximative solutions from which (1.19) can be established.

4.1 Approximative solutions and actual solutions

Following the approach in [28, 46], we now construct the approximative solutions. We fix two functions \(\phi ,\tilde{\phi }\in C_c^{\infty }\) such that

$$\begin{aligned} \phi (x)=\left\{ \begin{aligned}&1,\ \text {if}\ |x|<1,\\&0,\ \text {if}\ |x|\ge 2, \end{aligned}\right. \ \ \text {and} \ \ \tilde{\phi }(x)=1\ \text {if}\ x\in \textrm{supp}\ \phi . \end{aligned}$$
(4.1)

Let \(k\ge 1\) and

$$\begin{aligned} m\in \{-1,1\}\ \text {if}\ k \ \text {is odd and} \ m\in \{0,1\}\ \text {if}\ k \ \text {is even}. \end{aligned}$$
(4.2)

Then we consider the following sequence of approximative solutions

$$\begin{aligned} u_{m,n}=u_l+u_h, \end{aligned}$$
(4.3)

where \(u_h= u_{h,m,n}\) is the high-frequency part defined by

$$\begin{aligned} u_h = u_{h,m,n}(t,x) = n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt),\ \ n\in \mathbb N, \end{aligned}$$
(4.4)

and \(u_l=u_{l,m,n}\) is the low-frequency part constructed such that \(u_l\) is the solution to the following problem:

$$\begin{aligned} \left\{ \begin{aligned}&\partial _tu_l+u_l^k\partial _xu_l+F(u_l)=0,\quad x\in \mathbb R, \ t>0,\ k\ge 1,\\&u_l(0,x)=mn^{-\frac{1}{k}}\tilde{\phi }\left( \frac{x}{n^{\delta }}\right) ,\quad x\in \mathbb R. \end{aligned} \right. \end{aligned}$$
(4.5)

The parameter \(\delta >0\) in (4.4) and (4.5) will be determined later, depending on \(k\ge 1\). In particular, when \(m=0\), we have \(u_l=0\). In this case the approximative solution \(u_{0,n}\) has no low-frequency part and

$$\begin{aligned} u_{0,n}(t,x)=n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx). \end{aligned}$$

Next, we consider the problem (1.4) with initial data \(u_{m,n}(0,x)\), i.e.,

$$\begin{aligned} \left\{ \begin{aligned}&\, \textrm{d}u+ [u^k\partial _xu+F(u)]\, \textrm{d}t=h(t,u) \, \textrm{d}\mathcal W,\ \ t>0,\ x\in \mathbb R,\\&u(0,x)=mn^{-\frac{1}{k}}\tilde{\phi }\left( \frac{x}{n^{\delta }}\right) +n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx),\ \ x\in \mathbb R, \end{aligned} \right. \end{aligned}$$
(4.6)

where \(F(\cdot )\) is defined by (1.5). Since h satisfies \(H_2(1)\), similar to the proof for Theorem 1.1, we see that for each fixed \(n\in \mathbb N\), (4.6) has a unique solution \((u^{m,n},\tau ^{m,n})\) such that \(u^{m,n}\in C\left( [0,\tau ^{m,n}];H^s\right) \) \({\mathbb P}\text {-}\mathrm{a.s.}\) with \(s>5/2\).

4.2 Estimates on the errors

Substituting (4.3) into (1.4), we define the error \(\mathcal E(\omega ,t,x)\) as

$$\begin{aligned} \mathcal E(\omega ,t,x)&:=\, u_{m,n}(t,x)-u_{m,n}(0,x)\\&\quad + \int _0^t\left[ u_{m,n}^k\partial _xu_{m,n}+F(u_{m,n})\right] \, \textrm{d}t'-\int _0^th(t',u_{m,n}) \, \textrm{d}\mathcal W\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$

For simplicity, we let

$$\begin{aligned} \mathcal Z_{q}=\mathcal Z_q(u_h,u_l)=\left\{ \begin{aligned}&\sum _{j=1}^{q}C_{q}^{j}u_l^{q-j}u_h^j,\ \text {if}\ q\ge 1,\\&0,\ \text {if}\ q=0, \end{aligned}\right. \end{aligned}$$
(4.7)

where \(C_{q}^{j}\) is the binomial coefficient. By using (4.3), (4.5) and (4.7), \(\mathcal E(\omega ,t,x)\) can be reformulated as

$$\begin{aligned}&\mathcal E(\omega ,t,x)\nonumber \\&\quad =\, u_l(t,x)-u_l(0,x)+\int _0^tu_{l}^k\partial _xu_l\, \textrm{d}t'+\int _0^tF(u_l)\, \textrm{d}t'\nonumber \\&\quad \quad +u_h(t,x)-u_h(0,x)+\int _0^t\left[ u_l^k\partial _xu_h+\mathcal Z_k(\partial _xu_l+\partial _xu_h)\right] \, \textrm{d}t'\nonumber \\&\quad \quad +\int _0^t\left[ F(u_l+u_h)-F(u_l)\right] \, \textrm{d}t'-\int _0^th(t',u_{m,n}) \, \textrm{d}\mathcal W\nonumber \\&\quad =\, u_h(t,x)-u_h(0,x)+\int _0^t\left[ u_l^k\partial _xu_h+\mathcal Z_k(\partial _xu_l+\partial _xu_h)\right] \, \textrm{d}t'\nonumber \\&\quad \quad +\int _0^t\left[ F(u_l+u_h)-F(u_l)\right] \, \textrm{d}t'-\int _0^th(t',u_{m,n}) \, \textrm{d}\mathcal W\ \ {\mathbb P}\text {-}\mathrm{a.s.} \end{aligned}$$
(4.8)

4.2.1 Estimates on the low-frequency part

The following lemma gives a decay estimate for the low-frequency part of \(u_{m,n}\), that is, \(u_l\).

Lemma 4.1

Let \(k\ge 1\), \(|m|=1\) or \(m=0\), \(s>3/2\), \(\delta \in (0,2/k)\) and \(n\gg 1\). Then there is a \(T_l>0\), independent of n, such that for all \(n\gg 1\) the initial value problem (4.5) has a unique smooth solution \(u_l=u_{l,m,n}\in C([0,T_l];H^s)\). Besides, for all \(r>0\), there is a constant \(C=C_{r,\tilde{\phi },T_l}>0\) such that \(u_l\) satisfies

$$\begin{aligned} \Vert u_l(t)\Vert _{H^r}\le C|m|n^{\frac{\delta }{2}-\frac{1}{k}},\ \ t\in [0,T_l]. \end{aligned}$$
(4.9)

Proof

When \(m=0\), as mentioned above, \(u_l\equiv 0\) for all \(t\ge 0\). It remains to prove the case \(|m|=1\). For any fixed \(n\ge 1\), since \(u_l(0,x)\in H^\infty \), applying Theorem 1.1 with \(h=0\) and deterministic initial data shows that for any \(s>3/2\), (4.5) has a unique (deterministic) solution \(u_l=u_{l,m,n}\in C\left( [0,T_{m,n}];H^s\right) \). In contrast to the stochastic case, here we will show that there is a lower bound on the existence time, i.e., there is a \(T_l>0\) such that for all \(n\gg 1\), \(u_l=u_{l,m,n}\) exists on \([0,T_l]\) and satisfies (4.9).

Step 1: Estimate \(\Vert u_l(0,x)\Vert _{H^r}\). When \(n\gg 1\), we have

$$\begin{aligned} \Vert u_l(0,x)\Vert ^2_{H^r} =\,&m^2n^{2\delta -\frac{2}{k}} \int _\mathbb R(1+|\xi |^2)^r\left| \widehat{\tilde{\phi }}(n^{\delta }\xi )\right| ^2\, \textrm{d}\xi \\ =\,&m^2n^{\delta -\frac{2}{k}} \int _\mathbb R\left( 1+\left| \frac{z}{n^{\delta }}\right| ^2\right) ^r\left| \widehat{\tilde{\phi }}(z)\right| ^2\, \textrm{d}z \le C m^2n^{\delta -\frac{2}{k}} \end{aligned}$$

for some constant \(C=C_{r,\tilde{\phi }}>0\). As a result, we have

$$\begin{aligned} \Vert u_l(0,x)\Vert _{H^r} \le C |m|n^{\frac{\delta }{2}-\frac{1}{k}}. \end{aligned}$$

Step 2: Prove (4.9) for \(r>3/2\). In this case, we apply Lemmas A.3 and A.5 together with the embedding \(H^r\hookrightarrow W^{1,\infty }\) to find

$$\begin{aligned} \,&\frac{1}{2}\frac{\, \textrm{d}}{\, \textrm{d}t}\Vert u_l\Vert ^2_{H^r}\\&\quad \le \, \left| \left( D^{r}u_l,D^{r}\left( u_l^k\partial _xu_l\right) \right) _{L^2}\right| +\left| \left( D^{r}u_l,D^{r}F(u_l)\right) _{L^2}\right| \\&\quad \le \, \left| \left( [D^{r},u_l^k]\partial _xu_l,D^{r}u_l\right) _{L^2}\right| +\left| \left( u_l^kD^{r}\partial _xu_l,D^{r}u_l\right) _{L^2}\right| +\left\| u_l\right\| _{H^r}\left\| F(u_l)\right\| _{H^r}\\&\quad \lesssim \, \Vert u_l^k\Vert _{H^r}\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{H^r} +\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{L^\infty }^{k-1}\Vert u_l\Vert _{H^r}^2 +\Vert u_l\Vert ^{k}_{W^{1,\infty }}\Vert u_l\Vert ^2_{H^r}\\&\quad \le \, C\Vert u_l\Vert ^{k+2}_{H^r},\ \ C=C_r>0. \end{aligned}$$

Solving the above inequality gives

$$\begin{aligned} \Vert u_l\Vert _{H^r}\le \frac{\Vert u_l(0)\Vert _{H^r}}{\left( 1-Ckt\Vert u_l(0)\Vert ^k_{H^r}\right) ^{\frac{1}{k}}},\ \ 0\le t<\frac{1}{Ck\Vert u_l(0)\Vert ^k_{H^r}}. \end{aligned}$$
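For the reader's convenience, we sketch how the preceding Bernoulli-type inequality is integrated: setting \(y(t)=\Vert u_l(t)\Vert _{H^r}\), the previous estimate reads \(y'\le Cy^{k+1}\), whence

$$\begin{aligned} \frac{\, \textrm{d}}{\, \textrm{d}t}\left( y^{-k}\right) =-ky^{-k-1}y'\ge -Ck \quad \Longrightarrow \quad y^{-k}(t)\ge y^{-k}(0)-Ckt, \end{aligned}$$

which yields the stated bound for \(0\le t<\frac{1}{Ck\,y^{k}(0)}\).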

Therefore, we arrive at

$$\begin{aligned} \Vert u_l\Vert _{H^r}\le 2 \Vert u_l(0)\Vert _{H^r},\ \ t\in [0,T_{m,n}],\ \ T_{m,n}=\frac{1}{2Ck\Vert u_l(0)\Vert ^k_{H^r}}. \end{aligned}$$
(4.10)

By Step 1, we have \(T_{m,n}\gtrsim \frac{1}{2Ckn^{k(\frac{\delta }{2}-\frac{1}{k})}}\rightarrow \infty ,\ \text {as}\ n\rightarrow \infty \). Consequently, we can find a common time interval \([0,T_{l}]\) such that

$$\begin{aligned} \Vert u_l\Vert _{H^r}\le 2 \Vert u_l(0)\Vert _{H^r}\le C|m|n^{\frac{\delta }{2}-\frac{1}{k}},\ \ t\in [0,T_l], \end{aligned}$$

which is (4.9).

Step 3: Prove (4.9) for \(0<r\le 3/2\). Similarly, by applying Lemmas A.3 and A.5, we have

$$\begin{aligned} \,&\frac{1}{2}\frac{\, \textrm{d}}{\, \textrm{d}t}\Vert u_l\Vert ^2_{H^r}\\&\quad \le \, \left| \left( D^{r}u_l,D^{r}\left( u_l^k\partial _xu_l\right) \right) _{L^2}\right| +\left| \left( D^{r}u_l,D^{r}F(u_l)\right) _{L^2}\right| \\&\quad \le \, \left| \left( [D^{r},u_l^k]\partial _xu_l,D^{r}u_l\right) _{L^2}\right| +\left| \left( u_l^kD^{r}\partial _xu_l,D^{r}u_l\right) _{L^2}\right| +\left\| u_l\right\| _{H^r}\left\| F(u_l)\right\| _{H^r}\\&\quad \lesssim \, \Vert u_l^k\Vert _{H^r}\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{H^r} +\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{L^\infty }^{k-1}\Vert u_l\Vert _{H^r}^2\\&\qquad +\left\| u_l\right\| _{H^r}\Vert u_l\Vert ^{k}_{W^{1,\infty }}\left( \Vert u_l\Vert _{H^r}+\Vert \partial _x u_l\Vert _{H^{r}}\right) . \end{aligned}$$

It follows from the embedding \(H^{r+\frac{3}{2}}\hookrightarrow H^{r+1}\) and \(H^{r+\frac{3}{2}}\hookrightarrow W^{1,\infty }\) that

$$\begin{aligned} \frac{1}{2}\frac{\, \textrm{d}}{\, \textrm{d}t}\Vert u_l\Vert ^2_{H^r}\lesssim \,&\Vert u_l^k\Vert _{H^r}\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{H^r} +\Vert \partial _xu_l\Vert _{L^{\infty }}\Vert u_l\Vert _{L^\infty }^{k-1}\Vert u_l\Vert _{H^r}^2\\&+\left\| u_l\right\| _{H^r}\Vert u_l\Vert ^{k}_{W^{1,\infty }}\left( \Vert u_l\Vert _{H^r}+\Vert \partial _x u_l\Vert _{H^{r}}\right) \\ \lesssim \,&\Vert u_l\Vert ^k_{W^{1,\infty }}\Vert u_l\Vert ^2_{H^r} +\Vert u_l\Vert ^{k}_{W^{1,\infty }}\left\| u_l\right\| _{H^r}\left\| u_l\right\| _{H^{r+1}}\\ \lesssim \,&\Vert u_l\Vert _{H^{r+\frac{3}{2}}}^{k}\Vert u_l\Vert _{H^{r}}^2+\Vert u_l\Vert _{H^{r}}\Vert u_l\Vert _{H^{r+\frac{3}{2}}}^{k+1}. \end{aligned}$$

Using the conclusion of Step 2 for \(r+\frac{3}{2}>\frac{3}{2}\), we have

$$\begin{aligned} \frac{\, \textrm{d}}{\, \textrm{d}t}\Vert u_l\Vert _{H^r}\lesssim \Vert u_l\Vert _{H^{r}}\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k}+\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k+1},\ \ t\in [0,T_l], \end{aligned}$$

and hence

$$\begin{aligned} \Vert u_l(t)\Vert _{H^r}\lesssim \Vert u_l(0)\Vert _{H^r}+\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k+1}T_l+\int _0^t\Vert u_l\Vert _{H^{r}}\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k}\, \textrm{d}t',\ \ t\in [0,T_l]. \end{aligned}$$

Applying Grönwall’s inequality to the above inequality, we have

$$\begin{aligned} \Vert u_l\Vert _{H^r}\lesssim \left( \Vert u_l(0)\Vert _{H^r}+\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k+1}T_l\right) \exp \left\{ \Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k}T_l\right\} ,\ \ t\in [0,T_l]. \end{aligned}$$

Since \(\delta \in (0,2/k)\), we can infer from Step 1 that \(\exp \left\{ \Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k}T_l\right\} <C(T_l)\) for some constant \(C(T_l)>0\) and, for \(n\gg 1\), \(\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k+1}\le \Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}\le C|m|n^{\frac{\delta }{2}-\frac{1}{k}}\). Hence we see that there is a constant \(C=C_{r,\tilde{\phi },T_l}>0\) such that

$$\begin{aligned} \Vert u_l\Vert _{H^r}\le C|m|n^{\frac{\delta }{2}-\frac{1}{k}},\ \ t\in [0,T_l], \end{aligned}$$

which is (4.9). \(\square \)

Recall the approximative solution defined by (4.3). The above result shows that the \(H^s\)-norm of the low-frequency part \(u_l\) decays as \(n\rightarrow \infty \). For the high-frequency part \(u_h\), by Lemma A.8, its \(H^s\)-norm is bounded.

4.2.2 Estimate on \(\mathcal E\)

Recall the error \(\mathcal E\) given in (4.8). By using (4.1) and (4.2), we have \(m=m^k\) and \(\phi =\tilde{\phi }^k\phi \) for all \(k\ge 1\). Then by (4.4) and \(u_l(0,x)\) in (4.5), we see that as long as \(m\ne 0\),

$$\begin{aligned}&\,u_{h}(t,x)-u_{h}(0,x)\nonumber \\&\quad =\, n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt)-n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx)\\&\quad =\, m^{-1}m^k\tilde{\phi }^k\left( \frac{x}{n^{\delta }}\right) n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt)\\&\qquad -m^{-1}m^k \tilde{\phi }^k\left( \frac{x}{n^{\delta }}\right) n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx)\nonumber \\&\quad =\, m^{-1}u_l^k(0,x)n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt)\\&\quad \quad -m^{-1}u_l^k(0,x)n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx)\\&\quad =\, \int _0^tu_l^k(0,x)n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt')\, \textrm{d}t'. \end{aligned}$$

If \(m=0\), then \(u_l=0\) and we also have

$$\begin{aligned} u_{h}(t,x)-u_{h}(0,x)=\int _0^t 0\ \, \textrm{d}t' =\int _0^tu_l^k(0,x)n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt')\, \textrm{d}t'. \end{aligned}$$

To sum up, we find that for all \(k\ge 1\), m satisfying (4.2), \(u_h\) given by (4.4) and \(u_l(0,x)\) in (4.5),

$$\begin{aligned} u_{h}(t,x)-u_{h}(0,x) =\int _0^tu_l^k(0,x)n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt')\, \textrm{d}t'. \end{aligned}$$
(4.11)

On the other hand, for all \(k\ge 1\),

$$\begin{aligned} \int _0^tu_{l}^k\partial _xu_h\, \textrm{d}t'=\,&-\int _0^tu_{l}^k(t')n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt')\, \textrm{d}t'\nonumber \\&+\int _0^tu_{l}^k(t')n^{-\frac{3\delta }{2}-s}\partial _x\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt')\, \textrm{d}t'. \end{aligned}$$
(4.12)
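Observe that the leading terms of order \(n^{1-\frac{\delta }{2}-s}\) in (4.11) and (4.12) do not cancel exactly; their sum leaves the difference

$$\begin{aligned} \left[ u_l^k(0,x)-u_{l}^k(t',x)\right] n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt'), \end{aligned}$$

which appears as the first term of \(E_1\) in (4.13) below.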

Substituting (4.11), (4.12) and (1.5) into (4.8) yields

$$\begin{aligned} \mathcal E(\omega ,t,x) =\,&\sum _{i=1}^4\int _0^tE_i\, \textrm{d}t'-\int _0^th(t',u_{m,n}) \, \textrm{d}\mathcal W,\ \ k\ge 1\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$
(4.13)

where

$$\begin{aligned} E_1:=\,&[u_{l}^k(0)-u_{l}^k(t)]n^{1-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-mt)\nonumber \\&+u_{l}^k(t)n^{-\frac{3\delta }{2}-s}\partial _x\phi \left( \frac{x}{n^{\delta }}\right) \cos (nx-mt) +\mathcal Z_k(\partial _xu_l+\partial _xu_h),\\ E_2:=\,&F_1(u_l+u_h)-F_1(u_l)=D^{-2}\partial _x\mathcal Z_{k+1},\\ E_3:=\,&F_2(u_l+u_h)-F_2(u_l)\nonumber \\ =\,&\frac{2k-1}{2}D^{-2}\partial _x\Big \{u_l^{k-1} \left[ 2(\partial _xu_l)(\partial _xu_h)+(\partial _xu_h)^2\right] +\mathcal Z_{k-1}(\partial _xu_l+\partial _xu_h)^2\Big \},\\ E_4:=\,&F_3(u_l+u_h)-F_3(u_l)\nonumber \\ =\,&\frac{k-1}{2}D^{-2}\Big \{u_l^{k-2}\big [3(\partial _xu_l)^2(\partial _xu_h)+3(\partial _xu_l)(\partial _xu_h)^2+(\partial _xu_h)^3\big ]\nonumber \\&+\mathcal Z_{k-2}(\partial _xu_l+\partial _xu_h)^3\Big \}. \end{aligned}$$

We remark here that \(E_4\) disappears when \(k=1\). Recalling \(\rho _0\in (1/2,1)\) in Hypothesis \(H_2\), we now estimate the \(H^{\rho _0}\)-norm of the error \(\mathcal E\). In fact, we will show that the \(H^{\rho _0}\)-norm of \(\mathcal E\) decays as \(n\rightarrow \infty \).

Lemma 4.2

Let \(T_l>0\) be given in Lemma 4.1, and \(\rho _0\in (1/2,1)\) be given in \(H_2\). Let \(n\gg 1\), \(s>5/2\). Let

$$\begin{aligned} \left\{ \begin{aligned}&\frac{2}{3}<\delta<1,\ \text {when}\ k=1,\\&\frac{2}{k}-\frac{2}{2k-1}<\delta <\frac{1}{k},\ \text {when}\ k\ge 2, \end{aligned}\right. \end{aligned}$$
(4.14)

and

$$\begin{aligned} 0>r_s=-s-1+\rho _0+k\delta ,\ \ k\ge 1. \end{aligned}$$
(4.15)
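We note in passing that the interval in (4.14) is nonempty for every \(k\ge 2\), since

$$\begin{aligned} \frac{2}{k}-\frac{2}{2k-1}<\frac{1}{k} \iff \frac{1}{k}<\frac{2}{2k-1} \iff 2k-1<2k. \end{aligned}$$

Moreover, (4.14) forces \(k\delta <1\) in all cases, so that \(r_s=-s-1+\rho _0+k\delta <\rho _0-s<0\) holds automatically for \(\rho _0<1<s\).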

Then the error \(\mathcal E\) given by (4.13) satisfies that for some \(C=C(T_l)>0\),

$$\begin{aligned} \mathbb E\sup _{t\in [0,T_l]}\Vert \mathcal E(t)\Vert _{H^{\rho _0}}^2\le Cn^{2r_s}. \end{aligned}$$

Proof

The proof is technical and it is given in Appendix B. \(\square \)

4.2.3 Estimate on \(u_{m,n}-u^{m,n}\)

Recall the approximative solutions \(u_{m,n}\) given by (4.3). Then we have the following estimates on the difference between the actual solutions and the approximative solutions.

Lemma 4.3

Let \(k\ge 1\), \(s>5/2\) and \(\rho _0\) be given in \(H_2\). Let (4.14) hold true and \(r_s<0\) be given in (4.15). For any \(R>1\), we define

$$\begin{aligned} \tau ^{m,n}_R:=\inf \{t>0:\Vert u^{m,n}\Vert _{H^{s}}>R\}. \end{aligned}$$
(4.16)

Then for \(n\gg 1\),

$$\begin{aligned}&\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert (u_{m,n}-u^{m,n})(t)\Vert _{H^{\rho _0}}^2 \le Cn^{2r_s}, \end{aligned}$$
(4.17)
$$\begin{aligned}&\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert (u_{m,n}-u^{m,n})(t)\Vert _{H^{2s-\rho _0}}^2 \le Cn^{2s-2\rho _0}, \end{aligned}$$
(4.18)

where \(T_l>0\) is given in Lemma 4.1 and \(C=C(R,T_l)>0\).

Proof

Let \(v=v_{m,n}=u_{m,n}-u^{m,n}\). Then v satisfies \(v(0)=0\) and

$$\begin{aligned} v(t)+\int _0^t\left( \frac{1}{k+1}\partial _x(Pv)+F(u_{m,n})-F(u^{m,n})\right)&\, \textrm{d}t'\\ =-\int _0^t h(t',u^{m,n})&\, \textrm{d}\mathcal W+\sum _{i=1}^4\int _0^tE_i\, \textrm{d}t', \end{aligned}$$

where

$$\begin{aligned} P=P_{m,n}=u_{m,n}^k+u_{m,n}^{k-1}u^{m,n}+\cdots +u_{m,n}(u^{m,n})^{k-1}+(u^{m,n})^{k},\quad k\ge 1. \end{aligned}$$

On \([0,T_l]\), by the Itô formula, we have that

$$\begin{aligned} \Vert v(t)\Vert _{H^{\rho _0}}^2=\,&-2\int _0^t\left( h(t',u^{m,n}) \, \textrm{d}\mathcal W,v\right) _{H^{\rho _0}}+2\sum _{i=1}^4\int _0^t(E_i,v)_{H^{\rho _0}}\, \textrm{d}t'\\&-\frac{2}{k+1}\int _0^t(\partial _x(Pv),v)_{H^{\rho _0}}\, \textrm{d}t' -2\int _0^t([F(u_{m,n})-F(u^{m,n})],v)_{H^{\rho _0}}\, \textrm{d}t'\\&+\int _0^t\Vert h(t',u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2\, \textrm{d}t'. \end{aligned}$$

Taking the supremum with respect to \(t\in [0,T_l\wedge \tau ^{m,n}_R]\) and then using the BDG inequality yields

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert v(t)\Vert _{H^{\rho _0}}^2\\&\quad \le \, C\mathbb E\left( \int _0^{T_l\wedge \tau ^{m,n}_R}\Vert v\Vert _{H^{\rho _0}}^2\Vert h(t,u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2\, \textrm{d}t\right) ^{1/2}\\&\quad \quad +2\sum _{i=1}^4\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\left| (E_i,v)_{H^{\rho _0}}\right| \, \textrm{d}t\\&\quad \quad +\frac{2}{k+1}\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\left| (\partial _x(Pv),v)_{H^{\rho _0}}\right| \, \textrm{d}t\\&\quad \quad +2\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\big |\big (F(u_{m,n})-F(u^{m,n}),v\big )_{H^{\rho _0}}\big | \, \textrm{d}t\\&\quad \quad +\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\Vert h(t,u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2\, \textrm{d}t. \end{aligned}$$

It follows from Lemmas A.8 and 4.1 that \(\Vert u_{m,n}\Vert _{H^{s}}\lesssim 1\) on \([0,\tau ^{m,n}_R\wedge T_l]\). Hence we can infer from Hypothesis \(H_2\) that

$$\begin{aligned} \Vert h(t,u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2 \lesssim \,&\Vert h(t,u_{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2+\Vert h(t,u_{m,n})-h(t,u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2\\ \lesssim \,&\left( \textrm{e}^\frac{-1}{\Vert u_{m,n}\Vert _{H^{\rho _0}}}\right) ^2 +g^2_3(CR)\Vert v\Vert _{H^{\rho _0}}^2,\ \ t\in [0,\tau ^{m,n}_R\wedge T_l]\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$

where \(g_3(\cdot )\) is given in \(H_2(2)\). As a result, for any fixed \(s>5/2\), by applying Lemmas A.8 and 4.1 again, we can pick \(N=N(s,k)\gg 1\) to derive

$$\begin{aligned} \Vert h(t,u^{m,n})\Vert _{\mathcal L_2(\mathfrak U;H^{\rho _0})}^2 \lesssim \,&\Vert u_{m,n}\Vert ^{2N}_{H^{\rho _0}} +g^2_3(CR) \Vert v\Vert _{H^{\rho _0}}^2\\ \lesssim \,&\left( n^{N(-s+\rho _0)}+n^{N(\frac{\delta }{2}-\frac{1}{k})}\right) ^2 +g^2_3(CR) \Vert v\Vert _{H^{\rho _0}}^2\\ \lesssim \,&n^{2r_s}+g^2_3(CR) \Vert v\Vert _{H^{\rho _0}}^2,\ \ t\in [0,\tau ^{m,n}_R\wedge T_l]\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
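Such a choice of \(N\) is possible because both exponents appearing above are negative: \(\rho _0-s<0\), and \(\frac{\delta }{2}-\frac{1}{k}<0\) by (4.14). Since \(r_s<0\) as well, it suffices to take

$$\begin{aligned} N\ge \max \left\{ \frac{r_s}{\rho _0-s},\ \frac{r_s}{\frac{\delta }{2}-\frac{1}{k}}\right\} , \end{aligned}$$

so that \(N(\rho _0-s)\le r_s\) and \(N\left( \frac{\delta }{2}-\frac{1}{k}\right) \le r_s\).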

Consequently, we can infer from the above inequalities that

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert v(t)\Vert _{H^{\rho _0}}^2\\&\quad \le \, \frac{1}{2}\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert v(t)\Vert _{H^{\rho _0}}^2+2\sum _{i=1}^4\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\left| (E_i,v)_{H^{\rho _0}}\right| \, \textrm{d}t\\&\quad \quad +\frac{2}{k+1}\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R}\left| (\partial _x(Pv),v)_{H^{\rho _0}}\right| \, \textrm{d}t\\&\quad \quad +2\mathbb E\int _0^{T_l\wedge \tau ^{m,n}_R} \left| \big (F(u_{m,n})-F(u^{m,n}),v\big )_{H^{\rho _0}}\right| \, \textrm{d}t\\&\quad \quad +C(T_l)n^{2r_s}+C_R\int _0^{T_l}\mathbb E\sup _{t'\in [0,t\wedge \tau ^{m,n}_R]}\Vert v(t')\Vert _{H^{\rho _0}}^2\, \textrm{d}t. \end{aligned}$$

Via Lemma 4.2, we have

$$\begin{aligned} 2\sum _{i=1}^4\left| (E_i,v)_{H^{\rho _0}}\right| \le \,&2\sum _{i=1}^4\Vert E_i\Vert _{H^{\rho _0}}\Vert v\Vert _{H^{\rho _0}}\\ \lesssim \,&\sum _{i=1}^4\Vert E_i\Vert _{H^{\rho _0}}^2+\Vert v\Vert _{H^{\rho _0}}^2\lesssim C(T_l)n^{2r_s}+\Vert v\Vert _{H^{\rho _0}}^2. \end{aligned}$$

Using Lemma A.4 and integration by parts, we obtain that

$$\begin{aligned} \,&\left| (D^{\rho _0}\partial _x(Pv),D^{\rho _0}v)_{L^2}\right| \\&\quad =\, \left| ([D^{\rho _0}\partial _x,P]v,D^{\rho _0}v)_{L^2} +(PD^{\rho _0}\partial _xv,D^{\rho _0}v)_{L^2}\right| \\&\quad \lesssim \, \Vert P\Vert _{H^s}\Vert v\Vert _{H^{\rho _0}}^2+\Vert P_x\Vert _{L^{\infty }}\Vert v\Vert _{H^{\rho _0}}^2 \lesssim (\Vert u^{m,n}\Vert _{H^s}+\Vert u_{m,n}\Vert _{H^s})^k\Vert v\Vert _{H^{\rho _0}}^2. \end{aligned}$$

Then, we use Lemma A.5 to find that

$$\begin{aligned} \left| \big (F(u_{m,n})-F(u^{m,n}),v\big )_{H^{\rho _0}}\right| \lesssim \,&\Vert F(u_{m,n})-F(u^{m,n})\Vert _{H^{\rho _0}}\Vert v\Vert _{H^{\rho _0}}\\ \lesssim \,&\Vert F(u_{m,n})-F(u^{m,n})\Vert _{H^{\rho _0}}^2+\Vert v\Vert _{H^{\rho _0}}^2\\ \lesssim \,&(\Vert u^{m,n}\Vert _{H^{s}}+\Vert u_{m,n}\Vert _{H^{s}})^{2k}\Vert v\Vert _{H^{\rho _0}}^2+\Vert v\Vert _{H^{\rho _0}}^2. \end{aligned}$$

To sum up, by (4.16), Lemmas 4.1 and A.8, we arrive at

$$\begin{aligned} \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert v(t)\Vert _{H^{\rho _0}}^2 \le C(T_l)n^{2r_s}+C_R\int _0^{T_l}\mathbb E\sup _{t'\in [0,t\wedge \tau ^{m,n}_R]}\Vert v(t')\Vert _{H^{\rho _0}}^2\, \textrm{d}t. \end{aligned}$$

Using the Grönwall inequality, we obtain (4.17).

Now we prove (4.18). Since \(2s-\rho _0>s>\frac{5}{2}\) and \(u^{m,n}\) is the unique solution to (4.6), similar to (2.5), we can use (4.16) and \(H_2(1)\) to find for each fixed \(n\in \mathbb N\) that

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert u^{m,n}(t)\Vert _{H^{2s-\rho _0}}^2\\&\quad \le \, C\mathbb E\Vert u_{m,n}(0)\Vert _{H^{2s-\rho _0}}^2 +C_R\int _0^{T_l}\mathbb E\sup _{t'\in [0,t\wedge \tau ^{m,n}_R]}\Vert u^{m,n}(t')\Vert _{H^{2s-\rho _0}}^2\, \textrm{d}t. \end{aligned}$$

Using the Grönwall inequality and Lemmas 4.1 and A.8, we find a constant \(C=C(R,T_l)\) such that for all \(n\ge 1\),

$$\begin{aligned} \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert u^{m,n}(t)\Vert _{H^{2s-\rho _0}}^2 \le \,&C\mathbb E\Vert u_{m,n}(0)\Vert _{H^{2s-\rho _0}}^2\\ \le \,&C(n^{\frac{\delta }{2}-\frac{1}{k}}+n^{s-\rho _0})^2\le Cn^{2s-2\rho _0}. \end{aligned}$$

Hence, by Lemmas 4.1 and A.8 again, we arrive at

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert (u^{m,n}-u_{m,n})(t)\Vert _{H^{2s-\rho _0}}^2\\&\quad \le \, 2\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert u^{m,n}(t)\Vert _{H^{2s-\rho _0}}^2 +2\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_R]}\Vert u_{m,n}(t)\Vert _{H^{2s-\rho _0}}^2 \\&\quad \le \, Cn^{2s-2\rho _0},\ \ n\ge 1. \end{aligned}$$

The proof is therefore completed. \(\square \)

4.3 Completing the proof of Theorem 1.2

To begin with, we observe the following property:

Lemma 4.4

Let \(H_2(1)\) hold true. Suppose that for some \(R_0\gg 1\), the \(R_0\)-exiting time of the zero solution to (1.4) is strongly stable. Then we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\tau ^{m,n}_{R_0}=\infty \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(4.19)

Proof

By \(H_2(1)\), the unique solution to (1.4) with zero initial data is zero, and hence its \(R_0\)-exiting time is \(\infty \). On the other hand, we notice that for all \(s'<s\), \(\lim _{n\rightarrow \infty }\Vert u_{m,n}(0)\Vert _{H^{s'}}=\lim _{n\rightarrow \infty }\Vert u_{m,n}(0)-0\Vert _{H^{s'}}=0\). Since the \(R_0\)-exiting time of the zero solution to (1.4) is strongly stable, (4.19) follows. \(\square \)

Proof of Theorem 1.2. Our strategy is to show that if the \(R_0\)-exiting time is strongly stable at the zero solution for some \(R_0\gg 1\), then \(\{u^{-1,n}\}\) and \(\{u^{1,n}\}\) (if k is odd) or \(\{u^{0,n}\}\) and \(\{u^{1,n}\}\) (if k is even) are two sequences of solutions satisfying (1.16), (1.17), (1.18) and (1.19).

For each \(n>1\) and fixed \(R_0\gg 1\), Lemmas 4.1 and A.8 together with (4.16) give \(\mathbb P\{\tau ^{m,n}_{R_0}>0\}=1\), and Lemma 4.4 implies (1.16). Then, it follows from (4.16) that \(u^{m,n}\in C([0,\tau ^{m,n}_{R_0}];H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and (1.17) holds true. Next, we check (1.18). By interpolation, we have

$$\begin{aligned}&\,\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert _{H^{s}}\\&\quad \le \, C \left( \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert _{H^{\rho _0}}\right) ^{\frac{1}{2}} \left( \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert _{H^{2s-\rho _0}}\right) ^{\frac{1}{2}}\\&\quad \le \, C \left( \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert ^2_{H^{\rho _0}}\right) ^{\frac{1}{4}} \left( \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert ^2_{H^{2s-\rho _0}}\right) ^{\frac{1}{4}}. \end{aligned}$$
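Here the interpolation step uses that \(s\) is the midpoint of \(\rho _0\) and \(2s-\rho _0\): for \(w\in H^{2s-\rho _0}\), the Cauchy–Schwarz inequality on the Fourier side gives

$$\begin{aligned} \Vert w\Vert _{H^{s}}^2=\int _\mathbb R(1+|\xi |^2)^{\frac{\rho _0}{2}}(1+|\xi |^2)^{\frac{2s-\rho _0}{2}}|\hat{w}(\xi )|^2\, \textrm{d}\xi \le \Vert w\Vert _{H^{\rho _0}}\Vert w\Vert _{H^{2s-\rho _0}}, \end{aligned}$$

while the second inequality above is the Cauchy–Schwarz inequality in \(\omega \).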

For \(T_l>0\), combining Lemma 4.3 and the above estimate yields

$$\begin{aligned} \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]} \Vert u_{m, n}-u^{m, n}\Vert _{H^{s}} \le C(R_0,T_l) n^{\frac{1}{4}\cdot 2r_{s}}\cdot n^{\frac{1}{4}\cdot (2s-2\rho _0)}=C(R_0,T_l)n^{r'_s}, \end{aligned}$$

where \(r_s\) is defined by (4.15) and \( r'_s=r_s\cdot \frac{1}{2}+(s-\rho _0)\cdot \frac{1}{2}=\frac{k\delta -1}{2}<0 \) by (4.14). Consequently, we can deduce that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{m,n}_{R_0}]}\Vert u_{m, n}-u^{m, n}\Vert _{H^{s}} =0. \end{aligned}$$
(4.20)

When k is odd,

$$\begin{aligned} \Vert u^{-1,n}(0)-u^{1,n}(0)\Vert _{H^{s}} =\,&\Vert u_{-1,n}(0)-u_{1,n}(0)\Vert _{H^{s}}\\ =\,&2\Big \Vert n^{-\frac{1}{k}}\tilde{\phi }\left( \frac{x}{n^{\delta }}\right) \Big \Vert _{H^{s}}\rightarrow 0, {~\mathrm as~}n\rightarrow \infty . \end{aligned}$$

When k is even,

$$\begin{aligned} \Vert u^{0,n}(0)-u^{1,n}(0)\Vert _{H^{s}}=\,&\Vert u_{0,n}(0)-u_{1,n}(0)\Vert _{H^{s}}\\ =\,&\Big \Vert n^{-\frac{1}{k}}\tilde{\phi }\left( \frac{x}{n^{\delta }}\right) \Big \Vert _{H^{s}}\rightarrow 0, {~\mathrm as~}n\rightarrow \infty . \end{aligned}$$

The above two estimates imply that (1.18) holds true.

Now we prove (1.19). Let \(T_l>0\) be given in Lemma 4.1. When k is odd, we use (4.20) to derive

$$\begin{aligned}&\,\liminf _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u^{-1,n}(t)-u^{1,n}(t)\Vert _{H^s}\\&\quad \gtrsim \, \liminf _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert _{H^s}\\&\qquad -\lim _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u_{-1,n}(t)-u^{-1,n}(t)\Vert _{H^s}\\&\qquad -\lim _{n\rightarrow \infty }\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u_{1,n}(t)-u^{1,n}(t)\Vert _{H^s}\\&\quad \gtrsim \, \liminf _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert _{H^s}. \end{aligned}$$

It follows from the construction of \(u_{m,n}\), Fatou’s lemma, Lemmas 4.1, A.8 and 4.4 that

$$\begin{aligned}&\, \liminf _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert _{H^s}\nonumber \\&\quad =\, \liminf _{n\rightarrow \infty } \mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]} \Big \Vert -2n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx)\sin (t)+[u_{l,-1,n}(t)-u_{l,1,n}(t)]\Big \Vert _{H^s}\nonumber \\&\quad \gtrsim \, \liminf _{n\rightarrow \infty }\mathbb E\sup _{t\in [0,T_l\wedge \tau ^{-1,n}_{R_0}\wedge \tau ^{1,n}_{R_0}]}n^{-\frac{\delta }{2}-s}\Big \Vert \phi \left( \frac{x}{n^{\delta }}\right) \sin (nx)\Big \Vert _{H^s}|\sin t|-\liminf _{n\rightarrow \infty } n^{\frac{\delta }{2}-\frac{1}{k}}\nonumber \\&\quad \gtrsim \, \sup _{t\in [0,T_l]}|\sin t|, \end{aligned}$$
(4.21)

which is (1.19) in the case that k is odd. When k is even, one has

$$\begin{aligned} \Vert u_{0,n}(t)-u_{1,n}(t)\Vert _{H^s}&=\Big \Vert -2n^{-\frac{\delta }{2}-s}\phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-t/2)\sin (t/2)-u_{l,1,n}(t)\Big \Vert _{H^s}\nonumber \\&\gtrsim n^{-\frac{\delta }{2}-s}\Big \Vert \phi \left( \frac{x}{n^{\delta }}\right) \sin (nx-t/2)\Big \Vert _{H^s}|\sin (t/2)|-n^{\frac{\delta }{2}-\frac{1}{k}}. \end{aligned}$$

Similar to (4.21), we can also obtain (1.19) in the case that k is even. The proof is completed. \(\square \)

4.4 Example

Now we give an example of a noise structure satisfying Hypothesis \(H_2\). For simplicity, we consider the case \(h(t,u) \, \textrm{d}\mathcal W=b(t,u) \, \textrm{d}W\), where W is a standard 1-D Brownian motion. Let \(m\ge 1\) and let \(f(\cdot )\) be a continuous and bounded function. Then

$$\begin{aligned} b(t,u)= f(t) \textrm{e}^{-\frac{1}{\Vert u\Vert _{H^{\rho _0}}}}u^m \end{aligned}$$

satisfies Hypothesis \(H_2\).
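Hypothesis \(H_2\) is not restated here, but the following elementary bounds sketch why the exponential factor in this example is harmless: with the convention \(\textrm{e}^{-1/0}:=0\), the scalar factor is bounded by 1 and globally Lipschitz on \([0,\infty )\).

```latex
% The scalar factor g(x) = e^{-1/x}, x > 0, extended by g(0) := 0, satisfies:
% (i) boundedness,
0 \le g(x) = \mathrm{e}^{-1/x} \le 1, \qquad x \ge 0;
% (ii) global Lipschitz continuity: substituting t = 1/x,
|g'(x)| = \frac{\mathrm{e}^{-1/x}}{x^{2}} = t^{2}\mathrm{e}^{-t}
\le \max _{t>0} t^{2}\mathrm{e}^{-t} = 4\mathrm{e}^{-2} < 1.
```

In particular, \(b(t,u)\) vanishes (together with its local Lipschitz bounds) as \(\Vert u\Vert _{H^{\rho _0}}\rightarrow 0\), which tames the polynomial factor \(u^m\) near the origin.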

5 Noise prevents blow up

5.1 Proof for Theorem 1.3

Our approach is motivated by [6, 45]. Let \(s>5/2\) and \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable with \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). With \(H_3(1)\) and \(H_3(2)\) at hand, one can follow the steps in the proof for Theorem 1.1 to obtain a unique solution u to (1.6) such that \(u\in C([0,\tau ^*);H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and

$$\begin{aligned} {\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{H^{s}}=\infty \right\} }={\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{W^{1,\infty }}=\infty \right\} }\ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(5.1)

Here we remark that \(H_3(2)\) is a local Lipschitz condition in \(H^{\sigma }\) with \(\sigma >3/2\); hence uniqueness can only be considered for solutions in \(H^s\) with \(s>5/2\). Indeed, if two solutions to (1.6) belong to \(H^s\), their difference can only be estimated in \(H^{s'}\) with \(s'\le s-1\) (recall (3.9), where the \(H^{s+1}\)-norm appears).

Define

$$\begin{aligned} \tau _{m}:=\inf \left\{ t\ge 0: \Vert u(t)\Vert _{H^{s-1}}\ge m\right\} ,\ \ m\ge 1\ \ \text {and} \ \ \widetilde{\tau ^*}:=\lim \limits _{m\rightarrow \infty }\tau _m. \end{aligned}$$

Due to (5.1), we have \(\tau _{m}<\widetilde{\tau ^*}=\tau ^*\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and hence we only need to show

$$\begin{aligned} \widetilde{\tau ^*}=\infty \ \ {\mathbb P}\text {-}\mathrm{a.s.} \end{aligned}$$
(5.2)

For \(V\in \mathcal {V}\), applying the Itô formula to \(\Vert u(t)\Vert ^2_{H^{s-1}}\) and then to \(V(\Vert u\Vert ^2_{H^{s-1}})\), we find

$$\begin{aligned} \, \textrm{d}V(\Vert u\Vert ^2_{H^{s-1}}) =\,&2V'(\Vert u\Vert ^2_{H^{s-1}})\left( q(t,u), u\right) _{H^{s-1}}\, \textrm{d}W\nonumber \\&+V'(\Vert u\Vert ^2_{H^{s-1}}) \left\{ -2\left( u^ku_x,u\right) _{H^{s-1}} -2\left( F(u), u\right) _{H^{s-1}}\right\} \, \textrm{d}t\nonumber \\&+V'(\Vert u\Vert ^2_{H^{s-1}})\Vert q(t,u)\Vert ^2_{H^{s-1}}\, \textrm{d}t\\&+2V''(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| ^2\, \textrm{d}t. \end{aligned}$$

Next, recalling \(\tau _{m}<\widetilde{\tau ^*}=\tau ^*\) and \(s-1>3/2\), we take expectations and then use Hypothesis \(H_3\) and Lemma A.6 to find that

$$\begin{aligned}&\mathbb EV(\Vert u(t\wedge \tau _m)\Vert ^2_{H^{s-1}}) \\&\quad =\, \mathbb EV(\Vert u_0\Vert ^2_{H^{s-1}})\\&\qquad +\mathbb E\int _0^{t\wedge \tau _m}V'(\Vert u\Vert ^2_{H^{s-1}}) \left\{ -2\left( u^ku_x, u\right) _{H^{s-1}} -2\left( F(u), u\right) _{H^{s-1}}\right\} \, \textrm{d}t'\nonumber \\&\quad \quad +\mathbb E\int _0^{t\wedge \tau _m}V'(\Vert u\Vert ^2_{H^{s-1}})\Vert q(t',u)\Vert ^2_{H^{s-1}} \, \textrm{d}t'\\&\quad \quad +\mathbb E\int _0^{t\wedge \tau _m} 2V''(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t',u),u\right) _{H^{s-1}}\right| ^2\, \textrm{d}t'\\&\quad \le \, \mathbb EV(\Vert u_0\Vert ^2_{H^{s-1}})+ \mathbb E\int _0^{t\wedge \tau _m}\mathcal {H}_{s-1}(t',u)\, \textrm{d}t'\nonumber \\&\quad \le \, \mathbb EV(\Vert u_0\Vert ^2_{H^{s-1}})+N_1 t-\mathbb E\int _0^{t\wedge \tau _m} N_2\frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t',u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})} \, \textrm{d}t', \end{aligned}$$

where \(\mathcal {H}_{\sigma }(t,u)\) (\(u\in H^{\sigma }\) and \(\sigma >3/2\)) is defined in Hypothesis \(H_3(3)\). Then we can infer from the above estimate that there is a constant \(C(u_0,N_1,N_2,t)>0\) such that

$$\begin{aligned} \mathbb E\int _0^{t\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t',u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})}\, \textrm{d}t' \le C(u_0,N_1,N_2,t). \end{aligned}$$
(5.3)

Next, for any \(T>0\), it follows from the BDG inequality that

$$\begin{aligned}&\mathbb E\sup _{t\in [0,{T\wedge \tau _m}]}V(\Vert u\Vert ^2_{H^{s-1}})- \mathbb EV(\Vert u_0\Vert ^2_{H^{s-1}})\\&\quad \le \, C\mathbb E\left( \int _0^{T\wedge \tau _m} \left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2 \, \textrm{d}t\right) ^\frac{1}{2}\\&\qquad +N_1T+N_2\mathbb E\int _0^{T\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})}\, \textrm{d}t\\&\quad \le \, \frac{1}{2}\mathbb E\sup _{t\in [0,{T\wedge \tau _m}]}\left( 1+V(\Vert u\Vert ^2_{H^{s-1}})\right) \\&\qquad +C\mathbb E\int _0^{T\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}}) \left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})} \, \textrm{d}t\\&\qquad +N_1T+N_2\mathbb E\int _0^{T\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})}\, \textrm{d}t. \end{aligned}$$

Thus we use (5.3) to obtain

$$\begin{aligned}&\mathbb E\sup _{t\in [0,{T\wedge \tau _m}]}V(\Vert u\Vert ^2_{H^{s-1}}) \\&\quad \le \, 1+2\mathbb EV(\Vert u_0\Vert ^2_{H^{s-1}}) +C\mathbb E\int _0^{T\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}}) \left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})} \, \textrm{d}t\\&\qquad +2N_1T+2N_2\mathbb E\int _0^{T\wedge \tau _m}\frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})} \, \textrm{d}t\\&\quad \le \, C(u_0,N_1,T)+C(N_2) \mathbb E\int _0^{T\wedge \tau _m} \frac{\left\{ V'(\Vert u\Vert ^2_{H^{s-1}})\left| \left( q(t,u), u\right) _{H^{s-1}}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^{s-1}})} \, \textrm{d}t\\&\quad \le \, C(u_0,N_1,N_2,T). \end{aligned}$$

As a result, for all \(m\ge 1\),

$$\begin{aligned} \mathbb P\{\widetilde{\tau ^*}<T\}\le \,&\mathbb P\{\tau _{m}<T\}\\ \le \,&\mathbb P\left\{ \sup _{t\in [0,T\wedge \tau _m]}V(\Vert u\Vert ^2_{H^{s-1}})\ge V(m^2)\right\} \le \,\frac{C(u_0,N_1,N_2,T)}{V(m^2)}. \end{aligned}$$

Since \(\mathbb P\{\widetilde{\tau ^*}<T\}\) does not depend on m, letting \(m\rightarrow \infty \) yields \(\mathbb P\{\widetilde{\tau ^*}<T\}=0\). Since \(T>0\) is arbitrary, we obtain (5.2), which completes the proof for Theorem 1.3.

5.2 Example

As in (1.12), the \(H^s\)-norm of the solution to (1.4) blows up if and only if its \(W^{1,\infty }\)-norm blows up. On the other hand, \(H_3(3)\) means that the growth of \(2\lambda _s\Vert u\Vert ^k_{W^{1,\infty }}\Vert u\Vert ^2_{H^s}\) can be canceled by \(2V''(\Vert u\Vert ^2_{H^s})\left| \left( q(t,u), u\right) _{H^s}\right| ^2\). Motivated by these two observations, we consider the following example, in which the \({W^{1,\infty }}\)-norm of u is involved, namely

$$\begin{aligned} q(t,u)=\beta (t,\Vert u\Vert _{W^{1,\infty }})u, \end{aligned}$$
(5.4)

where \(\beta (t,x)\) satisfies the following conditions:

Hypothesis \({\textbf {H}}_{\textbf {4}}\)  We assume that

  • The function \(\beta (t,x)\in C\left( [0,\infty )\times [0,\infty )\right) \) is such that for any \(x\ge 0\), \(\beta (\cdot ,x)\) is bounded as a function of t, and for all \(t\ge 0\), \(\beta (t,\cdot )\) is locally Lipschitz continuous as a function of x;

  • The function \(\beta (t,x)\ne 0\) for all \((t,x)\in [0,\infty )\times [0,\infty )\), and \(\limsup _{x\rightarrow +\infty }\frac{2\lambda _s x^k}{\beta ^2(t,x)}<1\) for all \(t\ge 0\), where \(\lambda _s>0\) is given in Lemma A.6.

Now we give a concrete example of \(\beta (t,x)\) satisfying Hypothesis \(H_4\). Let \(b:[0,\infty )\rightarrow [0,\infty )\) be a continuous function for which there are constants \(b_*,b^*>0\) such that \(b_*\le b^2(t)\le b^*<\infty \) for all t. For all \(k\ge 1\), if

$$\begin{aligned} \text {either}\ \theta>k/2,\ b^*>b_*>0\ \ \textrm{or}\ \ \theta =k/2,\ b^*>b_*>2\lambda _s, \end{aligned}$$

then \(\beta (t,x)=b(t)(1+x)^\theta \) satisfies Hypothesis \(H_4\). Moreover, by the following two lemmas, we will see that \(q(t,u)=b(t)(1+\Vert u\Vert _{W^{1,\infty }})^\theta u\) satisfies Hypothesis \(H_3\).
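The limsup condition in Hypothesis \(H_4\) can be verified directly for this choice of \(\beta \); the following computation is a sketch:

```latex
% For \beta(t,x) = b(t)(1+x)^{\theta} with b_* \le b^2(t) \le b^*:
\frac{2\lambda _s x^{k}}{\beta ^{2}(t,x)}
= \frac{2\lambda _s x^{k}}{b^{2}(t)(1+x)^{2\theta }}
\le \frac{2\lambda _s x^{k}}{b_{*}(1+x)^{2\theta }}
\xrightarrow[x\rightarrow +\infty ]{}
\begin{cases}
0, & \theta > k/2,\\[2pt]
2\lambda _s/b_{*}, & \theta = k/2,
\end{cases}
% and in the second case 2\lambda_s / b_* < 1 precisely when b_* > 2\lambda_s.
```

In both cases the limsup is strictly less than 1; moreover \(\beta ^2(t,x)\ge b_*(1+x)^{2\theta }\ge b_*>0\), so \(\beta \) never vanishes, and the continuity and local Lipschitz requirements are immediate.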

Lemma 5.1

Let \(\lambda _s\) be given in Lemma A.6 and let \(K>0\). If Hypothesis \(H_4\) holds true, then for any \(M_2>0\) there is an \(M_1>0\) such that for all \(0<x\le Ky<\infty \),

$$\begin{aligned} \frac{2\lambda _s x^ky^2+ \beta ^2(t,x)y^2}{1+y^2} -\frac{2\beta ^2(t,x)y^4}{(1+y^2)^2}\le M_1-M_2\frac{2\beta ^2(t,x)y^4}{(1+y^2)^2(1+\log (1+y^2))}.\nonumber \\ \end{aligned}$$
(5.5)

Proof

By Hypothesis \(H_4\), we have

$$\begin{aligned}&\limsup _{x\rightarrow +\infty }\left( \frac{2\lambda _s x^ky^2+ \beta ^2(t,x)y^2}{1+y^2} -\frac{2\beta ^2(t,x)y^4}{(1+y^2)^2}+M_2\frac{2\beta ^2(t,x)y^4}{(1+y^2)^2(1+\log (1+y^2))}\right) \\&\quad \le \, \limsup _{x\rightarrow +\infty }\left( \frac{2\lambda _s x^k}{\beta ^2(t,x)}+ 1 -\frac{2\left( \frac{x}{K}\right) ^4}{\left( 1+\left( \frac{x}{K}\right) ^2\right) ^2} +\frac{2M_2}{1+\log \left( 1+\left( \frac{x}{K}\right) ^2\right) }\right) \beta ^2(t,x)<0, \end{aligned}$$

which implies (5.5). \(\square \)
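The display in the proof uses the constraint \(0<x\le Ky\) through two elementary monotonicity bounds, recorded here for completeness:

```latex
% Since y >= x/K and r -> r/(1+r) is increasing on (0,\infty):
\frac{y^{4}}{(1+y^{2})^{2}} \ge \frac{(x/K)^{4}}{\bigl(1+(x/K)^{2}\bigr)^{2}},
\qquad\text{hence}\qquad
-\frac{2y^{4}}{(1+y^{2})^{2}} \le -\frac{2(x/K)^{4}}{\bigl(1+(x/K)^{2}\bigr)^{2}};
% and, since y^4/(1+y^2)^2 <= 1 and \log(1+y^2) >= \log(1+(x/K)^2):
\frac{y^{4}}{(1+y^{2})^{2}\bigl(1+\log (1+y^{2})\bigr)}
\le \frac{1}{1+\log \bigl(1+(x/K)^{2}\bigr)}.
```

Together with \(y^2/(1+y^2)\le 1\) for the first two terms, these bounds replace the y-dependence by x-dependence in the second limsup.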

Lemma 5.2

If \(\beta (t,x)\) satisfies Hypothesis \(H_4\), then \(q(t,u)\) defined by (5.4) satisfies Hypothesis \(H_3\).

Proof

It follows from Lemma 5.1 that \(H_3(3)\) holds true with the choice \(V(x)=\log (1+x)\in \mathcal {V}\). Since \(H^s\hookrightarrow W^{1,\infty }\) with \(s>3/2\), it is obvious that the other requirements in Hypothesis \(H_3\) are verified. \(\square \)
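To see how Lemma 5.1 is applied here, one may record the computation behind the proof (a sketch; the exact statement of \(H_3(3)\) is as in Sect. 1). With \(V(x)=\log (1+x)\) and \(q(t,u)=\beta (t,\Vert u\Vert _{W^{1,\infty }})u\), write \(x=\Vert u\Vert _{W^{1,\infty }}\) and \(y=\Vert u\Vert _{H^{s}}\), so that \(x\le Ky\) with K the constant of the embedding \(H^s\hookrightarrow W^{1,\infty }\):

```latex
% Derivatives of V(x) = log(1+x) and the noise terms for q = beta(t,x) u:
V'(y^{2})=\frac{1}{1+y^{2}},\qquad V''(y^{2})=-\frac{1}{(1+y^{2})^{2}},\\
\Vert q(t,u)\Vert _{H^{s}}^{2}=\beta ^{2}(t,x)\,y^{2},\qquad
\left( q(t,u),u\right) _{H^{s}}=\beta (t,x)\,y^{2},\\
% hence the drift and Ito-correction terms combine into
V'(y^{2})\left( 2\lambda _s x^{k}y^{2}+\beta ^{2}(t,x)y^{2}\right)
+2V''(y^{2})\left| \left( q(t,u),u\right) _{H^{s}}\right| ^{2}
=\frac{2\lambda _s x^{k}y^{2}+\beta ^{2}(t,x)y^{2}}{1+y^{2}}
-\frac{2\beta ^{2}(t,x)y^{4}}{(1+y^{2})^{2}},
```

which is exactly the left-hand side of (5.5); Lemma 5.1 then bounds it by \(M_1\) minus the logarithmic correction term.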