1 Introduction

We consider a stochastic version of the generalized Camassa–Holm equation. Let t denote the time variable and let x be the one-dimensional space variable. The equation is given for \(k\in {\mathbb {N}}\) by

$$\begin{aligned} u_t-u_{xxt}+(k+2)u^ku_x-(1-\partial ^2_{xx})h(t,u)\dot{\mathcal W}=(k+1)u^{k-1}u_xu_{xx}+u^ku_{xxx}. \end{aligned}$$
(1.1)

In (1.1), \(h:{\mathbb {R}}^+\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is some nonlinear function and \(\dot{{\mathcal {W}}}\) is a cylindrical Wiener process. We will consider (1.1) on the torus, i.e. \(x\in {\mathbb {T}}={\mathbb {R}}/2\pi {\mathbb {Z}}\).

For \(h= 0\) and \(k=1\), Eq. (1.1) reduces to the deterministic Camassa–Holm (CH) equation given by

$$\begin{aligned} u_{t}-u_{xxt}+3uu_{x}=2u_{x}u_{xx}+uu_{xxx}. \end{aligned}$$
(1.2)

Fokas & Fuchssteiner [24] introduced (1.2) when studying completely integrable generalizations of the Korteweg-de Vries (KdV) equation with bi-Hamiltonian structure, whereas Camassa & Holm [8] proposed (1.2) to describe the unidirectional propagation of shallow water waves over a flat bottom. Since then, the CH equation (1.2) has been studied intensively, and we cannot even attempt to survey the vast research history here. For the paper at hand it is important to mention the wave-breaking phenomenon, which illustrates a possible loss of regularity as a fundamental mechanism in the CH equation. In contrast to the smooth soliton solutions of the KdV equation [44], solutions to the CH equation indeed remain bounded, but their slope can become unbounded in finite time, cf. [14] and related work in [12, 15, 52]. Moreover, for a smooth initial profile, it is possible to predict exactly (by establishing a necessary and sufficient condition) whether wave-breaking occurs for solutions to the Cauchy problem for (1.2) [14, 52]. The other essential feature of the CH equation is the occurrence of traveling waves with a peak at their crest, just as the governing equations for water waves admit the so-called Stokes waves of greatest height, see [13, 16, 17]. Bressan & Constantin proved the existence of dissipative as well as conservative solutions in [6, 7]. Later, Holden & Raynaud [37, 38] also obtained global conservative and dissipative solutions using Lagrangian transport ideas.

When \(h= 0\) and \(k=2\), Eq. (1.1) becomes the cubic equation

$$\begin{aligned} u_t-u_{xxt}+4u^2u_x=3uu_xu_{xx}+u^2u_{xxx}, \end{aligned}$$
(1.3)

which has been derived by Novikov in [55]. It has been proven that (1.3) possesses a bi-Hamiltonian structure with an infinite sequence of conserved quantities, and that it admits peaked solutions of the explicit form \(u(t, x)=\pm \sqrt{c} e^{-|x-ct|}\) with \(c>0\) [26], as well as multipeakon solutions with explicit formulas [39]. For the study of other deterministic instances of (1.1) we refer to [9, 32, 66, 67].

Equations like (1.2) are naturally embedded in higher-order descriptions with energy-dissipative evolution. A weakly dissipative evolution is given, for some parameter \(\lambda >0\), by

$$\begin{aligned} u_t-u_{xxt}+(k+2)u^ku_x- (1-\partial ^2_{xx}) (\lambda u)=(k+1)u^{k-1}u_xu_{xx}+u^ku_{xxx},\qquad x\in {\mathbb {T}}, \, t>0. \end{aligned}$$
(1.4)

For instance, the weakly dissipative CH equation, i.e. (1.4) for \(k=1\), has been studied in [49, 65]. For the Novikov equation (1.3) with the same weakly dissipative term \((1-\partial ^2_{xx})(\lambda u)\), we can also refer to [49].

In this work, we assume that the energy exchange mechanisms are connected with randomness to account for external stochastic influence. We are interested in the case where in the term \((1-\partial ^2_{xx})(\lambda u)\) the deterministic parameter \(\lambda \) is substituted by a formal cylindrical Wiener process (cf. Sect. 2 for precise definitions), and where the previously linear dependency on u is replaced by a non-autonomous and nonlinear term h(tu). Thus, we consider the Cauchy problem for (1.1) on the torus \({\mathbb {T}}\), with random initial data \(u_0=u_0(\omega ,x)\). Applying the operator \((1-\partial _{xx}^2)^{-1}\) to (1.1), we reformulate the problem as the stochastic evolution

$$\begin{aligned} \left\{ \begin{aligned} \mathrm{d}u+\left[ u^k\partial _xu+F(u)\right] \mathrm{d}t&=h(t,u)\mathrm{d}{\mathcal {W}},\quad x\in {\mathbb {T}}, \ t>0,\ k\ge 1,\\ u(\omega ,0,x)&=u_0(\omega ,x),\quad x\in {\mathbb {T}}, \end{aligned} \right. \end{aligned}$$
(1.5)

with \(F(u)=F_1(u)+F_2(u)+F_3(u)\) and

$$\begin{aligned} \begin{aligned}&F_1(u)=(1-\partial _{xx}^2)^{-1}\partial _x\left( u^{k+1}\right) ,\\&F_2(u)=\frac{2k-1}{2}(1-\partial _{xx}^2)^{-1}\partial _x\left( u^{k-1}u_x^2\right) ,\\&F_3(u)=\frac{k-1}{2}(1-\partial _{xx}^2)^{-1}\left( u^{k-2}u_x^3\right) . \end{aligned} \end{aligned}$$
(1.6)
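As a consistency check, which is not needed in the sequel, one may verify that for \(k=1\) and \(h=0\) the formulation (1.5)–(1.6) indeed recovers the CH equation (1.2): using \(\partial _{xx}^2(uu_x)=3u_xu_{xx}+uu_{xxx}\), Eq. (1.2) can be rewritten as

$$\begin{aligned} (1-\partial _{xx}^2)u_t+(1-\partial _{xx}^2)(uu_x)+\partial _x\left( u^2+\frac{1}{2}u_x^2\right) =0, \end{aligned}$$

so that applying \((1-\partial _{xx}^2)^{-1}\) gives \(u_t+uu_x+F_1(u)+F_2(u)=0\), which is (1.5) with \(k=1\), \(h=0\) and \(F_3\) absent.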

Here we remark that the operator \((1-\partial _{xx}^2)^{-1}\) in \(F(\cdot )\) is understood as

$$\begin{aligned} \left[ (1-\partial _{xx}^2)^{-1}f\right] (x)=[G_{{\mathbb {T}}}*f](x) \ \text {for}\ f\in L^2({\mathbb {T}}) \ \text {with}\ G_{{\mathbb {T}}}(x)=\frac{\cosh (x-2\pi \left[ \frac{x}{2\pi }\right] -\pi )}{2\sinh (\pi )}, \end{aligned}$$
(1.7)

where the convolution is on \({\mathbb {T}}\) only, and the symbol [x] stands for the integer part of x.
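The identity (1.7) can also be read on the Fourier side, where \((1-\partial _{xx}^2)^{-1}\) acts as multiplication by \((1+k^2)^{-1}\) on the Fourier coefficients. The following minimal numerical sketch compares the two representations on a coarse grid; it is an illustration only (assuming numpy is available), not part of the analysis, and the grid size and test function are arbitrary choices.

```python
# Illustrative sketch: (1 - d_xx)^{-1} on the torus, computed (i) as the Fourier
# multiplier 1/(1 + k^2) and (ii) by convolution with the periodized Green's
# function G_T from (1.7).
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N                 # grid on T = R / 2 pi Z
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers

def helmholtz_inv_fourier(f):
    return np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + k ** 2)))

def helmholtz_inv_green(f):
    G = np.cosh(x - 2 * np.pi * np.floor(x / (2 * np.pi)) - np.pi) / (2 * np.sinh(np.pi))
    # periodic convolution G * f on T, approximated by a Riemann sum with step 2 pi / N
    return np.real(np.fft.ifft(np.fft.fft(G) * np.fft.fft(f))) * (2 * np.pi / N)

f = np.sin(3 * x) + 0.5 * np.cos(x) ** 2
# the difference is a small discretization error which shrinks as N grows
print(np.max(np.abs(helmholtz_inv_fourier(f) - helmholtz_inv_green(f))))
```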

The first objective of this paper is to analyze the local existence and uniqueness of pathwise solutions as well as blow-up criteria for problem (1.5) with nonlinear multiplicative noise (see Theorem 2.1). We note that for the CH equation with additive noise, existence and uniqueness have been obtained in [11]. For the stochastic modified CH equation with linear multiplicative noise, we refer to [10]. When the multiplicative noise is given by a one-dimensional Wiener process with an \(H^s\) diffusion coefficient, the stochastic CH equation was considered in the second author's work [58]. In this paper, we establish the local existence and uniqueness of pathwise solutions to (1.5), as well as blow-up criteria, for the more general noise given by a cylindrical Wiener process with a nonlinear coefficient.

The second major objective of the paper is to investigate whether stochastic perturbations can improve the dependence on initial data in the Cauchy problem (1.5). We notice that various instances of transport-type stochastic evolution laws have been studied with respect to the effect of noise on the regularity of their solutions. For example, we refer to [21, 23, 53, 54] for linear stochastic transport equations and, concerning nonlinear stochastic conservation laws, to [27, 50, 51]. In [29, 46, 57, 58] the dissipation of energy caused by linear multiplicative noise has been analyzed.

In contrast to these works, we focus in this paper on the initial-data dependence in the Cauchy problem (1.5). Much less is known about the effect of noise on the dependence on initial data, yet the question whether (and how) noise can affect this dependence is interesting. Formally speaking, the regularization produced by noise may be compared with the deterministic regularization effect induced by adding a Laplacian. However, if one adds an actual Laplacian to the governing equations, then, using techniques from the theory of semilinear parabolic equations, the dependence on initial data in some cases turns out to be more regular than merely continuous. For example, for the deterministic incompressible Euler equations, the dependence on initial data cannot be better than continuous [35], but for the deterministic incompressible Navier-Stokes equations with sufficiently large viscosity, it is at least Lipschitz continuous in sufficiently high Sobolev norms, see pp. 79–81 in [31]. This motivates us to study whether (and how) noise can affect initial-data dependence. To our knowledge, there are very few results in this direction; we can only refer to [59] for the stochastic Euler-Poincaré equation. However, in this work, we have higher-order nonlinearities, which require more technical estimates. To analyze initial-data dependence, we introduce the concept of the stability of the exiting time (see Definition 2.2 below), cf. [59]. Then we show, under some conditions on h(t,u), that the multiplicative noise (in the Itô sense) cannot simultaneously improve the stability of the exiting time and the continuity of the map \(u_0\mapsto u\) (see Theorem 2.2). It is worth noting that in the deterministic setting, the issue of the optimal dependence of solutions (for example, the solution map being continuous but not uniformly continuous) for various nonlinear dispersive and integrable equations has been the subject of many papers. One of the first results of this type dates back at least to Kato [42]. Indeed, Kato [42] proved that the solution map \(u_{0}\mapsto u\) in \(H^s({\mathbb {T}})\) (\(s>3/2\)), given by the inviscid Burgers equation, is not Hölder continuous regardless of the Hölder exponent. Since then different techniques have been successfully applied to various nonlinear dispersive and integrable equations, see [2, 45, 47] for example. Particularly, for the incompressible Euler equation, we refer to [35, 60], and for CH type equations, we refer to [33, 34, 61, 62, 63] and the references therein.

We conclude the paper with a discussion of time-global well-posedness for (1.5). For general noise terms, it is difficult to determine whether a local pathwise solution to (1.5) is globally defined. As shown in [29, 46, 48, 57, 58], the linear noise \(h(t,u) \mathrm{d} W=\beta u \mathrm{d}W\), with \(\beta \in {\mathbb {R}}\setminus \{0\}\) and W being a standard 1-D Brownian motion, acts dissipatively for many SPDEs. Motivated by these works, we prove some results in Theorems 2.3 and 2.4 for the global dynamics of (1.5) with linear multiplicative noise, that is,

$$\begin{aligned} \left\{ \begin{aligned}&\mathrm{d}u+\left[ u^k\partial _xu+F(u)\right] \mathrm{d}t=b(t) u\mathrm{d}W,\quad x\in {\mathbb {T}}, \ t\in {\mathbb {R}}^+,\ k\ge 1,\\&u(\omega ,0,x)=u_0(\omega ,x),\quad x\in {\mathbb {T}}. \end{aligned} \right. \end{aligned}$$
(1.8)

2 Preliminaries and Main Results

2.1 Noise with \(H^s\) Coefficient

We begin by introducing some notation. Throughout the paper, \((\Omega , {\mathcal {F}},{\mathbb {P}})\) denotes a complete probability space, where \({\mathbb {P}}\) is the probability measure on \(\Omega \) and \({\mathcal {F}}\) is a \(\sigma \)-algebra. For \(t>0\), \(\sigma \left\{ \left( x(\tau ),y(\tau )\right) _{\tau \in [0,t]}\right\} \) stands for the completion of the \(\sigma \)-algebra generated by \(\left( x(\tau ),y(\tau )\right) _{\tau \in [0,t]}\). All stochastic integrals are defined in the Itô sense and \({\mathbb {E}}\cdot \) is the mathematical expectation of \(\cdot \) with respect to \({\mathbb {P}}\). For a separable Banach space X, \({\mathcal {B}}(X)\) denotes the Borel sets of X, and \(\mathcal Pr(X)\) stands for the collection of Borel probability measures on X. For \(E\subseteq X\), \(\mathbf{1 }_{E}\) is the indicator function on E.

\(L^2({\mathbb {T}})\) is the usual space of square integrable functions on \({\mathbb {T}}\). For \(s\in {\mathbb {R}}\), \(D^s=(1-\partial _{xx}^2)^{s/2}\) is defined by \(\widehat{D^sf}(k)=(1+k^2)^{s/2}{\widehat{f}}(k)\), where \({\widehat{g}}\) denotes the Fourier transform of g on \({\mathbb {T}}\). The Sobolev space \(H^s({\mathbb {T}})\) is defined as

$$\begin{aligned} H^s({\mathbb {T}})\triangleq \{f\in L^2({\mathbb {T}}):\Vert f\Vert _{H^s({\mathbb {T}})}^2=\sum _{k\in {{\mathbb {Z}}}}(1+k^2)^s|{\widehat{f}}(k)|^2<+\infty \}, \end{aligned}$$

and the inner product on \(H^s({\mathbb {T}})\) is \((f,g)_{H^s}\triangleq \sum _{k\in {{\mathbb {Z}}}}(1+k^2)^s{\widehat{f}}(k)\cdot \overline{{\widehat{g}}}(k)=(D^sf,D^sg)_{L^2}.\) For function spaces on \({\mathbb {T}}\), we will drop \({\mathbb {T}}\) if there is no ambiguity. We will use \(\lesssim \) to denote estimates that hold up to a universal deterministic constant, which may change from line to line but whose meaning is clear from the context. For linear operators A and B, \([A,B]\) stands for the commutator of A and B, i.e., \([A,B]=AB-BA\).

We briefly recall some aspects of the stochastic analysis theory which we use below. We refer the readers to [19, 25, 40] for an extended treatment of this subject.

We call \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, {\mathcal {W}})\) a stochastic basis, where \(\{{\mathcal {F}}_t\}_{t\ge 0}\) is a right-continuous filtration on \((\Omega , {\mathcal {F}})\) such that \({\mathcal {F}}_0\) contains all the \({\mathbb {P}}\)-negligible subsets and \({\mathcal {W}}(t)={\mathcal {W}}(\omega ,t),\omega \in \Omega \) is a cylindrical Brownian motion, defined on an auxiliary Hilbert space U, which is adapted to \(\{{\mathcal {F}}_t\}_{t\ge 0}\). Formally, if \(\{e_k\}\) is a complete orthonormal basis of U and \(\{W_k\}_{k\ge 1}\) is a sequence of mutually independent standard one-dimensional Brownian motions, then one may define

$$\begin{aligned} {\mathcal {W}}=\sum _{k=1}^\infty e_kW_k\ \ {\mathbb {P}}-a.s. \end{aligned}$$

To guarantee the convergence of the (formal) summation above, we consider a larger separable Hilbert space \(U_0\) such that the canonical embedding \(U\hookrightarrow U_0\) is Hilbert–Schmidt. Then we have that for any \(T>0\), cf. [19, 25, 41],

$$\begin{aligned} {\mathcal {W}}=\sum _{k=1}^\infty e_kW_k\in C([0,T];U_0)\ \ {\mathbb {P}}-a.s. \end{aligned}$$

To define the Itô stochastic integral

$$\begin{aligned} \int _0^\tau G\mathrm{d}{\mathcal {W}}=\sum _{k=1}^\infty \int _0^\tau G e_k\mathrm{d}W_k \end{aligned}$$
(2.1)

on \(H^s\), it is required (see e.g. [19, 56]) for the predictable stochastic process G to take values in the space of Hilbert-Schmidt operators from U to \(H^s\), denoted by \(L_2(U; H^s)\). For such G, (2.1) is a well-defined continuous \(H^s\)-valued square integrable martingale such that for all almost surely bounded stopping times \(\tau \) and for all \(v\in H^s\),

$$\begin{aligned} \left( \int _0^\tau G\ \mathrm{d}{\mathcal {W}},v\right) _{H^s}=\sum _{k=1}^\infty \int _0^\tau (G e_k,v)_{H^s}\ \mathrm{d}W_k. \end{aligned}$$

Moreover, the Burkholder-Davis-Gundy inequality takes in our case the form

$$\begin{aligned} {\mathbb {E}}\left( \sup _{t\in [0,T]}\left\| \int _0^t G\ \mathrm{d}{\mathcal {W}}\right\| _{H^s}^p\right) \le C(p,s) {\mathbb {E}}\left( \int _0^T \Vert G\Vert ^2_{L_2(U; H^s)}\ \mathrm{d}t\right) ^\frac{p}{2},\ \ p\ge 1. \end{aligned}$$

or in terms of the coefficients,

$$\begin{aligned} {\mathbb {E}}\left( \sup _{t\in [0,T]}\left\| \sum _{k=1}^\infty \int _0^t Ge_k\ \mathrm{d}W_k\right\| _{H^s}^p\right) \le C(p,s) {\mathbb {E}}\left( \int _0^T \sum _{k=1}^\infty \Vert Ge_k\Vert ^2_{H^s}\ \mathrm{d}t\right) ^\frac{p}{2},\ \ p\ge 1. \end{aligned}$$

Here we remark that the stochastic integral (2.1) does not depend on the choice of the space \(U_0\), cf. [19, 56]. For example, \(U_0\) can be defined as

$$\begin{aligned} U_0=\left\{ v=\sum _{k=1}^\infty a_ke_k:\sum _{k=1}^\infty \frac{a_k^2}{k^2}<\infty \right\} ,\ \ \Vert v\Vert ^2_{U_0}=\sum _{k=1}^\infty \frac{a_k^2}{k^2}. \end{aligned}$$
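With this choice of \(U_0\), the canonical embedding \(U\hookrightarrow U_0\) is indeed Hilbert–Schmidt, since

$$\begin{aligned} \sum _{k=1}^\infty \Vert e_k\Vert ^2_{U_0}=\sum _{k=1}^\infty \frac{1}{k^2}=\frac{\pi ^2}{6}<\infty . \end{aligned}$$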

2.2 Definitions of the Solutions and Stability of the Exiting Times

We now make precise the notion of a pathwise solution to (1.5).

Definition 2.1

(Pathwise solutions) Let \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, {\mathcal {W}})\) be a fixed stochastic basis. Let \(s>3/2\) and \(u_0\) be an \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variable (relative to \({\mathcal {S}})\).

  1.

    A local pathwise solution to (1.5) is a pair \((u,\tau )\), where \(\tau \ge 0\) is a stopping time satisfying \({\mathbb {P}}\{\tau >0\}=1\) and \(u:\Omega \times [0,\infty ]\rightarrow H^s\) is an \({\mathcal {F}}_t\)-predictable \(H^s\)-valued process satisfying

    $$\begin{aligned} u(\cdot \wedge \tau )\in C([0,\infty );H^s)\ \ {\mathbb {P}}-a.s., \end{aligned}$$

    and for all \(t>0\),

    $$\begin{aligned} u(t\wedge \tau )-u(0)+\int _0^{t\wedge \tau } \left[ u^k\partial _xu+F(u)\right] \mathrm{d}t' =\int _0^{t\wedge \tau }h(t',u)\mathrm{d}{\mathcal {W}}\ \ {\mathbb {P}}-a.s. \end{aligned}$$
  2.

    The local pathwise solutions are said to be pathwise unique, if given any two pairs of local pathwise solutions \((u_1,\tau _1)\) and \((u_2,\tau _2)\) with \({\mathbb {P}}\left\{ u_1(0)=u_2(0)\right\} =1,\) we have

    $$\begin{aligned} {\mathbb {P}}\left\{ u_1(t,x)=u_2(t,x),\ \forall \ (t,x)\in [0,\tau _1\wedge \tau _2]\times {\mathbb {T}}\right\} =1. \end{aligned}$$
  3.

    Additionally, \((u,\tau ^*)\) is called a maximal pathwise solution to (1.5) if \(\tau ^*>0\) almost surely and if there is an increasing sequence \(\tau _n\rightarrow \tau ^*\) such that for any \(n\in {\mathbb {N}}\), \((u,\tau _n)\) is a pathwise solution to (1.5) and on the set \(\{\tau ^*<\infty \}\),

    $$\begin{aligned} \sup _{t\in [0,\tau _n]}\Vert u\Vert _{H^s}\ge n. \end{aligned}$$
  4.

    If \((u,\tau ^*)\) is a maximal pathwise solution and \(\tau ^*=\infty \) almost surely, then we say that the pathwise solution exists globally.

A major result of this paper is a (negative) statement on the dependence of solutions on their initial data. Precisely, it refers to the stability of the point in time at which the solution leaves a certain range. This point in time is called the exiting time, and we introduce

Definition 2.2

(Stability of exiting time) Let \(s>3/2\) and \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, {\mathcal {W}})\) be a fixed stochastic basis. Let \(u_0\) be an \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variable such that \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \). Assume that \(\{u_{0,n}\}\) is any sequence of \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variables satisfying \({\mathbb {E}}\Vert u_{0,n}\Vert ^2_{H^s}<\infty \). For each n, let u and \(u_n\) be the unique pathwise solutions to (1.5) with initial values \(u_0\) and \(u_{0,n}\), respectively.

For any \(R>0\) and \(n\in {\mathbb {N}}\), we define the R-exiting times

$$\begin{aligned} \tau ^R_{n}:=\inf \left\{ t\ge 0: \Vert u_n(t)\Vert _{H^s}>R\right\} \text{ and } \tau ^R:=\inf \left\{ t\ge 0: \Vert u(t)\Vert _{H^s}>R\right\} , \end{aligned}$$

where \(\inf \varnothing = \infty \). Furthermore,

  1.

    If \(u_{0,n}\rightarrow u_0\) in \(H^{s}\) \({\mathbb {P}}-a.s.\) implies that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\tau ^R_{n}=\tau ^R\ \ {\mathbb {P}}-a.s., \end{aligned}$$
    (2.2)

    then the R-exiting time of u is said to be stable.

  2.

    If \(u_{0,n}\rightarrow u_0\) in \(H^{s'}\) for all \(s'<s\) almost surely implies that (2.2) holds true, the R-exiting time of u is said to be strongly stable.

2.3 Assumptions

For our results on the existence of pathwise solutions, on the stability of exiting times and on the global well-posedness of (1.1), we rely in the following sections on generic but slightly different assumptions concerning the data in (1.1). These are summarized here; for a comment on possible relaxed versions we refer to Remark 3.1.

Assumption \(\mathbf{A}_1\). We assume that \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in L_2(U; H^s)\) for \(u\in H^s\) with \(s\ge 0\) is such that if \(u:\Omega \times [0,T]\rightarrow H^s\) is predictable, then \(h(t,u)\) is also predictable. Furthermore, we assume the following:

  1.

    There is an increasing and locally bounded function \(f(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) with \(f(0)=0\) such that for any \(t>0\) and \(u\in H^s\) with \(s>\frac{1}{2}\),

    $$\begin{aligned} \Vert h(t,u)\Vert _{L_2(U; H^s)}\le f(\Vert u\Vert _{W^{1,\infty }})(1+\Vert u\Vert _{H^s}). \end{aligned}$$
    (2.3)

    Particularly, if h does not depend on u, i.e., the additive noise case, then the condition \(f(0)=0\) can be removed.

  2.

    There is an increasing locally bounded function \(g(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\), such that for any \(t>0\) and \(u\in H^s\) with \(s>\frac{1}{2}\),

    $$\begin{aligned} \Vert h(t,u)-h(t,v)\Vert _{L_2(U; H^s)}\le g(\Vert u\Vert _{H^{s}}+\Vert v\Vert _{H^{s}})\Vert u-v\Vert _{H^s}. \end{aligned}$$
    (2.4)

Assumption \(\mathbf{A}_2\). When we consider the initial-data dependence problem for (1.5) in Sect. 4, we need a similar assumption on \(h(t,\cdot )\). We assume that \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in L_2(U; H^s)\) for \(u\in H^s\) with \(s\ge 0\) is such that if \(u:\Omega \times [0,T]\rightarrow H^s\) is predictable, then \(h(t,u)\) is also predictable. Moreover, for all \(t\ge 0\), we assume

$$\begin{aligned} \Vert h(t,u)\Vert _{L_2(U; H^s)}\le \Vert F(u)\Vert _{H^s},\ \ \Vert h(t,u)-h(t,v)\Vert _{L_2(U; H^s)}\le \Vert F(u)-F(v)\Vert _{H^s}, \end{aligned}$$
(2.5)

where \(F(\cdot )\) is defined by (1.6).

Assumption \(\mathbf{A}_3\). When considering (1.8) with non-autonomous linear noise \(b(t) u \mathrm{d}W\), we assume that \(b(t)\in C([0,\infty ))\) such that \(\displaystyle 0<b_*\le b^2(0)\le \sup _{t\ge 0}b^2(t)\le b^*\) for some \(b_*,b^*\in {\mathbb {R}}\).
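For instance, the autonomous choice \(b(t)\equiv \beta \) with \(\beta \in {\mathbb {R}}\setminus \{0\}\) satisfies Assumption \(\mathbf{A}_3\) with \(b_*=b^*=\beta ^2\); in this case (1.8) reduces to the linear-noise problem with \(\beta u\,\mathrm{d}W\) considered in [29, 46, 57, 58].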

2.4 Main Results

In this section we summarize our major contributions; the proofs are provided in the remainder of the paper.

Theorem 2.1

Let \(s>3/2\), \(k\ge 1\) and let h(t,u) satisfy Assumption \(A_1\). For a given stochastic basis \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, {\mathcal {W}})\), if \(u_0\) is an \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variable satisfying \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \), then there is a unique local pathwise solution \((u,\tau )\) to (1.5) in the sense of Definition 2.1 with

$$\begin{aligned} u(\cdot \wedge \tau )\in L^2\left( \Omega ; C\left( [0,\infty );H^s\right) \right) . \end{aligned}$$
(2.6)

Moreover, \((u,\tau )\) can be extended to a unique maximal pathwise solution \((u,\tau ^*)\) with

$$\begin{aligned} \mathbf{1 }_{\{\tau ^*<\infty \}}=\mathbf{1 }_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{W^{1,\infty }}=\infty \right\} }\ {\mathbb {P}}-a.s. \end{aligned}$$
(2.7)

Remark 2.1

We remark here that \(F_3(u)\) in (1.6) vanishes when \(k=1\). The proof of Theorem 2.1 combines techniques employed in the papers [3,4,5, 18, 20, 29, 58]. However, the Faedo-Galerkin method used e.g. in [20, 29] cannot be utilized directly since in (1.5) we do not have additional constraints like incompressibility, which would guarantee the global existence of an approximate solution (see, e.g. [22, 29]). Without this, we need to find a positive lower bound for the lifespan of the approximate solutions, which is in general not clear. For our case, this difficulty is overcome by constructing a suitable approximation scheme and establishing an appropriate blow-up criterion, which applies not only to u, but also to the approximate solution \(u_\varepsilon \). This idea is transferred from the recent work [18] on deriving blow-up criteria.

The next result addresses the dependence of the solution on initial data giving at least a partial answer.

Theorem 2.2

(Weak instability) Consider the periodic initial value problem (1.5), where \(F(\cdot )\) is given by (1.6) with \(k\ge 1\). Let \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, {\mathcal {W}})\) be a fixed stochastic basis and let \(s>3/2\). If h satisfies Assumption \(A_2\), then at least one of the following properties holds true.

  1.

    For any \(R\gg 1\), the R-exiting time is not strongly stable for the zero solution in the sense of Definition 2.2;

  2.

    The solution map \(u_0\mapsto u\) defined by solving (1.5) is not uniformly continuous as a map from \(L^2(\Omega ,H^s)\) into \(L^2\left( \Omega ; C\left( [0,T];H^s\right) \right) \) for any \(T>0\). More precisely, there exist two sequences of solutions \(u_{1,n}(t)\) and \(u_{2,n}(t)\), and two sequences of stopping times \(\tau _{1,n}\) and \(\tau _{2,n}\), such that

    (a)

      \({\mathbb {P}}\{\tau _{i,n}>0\}=1\) for each \(n>1\) and \(i=1,2\). Besides,

      $$\begin{aligned} \lim _{n\rightarrow \infty } \tau _{1,n}=\lim _{n\rightarrow \infty } \tau _{2,n}=\infty \ \ {\mathbb {P}}-a.s. \end{aligned}$$
      (2.8)
    (b)

      For \(i=1,2\), \(u_{i,n}\in C([0,\tau _{i,n}];H^s) \) \({\mathbb {P}}-a.s.\), and

      $$\begin{aligned} {\mathbb {E}}\left( \sup _{t\in [0,\tau _{1,n}]}\Vert u_{1,n}(t)\Vert ^2_{H^{s}}+ \sup _{t\in [0,\tau _{2,n}]}\Vert u_{2,n}(t)\Vert ^2_{H^{s}}\right) \lesssim 1. \end{aligned}$$
      (2.9)
    (c)

      At \(t=0\),

      $$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\Vert u_{1,n}(0)-u_{2,n}(0)\Vert ^2_{H^{s}}=0. \end{aligned}$$
      (2.10)
    (d)

      For any \(T>0\),

$$\begin{aligned} \liminf _{n\rightarrow \infty }{\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{1,n}\wedge \tau _{2,n}]}\Vert u_{1,n}(t)-u_{2,n}(t)\Vert _{H^{s}}^2 \gtrsim {\left\{ \begin{array}{ll} \displaystyle \left( \sup _{t\in [0,T]}|\sin t|\right) ^2, &{} \text { if} \ k\ \text {is odd},\\ \displaystyle \left( \sup _{t\in [0,T]}\left| \sin \frac{t}{2}\right| \right) ^2, &{} \text { if}\ k\ \text {is even}. \end{array}\right. } \end{aligned}$$
      (2.11)

Remark 2.2

To prove Theorem 2.2, we assume that for some \(R_0\gg 1\), the \(R_0\)-exiting time is strongly stable at the zero solution. Then we construct an example showing that the solution map \(u_0\mapsto u\) defined by (1.5) is not uniformly continuous. This example involves the construction (for each \(s>3/2\)) of two sequences of solutions which converge at time zero but remain far apart at any later time. To this end, we first construct two sequences of approximate solutions \(u^{i,n}\) (\(i\in \{1,2\}\)) such that the actual solutions \(u_{i,n}\) starting from \(u_{i,n}(0)=u^{i,n}(0)\) satisfy, as \(n\rightarrow \infty \),

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\sup _{[0,\tau _{i,n}]}\Vert u_{i,n}-u^{i,n}\Vert _{H^s} =0, \end{aligned}$$
(2.12)

where \(u_{i,n}\) exists at least on \([0,\tau _{i,n}]\). Once (2.12) is established, we can estimate the approximate solutions instead of the actual solutions and obtain (2.11). In the deterministic case, for other works using the method of approximate solutions to study the dependence on initial data, we refer the reader to [2, 34, 47, 61, 63] and the references therein. However, in contrast to deterministic cases, where a lifespan estimate can be achieved (see (4.7)–(4.8) in [62] and (3.8)–(3.9) in [63] for example), it is not clear whether \(\inf _{n}\tau _{i,n}>0\) holds almost surely in the stochastic setting. Therefore we are motivated to introduce the notion of stability of the exiting time (see Definition 2.2). We then find that the property \(\inf _{n}\tau _{i,n}>0\) can be connected with the stability of the exiting time of the zero solution. Next we estimate the error in \(H^{2s-\sigma }\) and \(H^{\sigma }\) with suitable \(\sigma \). In contrast to [59], the problem (1.5) has nonlinearities of order \(k+1\), and (2.12) depends on k. Therefore more technical estimates are needed (see Lemma 4.1 and (4.17) for example). Finally we use interpolation to derive (2.12). Theorem 2.2 shows that one cannot expect too much concerning the dependence on initial data: we cannot expect to improve the stability of the exiting time for the zero solution and simultaneously to improve the continuous dependence of solutions on initial data.

Finally, we focus on (1.8). For the issue of global existence, we have

Theorem 2.3

(Global existence: Case 1) Let \(k\ge 1\) and \(s>3/2\). Let b(t) satisfy Assumption \(A_3\) and \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, W)\) be a fixed stochastic basis. Assume \(u_0\) to be an \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variable satisfying \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \). Let K be a constant such that \(\Vert \cdot \Vert _{W^{1,\infty }}<K\Vert \cdot \Vert _{H^s}\). Then there is a \(C=C(s)>1\) such that for any \(R>1\) and \(\lambda _1>2\), if \(\Vert u_0\Vert _{H^s}<\frac{1}{RK}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k}\) almost surely, then (1.8) has a maximal pathwise solution \((u,\tau ^*)\) satisfying for any \(\lambda _2>\frac{2\lambda _1}{\lambda _1-2}\) the estimate

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert u(t)\Vert _{H^s}<\frac{1}{K}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k} \mathrm{e}^{-\frac{\left( (\lambda _1-2)\lambda _2-2\lambda _1\right) }{2\lambda _1\lambda _2}\int _0^tb^2(t') \mathrm{d}t'} \ \mathrm{\ for\ all}\ t>0 \right\} \ge 1-\left( \frac{1}{R}\right) ^{2/\lambda _2}. \end{aligned}$$

Remark 2.3

Here we notice that if \(k=1\), then \(F_3(u)\) in (1.6) vanishes. Motivated by the recent papers [29, 57, 58], where the linear noise \(\beta u \mathrm{d}W\) with \(\beta \in {\mathbb {R}}\setminus \{0\}\) is considered, we focus on the non-autonomous linear multiplicative noise case, namely (1.8). We transform (1.8) into a non-autonomous random system (5.2). Although the stochastic integral is absent in (5.2), to extend the deterministic results to the stochastic setting we need to overcome a few technical difficulties, since the system is not only random but also non-autonomous. In this work, we establish estimates and asymptotic limits for the Girsanov-type processes (see e.g., (5.6), (5.8), (5.10) and Lemma A.7), which enable us to apply the energy estimate pathwise (namely for a.e. \(\omega \in \Omega \)) and obtain Theorem 2.3.

Theorem 2.4

(Global existence: Case 2) Let \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, W)\) be a fixed stochastic basis. Let \(s>3\) and b(t) satisfy Assumption \(A_3\). Assume \(u_0\) to be an \(H^s\)-valued, \({\mathcal {F}}_0\)-measurable random variable satisfying \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \). If \(u_0\) satisfies

$$\begin{aligned} {\mathbb {P}}\left\{ (1-\partial _{xx}^2)u_0(x)>0,\ \forall x\in {\mathbb {T}}\right\} =p,\ \ {\mathbb {P}}\left\{ (1-\partial _{xx}^2)u_0(x)<0,\ \forall x\in {\mathbb {T}}\right\} =q, \end{aligned}$$

for some \(p,q\in [0,1]\), then the corresponding maximal pathwise solution \((u,\tau ^*)\) to (1.8) satisfies

$$\begin{aligned} {\mathbb {P}}\{\tau ^*=\infty \}\ge p+q. \end{aligned}$$

That is to say, \({\mathbb {P}}\left\{ u\ \mathrm{exists\ globally}\right\} \ge p+q.\)
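In particular, if \((1-\partial _{xx}^2)u_0\) almost surely has a strict sign on \({\mathbb {T}}\), i.e. \(p+q=1\), then Theorem 2.4 yields a pathwise solution to (1.8) that exists globally with probability one.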

3 Proof for Theorem 2.1

3.1 Blow-Up Criteria

Let us postpone the proof of existence and uniqueness of solutions to (1.5) to Sect. 3.2. We first prove the blow-up criteria, since some of the estimates will be used later. Motivated by [18], we first consider the relationship between the blow-up time of \(\Vert u(t)\Vert _{H^s}\) and the blow-up time of \(\Vert u(t)\Vert _{W^{1,\infty }}\) for (1.5). Even though one might expect that the \(\Vert u(t)\Vert _{H^s}\) norm may blow up earlier than \(\Vert u(t)\Vert _{W^{1,\infty }}\), the following result shows that this is not the case.

Lemma 3.1

Let \((u,\tau ^*)\) be the unique maximal pathwise solution to (1.5). Then the real-valued stochastic process \(\Vert u\Vert _{W^{1,\infty }}\) is also \({\mathcal {F}}_t\)-adapted. Besides, for any \(m,n\in {\mathbb {Z}}^{+}\), define

$$\begin{aligned} \tau _{1,m}=\inf \left\{ t\ge 0: \Vert u(t)\Vert _{H^s}\ge m\right\} ,\ \ \ \tau _{2,n}=\inf \left\{ t\ge 0: \Vert u(t)\Vert _{W^{1,\infty }}\ge n\right\} . \end{aligned}$$

Moreover, setting \(\displaystyle \tau _1=\lim _{m\rightarrow \infty }\tau _{1,m}\) and \(\displaystyle \tau _2=\lim _{n\rightarrow \infty }\tau _{2,n}\), we have

$$\begin{aligned} \tau _{1}=\tau _{2} \ \ {\mathbb {P}}-a.s. \end{aligned}$$

As a corollary, \(\mathbf{1 }_{\left\{ \lim _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{W^{1,\infty }}=\infty \right\} }=\mathbf{1 }_{\{\tau ^*<\infty \}}\) \({\mathbb {P}}-a.s.\)

Proof

To begin with, we see that \(u(\cdot \wedge \tau )\in C([0,\infty );H^s)\) means that for any \(t\in [0,\tau ]\),

$$\begin{aligned} {[}u(t)]^{-1}(Y)=[u(t)]^{-1}(H^s\cap Y),\ \forall \ Y\in {\mathcal {B}}(W^{1,\infty }). \end{aligned}$$

Therefore u(t), as a \(W^{1,\infty }\)-valued process, is also \({\mathcal {F}}_t\)-adapted. We then infer from the embedding \(H^s\hookrightarrow W^{1,\infty }\) for \(s>3/2\) that for some \(M>0\) and \(m\in {\mathbb {N}}\),

$$\begin{aligned} \sup _{t\in [0,\tau _{1,m}]}\Vert u(t)\Vert _{W^{1,\infty }}\le M\sup _{t\in [0,\tau _{1,m}]}\Vert u(t)\Vert _{H^s} \le ([M]+1)m, \end{aligned}$$

where [M] means the integer part of M. Therefore we have \(\tau _{1,m}\le \tau _{2,([M]+1)m}\le \tau _2\) \({\mathbb {P}}-a.s.,\) which means that \(\tau _{1}\le \tau _2\) \({\mathbb {P}}-a.s.\) Now we only need to prove \(\tau _{2}\le \tau _1\) \({\mathbb {P}}-a.s.\) It is easy to see that for all \(n_1,n_2\in {\mathbb {Z}}^{+}\),

$$\begin{aligned}&\left\{ \sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert u(t)\Vert _{H^s}<\infty \right\} =\bigcup _{m\in {\mathbb {Z}}^{+}}\left\{ \sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert u(t)\Vert _{H^s}<m\right\} \\&\quad \subset \bigcup _{m\in {\mathbb {Z}}^{+}}\left\{ \tau _{2,n_1}\wedge n_2\le \tau _{1,m}\right\} . \end{aligned}$$

Since

$$\begin{aligned} \bigcup _{m\in {\mathbb {Z}}^{+}}\left\{ \tau _{2,n_1}\wedge n_2\le \tau _{1,m}\right\} \subset \left\{ \tau _{2,n_1}\wedge n_2\le \tau _{1}\right\} , \end{aligned}$$

we see that if

$$\begin{aligned} {\mathbb {P}}\left\{ \sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert u(t)\Vert _{H^s}<\infty \right\} =1,\ \ \forall \ n_1,n_2\in {\mathbb {Z}}^{+} \end{aligned}$$
(3.1)

holds true, then for all \(n_1,n_2\in {\mathbb {Z}}^{+}\), \({\mathbb {P}}\left\{ \tau _{2,n_1}\wedge n_2\le \tau _{1}\right\} =1\) and

$$\begin{aligned} {\mathbb {P}}\left\{ \tau _2\le \tau _1\right\} ={\mathbb {P}}\left\{ \bigcap _{n_1\in {\mathbb {Z}}^{+}}\left\{ \tau _{2,n_1}\le \tau _{1}\right\} \right\} ={\mathbb {P}}\left\{ \bigcap _{n_1,n_2\in {\mathbb {Z}}^{+}}\left\{ \tau _{2,n_1}\wedge n_2\le \tau _{1}\right\} \right\} =1. \end{aligned}$$
(3.2)

Since (3.2) requires the assumption (3.1), it suffices to prove (3.1). However, we cannot directly apply the Itô formula to \(\Vert u\Vert ^2_{H^s}\) to get control of \({\mathbb {E}}\Vert u(t)\Vert _{H^s}^2\), since we only have \(u\in H^s\) and \(u^ku_x\in H^{s-1}\). Therefore the well-known Itô formula in a general Hilbert space cannot be used directly, see [19, Theorem 4.32] or [25, Theorem 2.10] for example. We will use the mollifier operator \(T_\varepsilon \) defined in Appendix A to overcome this obstacle. Indeed, applying \(T_\varepsilon \) to (1.5) and using the Itô formula for \(\Vert T_\varepsilon u\Vert ^2_{H^s}\), we have that for any \(t>0\),

$$\begin{aligned} \mathrm{d}\Vert T_\varepsilon u(t)\Vert ^2_{H^s} =&2\left( T_\varepsilon h(t,u)\mathrm{d}{\mathcal {W}},T_\varepsilon u\right) _{H^s} -2\left( D^sT_\varepsilon \left[ u^k\partial _xu\right] ,D^sT_\varepsilon u\right) _{L^2}\mathrm{d}t\\&-2\left( D^sT_\varepsilon F(u),D^sT_\varepsilon u\right) _{L^2}\mathrm{d}t + \Vert T_\varepsilon h(t,u)\Vert _{L_2(U; H^s)}^2\mathrm{d}t. \end{aligned}$$

Therefore for any \(n_1,n_2\ge 1\) and \(t\in [0,\tau _{2,n_1}\wedge n_2]\),

$$\begin{aligned} \Vert T_\varepsilon u(t)\Vert ^2_{H^s}-\Vert T_\varepsilon u(0)\Vert ^2_{H^s} =&2\sum _{j=1}^{\infty }\int _0^{t} \left( D^sT_\varepsilon h(t,u)e_j,D^sT_\varepsilon u\right) _{L^2}\mathrm{d}W_j\nonumber \\&-2\int _0^{t} \left( D^sT_\varepsilon \left[ u^k\partial _xu\right] ,D^sT_\varepsilon u\right) _{L^2}\mathrm{d}t\nonumber \\&-2\int _0^{t} \left( D^sT_\varepsilon F(u),D^sT_\varepsilon u\right) _{L^2}\mathrm{d}t\nonumber \\&+\int _0^{t} \sum _{j=1}^\infty \Vert D^sT_\varepsilon h(t,u)e_j\Vert _{L^2}^2\mathrm{d}t\nonumber \\ =&\int _0^{t}\sum _{j=1}^{\infty } L_{1,j}\mathrm{d}W_j+\sum _{i=2}^4\int _0^tL_i\mathrm{d}t, \end{aligned}$$

where \(\{e_k\}\) is the complete orthonormal basis of U. On account of the Burkholder-Davis-Gundy inequality, we arrive at

$$\begin{aligned}&{\mathbb {E}}\sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert T_\varepsilon u(t)\Vert ^2_{H^s}\le {\mathbb {E}}\Vert T_\varepsilon u_0\Vert ^2_{H^s} +C{\mathbb {E}}\left( \int _0^{\tau _{2,n_1}\wedge n_2}\left| \sum _{j=1}^{\infty } L_{1,j}\right| ^2\mathrm{d}t\right) ^{\frac{1}{2}}\\&\quad +\sum _{i=2}^4{\mathbb {E}}\int _0^{\tau _{2,n_1}\wedge n_2}|L_i|\mathrm{d}t. \end{aligned}$$

Then (2.3), (A.3) and the stochastic Fubini theorem [19] lead to

$$\begin{aligned} {\mathbb {E}}\left( \int _0^{\tau _{2,n_1}\wedge n_2}\left| \sum _{j=1}^{\infty } L_{1,j}\right| ^2\mathrm{d}t\right) ^{\frac{1}{2}} \le&\frac{1}{2}{\mathbb {E}}\sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert T_\varepsilon u\Vert _{H^s}^2 +Cf^2(n_1)\int _0^{n_2}\left( 1+{\mathbb {E}}\Vert u\Vert _{H^s}^2\right) \mathrm{d}t. \end{aligned}$$

For \(L_2\), we notice that \(T_\varepsilon \) satisfies (A.1), (A.2) and (A.3). Then it follows from Lemmas A.1, A.2 and A.3, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\) that

$$\begin{aligned}&\left( D^sT_\varepsilon \left[ u^ku_x\right] ,D^sT_\varepsilon u\right) _{L^2}\nonumber \\ =&\left( \left[ D^s, u^k\right] u_x,D^sT^2_\varepsilon u\right) _{L^2}+ \left( [T_\varepsilon ,u^k]D^su_x, D^sT_\varepsilon u\right) _{L^2} +\left( u^kD^sT_\varepsilon u_x, D^sT_\varepsilon u\right) _{L^2}\nonumber \\ \le&C\Vert u\Vert ^{k}_{W^{1,\infty }}\Vert u\Vert ^2_{H^s}, \end{aligned}$$

which implies

$$\begin{aligned} {\mathbb {E}}\int _0^{\tau _{2,n_1}\wedge n_2}|L_2|\ \mathrm{d}t\le Cn^k_1\int _0^{n_2}\left( 1+{\mathbb {E}}\Vert u\Vert _{H^s}^2\right) \mathrm{d}t. \end{aligned}$$

Similarly, it follows from Lemma A.5 and the assumption (2.3) that

$$\begin{aligned} {\mathbb {E}}\int _0^{\tau _{2,n_1}\wedge n_2}|L_3|+|L_4|\ \mathrm{d}t\le C(n_1^k+f^2(n_1))\int _0^{n_2}\left( 1+{\mathbb {E}}\Vert u\Vert _{H^s}^2\right) \mathrm{d}t. \end{aligned}$$

Combining the above estimates and using (A.3), we obtain

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert T_\varepsilon u(t)\Vert ^2_{H^s} \le C{\mathbb {E}}\Vert u_0\Vert ^2_{H^s}+ C\int _0^{n_2}\left( 1+{\mathbb {E}}\sup _{t'\in [0,t\wedge \tau _{2,n_1}]}\Vert u(t')\Vert _{H^s}^2\right) \mathrm{d}t, \end{aligned}$$

where the constant \(C=C(n_1,k)\) depends on \(n_1\) and k through \(n_1^k\) and \(n_1^k+f^2(n_1)\). Since the right-hand side of the above estimate does not depend on \(\varepsilon \), and \(T_\varepsilon u\) tends to u in \(C\left( [0,T],H^{s}\right) \) for any \(T>0\) almost surely as \(\varepsilon \rightarrow 0\), one can send \(\varepsilon \rightarrow 0\) to obtain

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert u(t)\Vert ^2_{H^s} \le C{\mathbb {E}}\Vert u_0\Vert ^2_{H^s}+ C\int _0^{n_2}\left( 1+{\mathbb {E}}\sup _{t'\in [0,t\wedge \tau _{2,n_1}]}\Vert u(t')\Vert _{H^s}^2\right) \mathrm{d}t. \end{aligned}$$
(3.3)

Then Gronwall’s inequality shows that for each \(n_1,n_2\in {\mathbb {Z}}^{+}\), there is a constant \(C=C(n_1,n_2,u_0,k)>0\) such that

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,\tau _{2,n_1}\wedge n_2]}\Vert u(t)\Vert ^2_{H^s}<C(n_1,n_2,u_0,k), \end{aligned}$$

which gives (3.1). Now we prove (2.7). Let us assume existence and uniqueness first. We notice that for fixed \(m,n>0\), even if \({\mathbb {P}}\{\tau _{1,m}=0\}\) or \({\mathbb {P}}\{\tau _{2,n}=0\}\) is positive, for a.e. \(\omega \in \Omega \) there are \(m>0\) and \(n>0\) such that \(\tau _{1,m},\tau _{2,n}>0\). By the continuity of \(\Vert u(t)\Vert _{H^s}\) and the uniqueness of u, it is easy to check that \(\tau _1=\tau _2\) is actually the maximal existence time \(\tau ^*\) of u in the sense of Definition 2.1. Consequently, we obtain the desired blow-up criteria. \(\square \)

3.2 Sketch of the Proof for Theorem 2.1

Since the proof of Theorem 2.1 follows standard ideas and is very similar to [58, 59], we only give a sketch of the main steps here.

  1.

    (Approximate solutions) The first step is to construct a suitable approximation scheme. For any \(R>1\), we let \(\chi _R(x):[0,\infty )\rightarrow [0,1]\) be a \(C^{\infty }\) function such that \(\chi _R(x)=1\) for \(x\in [0,R]\) and \(\chi _R(x)=0\) for \(x>2R\). Then we consider the following cut-off problem on \({\mathbb {T}}\),

    $$\begin{aligned} \left\{ \begin{aligned}&\mathrm{d}u+\chi _R(\Vert u\Vert _{W^{1,\infty }})\left[ u^k\partial _xu+F(u)\right] \mathrm{d}t=\chi _R(\Vert u\Vert _{W^{1,\infty }})h(t,u)\mathrm{d}{\mathcal {W}},\\&u(\omega ,0,x)=u_0(\omega ,x)\in H^{s}. \end{aligned} \right. \end{aligned}$$
    (3.4)

    From Lemma A.5, we see that the nonlinear term F(u) preserves the \(H^s\)-regularity of \(u\in H^s\) for any \(s>3/2\). However, to apply the theory of SDEs in Hilbert spaces to (3.4), we have to mollify the transport term \(u^k\partial _xu\), since this product loses one derivative. To this end, we consider the following approximation scheme:

    $$\begin{aligned} \left\{ \begin{aligned} \mathrm{d}u+G_{1,\varepsilon }(u)\mathrm{d}t&=G_{2}(u)\mathrm{d}{\mathcal {W}},\ \ x\in {\mathbb {T}},\ t>0,\\ G_{1,\varepsilon }(u)&=\chi _R(\Vert u\Vert _{W^{1,\infty }})\left[ J_{\varepsilon } \left( (J_{\varepsilon }u)^k\partial _xJ_{\varepsilon }u\right) +F(u)\right] ,\\ G_{2}(u)&=\chi _R(\Vert u\Vert _{W^{1,\infty }})h(t,u),\\ u(0,x)&=u_0(x)\in H^{s}({\mathbb {T}}), \end{aligned} \right. \end{aligned}$$
    (3.5)

    where \(J_{\varepsilon }\) is the Friedrichs mollifier defined in Appendix A. After mollifying the transport term \(u^ku_x\), for a fixed stochastic basis \({\mathcal {S}}=(\Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, W)\) and for \(u_0\in L^2(\Omega ;H^s)\) with \(s>3\), according to the existence theory of SDEs in Hilbert spaces (see for example [56, Theorem 4.2.4 with Example 4.1.3] and [40]), (3.5) admits a unique solution \(u_{\varepsilon }\in C([0,T_\varepsilon ),H^s)\) \({\mathbb {P}}-a.s.\) Moreover, the uniform \(L^{\infty }(\Omega ;W^{1,\infty })\) bound provided by the cut-off function \(\chi _R\) enables us to split the expectation \({\mathbb {E}}(\Vert u_\varepsilon \Vert _{H^{s}}^2\Vert u_\varepsilon \Vert _{W^{1,\infty }})\) and to close the a priori \(L^2(\Omega ;H^s)\) estimate for \(u_\varepsilon \). Then we can argue along the lines of the proof of Lemma 3.1 to find that for each fixed \(\varepsilon \), if \(T_\varepsilon <\infty \), then \( \limsup _{t\rightarrow T_\varepsilon }\Vert u_\varepsilon (t)\Vert _{W^{1,\infty }}=\infty \). Due to the cut-off in (3.5), for a.e. \(\omega \in \Omega \), \(\Vert u_\varepsilon (t)\Vert _{W^{1,\infty }}\) is always bounded and hence \(u_{\varepsilon }\) is actually a global-in-time solution, that is, \(u_{\varepsilon }\in C([0,\infty ),H^s)\) \({\mathbb {P}}-a.s.\) We remark here that the global existence of \(u_\varepsilon \) is necessary in our framework due to the lack of a lifespan estimate in the stochastic setting. Otherwise we would have to prove \({\mathbb {P}}\{\inf _{\varepsilon>0}T_\varepsilon >0\}=1\), which is not clear in general. A numerical illustration of this scheme is sketched after this list.

  2.

    (Pathwise solution to the cut-off problem in \(H^s\) with \(s>3\)) We pass to the limit \(\varepsilon \rightarrow 0\). By applying the stochastic compactness arguments from Prokhorov’s and Skorokhod’s theorems, we obtain the almost sure convergence for a new approximate solution \((\widetilde{u_{\varepsilon }},\widetilde{W_{\varepsilon }})\) defined on a new probability space. By virtue of a refined martingale representation theorem [36, Theorem A.1], we may send \(\varepsilon \rightarrow 0\) in \((\widetilde{u_{\varepsilon }},\widetilde{W_{\varepsilon }})\) to build a global martingale solution in \(H^s\) with \(s>3\) to the cut-off problem. Finally, since F satisfies the estimates as in Lemma A.5 and h satisfies Assumption \(A_1\), one can obtain the pathwise uniqueness easily. Then the Gyöngy–Krylov characterization [30] of the convergence in probability can be applied here to prove the convergence of the original approximate solutions. For more details, we refer to [58, 59].

  3.

    (Remove the cut-off and extend the range of s to \(s>3/2\)) When \(u_0\in L^{\infty }(\Omega ,H^s)\) with \(s>3/2\), by mollifying the initial data, we obtain a sequence of regular solutions \(\{u_n\}_{n\in {\mathbb {N}}}\) to (1.5). Motivated by [28, 58], one can prove that there is a subsequence (still denoted by \(u_n\)) such that for some almost surely positive stopping time \(\tau \),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ]}\Vert u_{n}-u\Vert _{H^s}=0\ \ {\mathbb {P}}-a.s., \end{aligned}$$

    and

    $$\begin{aligned} \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2 \ \ {\mathbb {P}}-a.s. \end{aligned}$$
    (3.6)

    Then we can pass to the limit \(n\rightarrow \infty \) to prove that \((u,\tau )\) is a solution to (1.5). Besides, a cutting argument, as in [3, 28, 29], enables us to remove the \(L^\infty (\Omega )\) assumption on \(u_0\). More precisely, when \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \), we consider the decomposition

    $$\begin{aligned} \Omega _m=\{m-1\le \Vert u_0\Vert _{H^s}<m\},\ m\ge 1. \end{aligned}$$

    Since \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \), \(\bigcup _{m\ge 1}\Omega _m\) is a set of full measure and \(1=\sum _{m\ge 1}\mathbf{1 }_{\Omega _m}\) \({\mathbb {P}}-a.s.\) Therefore we have

    $$\begin{aligned} u_0(\omega ,x)=\sum _{m\ge 1}u_{0,m}(\omega ,x) =\sum _{m\ge 1}u_0(\omega ,x)\mathbf{1 }_{\Omega _m}\ \ {\mathbb {P}}-a.s. \end{aligned}$$

    For each initial value \(u_{0,m}\), we let \((u_m,\tau _m)\) be the unique pathwise solution to (1.5) satisfying (3.6). Moreover, as \(F(0)=0\) and \(h(t,0)=0\) (cf. (2.3)), a direct computation shows that

    $$\begin{aligned} \left( u=\sum _{m\ge 1}u_m\mathbf{1 }_{m-1\le \Vert u_0\Vert _{H^s}<m},\ \ \tau =\sum _{m\ge 1}\tau _m\mathbf{1 }_{m-1\le \Vert u_0\Vert _{H^s}<m}\right) \end{aligned}$$

    is the unique pathwise solution to (1.5) corresponding to the initial condition \(u_0\). Since \((u_m,\tau _m)\) satisfies (3.6), we have \(u(\cdot \wedge \tau )\in C\left( [0,\infty );H^s\right) \) \({\mathbb {P}}-a.s.\) and

    $$\begin{aligned} \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}^2 =&\sum _{m=1}^{\infty }\mathbf{1 }_{m-1\le \Vert u_0\Vert _{H^s}<m} \sup _{t\in [0,\tau _m]}\Vert u_m\Vert _{H^s}^2\\ \le&C \sum _{m=1}^{\infty } \mathbf{1 }_{m-1\le \Vert u_0\Vert _{H^s}<m} \left( 4+ \Vert u_{0,m}\Vert _{H^s}^2 \right) =C\left( 4+ \Vert u_{0}\Vert _{H^s}^2 \right) . \end{aligned}$$

    Taking expectation in the above inequality, we obtain (2.6). Since the passage from \((u,\tau )\) to a unique maximal pathwise solution \((u,\tau ^*)\) in the sense of Definition 2.1 can be carried out as in [18, 28, 29, 57], we omit the details. The proof for Theorem 2.1 is finished.
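To make the construction in Step 1 more tangible, the following is a minimal numerical sketch of the cut-off, mollified scheme (3.5). It is an illustration only and not part of the proof: we assume a pseudo-spectral discretization of \({\mathbb {T}}\) (numpy FFT), replace the Friedrichs mollifier \(J_\varepsilon \) by a sharp Fourier cutoff, replace the smooth cut-off \(\chi _R\) by a clamped ramp, and take the linear noise \(h(t,u)=b(t)u\) from (1.8) driven by a single Brownian motion; all function names, parameters and discretization choices below are assumptions made for this sketch.

```python
# Illustrative sketch of the cut-off, mollified scheme (3.5); not the construction
# used in the proof. Assumptions: pseudo-spectral discretization, J_eps = sharp
# Fourier cutoff, chi_R = clamped ramp, linear noise b(t) u dW as in (1.8).
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers on T

def dx(u):                                        # spectral derivative d/dx
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def helmholtz_inv(f):                             # (1 - d_xx)^{-1} via 1/(1 + k^2)
    return np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + k ** 2)))

def mollify(u, K=40):                             # stand-in for J_eps: keep modes |k| <= K
    fu = np.fft.fft(u)
    fu[np.abs(k) > K] = 0.0
    return np.real(np.fft.ifft(fu))

def F(u, p):                                      # F = F_1 + F_2 + F_3 from (1.6); p plays the role of k
    ux = dx(u)
    F1 = helmholtz_inv(dx(u ** (p + 1)))
    F2 = 0.5 * (2 * p - 1) * helmholtz_inv(dx(u ** (p - 1) * ux ** 2))
    F3 = 0.5 * (p - 1) * helmholtz_inv(u ** (p - 2) * ux ** 3) if p >= 2 else 0.0
    return F1 + F2 + F3

def chi(u, R=5.0):                                # rough stand-in for chi_R(||u||_{W^{1,infty}})
    w = np.max(np.abs(u)) + np.max(np.abs(dx(u)))
    return float(np.clip((2 * R - w) / R, 0.0, 1.0))

def euler_maruyama(u0, p=2, b=lambda t: 0.5, T=1.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    u, t = u0.copy(), 0.0
    while t < T:
        c = chi(u)
        drift = -c * (mollify(mollify(u) ** p * dx(mollify(u))) + F(u, p))
        u = u + dt * drift + c * b(t) * u * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return u

u_end = euler_maruyama(np.cos(x))                 # one sample path started from u_0(x) = cos(x)
```

The time stepping above is a plain Euler–Maruyama discretization; no claim is made about its convergence to the pathwise solutions constructed in the proof.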

Remark 3.1

Changing the growth assumption (2.3) to \(\Vert h(t,u)\Vert _{L_2(U; H^s)}\le f(\Vert u\Vert _{H^{s'}})(1+\Vert u\Vert _{H^s})\) with \(H^s\hookrightarrow H^{s'}\hookrightarrow W^{1,\infty }\) leads to a blow-up criterion in terms of the \(H^{s'}\)-norm. One can then go along the lines of Lemma 3.1 to find \(\mathbf{1 }_{\{\tau ^*<\infty \}}=\mathbf{1 }_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{H^{s'}}=\infty \right\} }\) \({\mathbb {P}}-a.s.\) Using the cut-off \(\chi _{R}(\Vert u\Vert _{H^{s'}})\) instead in the approximate scheme (3.4) and in (3.5), the remaining part of the proof also allows us to establish a local existence and uniqueness result. The difference condition (2.4) is essential to guarantee pathwise uniqueness.

4 Proof for Theorem 2.2

Now we are going to prove Theorem 2.2. To this end, it suffices to show that if, for some \(R_0\gg 1\), the \(R_0\)-exiting time is strongly stable at the zero solution, then the solution map \(u_0\mapsto u\) is not uniformly continuous. We restrict our attention to \(k\ge 2\), since the case \(k=1\), i.e., the stochastic CH equation, has been treated in [59].

4.1 Approximate Solutions and Associated Estimates

Define the approximate solutions as

$$\begin{aligned} u^{l, n}=l n^{-\frac{1}{k}}+n^{-s}\cos \theta \ \text {and}\ \theta =nx-l t, \ n\in {\mathbb {Z}}^{+}, \end{aligned}$$

where

$$\begin{aligned} l\in \{-1,1\}\ \text {if}\ k\ \text {is odd};\ \ l\in \{0,1\} \ \text {if}\ k\ \text {is even}. \end{aligned}$$
(4.1)
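To fix the scalings used below, note that, up to constants depending only on the normalization of the Fourier transform, \(\Vert \cos (nx-lt)\Vert _{H^{\sigma }}\simeq n^{\sigma }\) for \(n\gg 1\) and any \(\sigma \in {\mathbb {R}}\). Hence

$$\begin{aligned} \Vert u^{l,n}(t)\Vert _{H^{\sigma }}\lesssim n^{-\frac{1}{k}}+n^{\sigma -s}, \qquad \Vert u^{l_1,n}(0)-u^{l_2,n}(0)\Vert _{H^{s}}\lesssim |l_1-l_2|\,n^{-\frac{1}{k}}\rightarrow 0\ \ (n\rightarrow \infty ), \end{aligned}$$

while for \(t>0\) the difference of two profiles with \(l_1\ne l_2\) equals \((l_1-l_2)n^{-\frac{1}{k}}+2n^{-s}\sin \big (nx-\tfrac{(l_1+l_2)t}{2}\big )\sin \big (\tfrac{(l_1-l_2)t}{2}\big )\), whose \(H^s\)-norm is, up to an \(O(n^{-\frac{1}{k}})\) term, of order \(|\sin t|\) for \(l_1=-l_2=1\) (k odd) and \(|\sin \frac{t}{2}|\) for \(l_1=1\), \(l_2=0\) (k even). This is the heuristic behind the lower bound (2.11); the rigorous argument below compares the actual solutions with these approximate profiles.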

Substituting \(u^{l,n}\) into (1.5), we see that the error \(E^{l, n}(t)\) can be defined as

$$\begin{aligned} E^{l, n}(t)=u^{l, n}(t)-u^{l, n}(0)+\int _{0}^{t}\left[ \left( u^{l, n}\right) ^k\partial _x u^{l, n}+F(u^{l, n})\right] \mathrm{d}t'-\int _{0}^{t}h(t',u^{l, n})\mathrm{d}{\mathcal {W}}. \end{aligned}$$
(4.2)

Now we analyze the error as follows.

Lemma 4.1

Let \(s>3/2\). For \(n\gg 1\), \(\delta \in (1/2,\min \{s-1,3/2\})\) and any \(T>0\), there is a \(C=C(T)>0\) such that

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T]}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\le Cn^{-2r_s}. \end{aligned}$$

Here \(r_{s}\) is a parameter with

$$\begin{aligned} 0<r_s= {\left\{ \begin{array}{ll} \displaystyle 2s-\delta -\frac{k+1}{k}, &{} \mathrm{if} \ \displaystyle \frac{3}{2}<s\leqslant \frac{2k+1}{k},\\ s-\delta +1, &{} \mathrm{if} \ \displaystyle s>\frac{2k+1}{k}. \end{array}\right. } \end{aligned}$$
(4.3)

Proof

Note that (4.1) implies \(l^k=l\). This means

$$\begin{aligned}&u^{l, n}(t)-u^{l, n}(0)+\int _{0}^{t}\left( u^{l, n}\right) ^k\partial _x u^{l, n}\ \mathrm{d}t'\\ =&u^{l, n}(t)-u^{l, n}(0)+\int _{0}^t\left( \sum _{j=0}^{k}\left( \begin{array}{l}{k} \\ {j}\end{array}\right) \left( l n^{-\frac{1}{k}}\right) ^{k-j} n^{-sj}\cos ^{j}\theta \right) \left( -n^{-s+1}\sin \theta \right) \ \mathrm{d}t'\\ =&\int _{0}^t\left( \sum _{j=1}^{k}\left( \begin{array}{l}{k} \\ {j}\end{array}\right) \left( l n^{-\frac{1}{k}}\right) ^{k-j} n^{-sj}\cos ^{j}\theta \right) \left( -n^{-s+1}\sin \theta \right) \ \mathrm{d}t' \triangleq \int _{0}^tT_{n,k}\ \mathrm{d}t'. \end{aligned}$$

Then we have

$$\begin{aligned} E^{l, n}(t)- \int _{0}^{t}\left[ T_{n,k}+F(u^{l, n})\right] \mathrm{d}t' +\int _{0}^{t}h(t',u^{l, n})\mathrm{d}{\mathcal {W}}=0. \end{aligned}$$
(4.4)

We first notice that

$$\begin{aligned} \left\| T_{n,k}\right\| _{H^{\delta }} \lesssim&\sum _{j=1}^{k}n^{\frac{j}{k}-sj-s}\left\| \cos ^j\theta \sin \theta \right\| _{H^{\delta }}\nonumber \\ \lesssim&\sum _{j=1}^{k}n^{\frac{j}{k}-sj-s-1}\left\| \cos ^{j+1}\theta \right\| _{H^{\delta +1}}\nonumber \\ \lesssim&\max _{1\le j\le k}\{n^{\frac{j}{k}-sj-s+\delta }\} \lesssim n^{\frac{1}{k}-2s+\delta }\lesssim n^{1-2s+\delta }. \end{aligned}$$
(4.5)

Recall that \(F(\cdot )\) is given by (1.6). Since \((1-\partial ^2_{xx})^{-1}\) is bounded from \(H^\delta \) to \(H^{\delta +2}\), we can use Lemmas A.6 and A.3 to estimate \(\Vert F_i(u^{l, n})\Vert _{H^{\delta }}\) (\(i=1,2,3\)) as follows.

$$\begin{aligned} \Vert F_1(u^{l, n})\Vert _{H^{\delta }}\lesssim&\sum _{j=0}^{k}n^{\frac{j}{k}-sj-s}\left\| \cos ^j\theta \sin \theta \right\| _{H^{\delta -2}}\nonumber \\ \lesssim&\sum _{j=0}^{k}n^{\frac{j}{k}-sj-s-1}\left\| \cos ^{j+1}\theta \right\| _{H^{\delta -1}}\nonumber \\ \lesssim&\sum _{j=0}^{k}n^{\frac{j}{k}-sj-s-1}\left\| \cos ^{j+1}\theta \right\| _{H^{\delta }}\nonumber \\ \lesssim&\max _{0\le j\le k}\{n^{\frac{j}{k}-sj-s-1+\delta }\} \lesssim n^{-s+\delta -1}. \end{aligned}$$
(4.6)
$$\begin{aligned} \Vert F_2(u^{l, n})\Vert _{H^{\delta }}\lesssim&\sum _{j=0}^{k-1}n^{\frac{j+1}{k}-sj-2s+1} \left\| \cos ^j\theta \sin ^2\theta \right\| _{H^{\delta -1}}\nonumber \\ \lesssim&\sum _{j=0}^{k-1}n^{\frac{j+1}{k}-sj-2s+1} \left\| \cos ^j\theta \sin ^2\theta \right\| _{H^{\delta }}\nonumber \\ \lesssim&\max _{0\le j\le k-1}\{n^{\frac{j+1}{k}-sj-2s+\delta +1}\} \lesssim n^{-2s+\delta +1+\frac{1}{k}}. \end{aligned}$$
(4.7)
$$\begin{aligned} \Vert F_3(u^{l, n})\Vert _{H^{\delta }} \lesssim&\sum _{j=0}^{k-2}n^{\frac{j+2}{k}-sj-3s+2}\left\| \cos ^j\theta \sin ^3\theta \right\| _{H^{\delta -2}}\nonumber \\ \lesssim&\sum _{j=0}^{k-2}n^{\frac{j+2}{k}-sj-3s+2}\left\| \cos ^j\theta \sin ^3\theta \right\| _{H^{\delta }}\nonumber \\ \lesssim&\max _{0\le j\le k-2}\{n^{\frac{j+2}{k}-sj-3s+\delta +2}\} \lesssim n^{-3s+\delta +\frac{2}{k}+2}. \end{aligned}$$
(4.8)

In the above estimates, we used the fact that \(F_3(\cdot )\) appears only for \(k\ge 2\). Combining this fact, (4.6), (4.7) and (4.8), we have

$$\begin{aligned} \Vert F(u^{l, n})\Vert _{H^{\delta }}\lesssim&\max \left\{ n^{1-2s+\delta },n^{-3s+\delta +\frac{2}{k}+2},n^{-2s+\delta +\frac{1}{k}+1},n^{-s+\delta -1} \right\} \lesssim n^{-r_s}. \end{aligned}$$
(4.9)

Then, for any \(T>0\) and \(t\in [0,T]\), by virtue of the Itô formula, we arrive at

$$\begin{aligned} \Vert E^{l, n}(t)\Vert ^2_{H^\delta }\le&\left| \left( -2\int _0^{t}h(t',u^{l,n})\mathrm{d}{\mathcal {W}},E^{l,n}\right) _{H^\delta }\right| +\sum _{i=2}^4\int _0^t|J_i|\mathrm{d}t', \end{aligned}$$

where

$$\begin{aligned} J_2&=2\left( D^\delta T_{n,k}, D^\delta E^{l, n}\right) _{L^2},\\ J_3&=2\left( D^\delta F(u^{l,n}),D^\delta E^{l, n}\right) _{L^2},\\ J_4&=\Vert h(t',u^{l,n})\Vert _{L_2(U,H^\delta )}^2. \end{aligned}$$

Taking the supremum with respect to \(t\in [0,T]\), using the Burkholder-Davis-Gundy inequality and using (2.5) and (4.9) yield

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T]}\left| \left( -2\int _0^{t}h(t',u^{l,n})\mathrm{d}{\mathcal {W}},E^{l,n}\right) _{H^\delta }\right| \le&2{\mathbb {E}}\left( \int _0^T\Vert E^{l, n}(t)\Vert _{H^\delta }^2 \Vert F(u^{l, n})\Vert _{H^\delta }^2\mathrm{d}t\right) ^{\frac{1}{2}}\nonumber \\ \le&2{\mathbb {E}}\left( \sup _{t\in [0,T]}\Vert E^{l, n}(t)\Vert _{H^\delta }^2 \int _0^T\Vert F(u^{l, n})\Vert _{H^\delta }^2\mathrm{d}t\right) ^{\frac{1}{2}}\nonumber \\ \le&\frac{1}{2}{\mathbb {E}}\sup _{t\in [0,T]}\Vert E^{l, n}(t)\Vert _{H^\delta }^2 +CTn^{-2r_s}. \end{aligned}$$

By virtue of (4.5) and (4.9), we obtain

$$\begin{aligned} \int _0^{T}{\mathbb {E}}|J_2|\mathrm{d}t \le&C\int _0^{T}{\mathbb {E}}\left( \left\| T_{n,k}\right\| _{H^{\delta }} \Vert E^{l, n}(t)\Vert _{H^\delta }\right) \mathrm{d}t\\ \le&C\int _0^{T}{\mathbb {E}}\left\| T_{n,k}\right\| ^2_{H^{\delta }}\mathrm{d}t +C\int _0^{T}{\mathbb {E}}\Vert E^{l, n}(t)\Vert _{H^\delta }^2\mathrm{d}t\\ \le&CTn^{-2r_s}+C\int _0^{T}{\mathbb {E}}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\mathrm{d}t, \\ \int _0^{T}{\mathbb {E}}|J_3|\mathrm{d}t \le&C\int _0^{T}{\mathbb {E}}\left( \Vert F(u^{l, n})\Vert _{H^{\delta }} \Vert E^{l, n}(t)\Vert _{H^\delta }\right) \mathrm{d}t\\ \le&C\int _0^{T}{\mathbb {E}}\Vert F(u^{l, n})\Vert ^2_{H^{\delta }}\mathrm{d}t +C\int _0^{T}{\mathbb {E}}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\mathrm{d}t\\ \le&CT n^{-2r_s} +C\int _0^{T}{\mathbb {E}}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\mathrm{d}t, \end{aligned}$$

and by (2.5),

$$\begin{aligned} \int _0^{T}{\mathbb {E}}|J_4|\mathrm{d}t \le&C\int _0^{T}{\mathbb {E}}\Vert F(u^{l, n})\Vert ^2_{H^{\delta }}\mathrm{d}t\le CT n^{-2r_s}. \end{aligned}$$

Collecting the above estimates, we arrive at

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T]}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\le CT n^{-2r_s} +C\int _0^{T}{\mathbb {E}}\sup _{t'\in [0,t]}\Vert E^{l, n}(t')\Vert ^2_{H^\delta }\mathrm{d}t. \end{aligned}$$

Obviously, for each \(n\ge 1\) and l satisfying (4.1), \({\mathbb {E}}\sup _{t'\in [0,t]}\Vert E^{l, n}(t')\Vert ^2_{H^\delta }\) is finite. Then we infer from the Grönwall inequality that

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T]}\Vert E^{l, n}(t)\Vert ^2_{H^\delta }\le Cn^{-2r_s},\ \ C=C(T). \end{aligned}$$

This is the desired result. \(\square \)

4.2 Construction of Actual Solutions

Now we consider the following periodic boundary value problem with deterministic initial data \(u^{l, n}(0,x)\), i.e.,

$$\begin{aligned} \left\{ \begin{aligned}&\mathrm{d}u+\left[ u^ku_x+F(u)\right] \mathrm{d}t =h(t,u)\mathrm{d}{\mathcal {W}},\qquad t>0,\ x\in {\mathbb {T}},\\&u(0,x)=u^{l, n}(0,x), \qquad x\in {\mathbb {T}}. \end{aligned} \right. \end{aligned}$$
(4.10)

Since h satisfies (2.5), we see that (2.3) and (2.4) are also verified. Then Theorem 2.1 yields that for each \(n\in {\mathbb {N}}\), (4.10) has a unique maximal pathwise solution \((u_{l,n},\tau ^*_{l,n})\).

4.3 Estimates on the Errors

Lemma 4.2

Let \(s>\frac{3}{2}\), \(\frac{1}{2}<\delta <\min \left\{ s-1,\frac{3}{2}\right\} \) and \(r_{s}>0\) be given as in Lemma 4.1. For any \(R>1\) and l satisfying (4.1), we define

$$\begin{aligned} \tau ^R_{l,n}:=\inf \left\{ t\ge 0:\Vert u_{l, n}\Vert _{H^s}> R\right\} . \end{aligned}$$
(4.11)

Then for any \(T>0\), when \(n\gg 1\), we have that for l satisfying (4.1),

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u^{l, n}-u_{l, n}\Vert ^2_{H^{\delta }}\le Cn^{-2r_s},\ \ C=C(R,T), \end{aligned}$$
(4.12)

and

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u^{l, n}-u_{l, n}\Vert ^2_{H^{2s-\delta }}\le Cn^{2s-2\delta },\ \ C=C(R,T). \end{aligned}$$
(4.13)

Proof

We first notice that by Lemma A.6, for l satisfying (4.1),

$$\begin{aligned} \Vert u^{l, n}(t)\Vert _{H^s}\lesssim 1,\ \ \forall \ t>0. \end{aligned}$$
(4.14)

Let \(q=q^{l,n}=\sum _{i=0}^{k}\left( u^{l, n}\right) ^{k-i}\left( u_{l, n}\right) ^i\) and \(v=v^{l,n}=u^{l, n}-u_{l, n}\). In view of (4.2), (4.4) and (4.10), we see that

$$\begin{aligned} v(t)+&\int _{0}^{t}\left[ \frac{1}{k+1}\partial _x(qv) -F(u_{l, n})\right] \mathrm{d}t'=-\int _{0}^{t}h(t',u_{l, n})\ \mathrm{d}{\mathcal {W}}+\int _{0}^{t}T_{n,k}\ \mathrm{d}t'. \end{aligned}$$
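
The role of q is to recast the difference of the \(u^ku_x\)-type terms in the equations for \(u^{l, n}\) and \(u_{l, n}\) as a transport term in v: by the elementary factorization \(a^{k+1}-b^{k+1}=(a-b)\sum _{i=0}^{k}a^{k-i}b^{i}\),

$$\begin{aligned} \left( u^{l, n}\right) ^ku^{l, n}_x-\left( u_{l, n}\right) ^k\partial _xu_{l, n} =\frac{1}{k+1}\partial _x\left[ \left( u^{l, n}\right) ^{k+1}-\left( u_{l, n}\right) ^{k+1}\right] =\frac{1}{k+1}\partial _x\left( qv\right) , \end{aligned}$$

which is how the term \(\frac{1}{k+1}\partial _x(qv)\) in the last display arises.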

For any \(T>0\), we apply the Itô formula on \([0,T\wedge \tau ^R_{l,n}]\), take the supremum over \(t\in [0,T\wedge \tau ^R_{l,n}]\) and use the Burkholder-Davis-Gundy inequality together with (2.5) to find

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v(t)\Vert ^2_{H^{\delta }}\le C{\mathbb {E}}\left( \int _0^{T\wedge \tau ^R_{l,n}}|K_1|^2\ \mathrm{d}t\right) ^{\frac{1}{2}} +\sum _{i=2}^5{\mathbb {E}}\int _0^{T\wedge \tau ^R_{l,n}}|K_i|\ \mathrm{d}t, \end{aligned}$$

where

$$\begin{aligned} K_1&=\Vert v\Vert _{H^\delta }\Vert F(u_{l,n})\Vert _{H^\delta },\\ K_2&=2\left( D^\delta T_{n,k}, D^\delta v\right) _{L^2},\\ K_3&=-\frac{2}{k+1}\left( D^\delta \partial _x[qv],D^\delta v\right) _{L^2},\\ K_4&=2\left( D^\delta F(u_{l,n}),D^\delta v\right) _{L^2},\\ K_5&=\Vert h(t,u_{l,n})\Vert _{L_2(U,H^\delta )}^2. \end{aligned}$$

We can first infer from Lemma A.5 that

$$\begin{aligned} \Vert F(u_{l,n})\Vert _{H^\delta }^2\lesssim&\left( \Vert F(u^{l,n})-F(u_{l,n})\Vert _{H^\delta } +\Vert F(u^{l,n})\Vert _{H^\delta }\right) ^2\nonumber \\ \lesssim&(\Vert u^{l,n}\Vert _{H^{s}}+\Vert u_{l,n}\Vert _{H^{s}})^2 \Vert v\Vert ^2_{H^{\delta }} +\Vert F(u^{l,n})\Vert ^2_{H^\delta }. \end{aligned}$$

Therefore, applying Lemmas A.4 and A.5, the embedding \(H^{\delta }\hookrightarrow L^{\infty }\), integration by parts and (4.5), we obtain

$$\begin{aligned} |K_1|^2&\lesssim (\Vert u^{l, n}\Vert _{H^{s}}+\Vert u_{l, n}\Vert _{H^{s}})^2 \Vert v\Vert ^4_{H^{\delta }} +\Vert F(u^{l, n})\Vert ^2_{H^\delta }\Vert v\Vert ^2_{H^\delta }, \\ |K_2|&\lesssim \left\| T_{n,k}\right\| ^2_{H^{\delta }}+\Vert v\Vert ^2_{H^{\delta }} \lesssim n^{-2r_s}+\Vert v\Vert ^2_{H^{\delta }}, \\ |K_3|&\lesssim \Vert q\Vert _{H^s}\Vert v\Vert ^2_{H^{\delta }} +\Vert q_x\Vert _{L^\infty }\Vert v\Vert ^2_{H^{\delta }} \lesssim \Vert q\Vert _{H^s}\Vert v\Vert ^2_{H^\delta }, \\ |K_4|&\lesssim (\Vert u^{l, n}\Vert _{H^{s}}+\Vert u_{l, n}\Vert _{H^{s}}) \Vert v\Vert ^2_{H^{\delta }} +\Vert F(u^{l, n})\Vert ^2_{H^\delta }+\Vert v\Vert ^2_{H^\delta }, \end{aligned}$$

and

$$\begin{aligned} |K_5|\lesssim (\Vert u^{l, n}\Vert _{H^{s}}+\Vert u_{l, n}\Vert _{H^{s}})^2 \Vert v\Vert ^2_{H^{\delta }} +\Vert F(u^{l, n})\Vert ^2_{H^\delta }. \end{aligned}$$

By virtue of Lemma A.5, (4.9), (4.11) and (4.14), we have

$$\begin{aligned}&C{\mathbb {E}}\left( \int _0^{T\wedge \tau ^R_{l,n}}|K_1|^2\ \mathrm{d}t\right) ^{\frac{1}{2}}\nonumber \\ \le&C{\mathbb {E}}\left( \sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v\Vert _{H^{\delta }}^2 \int _0^{T\wedge \tau ^R_{l,n}} (\Vert u^{l, n}\Vert _{H^{s}}+\Vert u_{l, n}\Vert _{H^{s}})^2 \Vert v\Vert ^2_{H^{\delta }}\ \mathrm{d}t\right) ^{\frac{1}{2}}\nonumber \\&+C{\mathbb {E}}\left( \sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v\Vert _{H^{\delta }}^2 \int _0^{T\wedge \tau ^R_{l,n}} \Vert F(u^{l, n})\Vert ^2_{H^\delta }\ \mathrm{d}t\right) ^{\frac{1}{2}}\nonumber \\ \le&\frac{1}{2}{\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v\Vert _{H^{\delta }}^2 +C_R{\mathbb {E}}\int _0^{T\wedge \tau ^R_{l,n}} \Vert v(t)\Vert ^2_{H^{\delta }}\ \mathrm{d}t +C{\mathbb {E}}\int _0^{T\wedge \tau ^R_{l,n}} \Vert F(u^{l, n})\Vert ^2_{H^\delta }\ \mathrm{d}t\nonumber \\ \le&\frac{1}{2}{\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v\Vert _{H^{\delta }}^2 +C_R{\mathbb {E}}\int _0^{T} \sup _{t'\in [0,t\wedge \tau ^R_{l,n}]}\Vert v(t')\Vert ^2_{H^\delta }\ \mathrm{d}t +CT n^{-2r_s}, \\&{\mathbb {E}}\int _0^{T\wedge \tau ^R_{l,n}}|K_2|+|K_4|+|K_5|\ \mathrm{d}t \le CTn^{-2r_s} +C_R\int _0^{T}{\mathbb {E}}\sup _{t'\in [0,t\wedge \tau ^R_{l,n}]}\Vert v(t')\Vert ^2_{H^{\delta }}\ \mathrm{d}t, \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}\int _0^{T\wedge \tau ^R_{l,n}}|K_3|\ \mathrm{d}t \le C_R\int _0^{T}{\mathbb {E}}\sup _{t'\in [0,t\wedge \tau ^R_{l,n}]}\Vert v(t')\Vert ^2_{H^{\delta }}\ \mathrm{d}t. \end{aligned}$$

Consequently,

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v(t)\Vert ^2_{H^{\delta }}\le CTn^{-2r_s} +C_R\int _0^{T}{\mathbb {E}}\sup _{t'\in [0,t\wedge \tau ^R_{l,n}]}\Vert v(t')\Vert ^2_{H^{\delta }}\ \mathrm{d}t. \end{aligned}$$

Using the Grönwall inequality, we have

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v(t)\Vert ^2_{H^{\delta }}\le Cn^{-2r_s},\ \ C=C(R,T), \end{aligned}$$

which is (4.12). For (4.13), we first notice that \(u_{l, n}\) is the unique solution to (4.10) and that \(2s-\delta >3/2\). For each fixed \(n\in {\mathbb {Z}}^{+}\), we can argue along the lines of the derivation of (3.3), using (4.11), to find

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u_{l, n}(t)\Vert ^2_{H^{2s-\delta }} \le&C{\mathbb {E}}\Vert u^{l, n}(0)\Vert ^2_{H^{2s-\delta }}+ C_R\int _0^{T} \left( {\mathbb {E}}\sup _{t'\in [0,t\wedge \tau ^R_{l,n}]}\Vert u_{l,n}(t')\Vert _{H^{2s-\delta }}^2\right) \mathrm{d}t. \end{aligned}$$
(4.15)

From (4.15), we can use the Grönwall inequality and Lemma A.6 to infer

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u_{l, n}(t)\Vert ^2_{H^{2s-\delta }} \le C{\mathbb {E}}\Vert u^{l, n}(0)\Vert ^2_{H^{2s-\delta }} \le C n^{2s-2\delta },\ \ C=C(R,T). \end{aligned}$$

By Lemma A.6 again, we have that for some \(C=C(R,T)\) and l satisfying (4.1),

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert v\Vert ^2_{H^{2s-\delta }} \le C{\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u_{l, n}\Vert ^2_{H^{2s-\delta }} +C{\mathbb {E}}\sup _{t\in [0,T\wedge \tau ^R_{l,n}]}\Vert u^{l, n}\Vert ^2_{H^{2s-\delta }} \le C n^{2s-2\delta }, \end{aligned}$$

which is (4.13). \(\square \)

4.4 Final Proof for Theorem 2.2

To begin with, we show that if the \(R_0\)-exiting time of the zero solution to (1.5) is strongly stable, then (2.8)–(2.11) are satisfied.

Lemma 4.3

If for some \(R_0\gg 1\), the \(R_0\)-exiting time of the zero solution to (1.5) is strongly stable, then for l satisfying (4.1) and \(\tau ^{R_0}_{l,n}\) given in (4.11), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \tau ^{R_0}_{l,n}=\infty \ \ {\mathbb {P}}-a.s. \end{aligned}$$
(4.16)

Proof

We notice that for all \(s'<s\), \( \lim _{n\rightarrow \infty } \Vert u^{l, n}(0)\Vert _{H^{s'}}=\lim _{n\rightarrow \infty } \Vert u_{l, n}(0)-0\Vert _{H^{s'}}=0. \) Since the unique solution to (1.5) with zero initial data is the zero solution, its \(R_0\)-exiting time is \(\infty \). As the \(R_0\)-exiting time of the zero solution to (1.5) is assumed to be strongly stable and \(u_{l, n}(0)\rightarrow 0\) in \(H^{s'}\) for \(s'<s\), we conclude that (4.16) holds true. \(\square \)

Proof for Theorem 2.2

For each \(n>1\), for l satisfying (4.1) and for the fixed \(R_0\gg 1\), Lemma A.6 and (4.11) give us \({\mathbb {P}}\{\tau _{l, n}^{R_0}>0\}=1\) and Lemma 4.3 implies (2.8). Moreover, Theorem 2.1 and (4.11) show that \(u_{l, n}\in C([0,\tau _{l, n}^{R_0}];H^s)\) \({\mathbb {P}}-a.s.\) and (2.9) holds true. By virtue of the interpolation inequality \(\Vert f\Vert _{H^{s}}\le C\Vert f\Vert ^{1/2}_{H^{\delta }}\Vert f\Vert ^{1/2}_{H^{2s-\delta }}\) (note that \(s=\frac{\delta +(2s-\delta )}{2}\)) and the Cauchy-Schwarz inequality in \(\omega \), we have

$$\begin{aligned}&{\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert _{H^{s}}\\ \le&C \left( {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert _{H^{\delta }}\right) ^{\frac{1}{2}} \left( {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert _{H^{2s-\delta }}\right) ^{\frac{1}{2}}\\ \le&C \left( {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert ^2_{H^{\delta }}\right) ^{\frac{1}{4}} \left( {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert ^2_{H^{2s-\delta }}\right) ^{\frac{1}{4}}. \end{aligned}$$

For any \(T>0\), combining Lemma 4.2 and the above estimate yields

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]} \Vert u^{l, n}-u_{l, n}\Vert _{H^{s}} \lesssim n^{-\frac{1}{4}\cdot 2r_{s}}\cdot n^{\frac{1}{4}\cdot (2s-2\delta )}=n^{r'_s}, \end{aligned}$$
(4.17)

where the implicit constant depends on \(R_0\) and T, and l satisfies (4.1). Recalling \(k\ge 2\) and (4.3), we have

$$\begin{aligned} 0>r'_s=-r_s\cdot \frac{1}{2}+(s-\delta )\cdot \frac{1}{2}= {\left\{ \begin{array}{ll} \displaystyle -\frac{s}{2}+\frac{k+1}{2k}, &{} \mathrm{if} \ \displaystyle \frac{3}{2}<s\leqslant \frac{2k+1}{k},\\ \displaystyle -\frac{1}{2}, &{} \mathrm{if} \ \displaystyle s>\frac{2k+1}{k}. \end{array}\right. } \end{aligned}$$

Therefore for l satisfying (4.1),

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{l, n}^{R_0}]}\Vert u^{l, n}-u_{l, n}\Vert _{H^{s}} =0. \end{aligned}$$
(4.18)

For odd k, (2.10) is given by

$$\begin{aligned} \Vert u_{-1,n}(0)-u_{1,n}(0)\Vert _{H^{s}}=\Vert u^{-1,n}(0)-u^{1,n}(0)\Vert _{H^{s}}\lesssim n^{-1/k}\rightarrow 0,\ \text{ as }\ n\rightarrow \infty . \end{aligned}$$

For even k, (2.10) is given by

$$\begin{aligned} \Vert u_{0,n}(0)-u_{1,n}(0)\Vert _{H^{s}}=\Vert u^{0,n}(0)-u^{1,n}(0)\Vert _{H^{s}}\lesssim n^{-1/k}\rightarrow 0,\ \text{ as }\ n\rightarrow \infty . \end{aligned}$$

When k is odd, we use (4.18) to find that for any \(T>0\),

$$\begin{aligned}&\liminf _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert _{H^s}\nonumber \\&\quad \ge \liminf _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u^{-1,n}(t)-u^{1,n}(t)\Vert _{H^s}\nonumber \\&\qquad -\lim _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u^{-1,n}(t)-u_{-1,n}(t)\Vert _{H^s}\nonumber \\&\qquad -\lim _{n\rightarrow \infty }{\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u^{1,n}(t)-u_{1,n}(t)\Vert _{H^s}\nonumber \\&\quad \gtrsim \liminf _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u^{-1,n}(t)-u^{1,n}(t)\Vert _{H^s}\nonumber \\&\quad \gtrsim \liminf _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert -2n^{-1}+n^{-s}\cos (nx+t)-n^{-s}\cos (nx-t)\Vert _{H^s}\nonumber \\&\quad \gtrsim \liminf _{n\rightarrow \infty } {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \left( n^{-s}\Vert \sin (nx)\Vert _{H^s}|\sin t|-\Vert 2n^{-1}\Vert _{H^s}\right) \gtrsim \sup _{t\in [0,T]}|\sin t|, \end{aligned}$$
(4.19)

where we have used Fatou’s lemma and the identity \(\cos (nx+t)-\cos (nx-t)=-2\sin (nx)\sin t\). By

$$\begin{aligned}&\left( {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert ^2_{H^s}\right) ^{\frac{1}{2}} \ge {\mathbb {E}}\sup _{t\in [0,T\wedge \tau _{-1,n}^{R_0}\wedge \tau _{1,n}^{R_0}]} \Vert u_{-1,n}(t)-u_{1,n}(t)\Vert _{H^s} \end{aligned}$$

and (4.19), we obtain (2.11). Similarly, we can also prove (2.11) when k is even. The proof is therefore completed. \(\square \)

5 Global Existence

In this section, we study the global existence and the blow-up of solutions to (1.8), and estimate the associated probabilities. Motivated by [29, 57, 58], we introduce the following Girsanov-type transform

$$\begin{aligned} v=\frac{1}{\beta (\omega ,t)} u,\ \ \beta (\omega ,t)=\mathrm{e}^{\int _0^tb(t') \mathrm{d} W_{t'}-\int _0^t\frac{b^2(t')}{2} \mathrm{d}t'}. \end{aligned}$$
(5.1)
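
We record that, by Itô's formula applied to \(\beta \) in (5.1),

$$\begin{aligned} \mathrm{d}\beta =\beta \left( b(t)\ \mathrm{d}W_{t}-\frac{b^2(t)}{2}\ \mathrm{d}t\right) +\frac{1}{2}\beta b^2(t)\ \mathrm{d}t=b(t)\beta \ \mathrm{d}W_{t}, \end{aligned}$$

that is, \(\beta \) is the stochastic exponential of \(\int _0^{\cdot }b(t')\ \mathrm{d}W_{t'}\); this cancellation is what removes the noise from the equation satisfied by \(v=\beta ^{-1}u\) in Proposition 5.1 below.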

Proposition 5.1

Let \(s>3/2\) and \(h(t,u)=b(t) u\) such that b(t) satisfies Assumption \(A_3\). Let \({\mathcal {S}}=\left( \Omega , {\mathcal {F}},{\mathbb {P}},\{{\mathcal {F}}_t\}_{t\ge 0}, W\right) \) be fixed in advance. If \(u_0(\omega ,x)\) is an \(H^s\)-valued \({\mathcal {F}}_0\)-measurable random variable with \({\mathbb {E}}\Vert u_0\Vert ^2_{H^s}<\infty \) and \((u,\tau ^*)\) is the corresponding unique maximal pathwise solution to (1.8), then for \(t\in [0,\tau ^*)\), the process v given in (5.1) solves

$$\begin{aligned} \left\{ \begin{aligned}&v_t+\beta ^k v^kv_x+\beta ^k F(v)=0,\ \ t>0,\ x\in {\mathbb {T}},\\&v(\omega ,0,x)=u_0(\omega ,x),\ x\in {\mathbb {T}}. \end{aligned} \right. \end{aligned}$$
(5.2)

Moreover, one has \(v\in C\left( [0,\tau ^*);H^s\right) \bigcap C^1([0,\tau ^*) ;H^{s-1})\) \({\mathbb {P}}-a.s.\) and, if \(s>3\), then

$$\begin{aligned} {\mathbb {P}}\{\Vert v(t)\Vert _{H^1}=\Vert u_0\Vert _{H^1}\}=1. \end{aligned}$$
(5.3)

Proof

Since b(t) satisfies Assumption \(A_3\), \(h(t,u)=b(t) u\) satisfies Assumption \(A_1\). Consequently, Theorem 2.1 implies that (1.8) has a unique maximal pathwise solution \((u,\tau ^*)\). Direct computation shows

$$\begin{aligned} \mathrm{d}v =&\left( -\beta ^k v^kv_x-\beta ^k F(v)\right) \mathrm{d}t. \end{aligned}$$
(5.4)
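
A minimal sketch of this direct computation, assuming (as the form of (5.2) indicates) that F is homogeneous of degree \(k+1\): Itô's formula gives \(\mathrm{d}\beta ^{-1}=\beta ^{-1}\left( -b(t)\ \mathrm{d}W_{t}+b^2(t)\ \mathrm{d}t\right) \), and with \(h(t,u)=b(t)u\) in (1.8) the product rule yields

$$\begin{aligned} \mathrm{d}v=\mathrm{d}\left( \beta ^{-1}u\right) =\beta ^{-1}\ \mathrm{d}u+u\ \mathrm{d}\beta ^{-1}+\mathrm{d}\left\langle \beta ^{-1},u\right\rangle =-\beta ^{-1}\left[ u^ku_x+F(u)\right] \mathrm{d}t, \end{aligned}$$

since the \(\mathrm{d}W\) contributions \(\beta ^{-1}b(t)u\ \mathrm{d}W_{t}\) and \(-\beta ^{-1}b(t)u\ \mathrm{d}W_{t}\) cancel and \(\mathrm{d}\left\langle \beta ^{-1},u\right\rangle =-b^2(t)\beta ^{-1}u\ \mathrm{d}t\) compensates the \(b^2\)-term; substituting \(u=\beta v\) and using the homogeneity of F then gives (5.4).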

Since \(v(0)=u_0(\omega ,x)\), we see that v satisfies (5.2). Moreover, Theorem 2.1 implies \(u\in C\left( [0,\tau ^*);H^s\right) \) \({\mathbb {P}}-a.s.\), so \(v\in C([0,\tau ^*); H^{s})\) \({\mathbb {P}}-a.s.\) Besides, from Lemma A.5 and (5.2)\(_1\), we see that for a.e. \(\omega \in \Omega \), \(v_t=-\beta ^k v^kv_x-\beta ^k F(v)\in C([0,\tau ^*); H^{s-1})\). Hence \(v\in C^1\left( [0, \tau ^*);H^{s-1}\right) \) \({\mathbb {P}}-a.s.\) Notice that if \(s>3\), (5.2)\(_1\) is equivalent to

$$\begin{aligned} v_t-v_{xxt}+(k+2)\beta ^k v^kv_x=(k+1)\beta ^kv^{k-1}v_xv_{xx}+\beta ^kv^kv_{xxx},\ \ k\ge 1. \end{aligned}$$
(5.5)

Multiplying both sides of the above equation by v, integrating over \(x\in {\mathbb {T}}\), and noticing that \((k+1)v^{k}v_xv_{xx}+v^{k+1}v_{xxx}=\partial _x\left( v^{k+1}v_{xx}\right) \), that \(\int _{{\mathbb {T}}}v^{k+1}v_x\ \mathrm{d}x=0\) and that \(-\int _{{\mathbb {T}}}vv_{xxt}\ \mathrm{d}x=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int _{{\mathbb {T}}}v_x^2\ \mathrm{d}x\), we see that for a.e. \(\omega \in \Omega \) and for all \(t>0\),

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\int _{{\mathbb {T}}}\left( v^2+v_x^2\right) \mathrm{d}x=0, \end{aligned}$$

which implies (5.3). \(\square \)

5.1 Global Existence: Case 1

Now we prove the first global existence result.

Proof for Theorem 2.3

To begin with, we apply the operator \(D^s\) to (5.4), multiply both sides of the resulting equation by \(D^sv\) and integrate over \({\mathbb {T}}\) to obtain that for a.e. \(\omega \in \Omega \),

$$\begin{aligned} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\Vert v(t)\Vert ^2_{H^s} = -\beta ^k(\omega ,t) \int _{{\mathbb {T}}}D^sv\cdot D^s\left[ v^kv_x\right] \mathrm{d}x - \beta ^k(\omega ,t)\int _{{\mathbb {T}}}D^sv\cdot D^sF(v) \mathrm{d}x. \end{aligned}$$

Using Lemmas A.2 and A.5, we conclude that there is a \(C=C(s)>1\) such that for a.e. \(\omega \in \Omega \),

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\Vert v(t)\Vert ^2_{H^s} \le C\beta ^k(t)\ \Vert v\Vert ^k_{W^{1,\infty }}\Vert v\Vert _{H^s}^2, \end{aligned}$$

where \(\beta \) is given in (5.1). Then \(w=\mathrm{e}^{-\int _0^tb(t') \mathrm{d} W_{t'}}u=\mathrm{e}^{-\int _0^t\frac{b^2(t')}{2} \mathrm{d}t'}v\) satisfies

$$\begin{aligned} \frac{\mathrm{d}}{ \mathrm{d}t}\Vert w(t)\Vert _{H^s}+\frac{b^2(t)}{2}\Vert w(t)\Vert _{H^s} \le C\alpha ^k(\omega ,t) \Vert w(t)\Vert ^k_{W^{1,\infty }}\Vert w(t)\Vert _{H^s},\ \ \alpha (\omega ,t)={\mathrm{e}}^{\int _0^tb(t') {\mathrm{d}} W_{t'}}. \end{aligned}$$
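
A brief check of the last display, with C absorbing harmless numerical factors: since \(w=\mathrm{e}^{-\int _0^t\frac{b^2(t')}{2} \mathrm{d}t'}v\) with a positive deterministic factor and \(\beta v=u=\alpha w\),

$$\begin{aligned} \frac{\mathrm{d}}{ \mathrm{d}t}\Vert w(t)\Vert _{H^s} =-\frac{b^2(t)}{2}\Vert w(t)\Vert _{H^s} +\mathrm{e}^{-\int _0^t\frac{b^2(t')}{2} \mathrm{d}t'}\frac{\mathrm{d}}{ \mathrm{d}t}\Vert v(t)\Vert _{H^s} \le -\frac{b^2(t)}{2}\Vert w(t)\Vert _{H^s} +C\alpha ^k(\omega ,t)\Vert w(t)\Vert ^k_{W^{1,\infty }}\Vert w(t)\Vert _{H^s}, \end{aligned}$$

where we used the differential inequality for \(\Vert v(t)\Vert ^2_{H^s}\) above and \(\beta ^k\Vert v\Vert ^k_{W^{1,\infty }}=\Vert u\Vert ^k_{W^{1,\infty }}=\alpha ^k\Vert w\Vert ^k_{W^{1,\infty }}\).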

Let \(R>1\) and \(\lambda _1>2\). Assume \(\Vert u_0\Vert _{H^s}<\frac{1}{RK}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k} \) almost surely. Define

$$\begin{aligned} \tau _{1}(\omega )=\inf \left\{ t>0:\alpha ^k(\omega ,t) \Vert w\Vert ^k_{W^{1,\infty }} =\Vert u\Vert ^k_{W^{1,\infty }}>\frac{b^2(t)}{C\lambda _1 }\right\} . \end{aligned}$$
(5.6)

Notice that \(\Vert u(0)\Vert ^k_{W^{1,\infty }}\le K^{k}\Vert u(0)\Vert ^k_{H^s}<\frac{b_*}{C\lambda _1}\). Therefore we have \( {\mathbb {P}}\{\tau _{1}>0\}=1. \) Moreover, by the definition (5.6), for \(t\in [0,\tau _{1})\) we have \(C\alpha ^k(\omega ,t) \Vert w\Vert ^k_{W^{1,\infty }}=C\Vert u\Vert ^k_{W^{1,\infty }}\le \frac{b^2(t)}{\lambda _1}\), and since \(\frac{b^2(t)}{2}-\frac{b^2(t)}{\lambda _1}=\frac{(\lambda _1-2)b^2(t)}{2\lambda _1}\), we obtain that for \(t\in [0,\tau _{1})\),

$$\begin{aligned} \frac{\mathrm{d}}{ \mathrm{d}t}\Vert w(t)\Vert _{H^s}+\frac{(\lambda _1-2)b^2(t)}{2\lambda _1}\Vert w(t)\Vert _{H^s} \le 0. \end{aligned}$$

The above inequality and \(w=\mathrm{e}^{-\int _0^tb(t')\mathrm{d} W_{t'}}u\) imply that for a.e. \(\omega \in \Omega \), for any \(\lambda _2>\frac{2\lambda _1}{\lambda _1-2}\) and for \(t\in [0,\tau _{1})\),

$$\begin{aligned} \Vert u(t)\Vert _{H^s} \le&\Vert w_0\Vert _{H^s}\mathrm{e}^{\int _0^tb(t') \mathrm{d} W_{t'}-\int _0^t\frac{(\lambda _1-2)b^2(t')}{2\lambda _1} \mathrm{d}t'}\nonumber \\ =&\Vert u_0\Vert _{H^s}\mathrm{e}^{\int _0^tb(t') \mathrm{d} W_{t'}-\int _0^t\frac{b^2(t')}{\lambda _2} \mathrm{d}t'}\mathrm{e}^{-\frac{\left( (\lambda _1-2)\lambda _2-2\lambda _1\right) }{2\lambda _1\lambda _2}\int _0^tb^2(t') \mathrm{d}t'}. \end{aligned}$$
(5.7)
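
For the reader's convenience, the splitting of the exponent in (5.7) rests on the elementary identity

$$\begin{aligned} \frac{\lambda _1-2}{2\lambda _1}=\frac{1}{\lambda _2}+\frac{(\lambda _1-2)\lambda _2-2\lambda _1}{2\lambda _1\lambda _2}, \end{aligned}$$

and the choice \(\lambda _2>\frac{2\lambda _1}{\lambda _1-2}\) guarantees \((\lambda _1-2)\lambda _2-2\lambda _1>0\), so that the last exponential factor in (5.7) is bounded by 1; this is used in (5.9) below.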

Define the stopping time

$$\begin{aligned} \tau _{2}=\tau _2(\omega ) =\inf \left\{ t>0:\mathrm{e}^{\int _0^tb(t') \mathrm{d} W_{t'}-\int _0^t\frac{b^2(t')}{\lambda _2} \mathrm{d}t'}>R\right\} . \end{aligned}$$
(5.8)

Notice that \({\mathbb {P}}\{\tau _{2}>0\}=1\). From (5.7), we have that almost surely

$$\begin{aligned} \Vert u(t)\Vert _{H^s}<&\frac{1}{RK}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k} \times R\times \mathrm{e}^{-\frac{\left( (\lambda _1-2)\lambda _2-2\lambda _1\right) }{2\lambda _1\lambda _2}\int _0^tb^2(t') \mathrm{d}t'}\nonumber \\ =&\frac{1}{K}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k} \mathrm{e}^{-\frac{\left( (\lambda _1-2)\lambda _2-2\lambda _1\right) }{2\lambda _1\lambda _2}\int _0^tb^2(t') \mathrm{d}t'} \le \frac{1}{K}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k},\ \ t\in [0,\tau _{1}\wedge \tau _{2}). \end{aligned}$$
(5.9)

Combining (5.9) and (5.6), we find that

$$\begin{aligned} {\mathbb {P}}\{\tau _{1}\ge \tau _{2}\}=1. \end{aligned}$$
(5.10)

Therefore it follows from (5.9) that

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert u(t)\Vert _{H^s}<\frac{1}{K}\left( \frac{b_*}{C\lambda _1}\right) ^{1/k} \mathrm{e}^{-\frac{\left( (\lambda _1-2)\lambda _2-2\lambda _1\right) }{2\lambda _1\lambda _2}\int _0^tb^2(t') \mathrm{d}t'} \ \mathrm{\ for\ all}\ t>0 \right\} \ge {\mathbb {P}}\{\tau _{2}=+\infty \}. \end{aligned}$$

We apply (ii) in Lemma A.7 to find that

$$\begin{aligned} {\mathbb {P}}\{\tau _{2}=+\infty \}>1-\left( \frac{1}{R}\right) ^{2/\lambda _2}, \end{aligned}$$

which completes the proof. \(\square \)

5.2 Global Existence: Case 2

Now we prove Theorem 2.4. Let \(\beta (\omega ,t)\) be given in (5.1). From Proposition 5.1, we see that for a.e. \(\omega \in \Omega \), \(v(\omega ,t,x)\) solves (5.2) on \([0,\tau ^*)\). Moreover, since \(H^s\hookrightarrow C^2\) for \(s>\frac{5}{2}\) (in particular for \(s>3\), as in Lemma 5.1 below), we have \(v,v_x\in C^1\left( [0, \tau ^*)\times {\mathbb {T}}\right) \). Then for a.e. \(\omega \in \Omega \), for any \(x\in {\mathbb {T}}\), the problem

$$\begin{aligned} \left\{ \begin{aligned}&\frac{dq(\omega ,t,x)}{dt}=\beta ^k(\omega ,t)v^k(\omega ,t,q(\omega ,t,x)),\ \ \ \ t\in [0,\tau ^*),\\&q(\omega ,0,x)=x,\ \ \ x\in {\mathbb {T}}, \end{aligned} \right. \end{aligned}$$
(5.11)

has a unique solution \(q(\omega ,t,x)\) such that \(q(\omega ,t,x)\in C^1([0,\tau ^*)\times {\mathbb {T}})\) almost surely. Moreover, differentiating (5.11) with respect to x yields that for a.e. \(\omega \in \Omega \),

$$\begin{aligned} \left\{ \begin{aligned}&\frac{dq_x}{dt}=k\beta ^k(\omega ,t)v^{k-1}v_xq_x,\ \ \ \ t\in [0,\tau ^*),\\&q_x(\omega ,0,x)=1,\ \ \ x\in {\mathbb {T}}. \end{aligned} \right. \end{aligned}$$

For a.e. \(\omega \in \Omega \), we solve the above equation to obtain

$$\begin{aligned} q_x(\omega ,t,x)=\exp {\left( \int _0^tk\beta ^k(\omega ,t')v^{k-1}v_x(\omega ,t',q(\omega ,t',x))\ \mathrm{d}t'\right) }. \end{aligned}$$

Thus for all \((t,x)\in [0, \tau ^*)\times {\mathbb {T}}\), we find \(q_x>0\) almost surely.

Lemma 5.1

Let \(s>3\), \(V_0(\omega ,x)=(1-\partial _{xx}^2)u_0(\omega ,x)\) and \(V(\omega ,t,x)=v(\omega ,t,x)-v_{xx}(\omega ,t,x)\), where \(v(\omega ,t,x)\) solves (5.2) on \([0,\tau ^*)\) \({\mathbb {P}}-a.s.\) Then for all \((t,x)\in [0, \tau ^*)\times {\mathbb {T}}\),

$$\begin{aligned} \mathrm{sign}(v)=\mathrm{sign}(V)=\mathrm{sign}(V_0) \ \ {\mathbb {P}}-a.s. \end{aligned}$$

Proof

We first notice that (5.2)\(_1\) is equivalent to (5.5). Therefore for a.e. \(\omega \in \Omega \), \(V=v-v_{xx}\) satisfies

$$\begin{aligned} V_{t}+\beta ^k v^kV_{x}+(k+1)\beta ^k V v^{k-1}v_{x}=0. \end{aligned}$$
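
This can be verified directly from (5.5): since \(V_t=v_t-v_{xxt}\), \(\beta ^kv^kV_x=\beta ^kv^k\left( v_x-v_{xxx}\right) \) and \((k+1)\beta ^kVv^{k-1}v_x=(k+1)\beta ^kv^{k}v_x-(k+1)\beta ^kv^{k-1}v_xv_{xx}\), summing these three expressions gives

$$\begin{aligned} V_{t}+\beta ^k v^kV_{x}+(k+1)\beta ^k V v^{k-1}v_{x} =v_t-v_{xxt}+(k+2)\beta ^kv^kv_x-(k+1)\beta ^kv^{k-1}v_xv_{xx}-\beta ^kv^kv_{xxx}, \end{aligned}$$

and the right-hand side vanishes by (5.5).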

Thus for a.e. \(\omega \in \Omega \), we have

$$\begin{aligned}&\frac{\mathrm{d}}{\mathrm{d}t}\left[ V(\omega ,t,q(\omega ,t,x))q_x^{\frac{k+1}{k}}(\omega ,t,x)\right] \\&\quad =V_tq^{\frac{k+1}{k}}_x+V_xq_tq^{\frac{k+1}{k}}_x+\frac{k+1}{k}Vq_x^{\frac{k+1}{k}-1}q_{xt}\\&\quad =q_x^{\frac{k+1}{k}}\left[ V_t+V_x\left( \beta ^k v^k\right) +\frac{k+1}{k}Vq_x^{-1} \left( k\beta ^k v^{k-1}v_xq_x\right) \right] \\&\quad =q_x^{\frac{k+1}{k}}\left[ V_t+\beta ^k v^k V_x+(k+1)\beta ^k Vv^{k-1} v_x\right] =0. \end{aligned}$$

Notice that \(q_x(\omega ,0,x)=1\) and \(q_x>0\) almost surely. Then we have that \(\mathrm{sign}(V)=\mathrm{sign}(V_0)\) \({\mathbb {P}}-a.s.\) Since \(v=G_{{\mathbb {T}}}*V\) with \(G_{{\mathbb {T}}}>0\) given in (1.7), we have \(\mathrm{sign}(v)=\mathrm{sign}(V)\) \({\mathbb {P}}-a.s.\) \(\square \)

Lemma 5.2

Let all the conditions as in the statement of Proposition 5.1 hold true. If

$$\begin{aligned} {\mathbb {P}}\{V_0(\omega ,x)>0,\ \forall \ x\in {\mathbb {T}}\}=p,\ \ {\mathbb {P}}\{V_0(\omega ,x)<0,\ \forall \ x\in {\mathbb {T}}\}=q, \end{aligned}$$

for some \(p,q\in [0,1]\), then the maximal pathwise solution u to (1.8) satisfies

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert u_x(\omega ,t)\Vert _{L^{\infty }}\le \Vert u(\omega ,t)\Vert _{L^{\infty }}\lesssim \beta (\omega ,t)\Vert u_0\Vert _{H^1},\ \forall \ t\in [0,\tau ^*)\right\} \ge p+q. \end{aligned}$$

Proof

It is easy to see that for a.e. \(\omega \in \Omega \) and for all \((t,x)\in [0, \tau ^*)\times {\mathbb {T}}\),

$$\begin{aligned} \left[ v+v_x\right] (\omega ,t,x)=&\frac{1}{2\sinh (\pi )} \int ^{2\pi }_0\mathrm{e}^{(x-y-2\pi \left[ \frac{x-y}{2\pi }\right] -\pi )}V(\omega ,t,y)\ \mathrm{d}y, \end{aligned}$$
(5.12)
$$\begin{aligned} \left[ v-v_x\right] (\omega ,t,x) =&\frac{1}{2\sinh (\pi )} \int ^{2\pi }_0\mathrm{e}^{(y-x+2\pi \left[ \frac{x-y}{2\pi }\right] +\pi )}V(\omega ,t,y)\ \mathrm{d}y. \end{aligned}$$
(5.13)

Then, since the kernels in (5.12) and (5.13) are strictly positive, one can employ Lemma 5.1 to obtain that for a.e. \(\omega \in \Omega \) and for all \((t,x)\in [0, \tau ^*)\times {\mathbb {T}}\),

$$\begin{aligned} \left\{ \begin{aligned} -v(\omega ,t,x)\le v_x(\omega ,t,x)\le v(\omega ,t,x),\ \&\mathrm{if}\ \ V_0(\omega ,x)=(1-\partial _{xx}^2)u_0(\omega ,x)>0, \\ v(\omega ,t,x)\le v_x(\omega ,t,x)\le -v(\omega ,t,x),\ \&\mathrm{if}\ \ V_0(\omega ,x)=(1-\partial _{xx}^2)u_0(\omega ,x)<0. \end{aligned} \right. \end{aligned}$$
(5.14)

Notice that

$$\begin{aligned} \left\{ V_0(\omega ,x)>0\right\} \bigcap \left\{ V_0(\omega ,x)<0\right\} =\emptyset . \end{aligned}$$
(5.15)

Combining (5.14) and (5.15) yields

$$\begin{aligned} {\mathbb {P}}\left\{ |v_x(\omega ,t,x)|\le |v(\omega ,t,x)|,\ \forall \ (t,x)\in [0, \tau ^*)\times {\mathbb {T}}\right\} \ge p+q. \end{aligned}$$
(5.16)

In view of \(H^1\hookrightarrow L^{\infty }\), (5.3) and (5.16), we arrive at

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert v_x(\omega ,t)\Vert _{L^{\infty }} \le \Vert v(\omega ,t)\Vert _{L^{\infty }} \lesssim \Vert v(\omega ,t)\Vert _{H^1} =\Vert u_0\Vert _{H^1},\ \forall \ t\in [0,\tau ^*)\right\} \ge p+q. \end{aligned}$$

Via (5.1), we obtain the desired estimate. \(\square \)

Proof for Theorem 2.4

Let \((u,\tau ^*)\) be the maximal pathwise solution to (1.8). Then Lemma 5.2 implies that

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert u\Vert _{W^{1,\infty }}\lesssim 2\beta (\omega ,t)\Vert u_0\Vert _{H^1},\ \forall \ t\in [0,\tau ^*)\right\} \ge p+q. \end{aligned}$$

It follows from (i) in Lemma A.7 that \(\sup _{t>0}\beta (\omega ,t)<\infty \) \({\mathbb {P}}-a.s.\) Then we can infer from (2.7) that \({\mathbb {P}}\{\tau ^*=\infty \}\ge p+q\). That is to say, \( {\mathbb {P}}\left\{ u\ \mathrm{exists\ globally}\right\} \ge p+q \). \(\square \)