Well-posedness for a stochastic Camassa-Holm type equation with higher order nonlinearities

This paper studies a generalized Camassa--Holm equation under random perturbation. We establish a local well-posedness result in the sense of Hadamard, i.e., existence, uniqueness and continuous dependence on initial data, as well as blow-up criteria for pathwise solutions in the Sobolev spaces $H^s$ with $s>3/2$ for $x\in\mathbb{R}$. The analysis of continuous dependence on initial data for nonlinear stochastic partial differential equations has received little attention in the literature so far. In this work, we first show that the solution map is continuous. Then we introduce a notion of stability of the exiting time. We provide an example showing that one cannot simultaneously improve the stability of the exiting time and the continuity of the dependence on initial data. Finally, we analyze the regularization effect of nonlinear noise in preventing blow-up. Precisely, we demonstrate that global existence holds almost surely provided that the noise is strong enough.


Introduction and main results
We consider the following stochastic generalized Camassa-Holm (CH) equation on $\mathbb{R}$: In (1.1), $W$ is a cylindrical Wiener process. For $h = 0$ and $k = 1$, equation (1.1) reduces to the deterministic CH equation. Equation (1.2) was introduced by Fokas & Fuchssteiner [21] in the study of completely integrable generalizations of the Korteweg-de Vries equation with a bi-Hamiltonian structure. In [10], Camassa & Holm showed that (1.2) models the unidirectional propagation of shallow water waves over a flat bottom. Since then, (1.2) has been studied intensively, and we only mention a few related results here. The CH equation exhibits both soliton interaction (peaked soliton solutions) and wave breaking (the solution remains bounded while its slope becomes unbounded in finite time [16]). When $h = 0$ and $k = 2$, equation (1.1) becomes the so-called Novikov equation, which was derived in [44]. Equation (1.3) also possesses a bi-Hamiltonian structure with an infinite sequence of conserved quantities, and it admits peaked solutions [24] as well as multipeakon solutions with explicit formulas [34]. For the study of other deterministic instances of (1.1), we refer to [28,60]. When additional noise is included, as in [46], the noise term can be used to account for the randomness arising from energy exchange mechanisms. Indeed, in [40,59], the weakly dissipative term $(1-\partial_x^2)(\lambda u)$ with $\lambda > 0$ was added to the governing equations. In [46], such a weakly dissipative term is assumed to be time-dependent, nonlinear in $u$ and random; accordingly, $(1-\partial_x^2)h(t,u)\,\dot{W}$ is proposed to describe random energy exchange mechanisms.
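For the reader's orientation, we recall the standard forms of these two special cases (as in [10] and [44]):

```latex
% Camassa-Holm equation (1.2):  h = 0, k = 1
u_t - u_{txx} + 3uu_x = 2u_x u_{xx} + u u_{xxx},
% Novikov equation (1.3):  h = 0, k = 2
u_t - u_{txx} + 4u^2 u_x = 3u u_x u_{xx} + u^2 u_{xxx},
\qquad t > 0, \ x \in \mathbb{R}.
```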
In this work, we consider the Cauchy problem for (1.1) on the whole space $\mathbb{R}$. Applying the operator $(1-\partial_x^2)^{-1}$ to (1.1), we reformulate the equation as
$$du + \left[u^k \partial_x u + F(u)\right] dt = h(t,u)\, dW, \quad x \in \mathbb{R},\ t > 0,\ k \in \mathbb{N}_{>0}, \qquad u(\omega,0,x) = u_0(\omega,x), \quad x \in \mathbb{R}, \qquad (1.4)$$
with $F(u) := F_1(u) + F_2(u) + F_3(u)$ defined in (1.5). Here we remark that $F_3(u)$ in (1.5) disappears in the CH case (i.e., when $k = 1$). The operator $(1-\partial_x^2)^{-1}$ in $F(\cdot)$ is understood as convolution with the kernel $G(x) = \frac{1}{2}e^{-|x|}$, i.e., $(1-\partial_x^2)^{-1}f = G \star f$, where $\star$ stands for the convolution. In this paper, regarding (1.4), we focus on the following issues:
• Local well-posedness in the sense of Hadamard (existence, uniqueness and continuous dependence on initial data), and a blow-up criterion for (1.4).
• Understanding the dependence on initial data, and in particular how continuous the solution map $u_0 \mapsto u$ is.
• The impact of noise, which is one of the central questions in the study of stochastic partial differential equations (SPDEs).
Regularization effects of noise have been observed for many different models. For example, it is known that the well-posedness of linear stochastic transport equations with noise can be established under weaker hypotheses than for their deterministic counterparts, cf. [20]. Particularly, for the impact of linear noise in different models, we refer to [2,14,15,26,38,47,54].
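To illustrate the structure of the nonlocal term, we record the standard reformulations for the two named special cases (these displays are consistent with, but not copied from, (1.5)):

```latex
% Camassa-Holm case (k = 1): F_3 is absent and
F(u) = \partial_x\big(1-\partial_x^2\big)^{-1}\!\Big(u^2 + \tfrac{1}{2}u_x^2\Big);
% Novikov case (k = 2): a zero-order nonlocal term appears,
F(u) = \partial_x\big(1-\partial_x^2\big)^{-1}\!\Big(u^3 + \tfrac{3}{2}u u_x^2\Big)
      + \tfrac{1}{2}\big(1-\partial_x^2\big)^{-1}\big(u_x^3\big).
```

This matches the remark that $F_3(u)$ disappears when $k = 1$.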
Notably, the existing results on regularization by noise are largely restricted to linear equations or linear noise. Hence we take particular interest in the case of nonlinear noise. Finding such noise is important as it helps us to understand the stabilizing mechanisms of noise, and it is a first step towards characterizing the noise that provides regularization effects for CH-type equations. To present our ideas in a simple setting, we only consider noise driven by a 1-D Brownian motion, that is, the case $h(t,u)\,dW = q(t,u)\,dW$, where $W$ is a standard 1-D Brownian motion and $q : [0,\infty)\times H^s \to H^s$ is a nonlinear function. Here we use the notation $q$ rather than $h$ because $h$ needs to be a Hilbert-Schmidt operator (see (1.8)) to define the stochastic integral with respect to a cylindrical Wiener process $W$. Then we focus on
$$du + \left[u^k u_x + F(u)\right] dt = q(t,u)\, dW, \quad x \in \mathbb{R},\ t > 0,\ k \in \mathbb{N}_{>0}, \qquad u(\omega,0,x) = u_0(\omega,x), \quad x \in \mathbb{R}. \qquad (1.6)$$
In Theorem 1.3, we provide a sufficient condition on $q$ such that global existence can be guaranteed; we refer to Remark 1.5 for further remarks on Theorem 1.3. Before introducing the notations, definitions and assumptions, we recall some recent results on stochastic CH-type equations. For stochastic CH-type equations with multiplicative noise, we refer to [46-48], where global existence and wave breaking were studied in the periodic case, i.e., $x \in \mathbb{T}$. In particular, when the noise is of transport type, we refer to [1,4,22,32,33]. We also refer to [12,13,45] for more results on stochastic CH-type equations.
1.1. Notations. We begin by introducing some notation. Let $(\Omega, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P})$ be a complete filtered probability space with right-continuous filtration. We consider a separable Hilbert space $U$ and let $\{e_n\}$ be a complete orthonormal basis of $U$. Let $\{W_n\}_{n\ge 1}$ be a sequence of mutually independent standard 1-D Brownian motions on $(\Omega, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P})$. Then we define the cylindrical Wiener process $\mathcal{W}$ as in (1.7). Let $\mathcal{X}$ be a separable Hilbert space. $L_2(U;\mathcal{X})$ stands for the space of Hilbert-Schmidt operators from $U$ to $\mathcal{X}$. If $Z \in L^2(\Omega; L^2_{loc}([0,\infty); L_2(U;\mathcal{X})))$ is progressively measurable, then the stochastic integral $\int_0^t Z\,d\mathcal{W}$ is a well-defined $\mathcal{X}$-valued continuous square-integrable martingale (see [5,23] for example). Throughout the paper, when a stopping time is defined, we set $\inf\emptyset := \infty$ by convention. For $s \in \mathbb{R}$, the differential operator $D^s := (1-\partial_x^2)^{s/2}$ is defined by $\widehat{D^s f}(\xi) = (1+\xi^2)^{s/2}\hat{f}(\xi)$, where $\hat{f}$ denotes the Fourier transform of $f$. The Sobolev space $H^s(\mathbb{R})$ is defined as $H^s(\mathbb{R}) := \{f : \|f\|_{H^s}^2 := \int_{\mathbb{R}}(1+\xi^2)^s|\hat{f}(\xi)|^2\,d\xi < \infty\}$, and the inner product on $H^s(\mathbb{R})$ is $(f,g)_{H^s} := (D^s f, D^s g)_{L^2}$. In the sequel, for simplicity, we will drop $\mathbb{R}$ if there is no ambiguity. We will use $\lesssim$ to denote estimates that hold up to some universal deterministic constant which may change from line to line but whose meaning is clear from the context. Definition 1.1 (Pathwise solution). Let $s > 3/2$ and let $u_0$ be an $H^s$-valued $\mathcal{F}_0$-measurable random variable. A pair $(u,\tau)$ is called a local pathwise solution to (1.4) if $\tau$ is a stopping time with $\mathbb{P}\{\tau > 0\} = 1$, $u(\cdot\wedge\tau)$ is an $\{\mathcal{F}_t\}$-adapted $H^s$-valued process with continuous paths, and for all $t > 0$,
$$u(t\wedge\tau) - u_0 + \int_0^{t\wedge\tau}\left[u^k\partial_x u + F(u)\right]dt' = \int_0^{t\wedge\tau} h(t',u)\,dW \quad \mathbb{P}\text{-a.s.}$$
Motivated by [46,49], we introduce the concept of stability of the exiting time in Sobolev spaces. The exiting time, as its name suggests, is defined as the time when the solution leaves a certain range. Definition 1.2 (Stability of exiting time). Let $(\Omega, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P}, \mathcal{W})$ be fixed, $s > 3/2$ and $k \in \mathbb{N}_{>0}$. Let $u_0$ be an $H^s$-valued $\mathcal{F}_0$-measurable random variable such that $\mathbb{E}\|u_0\|_{H^s}^2 < \infty$. Assume that $\{u_{0,n}\}$ is a sequence of $H^s$-valued $\mathcal{F}_0$-measurable random variables satisfying $\mathbb{E}\|u_{0,n}\|_{H^s}^2 < \infty$. For each $n$, let $u$ and $u_n$ be the unique solutions to (1.4), as in Definition 1.1, with initial values $u_0$ and $u_{0,n}$, respectively. For any $R > 0$, define the $R$-exiting times
$$\tau_n^R := \inf\{t \ge 0 : \|u_n\|_{H^s} > R\}, \qquad \tau^R := \inf\{t \ge 0 : \|u\|_{H^s} > R\}.$$
Now we define the following stability properties:
(1) If $u_{0,n} \to u_0$ in $H^s$ almost surely implies that
$$\lim_{n\to\infty}\tau_n^R = \tau^R \quad \mathbb{P}\text{-a.s.}, \qquad (1.9)$$
then the $R$-exiting time of $u$ is said to be stable.
(2) If $u_{0,n} \to u_0$ in $H^{s'}$ for all $s' < s$ almost surely implies that (1.9) holds true, the $R$-exiting time of $u$ is said to be strongly stable.
Our main results rely on the following assumptions concerning the noise coefficient h(t, u) in (1.1).
There is a non-decreasing function $g_2(\cdot) : [0,\infty) \to [0,\infty)$ such that the stated bound holds for all $N \ge 1$ and $3/2 \ge s > 1/2$. Here we outline the roles of these conditions: H1(2) is the classical local Lipschitz condition, and H1(3) is needed to prove uniqueness in Lemma 3.1. Indeed, if one finds two solutions $u, v \in H^s$ to (1.4), one can only estimate $u - v$ in $H^{s'}$ for $s' \le s-1$, because the term $u^k u_x$ loses one derivative. We refer to Remark 1.1 for more details.
Hypothesis H2. When we consider (1.4) in Sect. 4, we assume that there is a real number as specified below. Besides, we suppose the following: We remark here that (1.10) means that there is a $\rho_0 \in (1/2,1)$ such that, if $u_n$ is bounded in $H^s$ and $u_n$ tends to zero in the topology of $H^{\rho_0}$ as $n \to \infty$, then $\|h(t,u_n)\|_{L_2(U;H^{\rho_0})}$ tends to zero exponentially as $n \to \infty$. Examples of such noise structures can be found in Sect. 4.4.
As for the regularization effect of noise, we impose the following condition on $q$ in (1.6). Hypothesis H3. We assume that for $s > 3/2$, $q : [0,\infty)\times H^s \ni (t,u) \mapsto q(t,u) \in H^s$ is measurable. Define the set $\mathcal{V}$ as a subset of $C^2([0,\infty);[0,\infty))$ as specified below. Then we assume the following: there is a non-decreasing function $g_4(\cdot) : [0,+\infty) \to [0,+\infty)$ such that for any $u \in H^s$ with $s > 3/2$, the following growth condition holds, where $H_s(t,u)$ is defined as above and $\lambda_s > 0$ is the constant given in Lemma A.6 below.
Examples of the noise structure satisfying Hypothesis H 3 can be found in Sect. 5.2.

1.3. Main results and remarks. Now we summarize our major contributions; the proofs are given in the remainder of the paper.
The local solution $(u,\tau)$ can be extended to a unique maximal solution $(u,\tau^*)$ satisfying the blow-up criterion (1.12). (iii) (Stability for almost surely bounded initial data) Assume additionally that $u_0 \in L^\infty(\Omega;H^s)$. Let $v_0 \in L^\infty(\Omega;H^s)$ be another $H^s$-valued $\mathcal{F}_0$-measurable random variable. For any $T > 0$ and any $\epsilon > 0$, there is a $\delta = \delta(\epsilon,u_0,T) > 0$ such that if $\|u_0 - v_0\|_{L^\infty(\Omega;H^s)} < \delta$, then there is a stopping time $\tau \in (0,T]$ $\mathbb{P}$-a.s. such that (1.14) holds, where $u$ and $v$ are the solutions to (1.4) with initial data $u_0$ and $v_0$, respectively.
Remark 1.1. Existence and uniqueness have been studied for a wide range of SPDEs, but in many works the continuous dependence on initial data is not addressed. In this work, Theorem 1.1 provides a local well-posedness result in the sense of Hadamard, including the continuous dependence on initial data; moreover, a blow-up criterion is also obtained. We refer to [11,19,42] for studies of the dependence on initial data in cases where solutions to the target problems exist globally. However, it is necessary to point out that almost nothing is known about the dependence on initial data for SPDEs whose solutions may blow up in finite time.
The key difficulties in this case are as follows. On the one hand, if solutions to a nonlinear SPDE blow up in finite time, it is usually very difficult to obtain lifespan estimates. On the other hand, we have to find a positive time $\tau$ to obtain an inequality like (1.14). In addition, the target problem (1.4) is more difficult because the classical Itô formulae are not applicable. Indeed, for $u_0 \in H^s$, we only know $u \in H^s$ because this is a transport-type equation, and then $u^k u_x \in H^{s-1}$. However, the inner product $(u^k u_x, u)_{H^s}$ appears if one uses the Itô formula in a Hilbert space (cf. [23, Theorem 2.10]), and the duality pairing ${}_{H^{s-1}}\langle u^k u_x, u\rangle_{H^{s+1}}$ appears in the Itô formula under a Gelfand triple (cf. [39, Theorem I.3.1]). Since we only have $u \in H^s$ and $u^k u_x \in H^{s-1}$, neither of them is well-defined. Likewise, when we consider the $H^s$-norm of the difference between two solutions $u, v \in H^s$ to (1.4), we have to handle $(u^k u_x - v^k v_x, u-v)_{H^s}$, which requires controlling either $\|u\|_{H^{s+1}}$ or $\|v\|_{H^{s+1}}$. (1) Our proof of (i) in Theorem 1.1 is motivated by the recent results in [55]. For the convenience of the reader, we also give a brief comparison between our approach and the framework employed in many previous works.
• We first briefly review the martingale approach used to prove existence for nonlinear SPDEs. Roughly speaking, in searching for a solution to a nonlinear SPDE in some space $\mathcal{X}$, the martingale approach, as its name suggests, consists of obtaining a martingale solution first and then establishing pathwise uniqueness to obtain the pathwise solution. To begin with, one needs to approximate the equation and establish uniform estimates. For nonlinear problems, one may have to add a cut-off function to control the nonlinear parts growing in some space $\mathcal{Z}$ with $\mathcal{X} \hookrightarrow \mathcal{Z}$ (the choice of $\mathcal{Z}$ depends on the concrete problem). As far as we know, the cut-off technique first appears in [17] for the stochastic Schrödinger equation. The cut-off enables us to split the expectation of the nonlinear terms, so that the $L^2(\Omega;\mathcal{X})$ estimate can be closed. For example, for (1.4), the estimate for $\mathbb{E}\|u\|_{H^s}^2$ gives rise to $\mathbb{E}\big(\|u\|_{W^{1,\infty}}\|u\|_{H^s}^2\big)$, and hence we need to add a function to cut off $\|\cdot\|_{W^{1,\infty}}$. With this additional cut-off, we first consider the cut-off version of the problem and then remove the cut-off. The first main step in the martingale approach is finding a martingale solution. Usually, this is done by first obtaining tightness of the laws of the approximate solutions in some space $\mathcal{Y}$, and then using Prokhorov's theorem and Skorokhod's theorem to obtain convergence in $\mathcal{Y}$. Since $\mathcal{X}$ is usually infinite-dimensional (typically a Sobolev space), tightness requires that $\mathcal{X}$ be compactly embedded into $\mathcal{Y}$, i.e., $\mathcal{X} \hookrightarrow\hookrightarrow \mathcal{Y}$. This brings another requirement in specifying $\mathcal{Z}$, namely $\mathcal{Y} \hookrightarrow \mathcal{Z}$; otherwise, taking limits will not bring us back to the cut-off problem, due to the additional cut-off term $\|\cdot\|_{\mathcal{Z}}$ (in some cases, the choice of $\mathcal{Z}$ may only give rise to a semi-norm, and we use the notation $\|\cdot\|_{\mathcal{Z}}$ only for simplicity).
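The need to cut off precisely $\|\cdot\|_{W^{1,\infty}}$ can be traced to the following standard estimate (a classical Kato-Ponce type computation, recorded here for orientation): for $s > 3/2$,

```latex
% Split (D^s(u^k u_x), D^s u)_{L^2} = ([D^s, u^k]u_x, D^s u)_{L^2}
%   + (u^k D^s u_x, D^s u)_{L^2}, and integrate the second term by parts:
\big(u^k D^s u_x, D^s u\big)_{L^2}
  = -\tfrac{k}{2}\big(u^{k-1}u_x\, D^s u, D^s u\big)_{L^2}.
% Combining this with the commutator estimate
\big\|[D^s, f]g\big\|_{L^2}
  \lesssim \|\partial_x f\|_{L^\infty}\|g\|_{H^{s-1}} + \|f\|_{H^s}\|g\|_{L^\infty},
% applied with f = u^k and g = u_x, yields
\big|\big(D^s(u^k u_x), D^s u\big)_{L^2}\big|
  \lesssim \|u\|_{W^{1,\infty}}^{k}\,\|u\|_{H^s}^{2}.
```

Thus the $H^s$ energy estimate closes once $\|u\|_{W^{1,\infty}}$ is under control, which is exactly what the cut-off provides.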
Usually, in bounded domains, it is not difficult to pick $\mathcal{Y}$ and $\mathcal{Z}$ such that $\mathcal{X} \hookrightarrow\hookrightarrow \mathcal{Y} \hookrightarrow \mathcal{Z}$ (Sobolev spaces enjoy compact embeddings on bounded domains); see for example [4,9,18,26,48]. In unbounded domains, the difficulty lies in the choice of $\mathcal{Y}$ and $\mathcal{Z}$ such that $\mathcal{X} \hookrightarrow\hookrightarrow \mathcal{Y} \hookrightarrow \mathcal{Z}$. We refer to [7,8] for fluid models with certain cancellation properties (for example, divergence-free structure) and linearly growing noise. However, it is difficult to achieve this for SPDEs with general nonlinear terms and nonlinear noise. For instance, the cut-off in our case has to involve $\|\cdot\|_{\mathcal{Z}} = \|\cdot\|_{W^{1,\infty}}$ (see H1(1) and (2.3)). Even though we can obtain convergence in $H^{s'}_{loc}$ for some $\frac{3}{2} < s' < s$, it is still not clear whether the convergence holds true in $W^{1,\infty}$, because local convergence cannot control the global quantity $\|\cdot\|_{W^{1,\infty}}$. Therefore, technically speaking, nonlinear SPDEs are more non-local than their deterministic counterparts.
• Due to the above unresolved technical issue, the martingale approach is difficult to apply to our problem, and we instead prove convergence directly, motivated by [41,55] (see also [49,53,54] for recent developments). Generally speaking, we analyze the difference between two approximate solutions and directly find a space $\mathcal{Y}$ such that $\mathcal{X} \hookrightarrow \mathcal{Y} \hookrightarrow \mathcal{Z}$ and convergence (up to a subsequence) holds true in $\mathcal{Y}$. The difficult part is obtaining convergence in $\mathcal{Y}$ without the compactness $\mathcal{X} \hookrightarrow\hookrightarrow \mathcal{Y}$ (in the martingale approach, tightness comes from the compact embedding $\mathcal{X} \hookrightarrow\hookrightarrow \mathcal{Y}$). In this paper, the target path space is $C([0,T];H^s) = \mathcal{X}$, and we are able to prove convergence (up to a subsequence) in $C([0,T];H^{s-\frac{3}{2}}) = \mathcal{Y}$ directly. After taking limits to obtain a solution, one can improve the regularity to $H^s$ again; the technical difficulty in this step is to prove the time continuity of the solution, because the classical Itô formula is not applicable (see Remark 1.1). To overcome this difficulty, we apply a mollifier $J_\varepsilon$ to the equation and estimate $\mathbb{E}\|J_\varepsilon u\|_{H^s}^2$ first (see (2.11)). We also remark that the techniques for removing the cut-off have been used in [5,25,54]; here we formulate such a technical result in Lemma A.7 in an abstract way.
(2) Now we give a remark on (iii) in Theorem 1.1. For the question of dependence on initial data, there are delicate differences between the stochastic and the deterministic case. In the deterministic counterpart of (1.4), due to the lifespan estimate (see (4.10) for instance), for given $u_0 \in H^s$ it can be shown that if $\|u_0 - v_0\|_{H^s}$ is small enough, then there is a $T > 0$ depending on $u_0$ such that $\sup_{t\in[0,T]}\|u(t)-v(t)\|_{H^s}^2$ is also small. In the stochastic setting, since existence and uniqueness are obtained in the framework of $L^2(\Omega;H^s)$, it is natural to expect that, for given $u_0 \in L^2(\Omega;H^s)$, if $\mathbb{E}\|u_0-v_0\|_{H^s}^2$ is small enough, then for some stopping time that is positive almost surely, the supremum of the $H^s$-difference of the solutions up to that time is also small. However, so far we have only proved this under the assumption of smallness of $\|u_0-v_0\|_{L^\infty(\Omega;H^s)}$. Since $L^\infty(\Omega;H^s)$ can be viewed as being less random than $L^2(\Omega;H^s)$, one may roughly conclude that what continuity/stability of the solution map requires (the initial data and its perturbation lie in $L^\infty(\Omega;H^s)$) is more "picky" than what the existence of such a solution map requires (existence and uniqueness guarantee that a solution map can be defined). For the technical difficulties involved, we have the following explanations: • As mentioned in Remark 1.1, when we estimate the $H^s$-norm of the difference between two solutions $u$ and $v$, the $H^{s+1}$-norm appears. Hence, we have to use smooth approximations to make the analysis valid. More precisely, we approximate $u$ and $v$ by smooth processes $u_\varepsilon$ and $v_\varepsilon$ and consider the corresponding differences. Then all terms can be estimated because $\|u_\varepsilon\|_{H^{s+1}}$ and $\|v_\varepsilon\|_{H^{s+1}}$ make sense. Here we refer to Remark 3.2 for more details on the construction of such an approximation. • In dealing with the above three terms in the stochastic case, two sequences of stopping times (exiting times) are needed to control $\|u_\varepsilon\|_{H^s}$ and $\|v_\varepsilon\|_{H^s}$ (see (3.20) below).
Since we aim at obtaining $\tau > 0$ almost surely in (1.14) (otherwise the difference between two solutions on the set $\{\tau = 0\}$ cannot be measured), we have to guarantee that the stopping times used in bounding $\|u_\varepsilon\|_{H^s}$ and $\|v_\varepsilon\|_{H^s}$ have positive lower bounds almost surely. Up to now, we have only achieved this for initial values belonging to $L^\infty(\Omega;H^s)$. We also remark that this is different from the proof of existence. In the proof of existence, $u_\varepsilon$ exists on a common interval $[0,T]$ for all $\varepsilon$ and enjoys a uniform-in-$\varepsilon$ estimate (2.4), hence we can dispense with stopping times in the convergence (from (2.8) to (2.9)). Here we do not have such a common existence interval, due to the lack of a lifespan estimate, which is a significant difference between the stochastic and the deterministic cases. Indeed, one can easily find the lifespan estimate for the deterministic counterpart of (1.4) (see (4.10) below). • Moreover, even if the above issue can be handled, in dealing with the three terms in (1.15), we are confronted with terms involving $u$ and $v$, the solutions corresponding to $u_0$ and $v_0$, respectively. Below we will study this issue quantitatively; the next result gives an at least partially negative answer. (1) The construction of the approximative solutions for $x \in \mathbb{R}$ is more difficult than for $x \in \mathbb{T}$ (see [46]), since the approximative solution involves both high- and low-frequency parts (the high-frequency part alone is already enough in the case $x \in \mathbb{T}$, cf. [46,55]). The key point is that we need to guarantee $\inf_n \tau_{m,n} > 0$ almost surely in dealing with (1.20). Hence we are again confronted with a common difficulty in SPDEs, namely the lack of a lifespan estimate. In deterministic cases, one can easily obtain the lifespan estimate, which enables us to find a common interval $[0,T]$ on which all actual solutions exist (see for example Lemma 4.1). In the stochastic case, so far we have not been able to prove this.
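For comparison, the deterministic lifespan estimate alluded to here follows from a heuristic computation of the following type (constants suppressed; the precise statement is (4.10)):

```latex
% For h = 0, the H^s energy estimate gives the Riccati-type inequality
\frac{d}{dt}\|u\|_{H^s} \le C\,\|u\|_{W^{1,\infty}}^{k}\,\|u\|_{H^s}
  \le C\,\|u\|_{H^s}^{k+1},
% and solving y' = C y^{k+1}, y(0) = \|u_0\|_{H^s}, yields
\|u(t)\|_{H^s} \le \frac{\|u_0\|_{H^s}}{\big(1 - kCt\,\|u_0\|_{H^s}^{k}\big)^{1/k}},
\qquad 0 \le t < T^* := \frac{1}{kC\,\|u_0\|_{H^s}^{k}}.
```

Hence all deterministic solutions with $\|u_0\|_{H^s} \le M$ survive on a common interval depending only on $M$; it is precisely this bound that has no direct stochastic analogue.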
(2) To settle the above difficulty, we observe that the bound $\inf_n \tau_{m,n} > 0$ can be connected to the stability property of the exiting time (see Definition 1.2). The condition that the $R_0$-exiting time is strongly stable at the zero solution will be used to provide a common existence time $T > 0$ such that for all $n$, $u_{m,n}$ exists up to $T$ (see Lemma 4.4 below). Therefore, to prove Theorem 1.2, we will show that if the $R_0$-exiting time is strongly stable at the zero solution for some $R_0 \gg 1$, then the solution map $u_0 \mapsto u$ defined by (1.4) cannot be uniformly continuous. (1) In deterministic cases, the issue of the (optimal) initial-data dependence of solutions has been extensively investigated for various nonlinear dispersive and integrable equations. We refer to [35] for the inviscid Burgers equation and to [37] for the Benjamin-Ono equation. For the CH equation, we refer the reader to [29,30] concerning the non-uniform dependence on initial data in Sobolev spaces $H^s$ with $s > 3/2$. For the first results of this type in Besov spaces, we refer to [50,56]. In particular, non-uniform dependence on initial data in critical Besov spaces first appeared in [51,52]. In this work, Theorem 1.2 and (iii) in Theorem 1.1 demonstrate that the continuity of the solution map $u_0 \mapsto u$ is almost optimal, in the sense that, when the growth of the noise coefficient satisfies certain conditions (cf. Hypothesis H2), the map $u_0 \mapsto u$ is continuous, but one cannot simultaneously improve the stability of the exiting time and the continuity of the map $u_0 \mapsto u$. To our knowledge, results of this type for SPDEs first appeared in [46,49].
We also refer to [3,43,55] for recent developments. (2) It is worth mentioning that, as noted in (1) of Remark 1.3, the strong stability of exiting times is used as a technical "assumption" to handle the lower bound of a sequence of stopping times. So far we have not been able to verify the non-emptiness of this strong stability assumption for the current model. However, if the transport noise $u_x \circ dW$ is considered ($W$ being a standard 1-D Brownian motion and $\circ\,dW$ denoting the Stratonovich stochastic differential), we conjecture that either the notion of strong stability of exiting times can be captured, or the solution map $u_0 \mapsto u$ becomes more regular than merely continuous. Indeed, if $h(t,u)\,dW$ is replaced by $u_x \circ dW$ in (1.4), one can rewrite the equation in Itô form with an additional viscous term $-\frac{1}{2}u_{xx}$ on the left-hand side. Therefore, it is reasonable to expect that in this case either the strong stability of exiting times or the continuity of the solution map $u_0 \mapsto u$ can be improved. We refer to [31] and [27] for deterministic examples on the continuity of the solution map.
Remark 1.5. We note that many of the existing results on regularization effects of noise are essentially restricted to linear equations or linearly growing noise. In Theorem 1.3, both the drift and the diffusion terms are nonlinear. We also remark that blow-up can actually occur in the deterministic counterpart of (1.6). For example, when $k = 1$, blow-up (in the form of wave breaking) of solutions to the CH equation can be found in [16]. Therefore, Theorem 1.3 demonstrates that sufficiently strong noise can prevent singularities. Indeed, H3(3) means that the growth of $u^k u_x + F(u)$ can be controlled provided the noise grows fast enough in terms of a Lyapunov-type function $V$. In contrast to H1(2) and H1(3), we require $s > 3/2$ in both H3(2) and H3(3). As stated in Hypothesis H1, H3(2) implies that uniqueness holds for solutions in $H^s$ with $s > 5/2$. It seems that one could require $s > 1/2$ in H3(2) to guarantee uniqueness in $H^\rho$ with $\rho > 3/2$, but at present we can only construct examples for the case where $s > 3/2$ is required in both H3(2) and H3(3).
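To indicate the mechanism, consider the following heuristic Itô computation (a sketch under the illustrative assumption $q(t,u) = c\,\|u\|_{H^s}^{\theta}u$, which is not taken from Hypothesis H3; the rigorous argument is given in Sect. 5):

```latex
% Let y := \|u\|_{H^s}^2. Ito's formula (applied formally) gives
d\log(1+y) = \frac{dy}{1+y} - \frac{d\langle y \rangle}{2(1+y)^2},
\qquad
d\langle y \rangle = 4\big(u, q(t,u)\big)_{H^s}^2\,dt = 4c^2 y^{\theta+2}\,dt.
% The drift of \log(1+y) is therefore bounded by
\frac{C\,y^{k/2+1} + c^2 y^{\theta+1}}{1+y} - \frac{2c^2 y^{\theta+2}}{(1+y)^2}
\;\sim\; C\,y^{k/2} - c^2 y^{\theta} \qquad (y \gg 1),
```

which is bounded above when $\theta \ge k/2$ and $c$ is large, so $\log(1+\|u\|_{H^s}^2)$ cannot reach infinity in finite time almost surely: the negative quadratic-variation contribution dominates the nonlinear drift.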
We outline the remainder of the paper. In Sect. 2, we study the cut-off version of (1.4); we then remove the cut-off and prove Theorem 1.1 in Sect. 3. We prove Theorem 1.2 in Sect. 4. Concerning the interplay between noise and blow-up, we prove Theorem 1.3 in Sect. 5.

Cut-off version: Regular solutions
We first consider a cut-off version of (1.4). To this end, for any $R > 1$, we introduce a cut-off function $\chi_R$ and consider the corresponding cut-off problem (2.1). In this section, we aim at proving Proposition 2.1; its proof is given in the following subsections.
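A typical choice (stated here for illustration; the paper's precise definitions are in (2.1)) is $\chi_R \in C_c^\infty([0,\infty);[0,1])$ with $\chi_R \equiv 1$ on $[0,R]$ and $\chi_R \equiv 0$ on $[2R,\infty)$, so that the cut-off problem takes the form

```latex
du + \chi_R\big(\|u\|_{W^{1,\infty}}\big)\left[u^k \partial_x u + F(u)\right] dt
   = \chi_R\big(\|u\|_{W^{1,\infty}}\big)\, h(t,u)\, dW,
\qquad u(0) = u_0.
```

The truncation of $\|u\|_{W^{1,\infty}}$ restores a linear growth condition on the drift and diffusion, which is what makes the estimates of this section close.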
2.1. The approximation scheme. The first step is to construct a suitable approximation scheme. From Lemma A.5, we see that the nonlinear term $F(u)$ preserves the $H^s$-regularity of $u \in H^s$ for any $s > 3/2$. However, to apply the theory of SDEs in Hilbert spaces to (2.1), we have to mollify the transport term $u^k\partial_x u$, since this product loses one derivative. To this end, we consider the approximation scheme (2.3), where $J_\varepsilon$ is the Friedrichs mollifier defined in Appendix A. After mollifying the transport term $u^k\partial_x u$, it follows from H1(2) and Lemmas A.1 and A.5 that for any $\varepsilon \in (0,1)$, $H_{1,\varepsilon}(\cdot)$ and $H_2(t,\cdot)$ are locally Lipschitz continuous in $H^s$ with $s > \frac{3}{2}$. Besides, we notice that the cut-off function $\chi_R(\|\cdot\|_{W^{1,\infty}})$ guarantees the linear growth condition (cf. Lemma A.5 and H1(1)). Thus, for fixed $(\Omega, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P}, \mathcal{W})$ and for $u_0 \in L^2(\Omega;H^s)$ with $s > 3/2$, the existence theory for SDEs in Hilbert spaces (see for example [23]) implies that (2.3) admits a unique solution $u_\varepsilon \in C([0,\infty);H^s)$ $\mathbb{P}$-a.s.
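Here and below we use the standard properties of the Friedrichs mollifier (standard facts, cf. Appendix A; the constants are independent of $\varepsilon$): for $s \in \mathbb{R}$ and $\varepsilon \in (0,1)$,

```latex
\|J_\varepsilon u\|_{H^{s}} \le \|u\|_{H^{s}}, \qquad
\|J_\varepsilon u\|_{H^{s+1}} \lesssim \varepsilon^{-1}\,\|u\|_{H^{s}}, \qquad
\|J_\varepsilon u - u\|_{H^{s-1}} \lesssim \varepsilon\,\|u\|_{H^{s}}, \qquad
\lim_{\varepsilon \to 0}\|J_\varepsilon u - u\|_{H^{s}} = 0.
```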

2.2. Uniform estimates. Now we establish some uniform-in-$\varepsilon$ estimates for $u_\varepsilon$.
Proof. Using the Itô formula for $\|u_\varepsilon\|_{H^s}^2$, we obtain the corresponding energy identity. Therefore, one can infer from the BDG inequality, H1(1), Lemma A.5 and the above estimate that (2.5) holds. Using Grönwall's inequality in (2.5) implies (2.4).

2.3. Convergence of approximative solutions. Now we show that the family $\{u_\varepsilon\}$ contains a convergent subsequence. For two layers $u_\varepsilon$ and $u_\eta$, we see that $v_{\varepsilon,\eta} := u_\varepsilon - u_\eta$ satisfies the problem (2.6). Lemma 2.2. Let $s > 3$, $k \ge 1$ and $G(x) := x^{2k+2} + 1$. For any $\varepsilon, \eta \in (0,1)$, there is a constant $C > 0$ such that (2.7) holds. Proof. Using Lemmas A.1, A.3 and A.5, the mean value theorem for $\chi_R(\cdot)$, and the embedding $H^{s-\frac{3}{2}} \hookrightarrow W^{1,\infty}$, we obtain the first set of bounds for some $C > 0$. For $q_6$, using Lemma A.1 and then integrating by parts, we obtain the required estimate. Via the embedding $H^{s-\frac{3}{2}} \hookrightarrow W^{1,\infty}$ and Lemmas A.1 and A.3, we obtain the remaining bounds. Putting all this together yields the desired estimate.
Lemma 2.3. Let $s > 3$, $R > 1$ and $\varepsilon \in (0,1)$. For any $T > 0$ and $K > 1$, we define the stopping times as above. Then (2.8) holds. Proof. Applying the BDG inequality to (2.6), for some constant $C > 0$ we arrive at the corresponding estimate. For $q_9$ and $q_{10}$, we use (2.7), the mean value theorem for $\chi_R(\cdot)$, H1(1) and H1(2) to find a suitable constant. On account of Lemma 2.2 and the above estimate, we conclude that (2.8) holds true.
Proof. We first take $\varepsilon$ discrete, i.e., $\varepsilon = \varepsilon_n$ ($n \ge 1$) with $\varepsilon_n \to 0$ as $n \to \infty$. In this way, for all $n$, $u_{\varepsilon_n}$ can be defined on the same set $\tilde\Omega$ with $\mathbb{P}\{\tilde\Omega\} = 1$. For brevity, $u_{\varepsilon_n}$ is still denoted by $u_\varepsilon$. For any $\epsilon > 0$, by using (2.7), Lemma 2.1 and Chebyshev's inequality, we obtain the corresponding probability bound. Letting $K \to \infty$, we see that $u_\varepsilon$ converges in probability in $C([0,T];H^{s-\frac{3}{2}})$. Then we only need to prove the continuity up to time $\tau_N$. We use Lemma A.6, the BDG inequality, Hypothesis H1 and (2.12) to obtain the required estimate. We notice that for any $T > 0$, $J_\varepsilon u$ tends to $u$ in $C([0,T];H^s)$ as $\varepsilon \to 0$. This, together with Fatou's lemma, implies the desired bound. This and Kolmogorov's continuity theorem ensure the continuity of $t \mapsto \|u(t\wedge\tau_N)\|_{H^s}$.

Proof for Theorem 1.1
Now we can prove Theorem 1.1. For the sake of clarity, we provide the proof in several subsections.
3.1. Proof for (i) in Theorem 1.1: Existence and uniqueness.
3.1.1. Uniqueness. Before we prove the existence of a solution in H s with s > 3/2, we first prove uniqueness since some estimates here will be used later.
Then we use the Itô formula for $\|w\|_{H^{s'}}^2$ with $s' \in \big(\frac{1}{2}, \min\{s-1, \frac{3}{2}\}\big)$ to obtain the corresponding identity. Taking the supremum over $t \in [0,\tau^T_{u,v}]$ and using the BDG inequality, H1(3) and the Cauchy-Schwarz inequality yields the first estimate. Using Lemma A.4, integration by parts and $H^s \hookrightarrow W^{1,\infty}$, we bound the transport term; similarly, we use Lemma A.5 for the nonlocal term. Therefore, we combine the above estimates and apply Grönwall's inequality to obtain (3.1). Similarly, one can obtain the following uniqueness result for the original problem (1.4); we omit the details for simplicity. Lemma 3.2. Let $s > 3/2$, and let Hypothesis H1 be true. Let $u_0$ be an $H^s$-valued $\mathcal{F}_0$-measurable random variable such that $u_0 \in L^2(\Omega;H^s)$. If $(u_1,\tau_1)$ and $(u_2,\tau_2)$ are two local solutions to (1.4) satisfying $u_i(\cdot\wedge\tau_i) \in L^2(\Omega;C([0,\infty);H^s))$ for $i = 1,2$ and $\mathbb{P}\{u_1(0) = u_2(0) = u_0(x)\} = 1$, then $u_1 = u_2$ on $[0,\tau_1\wedge\tau_2]$ almost surely. Then for any $R > 0$ and $m \in \mathbb{N}$, it follows from the time continuity of the solution that $\mathbb{P}\{\tau_{m,R} > 0\} = 1$.
For any $T > 0$ and $s > 3/2$, we define the stopping times as above. Let $K \ge 2M + 5$ be fixed and let $s' \in \big(\frac{1}{2},\min\{s-1,\frac{3}{2}\}\big)$. Then there is a constant $C(K,T) > 0$ such that (3.5) holds. Proof. To start with, we notice that Lemma A.1 implies (3.4) and (3.6). Since (3.4) and (3.6) are used frequently in what follows, they will be used without further notice. Applying the Itô formula to $\|w_{\varepsilon,\eta}\|_{H^{s'}}^2$, and using Lemmas A.3 and A.5 together with the embeddings $H^{s'} \hookrightarrow L^\infty$ and $H^s \hookrightarrow W^{1,\infty}$, we obtain the corresponding bounds. Summarizing the above estimates and then using Grönwall's inequality, we find some constant $C = C(K,T) > 0$ such that (3.9) holds. To this end, we first recall (1.7) and then apply the Itô formula to deduce the summation identity for any $\rho > 0$. In the same way, we also rewrite $Q_{1,s}$ in (3.8) in the summation form (3.11). With (3.11) at hand, applying the Itô product rule to (3.8) and (3.10), we derive (3.12). We first notice the identity involving $P_k$ defined by (3.7). As a result, Lemma A.4, integration by parts and $H^s \hookrightarrow W^{1,\infty}$ give rise to the first bound. Using Lemma A.3, Hypothesis H1, Lemma A.5 as well as the embedding $H^s \hookrightarrow W^{1,\infty}$ for $s > 3/2$, we obtain the second bound for some $C(K) > 0$. Then one can infer from the above three inequalities, the BDG inequality and Hypothesis H1 the corresponding estimate for some constant $C(K) > 0$. For the last term, we proceed analogously. Consequently, (3.12) reduces to an inequality implying that (3.13) holds for some $C(K,T) > 0$. Combining (3.9) and (3.13), we obtain (3.5).
To proceed further, we state the following lemma from [25] in a form convenient for our purposes.
hold true. Then we have: (a) There exists a sequence of stopping times $\xi_{\varepsilon_n}$, for some countable sequence $\{\varepsilon_n\}$ with $\varepsilon_n \to 0$ as $n \to \infty$, and a stopping time $\tau$ such that $\xi_{\varepsilon_n} \le \tau^T_{\varepsilon_n}$ and $\lim_{n\to\infty}\xi_{\varepsilon_n} = \tau \in (0,T]$ $\mathbb{P}$-a.s. (b) $\|u\|_{H^s} \le \|u_0\|_{H^s} + 2$ $\mathbb{P}$-a.s. (c) There is a sequence of sets $\Omega_n \uparrow \Omega$ such that for any $p \in [1,\infty)$ the corresponding convergence holds, which clearly forces the required probability bound. Due to the Chebyshev inequality, Lemmas A.3 and A.5, Hypothesis H1, the embedding $H^s \hookrightarrow W^{1,\infty}$ for $s > 3/2$, (3.4) and (3.6), we obtain the first estimate. Then we can infer the second estimate from Doob's maximal inequality and the Itô isometry. Hence the limit process is a solution to (1.4) satisfying (1.11) and $u(0) = u_0$ almost surely. Uniqueness is given by Lemma 3.2.

3.2. Proof for (ii) in Theorem 1.1: Blow-up criterion. With a local solution $(u,\tau)$ at hand, one may pass from $(u,\tau)$ to the maximal solution $(u,\tau^*)$ as in [5,26]. In the periodic setting, i.e., $x \in \mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}$, the blow-up criterion (1.12) for a maximal solution has been proved in [46] by using energy estimates and stopping-time techniques. When $x \in \mathbb{R}$, (1.12) can be obtained in the same way, and we omit the details for brevity. We now turn to the proof of (iii) in Theorem 1.1. From now on, $\epsilon > 0$ and $T > 0$ are given. However, as mentioned in Remark 1.1, the term $u^k u_x$ loses one derivative, and the $H^s$ estimate will involve $\|u\|_{H^{s+1}}$ or $\|v\|_{H^{s+1}}$, which might be infinite since we only know $u, v \in H^s$. To overcome this difficulty, we consider (3.3). Let $\varepsilon \in (0,1)$. By (i) in Theorem 1.1, there is a unique solution $u_\varepsilon$ (resp. $v_\varepsilon$) to problem (3.3) with initial data $J_\varepsilon u_0$ (resp. $J_\varepsilon v_0$). Then the $H^{s+1}$-norm is well-defined for the smooth solutions $u_\varepsilon$ and $v_\varepsilon$. Similar to (3.4), for any $T > 0$, we define the stopping times (3.20). Recalling the analysis in Lemma 3.3 and Proposition 3.2 (for the case $f = v$, we notice (3.19)), we can use Lemma 3.4 to find a unified subsequence $\{\varepsilon_n\}$ with $\varepsilon_n \to 0$ as $n \to \infty$ such that for $f \in \{u,v\}$, there is a sequence of stopping times $\xi^f_{\varepsilon_n}$ and a stopping time $\tau^f$ satisfying $\xi^f_{\varepsilon_n} \le \tau^{f,T}_{\varepsilon_n}$, $n \ge 1$, and $\lim_{n\to\infty}\xi^f_{\varepsilon_n} = \tau^f \in (0,T]$ $\mathbb{P}$-a.s., (3.21) together with the convergence (3.22). Moreover, for $f \in \{u,v\}$, there exists $\Omega^f_n \uparrow \Omega$ such that (3.23) holds. Consequently, by Lebesgue's dominated convergence theorem and (3.21), we have for $n \gg 1$ that (3.24)-(3.26) hold, where $C = C(\|u_0\|_{L^\infty(\Omega;H^s)}, T)$ as before. Fix $n = n_0 \gg 1$ such that (3.24) and (3.26) are satisfied. Then, for (3.28) with $n = n_0$, we can find a $\delta = \delta(\epsilon, u_0, T) \in (0,1)$ such that (3.19) is satisfied together with the corresponding smallness. This inequality and (3.29) yield (1.14) with $\tau := \tau^u \wedge \tau^v$. Due to (3.21), $\tau \in (0,T]$ almost surely.
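For orientation, blow-up criteria of this type (cf. [46]) typically state that, for the maximal solution $(u,\tau^*)$,

```latex
\mathbf{1}_{\{\tau^* < \infty\}}
 = \mathbf{1}_{\left\{\limsup_{t \to \tau^*} \|u(t)\|_{H^s} = \infty\right\}}
 = \mathbf{1}_{\left\{\limsup_{t \to \tau^*} \|u(t)\|_{W^{1,\infty}} = \infty\right\}}
 \quad \mathbb{P}\text{-a.s.},
```

i.e., finite-time blow-up of the $H^s$-norm is equivalent to blow-up of the $W^{1,\infty}$-norm, mirroring the deterministic wave-breaking mechanism. This display is a hedged restatement consistent with the surrounding discussion; the precise form of (1.12) is given in Theorem 1.1.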

4. Weak instability
Now we prove Theorem 1.2. As mentioned in Remark 1.3, since we cannot obtain an explicit expression for the solution to (1.4), we start by constructing approximative solutions from which (1.19) can be established.

4.1. Approximative solutions and actual solutions.
Following the approach in [28,46], we now construct the approximative solutions. We fix two functions $\phi,\tilde\phi\in C^\infty_c$ such that (4.1) and (4.2) hold. The approximative solution $u^{m,n}$ is given by (4.3), where $u_h=u_{h,m,n}$ is the high-frequency part defined by (4.4) and $u_l=u_{l,m,n}$ is the low-frequency part, constructed as the solution to the problem (4.5). The parameter $\delta>0$ in (4.4) and (4.5) will be determined later for different $k\ge1$. In particular, when $m=0$ we have $u_l=0$; in this case the approximative solution $u^{0,n}$ has no low-frequency part and $u^{0,n}(t,x)=n^{-\frac{\delta}{2}-s}\phi\bigl(\frac{x}{n^\delta}\bigr)\cos(nx)$. Next, we consider the problem (1.4) with initial data $u^{m,n}(0,x)$, i.e., (4.6), where $F(\cdot)$ is defined by (1.5). Since $h$ satisfies $H_2(1)$, similarly to the proof of Theorem 1.1, we see that for each fixed $n\in\mathbb{N}$, (4.6) has a unique solution $(u_{m,n},\tau_{m,n})$ such that $u_{m,n}\in C([0,\tau_{m,n}];H^s)$ $\mathbb{P}$-a.s. with $s>5/2$.

4.2. Estimates on the errors. Substituting (4.3) into (1.4), we define the error $E(\omega,t,x):=u^{m,n}(t,x)-u^{m,n}(0,x)+\cdots$, i.e., the residual obtained when the approximative solution is inserted into the integral form of (1.4). For simplicity, we introduce the shorthand (4.7), where $C^j_q$ is the binomial coefficient. By using (4.3), (4.5) and (4.7), $E(\omega,t,x)$ can be reformulated as (4.8).

4.2.1. Estimates on the low-frequency part. The following lemma gives a decay estimate for the low-frequency part $u_l$ of $u^{m,n}$.
Lemma 4.1. Let $k\ge1$, $|m|=1$ or $m=0$, $s>3/2$, $\delta\in(0,2/k)$ and $n\gg1$. Then there is a $T_l>0$ such that for all $n\gg1$, the initial value problem (4.5) has a unique smooth solution $u_l=u_{l,m,n}\in C([0,T_l];H^s)$, where $T_l$ does not depend on $n$. Besides, for all $r>0$, there is a constant $C=C_{r,\phi,T_l}>0$ such that $u_l$ satisfies $\|u_l(t)\|_{H^r}\le C|m|\,n^{\frac{\delta}{2}-\frac{1}{k}}$ for $t\in[0,T_l]$ (4.9). Proof. When $m=0$, as mentioned above, $u_l\equiv0$ for all $t\ge0$, so it remains to prove the case $|m|=1$. For any fixed $n\ge1$, since $u_l(0,x)\in H^\infty$, by applying Theorem 1.1 with $h=0$ and deterministic initial data, we see that for any $s>3/2$, (4.5) has a unique (deterministic) solution $u_l=u_{l,m,n}\in C([0,T_{m,n}];H^s)$. In contrast to the stochastic case, here we show that the existence time admits a lower bound: there is a $T_l>0$ such that for all $n\gg1$, $u_l=u_{l,m,n}$ exists on $[0,T_l]$ and satisfies (4.9).
Step 1: Estimate $\|u_l(0,x)\|_{H^r}$. When $n\gg1$, we have $\|u_l(0)\|_{H^r}\le C\,n^{\frac{\delta}{2}-\frac{1}{k}}$ for some constant $C=C_{r,\phi}>0$. Step 2: Prove (4.9) for $r>3/2$. In this case, we apply Lemmas A.3 and A.5 and the embedding $H^r\hookrightarrow W^{1,\infty}$ to obtain a differential inequality for $\|u_l\|_{H^r}$. Solving this inequality, we arrive at (4.10). By Step 1, we have $T_{m,n}\ge\frac{1}{2Ck\,n^{k(\frac{\delta}{2}-\frac{1}{k})}}\to\infty$ as $n\to\infty$. Consequently, we can find a common time interval $[0,T_l]$ on which (4.9) holds for all $n\gg1$.
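The lower bound on the existence time comes from a comparison with a Riccati-type ODE; assuming, as is standard for this class of equations, that the energy estimate of Step 2 takes the form $\frac{\mathrm{d}}{\mathrm{d}t}\|u_l\|_{H^r}\le C\|u_l\|_{H^r}^{k+1}$, one solves it explicitly:

```latex
\|u_l(t)\|_{H^r}
\;\le\;\frac{\|u_l(0)\|_{H^r}}{\bigl(1-Ck\,t\,\|u_l(0)\|_{H^r}^{k}\bigr)^{1/k}},
\qquad 0\le t<\frac{1}{Ck\,\|u_l(0)\|_{H^r}^{k}}.
```

With $\|u_l(0)\|_{H^r}\le C\,n^{\frac{\delta}{2}-\frac{1}{k}}$ from Step 1, the guaranteed existence time $\frac{1}{2Ck}\|u_l(0)\|_{H^r}^{-k}\gtrsim n^{-k(\frac{\delta}{2}-\frac{1}{k})}$ tends to infinity because $\delta<2/k$, and on this interval the solution stays bounded by $2^{1/k}\|u_l(0)\|_{H^r}$.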

4.2.2. Estimate on $E$.
Recall the error $E$ given in (4.8). By using (4.1) and (4.2), we have $m=m^k$ and $\tilde\phi^k\phi=\phi$ for all $k\ge1$. Then, by (4.4) and the initial datum $u_l(0,x)$ in (4.5), we see that (4.11) holds as long as $m\ne0$; if $m=0$, then $u_l=0$ and (4.11) also holds. To sum up, (4.11) is valid for all $k\ge1$, $m$ satisfying (4.2), $u_h$ given by (4.4) and $u_l(0,x)$ in (4.5). On the other hand, for all $k\ge1$, (4.12) holds. Combining (4.11), (4.12) and (1.5) with (4.8) yields (4.13), where $E$ is decomposed into the terms $E_1,E_2,\dots$; we remark that $E_4$ disappears when $k=1$. Recalling $\rho_0\in(1/2,1)$ in Hypothesis $H_2$, we now estimate the $H^{\rho_0}$-norm of the error $E$; in fact, we will show that it decays.

(4.15)
Then the error $E$ given by (4.13) satisfies, for some $C=C(T_l)>0$, a decay estimate in the $H^{\rho_0}$-norm. Proof. The proof is technical and is given in Appendix B.

4.2.3. Estimate on $u_{m,n}-u^{m,n}$. Recall the approximative solutions $u^{m,n}$ given by (4.3). The following lemma estimates the difference between the actual solutions and the approximative solutions: for $n\gg1$, the bounds below hold, where $T_l>0$ is given in Lemma 4.1 and $C=C(R,T_l)>0$.
where $P=P_{m,n}=u_{m,n}^{k}+u_{m,n}^{k-1}u^{m,n}+\cdots+u_{m,n}(u^{m,n})^{k-1}+(u^{m,n})^{k}$, $k\ge1$. On $[0,T_l]$, by the Itô formula, we obtain an evolution identity for the difference. Taking the supremum over $t\in[0,T_l\wedge\tau^R_{m,n}]$ and then using the BDG inequality yields a bound on the expectation of this supremum, where $g_3(\cdot)$ is given in $H_2(2)$. As a result, for any fixed $s>5/2$, by applying Lemmas A.8 and 4.1 again, we can pick $N=N(s,k)\gg1$ to derive the required bounds. Consequently, we can infer the corresponding estimates from the above inequalities. Using Lemma A.4 and integration by parts, we further control the drift term, and the Grönwall inequality then gives (4.17). Now we prove (4.18). Since $2s-\rho_0>s>\frac{5}{2}$ and $u_{m,n}$ is the unique solution to (4.6), similarly to (2.5), we can use (4.16) and $H_2(1)$ to obtain the desired bound for each fixed $n\in\mathbb{N}$.

4.3. Proof of Theorem 1.2. When $k$ is odd, $\|u_{1,n}(0)-u_{-1,n}(0)\|_{H^s}\le C\,n^{-\frac{1}{k}}\bigl\|\tilde\phi\bigl(\frac{x}{n^\delta}\bigr)\bigr\|_{H^s}\to0$ as $n\to\infty$. When $k$ is even, $\|u_{0,n}(0)-u_{1,n}(0)\|_{H^s}=\bigl\|n^{-\frac{1}{k}}\tilde\phi\bigl(\frac{x}{n^\delta}\bigr)\bigr\|_{H^s}\to0$ as $n\to\infty$. The above two estimates imply that (1.18) holds true. Now we prove (1.19). Let $T_l>0$ be given in Lemma 4.1. When $k$ is odd, we use (4.20) to derive the corresponding lower bound. It then follows from the construction of $u^{m,n}$, Fatou's lemma and Lemmas 4.1, A.8 and 4.4 that (1.19) holds in the case that $k$ is odd. When $k$ is even, one argues similarly to (4.21) and also obtains (1.19). The proof is completed.

5. Noise prevents blow-up

5.1. Proof of Theorem 1.3. Our approach is motivated by [6,45]. Let $s>5/2$ and let $u_0\in H^s$ be an $H^s$-valued $\mathcal{F}_0$-measurable random variable with $\mathbb{E}\|u_0\|^2_{H^s}<\infty$. With $H_3(1)$ and $H_3(2)$ at hand, one can follow the steps in the proof of Theorem 1.1 to obtain a unique solution $u$ to (1.6) such that $u\in C([0,\tau^*);H^s)$ $\mathbb{P}$-a.s. and (5.1) holds. Due to (5.1), we have $\tau_m<\tau^*$ $\mathbb{P}$-a.s., and hence we only need to show that $\tau^*=\infty$ $\mathbb{P}$-a.s.
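The factor $P=P_{m,n}$ above arises from the elementary telescoping identity for differences of powers, applied with $a$ the actual solution and $b$ the approximative one:

```latex
a^{k+1}-b^{k+1}
=(a-b)\bigl(a^{k}+a^{k-1}b+\cdots+a\,b^{k-1}+b^{k}\bigr)
=(a-b)\,P.
```

This converts the difference of the nonlinearities into a multiple of the difference of the solutions, so that after the Itô and BDG steps the estimate closes under Grönwall's inequality, as used to obtain (4.17).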

(5.2)
For $V\in\mathcal{V}$, applying the Itô formula to $\|u(t)\|^2_{H^{s-1}}$ and then to $V(\|u\|^2_{H^{s-1}})$, we obtain the corresponding identities. Next, recalling $\tau_m<\tau^*$ and $s-1>3/2$, we take expectations and then use Hypothesis $H_3$ and Lemma A.6 to derive a bound, where $H_\sigma(t,u)$ (for $u\in H^\sigma$ and $\sigma>3/2$) is defined in Hypothesis $H_3(3)$. Then we can infer from this estimate that there is a constant $C(u_0,N_1,N_2,t)>0$ such that the moment bound holds. Next, for any $T>0$, the BDG inequality yields the corresponding estimate for the stochastic integral.
Lemma A.1 ([41,48]). For all $\varepsilon\in(0,1)$, $s,r\in\mathbb{R}$ and $u\in H^s$, the mollifier $J_\varepsilon$ constructed in (A.1) satisfies the standard operator bounds, where $\mathcal{L}(X;Y)$ denotes the space of bounded linear operators from $X$ to $Y$.

Lemma A.3 ([58]). Let $f,g$ be two functions such that $g\in W^{1,\infty}$ and $f\in L^2$. Then, for some $C>0$, the commutator estimate holds. Besides, if $s>0$, then we have the product estimate for all $f,g\in H^s\cap L^\infty$.

Lemma A.4 (Proposition 4.2, [57]). Let $\rho>3/2$ and $0\le\eta+1\le\rho$. Then, for some $c>0$, the corresponding estimate holds.

Lemma A.5. For $F(\cdot)$ defined in (1.5), we have for all $k\ge1$ the following estimates. Proof. We only estimate $\|F(v)\|_{H^s}$ for $0<s\le3/2$, since the other cases can be found in [46,52,56]. When $s>0$, by using (1.5) and Lemma A.3, we derive the first bound (A.2). When $k\ge2$, we have a further estimate; combining the two cases for $F_2$, we arrive at (A.3). Now we consider $F_3$. When $k\ge3$, we proceed similarly; combining the two cases for $F_3$, and noticing that $F_3=0$ for $k=1$, we find (A.4). The desired estimate is then a consequence of (A.2), (A.3) and (A.4).
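For orientation, the estimates of Lemma A.3 are presumably of the classical Kato–Ponce/Moser type (the exact statement in [58] may differ in constants and norms); with $\Lambda^s=(1-\partial_x^2)^{s/2}$, the standard forms read:

```latex
\|[\Lambda^{s},g]f\|_{L^{2}}
\le C\bigl(\|\partial_{x}g\|_{L^{\infty}}\|\Lambda^{s-1}f\|_{L^{2}}
+\|\Lambda^{s}g\|_{L^{2}}\|f\|_{L^{\infty}}\bigr),
\qquad
\|fg\|_{H^{s}}
\le C\bigl(\|f\|_{H^{s}}\|g\|_{L^{\infty}}
+\|f\|_{L^{\infty}}\|g\|_{H^{s}}\bigr),\quad s>0.
```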
Lemma A.6. Let $s>3/2$, $k\ge1$, let $F(\cdot)$ be given in (1.5) and let $J_\varepsilon$ be the mollifier defined in (A.1). There exists a constant $\lambda_s>0$ such that for all $\varepsilon>0$ the stated estimate holds. If $u\in H^{s+1}$, then $u^ku_x\in H^s$, and the above estimate also holds without $J_\varepsilon$.
Proof. We only prove the case $u\in H^s$. It follows from Lemmas A.1, A.2 and A.3, integration by parts and the embedding $H^s\hookrightarrow W^{1,\infty}$ that the first bound holds. From Lemma A.5, we also obtain the second bound. Combining the above two inequalities gives the desired estimate of the lemma.
The following technique has been used in [4,5,25]; here we formulate it as an abstract result.
Lemma A.7. Suppose $u_0$ is an $H^s$-valued $\mathcal{F}_0$-measurable random variable and that $H_1(1)$ holds true. Let $I$ be a countable index set and let $\{\Omega_i\}_{i\in I}$ satisfy (A.5). If, for each $i\in I$, $(u_i,\tau_i)$ is a solution to (1.4) with initial data $u_0\mathbf{1}_{\Omega_i}$, then the pair defined in (A.6) is a solution to (1.4) with initial data $u_0$.
Proof. Since $(u_i,\tau_i)$ is a solution to (1.4) with initial value $u_0\mathbf{1}_{\Omega_i}$, the corresponding integral identity holds. Restricting this equation to $\Omega_i$, we obtain the localized identity, and the relevant identifications hold almost surely. By $H_1(1)$, we have $\|h(t,0)\|_{L_2(U;H^s)}<\infty$. Then, from the above three equations, we see that almost surely $(\mathbf{1}_{\Omega_i}u_i,\mathbf{1}_{\Omega_i}\tau_i)$ also solves (1.4) with initial data $\mathbf{1}_{\Omega_i}u_0$. Summing both sides over $i\in I$ and noticing (A.5), we deduce that (A.6) defines a solution to (1.4) with initial data $u_0$ almost surely. Indeed, for the initial data we have $u_0=\sum_{i\in I}\mathbf{1}_{\Omega_i}u_0$ $\mathbb{P}$-a.s., and for the nonlinear term $u^k\partial_xu$, by (A.5), the analogous identity holds $\mathbb{P}$-a.s. The other terms can be justified in the same way, and we omit the details. Finally, we recall the following estimate on the product of a Schwartz function and a trigonometric function.
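The estimate in question is presumably the standard one going back to Himonas–Kenig: for a Schwartz function $\varphi$, $\delta>0$ and $\sigma\ge0$, there are constants $0<c<C$ (depending on $\varphi$ and $\sigma$, up to the Fourier-transform normalization) such that for $n\gg1$,

```latex
c\,n^{\frac{\delta}{2}+\sigma}\,\|\varphi\|_{L^{2}}
\;\le\;\Bigl\|\varphi\Bigl(\frac{x}{n^{\delta}}\Bigr)\cos(nx-\alpha)\Bigr\|_{H^{\sigma}}
\;\le\;C\,n^{\frac{\delta}{2}+\sigma}\,\|\varphi\|_{L^{2}},
```

uniformly in $\alpha\in\mathbb{R}$. In particular, the high-frequency part $u_h$ in (4.4), carrying the prefactor $n^{-\frac{\delta}{2}-s}$, has $H^{\sigma}$-norm of order $n^{\sigma-s}$, which is $O(1)$ at $\sigma=s$ and decays for $\sigma<s$.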