Abstract
This paper studies a generalized Camassa–Holm equation under random perturbation. We establish a local well-posedness result in the sense of Hadamard, i.e., existence, uniqueness and continuous dependence on initial data, as well as blow-up criteria for pathwise solutions in the Sobolev spaces \(H^s\) with \(s>3/2\) for \(x\in \mathbb R\). The continuous dependence on initial data for nonlinear stochastic partial differential equations has so far received little attention in the literature. In this work, we first show that the solution map is continuous. Then we introduce a notion of stability of the exiting time. We provide an example showing that one cannot simultaneously improve the stability of the exiting time and the continuity of the dependence on initial data. Finally, we analyze the regularization effect of nonlinear noise in preventing blow-up. Precisely, we demonstrate that global existence holds almost surely provided the noise is strong enough.
1 Introduction and main results
We consider the following stochastic generalized Camassa–Holm (CH) equation on \(\mathbb R\):
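Consistent with the reductions to the CH equation (\(h=0\), \(k=1\)) and the Novikov equation (\(h=0\), \(k=2\)) recorded below, and with the noise term discussed later in this section, (1.1) presumably takes the form

$$\begin{aligned} u_t-u_{xxt}+(k+2)u^ku_x=(k+1)u^{k-1}u_xu_{xx}+u^ku_{xxx}+(1-\partial ^2_{x})h(t,u)\dot{\mathcal W},\qquad t>0,\ x\in \mathbb R. \end{aligned}$$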
In (1.1), \(\mathcal W\) is a cylindrical Wiener process.
For \(h= 0\) and \(k=1\), Eq. (1.1) reduces to the deterministic CH equation given by
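namely, in its standard form,

$$\begin{aligned} u_t-u_{xxt}+3uu_x=2u_xu_{xx}+uu_{xxx}. \end{aligned}$$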
Equation (1.2) was introduced by Fokas and Fuchssteiner [21] to study completely integrable generalizations of the Korteweg–de Vries equation with bi-Hamiltonian structure. In [10], Camassa and Holm proved that (1.2) can be connected to the unidirectional propagation of shallow water waves over a flat bottom. Since then, (1.2) has been studied intensively, and we only mention a few related results here. The CH equation exhibits both phenomena of soliton interaction (peaked soliton solutions) and wave breaking (the solution remains bounded while its slope becomes unbounded in finite time [16]).
When \(h= 0\) and \(k=2\), Eq. (1.1) becomes the so-called Novikov equation
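namely, in its standard form,

$$\begin{aligned} u_t-u_{xxt}+4u^2u_x=3uu_xu_{xx}+u^2u_{xxx}, \end{aligned}$$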
which was derived in [44]. Equation (1.3) also possesses a bi-Hamiltonian structure with an infinite sequence of conserved quantities, and it admits peaked solutions [24], as well as multipeakon solutions with explicit formulas [34]. For the study of other deterministic instances of (1.1), we refer to [28, 60].
When additional noise is included, as in [46], the noise term can be used to account for the randomness arising from the energy exchange mechanisms. Indeed, in [40, 59], the weakly dissipative term \((1-\partial ^2_{x})(\lambda u)\) with \(\lambda >0\) was added to the governing equations. In [46], such a weakly dissipative term is assumed to be time-dependent, nonlinear in u and random. Therefore, \((1-\partial ^2_{x})h(t,u)\dot{\mathcal W} \) is proposed to describe random energy exchange mechanisms.
In this work, we consider the Cauchy problem for (1.1) on the whole space \(\mathbb R\). Applying the operator \((1-\partial _{x}^2)^{-1}\) to (1.1), we reformulate the equation as
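the evolution problem appearing in integral form in Definition 1.1 below, i.e., in differential notation,

$$\begin{aligned} \textrm{d}u+\left[ u^k\partial _xu+F(u)\right] \textrm{d}t=h(t,u)\, \textrm{d}\mathcal W,\qquad u(0)=u_0, \end{aligned}$$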
with
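A decomposition \(F=F_1+F_2+F_3\) consistent with the CH case (\(k=1\)), the Novikov case (\(k=2\)) and the remark below that \(F_3\) vanishes for \(k=1\) is, up to the precise grouping of terms,

$$\begin{aligned} F_1(u)=\partial _x(1-\partial _{x}^2)^{-1}\left( u^{k+1}\right) ,\quad F_2(u)=\partial _x(1-\partial _{x}^2)^{-1}\left( \frac{2k-1}{2}u^{k-1}u_x^2\right) ,\quad F_3(u)=\frac{k-1}{2}(1-\partial _{x}^2)^{-1}\left( u^{k-2}u_x^3\right) . \end{aligned}$$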
Here we remark that \(F_3(u)\) in (1.5) vanishes in the CH case (i.e., when \(k=1\)). The operator \((1-\partial _{x}^2)^{-1}\) in \(F(\cdot )\) is understood as
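the standard Green's-function representation, with \(G(x)=\frac{1}{2}\textrm{e}^{-|x|}\) the Green's function of \(1-\partial _x^2\) on \(\mathbb R\):

$$\begin{aligned} (1-\partial _{x}^2)^{-1}f=G\star f, \end{aligned}$$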
where \(\star \) stands for the convolution.
In this paper, regarding (1.4), we focus on the following issues:
-
Local well-posedness, in the sense of Hadamard (existence, uniqueness and continuous dependence on initial data), and blow-up criterion of (1.4).
-
Understanding the dependence on initial data, and in particular how continuous the solution map \(u_0\mapsto u\) is.
-
Analyzing the effect of noise on the blow-up exhibited by the deterministic counterpart of (1.4).
For the first and second issues, we refer to Theorems 1.1 and 1.2, respectively. Extended remarks, explanations of the difficulties, and a review of the literature are given in Remarks 1.1, 1.2, 1.3 and 1.4.
The third question in our targets concerns the impact of noise, which is one of the central questions in the study of stochastic partial differential equations (SPDEs). Regularization effects of noise have been observed for many different models. For example, it is known that the well-posedness of linear stochastic transport equations with noise can be established under weaker hypotheses than for their deterministic counterparts, cf. [20]. Particularly, for the impact of linear noise in different models, we refer to [2, 14, 15, 26, 38, 47, 54].
Notably, the existing results on regularization by noise are largely restricted to linear equations or linear noise. Hence we have particular interest in the nonlinear noise case. Finding such noise is important as it helps us to understand the stabilizing mechanisms of noise. This is a first step toward characterizing the relevant noise which provides regularization effects for CH-type equations. In order to emphasize our ideas in a simple way, we only consider noise given by a 1-D Brownian motion in the current setting. That is, we consider the case \(h(t,u) \, \textrm{d}\mathcal W=q(t,u) \, \textrm{d}W\), where W is a standard 1-D Brownian motion and \(q:[0,\infty )\times H^s\rightarrow H^s\) is a nonlinear function. Here we use the notation q rather than h because h needs to be a Hilbert–Schmidt operator [see (1.8)] to define the stochastic integral with respect to a cylindrical Wiener process \(\mathcal W\). Then we will focus on
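the scalar-noise specialization of (1.4), i.e.,

$$\begin{aligned} \textrm{d}u+\left[ u^k\partial _xu+F(u)\right] \textrm{d}t=q(t,u)\, \textrm{d}W. \end{aligned}$$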
In Theorem 1.3, we provide a sufficient condition on q such that global existence can be guaranteed. We refer to Remark 1.5 for further remarks on Theorem 1.3.
Before we introduce the notations, definitions and assumptions, we recall some recent results on stochastic CH-type equations. For the stochastic CH-type equation with multiplicative noise, we refer to [46,47,48], where global existence and wave breaking were studied in the periodic case, i.e., \(x\in \mathbb T\). In particular, when the noise is of transport type, we refer to [1, 4, 22, 32, 33]. We also refer to [12, 13, 45] for more results on stochastic CH-type equations.
1.1 Notations
We begin by introducing some notations. Let \(({\Omega },\{\mathcal F_t\}_{t\ge 0}, \mathbb P)\) be a complete filtered probability space with a right-continuous filtration. Formally, we consider a separable Hilbert space \(\mathfrak U\) and let \(\{e_n\}\) be a complete orthonormal basis of \(\mathfrak U\). Let \(\{W_n\}_{n\ge 1}\) be a sequence of mutually independent standard 1-D Brownian motions on \(({\Omega },\{\mathcal {F}_t\}_{t\ge 0},\mathbb P)\). Then we define the cylindrical Wiener process \(\mathcal W\) as
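the formal sum

$$\begin{aligned} \mathcal W:=\sum _{n=1}^{\infty }e_nW_n, \end{aligned}$$

which, as is standard, converges in any larger Hilbert space \(\mathfrak U_0\supset \mathfrak U\) such that the embedding \(\mathfrak U\hookrightarrow \mathfrak U_0\) is Hilbert–Schmidt.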
Let \(\mathcal X\) be a separable Hilbert space. \(\mathcal L_2(\mathfrak U; \mathcal X)\) stands for the space of Hilbert–Schmidt operators from \(\mathfrak U\) to \(\mathcal X\). If \(Z\in L^2 ({\Omega }; L^2_\textrm{loc}([0,\infty );\mathcal L_2(\mathfrak U; \mathcal X)))\) is progressively measurable, then the integral
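given in the standard way by

$$\begin{aligned} \int _0^t Z\, \textrm{d}\mathcal W:=\sum _{n=1}^{\infty }\int _0^t Ze_n\, \textrm{d}W_n \end{aligned}$$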
is a well-defined \(\mathcal X\)-valued continuous square-integrable martingale (see, e.g., [5, 23]). Throughout the paper, when a stopping time is defined, we set \(\inf \emptyset :=\infty \) by convention.
For \(s\in \mathbb R\), the differential operator \(D^s:=(1-\partial _{x}^2)^{s/2}\) is defined by \(\widehat{D^sf}(\xi )=(1+\xi ^2)^{s/2}\widehat{f}(\xi )\), where \(\widehat{f}\) denotes the Fourier transform of f. The Sobolev space \(H^s(\mathbb R)\) is defined as
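the space

$$\begin{aligned} H^s(\mathbb R):=\left\{ f\in \mathcal S'(\mathbb R):\ \Vert f\Vert ^2_{H^s}:=\int _{\mathbb R}(1+\xi ^2)^{s}|\widehat{f}(\xi )|^2\, \textrm{d}\xi <\infty \right\} , \end{aligned}$$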
and the inner product on \(H^s(\mathbb R)\) is \((f,g)_{H^s}:=(D^sf,D^sg)_{L^2}.\) In the sequel, for simplicity, we will drop \(\mathbb R\) if there is no ambiguity. We will use \(\lesssim \) to denote estimates that hold up to some universal deterministic constant which may change from line to line but whose meaning is clear from the context. For linear operators A and B, \([A,B]:=AB-BA\) is the commutator of A and B.
1.2 Definitions and assumptions
We first make precise the notion of a solution to (1.4).
Definition 1.1
Let \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) be fixed in advance. Let \(s>3/2\), \(k\in \mathbb N_{>0}\) and \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable.
-
1.
A local solution to (1.4) is a pair \((u,\tau )\), where \(\tau \) is a stopping time satisfying \(\mathbb P\{\tau >0\}=1\) and \(u:{\Omega }\times [0,\infty )\rightarrow H^s\) is an \(\mathcal {F}_t\)-predictable \(H^s\)-valued process satisfying
$$\begin{aligned} u(\cdot \wedge \tau )\in C([0,\infty );H^s)\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$and for all \(t>0\),
$$\begin{aligned} u(t\wedge \tau )-u(0)+\int _0^{t\wedge \tau } \left[ u^k\partial _xu+F(u)\right] \, \textrm{d}t' =\int _0^{t\wedge \tau }h(t',u) \, \textrm{d}\mathcal W\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$ -
2.
The local solutions are said to be unique if, given any two local solutions \((u_1,\tau _1)\) and \((u_2,\tau _2)\) with \(\mathbb P\left\{ u_1(0)=u_2(0)\right\} =1,\) we have
$$\begin{aligned} \mathbb P\left\{ u_1(t,x)=u_2(t,x),\ (t,x)\in [0,\tau _1\wedge \tau _2]\times \mathbb R\right\} =1. \end{aligned}$$ -
3.
Additionally, \((u,\tau ^*)\) is called a maximal solution to (1.4) if \(\tau ^*>0\) almost surely and if there is an increasing sequence \(\tau _n\rightarrow \tau ^*\) such that for any \(n\in \mathbb N\), \((u,\tau _n)\) is a solution to (1.4) and on the set \(\{\tau ^*<\infty \}\), we have
$$\begin{aligned} \sup _{t\in [0,\tau _n]}\Vert u\Vert _{H^s}\ge n. \end{aligned}$$ -
4.
If \((u,\tau ^*)\) is a maximal solution and \(\tau ^*=\infty \) almost surely, then we say that the solution exists globally.
Motivated by [46, 49], we introduce the concept of stability of the exiting time in Sobolev spaces. The exiting time, as its name suggests, is the time when the solution leaves a certain range.
Definition 1.2
(Stability of exiting time) Let \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) be fixed, \(s>3/2\) and \(k\in \mathbb N_{>0}\). Let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\mathbb E\Vert u_0\Vert _{H^s}^2<\infty \). Assume that \(\{u_{0,n}\}\) is a sequence of \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables satisfying \(\mathbb E\Vert u_{0,n}\Vert _{H^s}^2<\infty \). For each n, let u and \(u_n\) be the unique solutions to (1.4), as in Definition 1.1, with initial values \(u_0\) and \(u_{0,n}\), respectively. For any \(R>0\), define the R-exiting times
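which, up to the standard convention on strict versus non-strict inequality, are given by

$$\begin{aligned} \tau ^R:=\inf \left\{ t\ge 0:\ \Vert u(t)\Vert _{H^s}>R\right\} ,\qquad \tau ^R_{n}:=\inf \left\{ t\ge 0:\ \Vert u_n(t)\Vert _{H^s}>R\right\} . \end{aligned}$$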
Now we define the following properties on stability:
-
1.
If \(u_{0,n}\rightarrow u_0\) in \(H^{s}\) \({\mathbb P}\text {-}\mathrm{a.s.}\) implies that
$$\begin{aligned} \lim _{n\rightarrow \infty }\tau ^R_{n}=\tau ^R\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \end{aligned}$$(1.9) then the R-exiting time of u is said to be stable.
-
2.
If \(u_{0,n}\rightarrow u_0\) in \(H^{s'}\) for all \(s'<s\) almost surely implies that (1.9) holds true, the R-exiting time of u is said to be strongly stable.
Our main results rely on the following assumptions concerning the noise coefficient h(t, u) in (1.1).
Hypothesis \({\textbf {H}}_{\textbf {1}}\) For \(s>1/2\), we assume that \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in \mathcal L_2(\mathfrak U; H^s)\) is measurable and satisfies the following conditions:
- \(H_1(1)\):
-
There is a non-decreasing function \(f(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\), we have the following growth condition
$$\begin{aligned} \sup _{t\ge 0}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^s)}\le f(\Vert u\Vert _{W^{1,\infty }}) (1+\Vert u\Vert _{H^s}). \end{aligned}$$ - \(H_1(2)\):
-
There is a non-decreasing function \(g_1(\cdot ):[0,\infty )\rightarrow [0,\infty )\) such that for all \(N\ge 1\),
$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^s},\,\Vert v\Vert _{H^s}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^s)}}{\Vert u-v\Vert _{H^s}}\right\} \le g_1(N),\ \ s>3/2. \end{aligned}$$ - \(H_1(3)\):
-
There is a non-decreasing function \(g_2(\cdot ):[0,\infty )\rightarrow [0,\infty )\) such that for all \(N\ge 1\) and \(1/2<s\le 3/2\),
$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{s+1}},\,\Vert v\Vert _{H^{s+1}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^s)}}{\Vert u-v\Vert _{H^s}}\right\} \le g_2(N). \end{aligned}$$
Here we point out that \(H_1(2)\) is the classical local Lipschitz condition. \(H_1(3)\) is needed to prove uniqueness in Lemma 3.1. Indeed, given two solutions \(u,v\in H^s\) to (1.4), one can only estimate \(u-v\) in \(H^{s'}\) for \(s'\le s-1\) because the term \(u^ku_x\) loses one derivative. We refer to Remark 1.1 for more details.
Hypothesis \({\textbf {H}}_{\textbf {2}}\) When we consider (1.4) in Sect. 4, we assume that there is a real number \(\rho _0\in (1/2,1)\) such that for \(s\ge \rho _0\), \(h:[0,\infty )\times H^s\ni (t,u)\mapsto h(t,u)\in \mathcal L_2(\mathfrak U; H^s)\) is measurable. Besides, we suppose the following:
- \(H_2(1)\):
-
There exists a non-decreasing function \(l(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\),
$$\begin{aligned} \sup _{t\ge 0}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^s)}\le l(\Vert u\Vert _{W^{1,\infty }})\Vert u\Vert _{H^s}, \end{aligned}$$ and \(H_1(2)\) holds.
- \(H_2(2)\):
-
There is a non-decreasing function \(g_3(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for all \(N\ge 1\),
$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^s}\le N}\Vert h(t,u)\Vert _{\mathcal L_2(\mathfrak U; H^{\rho _0})}\le g_3(N) \textrm{e}^{-\frac{1}{\Vert u\Vert _{H^{\rho _0}}}},\ \ s>3/2, \end{aligned}$$(1.10) and
$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{\rho _0}},\,\Vert v\Vert _{H^{\rho _0}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert h(t,u)-h(t,v)\Vert _{\mathcal L_2(\mathfrak U, H^{\rho _0})}}{\Vert u-v\Vert _{H^{\rho _0}}}\right\} \le g_3(N). \end{aligned}$$
We remark here that (1.10) means the following: there is a \(\rho _0\in (1/2,1)\) such that, if \(u_n\) is bounded in \(H^s\) and \(u_n\) tends to zero in the topology of \(H^{\rho _0}\) as \(n\rightarrow \infty \), then \(\Vert h(t,u_n)\Vert _{\mathcal L_2(\mathfrak U; H^{\rho _0})}\) tends to zero exponentially fast. Examples of such a noise structure can be found in Sect. 4.4.
As for the regularization effect of noise, we impose the following condition on q in (1.6):
Hypothesis \({\textbf {H}}_{\textbf {3}}\) We assume that when \(s>3/2\), \(q:[0,\infty )\times H^s\ni (t,u)\mapsto q(t,u)\in H^s\) is measurable. Define the set \(\mathcal {V}\) as a subset of \( C^2([0,\infty );[0,\infty ))\) such that
Then we assume the following:
- \(H_3(1)\):
-
There is a non-decreasing function \(g_4(\cdot ):[0,+\infty )\rightarrow [0,+\infty )\) such that for any \(u\in H^s\) with \(s>3/2\), we have the following growth condition
$$\begin{aligned} \sup _{t\ge 0}\Vert q(t,u)\Vert _{H^s}\le g_4(\Vert u\Vert _{W^{1,\infty }}) (1+\Vert u\Vert _{H^s}). \end{aligned}$$ - \(H_3(2)\):
-
\(q(\cdot ,u)\) is bounded for all \(u\in H^s\) and there is a non-decreasing function \(g_4(\cdot ):[0,\infty )\rightarrow [0,\infty )\), such that
$$\begin{aligned} \sup _{t\ge 0,\,\Vert u\Vert _{H^{s}},\,\Vert v\Vert _{H^{s}}\le N}\left\{ \textbf{1}_{\{u\ne v\}} \frac{\Vert q(t,u)-q(t,v)\Vert _{H^s}}{\Vert u-v\Vert _{H^s}}\right\} \le g_4(N),\ \ N\ge 1,\ s>3/2. \end{aligned}$$ - \(H_3(3)\):
-
There is a \(V\in \mathcal {V}\) and constants \(N_1, N_2>0\) such that for all \((t,u)\in [0,\infty )\times H^s\) with \(s>3/2\),
$$\begin{aligned} \mathcal {H}_{s}(t,u) \le \,&N_1 -N_2 \frac{\left\{ V'(\Vert u\Vert ^2_{H^s})\left| \left( q(t,u), u\right) _{H^s}\right| \right\} ^2}{ 1+V(\Vert u\Vert ^2_{H^s})},\ \ \end{aligned}$$ where
$$\begin{aligned} \,&\mathcal {H}_{s}(t,u)\\ :=\,&V'(\Vert u\Vert ^2_{H^{s}})\Big \{ 2\lambda _s \Vert u\Vert ^k_{W^{1,\infty }}\Vert u\Vert ^2_{H^{s}}+\Vert q(t,u)\Vert ^2_{H^{s}} \Big \} + 2V''(\Vert u\Vert ^2_{H^{s}})\left| \left( q(t,u), u\right) _{H^{s}}\right| ^2 \end{aligned}$$and \(\lambda _s>0\) is the constant given in Lemma A.6 below.
Examples of the noise structure satisfying Hypothesis \(H_3\) can be found in Sect. 5.2.
1.3 Main results and remarks
We now summarize our main contributions; the proofs are given in the remainder of the paper.
Theorem 1.1
Let \(s>3/2\), \(k\ge 1\) and let h(t, u) satisfy Hypothesis \(H_1\). Assume that \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable satisfying \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). Then
-
(i)
(Existence and uniqueness) There is a unique local solution \((u,\tau )\) to (1.4) in the sense of Definition 1.1 with
$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)\Vert ^2_{H^s}<\infty . \end{aligned}$$(1.11) -
(ii)
(Blow-up criterion) The local solution \((u,\tau )\) can be extended to a unique maximal solution \((u,\tau ^*)\) with
$$\begin{aligned} {\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{H^{s}}=\infty \right\} }={\textbf {1}}_{\left\{ \limsup _{t\rightarrow \tau ^*}\Vert u(t)\Vert _{W^{1,\infty }}=\infty \right\} }\ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$(1.12) -
(iii)
(Stability for almost surely bounded initial data) Assume additionally that \(u_0\in L^\infty ({\Omega };H^s)\). Let \(v_0\in L^\infty ({\Omega };H^s)\) be another \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable. For any \(T>0\) and any \(\epsilon >0\), there is a \(\delta =\delta (\epsilon ,u_0,T)>0\) such that if
$$\begin{aligned} \Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}<\delta , \end{aligned}$$(1.13) then there is a stopping time \(\tau \) with \(\tau \in (0,T]\) \({\mathbb P}\text {-}\mathrm{a.s.}\) such that
$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}<\epsilon , \end{aligned}$$(1.14) where u and v are the solutions to (1.4) with initial data \(u_0\) and \(v_0\), respectively.
Remark 1.1
Existence and uniqueness have been studied for a wide range of SPDEs. In many works, however, the authors did not address the continuous dependence on initial data. In this work, our Theorem 1.1 provides a local well-posedness result in the sense of Hadamard, including the continuous dependence on initial data. Moreover, a blow-up criterion is also obtained. We refer to [11, 19, 42] for studies of the dependence on initial data in cases where solutions to the target problems exist globally. However, it is necessary to point out that almost nothing is known about the dependence on initial data for SPDEs whose solutions may blow up in finite time.
The key difficulty for such a case is as follows: on the one hand, if solutions to a nonlinear stochastic partial differential equation (SPDE) blow up in finite time, it is usually very difficult to obtain lifespan estimates. On the other hand, we have to find a positive time \(\tau \) to obtain an inequality like (1.14). In addition, the target problem (1.4) is more difficult because the classical Itô formulae are not applicable. Indeed, for \(u_0\in H^s\), we only know \(u\in H^s\) because (1.4) is a transport-type equation, and then \(u^ku_x\in H^{s-1}\). However, the inner product \( \left( u^ku_x, u\right) _{H^s}\) appears if one uses the Itô formula in a Hilbert space (cf. [23, Theorem 2.10]), and the dual product \({}_{H^{s-1}}\langle u^ku_x, u\rangle _{H^{s+1}}\) appears in the Itô formula under a Gelfand triplet (cf. [39, Theorem I.3.1]). Since we only have \(u\in H^s\) and \(u^ku_x\in H^{s-1}\), neither of them is well-defined. Likewise, when we consider the \(H^s\)-norm of the difference between two solutions \(u,v\in H^s\) to (1.4), we have to handle \((u^ku_x-v^kv_x,u-v)_{H^s}\), which requires control of either \(\Vert u\Vert _{H^{s+1}}\) or \(\Vert v\Vert _{H^{s+1}}\).
Remark 1.2
Now we list some technical remarks on the statements of Theorem 1.1.
-
(1).
Our proof for (i) in Theorem 1.1 is motivated by the recent results in [55]. For the convenience of the reader, here we also give a brief comparison between our approach and the framework employed in many previous works.
-
We first briefly review the martingale approach used to prove existence for nonlinear SPDEs. Roughly speaking, in searching for a solution to a nonlinear SPDE in some space \(\mathcal X\), the martingale approach, as its name would suggest, consists of obtaining a martingale solution first and then establishing (pathwise) uniqueness to obtain the (pathwise) solution. To begin with, one needs to approximate the equation and establish uniform estimates. For nonlinear problems, one may have to add a cut-off function to cut the nonlinear parts growing in some space \(\mathcal {Z}\) with \(\mathcal X\hookrightarrow \mathcal {Z}\) (the choice of \(\mathcal {Z}\) depends on the concrete problem). As far as we know, the technique of cut-off first appeared in [17] for the stochastic Schrödinger equation. This cut-off enables us to split the expectation of nonlinear terms, and then the \(L^2({\Omega }; \mathcal X)\) estimate can be closed. For example, for (1.4), the estimate for \(\mathbb E\Vert u\Vert ^2_{H^s}\) gives rise to \(\mathbb E\left( \Vert u\Vert _{W^{1,\infty }}\Vert u\Vert ^2_{H^s}\right) \), and hence we need to add a function to cut \(\Vert \cdot \Vert _{W^{1,\infty }}\). With this additional cut-off, we need to consider the cut-off version of the problem first and remove the cut-off afterwards. The first main step in the martingale approach is finding a martingale solution. Usually, this can be done by first obtaining tightness of the measures defined by the approximative solutions in some space \(\mathcal Y\), and then using Prokhorov's Theorem and Skorokhod's Theorem to obtain the convergence in \(\mathcal Y\). Since \(\mathcal X\) is usually infinite-dimensional (typically, \(\mathcal X\) is a Sobolev space), to obtain tightness, it is required that \(\mathcal X\) is compactly embedded into \(\mathcal Y\), i.e., \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\).
This brings another requirement to specify \(\mathcal {Z}\), namely \(\mathcal Y\hookrightarrow \mathcal {Z}\). Otherwise, taking limits will not bring us back to the cut-off problem due to the additional cut-off term \(\Vert \cdot \Vert _{\mathcal {Z}}\) (in some cases, the choice of \(\mathcal {Z}\) may only give rise to a semi-norm, and we use the notation \(\Vert \cdot \Vert _{\mathcal {Z}}\) only for simplicity). Usually, in bounded domains, it is not difficult to pick \(\mathcal Y\) and \(\mathcal {Z}\) such that \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\hookrightarrow \mathcal {Z}\) (Sobolev spaces enjoy compact embeddings in bounded domains), see for example [4, 9, 18, 26, 48]. In unbounded domains, the difficulty lies in the choice of \(\mathcal Y\) and \(\mathcal {Z}\) such that \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\hookrightarrow \mathcal {Z}\). We refer to [7, 8] for fluid models with certain cancellation properties (for example, divergence-free structure) and linearly growing noise. However, it is difficult to achieve this for SPDEs with general nonlinear terms and nonlinear noise. For instance, the cut-off in our case will have to involve \(\Vert \cdot \Vert _{\mathcal {Z}}=\Vert \cdot \Vert _{W^{1,\infty }}\) [see \(H_1(1)\) and (2.3)]. Even though we can get the convergence in \(H_\textrm{loc}^{s'}\) with some \(\frac{3}{2}<s'<s\), it is still not clear whether the convergence holds true in \(W^{1,\infty }\), because local convergence cannot control a global object such as \(\Vert \cdot \Vert _{W^{1,\infty }}\). Therefore, technically speaking, nonlinear SPDEs are more non-local than their deterministic counterparts.
-
Due to the above unsolved technical issue, the martingale approach is difficult to apply to our problem, and we will instead try to prove convergence directly, motivated by [41, 55] [see also [49, 53, 54] for recent developments]. Generally speaking, we will analyze the difference between two approximative solutions and directly find a space \(\mathcal Y\) such that \(\mathcal X\hookrightarrow \mathcal Y\hookrightarrow \mathcal {Z}\) and convergence (up to a subsequence) holds true in \(\mathcal Y\). The difficult part is finding convergence in \(\mathcal Y\) without the compactness \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\) (in the martingale approach, by contrast, tightness comes from the compact embedding \(\mathcal X\hookrightarrow \hookrightarrow \mathcal Y\)). In this paper, the target path space is \(C([0,T];H^{s})=\mathcal X\), and we are able to prove convergence (up to a subsequence) in \(C([0,T];H^{s-\frac{3}{2}})=\mathcal Y\) directly. After taking limits to obtain a solution, one can improve the regularity to \(H^s\) again; the technical difficulty in this step is to prove the time continuity of the solution because the classical Itô formula is not applicable (see Remark 1.1). To overcome this difficulty, we apply a mollifier \(J_\varepsilon \) to the equation and estimate \(\mathbb E\Vert J_\varepsilon u\Vert ^2_{H^s}\) first [see (2.11)]. We also remark that the techniques for removing the cut-off have been used in [5, 25, 54]. Here we formulate such a technical result in Lemma A.7 in an abstract way.
-
-
(2).
Now we give a remark on (iii) in Theorem 1.1. For the question of dependence on initial data, there are some delicate differences between the stochastic and the deterministic case. In the deterministic counterpart of (1.4), due to the lifespan estimate [see (4.10) for instance], for given \(u_0\in H^s\), it can be shown that if \(\Vert u_0-v_0\Vert _{H^s}\) is small enough, then there is a \(T>0\) depending on \(u_0\) such that \(\sup _{t\in [0,T]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) is also small. In the stochastic setting, since existence and uniqueness are obtained in the framework of \(L^2({\Omega };H^s)\), it is therefore very natural to expect that, for given \(u_0\in L^2({\Omega };H^s)\), if \(\mathbb E\Vert u_0-v_0\Vert ^2_{H^s}\) is small enough, then for some almost surely positive \(\tau \) depending on \(u_0\), \(\mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) is also small. However, so far we have only proved this under the smallness of \(\Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}\). Since \(L^\infty ({\Omega };H^s)\) can be viewed as being less random than \(L^2({\Omega };H^s)\), one may roughly conclude that continuity/stability of the solution map (requiring the initial data and its perturbation to lie in \(L^\infty ({\Omega };H^s)\)) is more demanding than the existence of such a solution map (existence and uniqueness guarantee that a solution map can be defined). For the technical difficulties involved, we have the following explanations:
-
As is mentioned in Remark 1.1, when we estimate the \(H^s\)-norm of the difference between two solutions u and v, the \(H^{s+1}\)-norm will appear. Hence, we have to use smooth approximations to make the analysis valid. More precisely, we approximate u and v by smooth processes \(u_\varepsilon \) and \(v_\varepsilon \) and consider
$$\begin{aligned} \Vert u-v\Vert _{H^s}\le \Vert u-u_{\varepsilon }\Vert _{H^s}+\Vert u_{\varepsilon }-v_{\varepsilon }\Vert _{H^s}+\Vert v_{\varepsilon }-v\Vert _{H^s}. \end{aligned}$$(1.15) Then all terms can be estimated because \(\Vert u_\varepsilon \Vert _{H^{s+1}}\) and \(\Vert v_\varepsilon \Vert _{H^{s+1}}\) make sense. Here we refer to Remark 3.2 for more details on the construction of such an approximation.
-
In dealing with the above three terms in the stochastic case, two sequences of stopping times (exiting times) are needed to control \(\Vert u_\varepsilon \Vert _{H^s}\) and \(\Vert v_\varepsilon \Vert _{H^s}\) [see (3.20) below]. Since we aim at obtaining \(\tau >0\) almost surely in (1.14) (otherwise the difference between two solutions on the set \(\{\tau =0\}\) cannot be measured), we have to guarantee that the stopping times used in bounding \(\Vert u_\varepsilon \Vert _{H^s}\) and \(\Vert v_\varepsilon \Vert _{H^s}\) have positive lower bounds almost surely. Up to now, we have only achieved this for initial values belonging to \(L^\infty ({\Omega };H^s)\). We also remark that this is different from the proof of existence. In the proof of existence, \(u_\varepsilon \) exists on a common interval [0, T] for all \(\varepsilon \) and enjoys a uniform-in-\(\varepsilon \) estimate (2.4); hence we can dispense with stopping times in the convergence [from (2.8) to (2.9)]. Here we do not have such a common existence interval due to the lack of a lifespan estimate, which is a significant difference between the stochastic and the deterministic cases. Indeed, one can easily find the lifespan estimate for the deterministic counterpart of (1.4) [see (4.10) below].
-
Moreover, even if the above issue can be handled, in dealing with the three terms in (1.15), we are confronted with \(\frac{1}{\varepsilon ^2}\mathbb E\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\) for some suitably chosen \(s'\) [cf. (3.27)]. After \(\varepsilon \) is fixed, the smallness of \(\mathbb E\Vert u_0-v_0\Vert ^2_{H^s}\) is not enough to control \(\frac{1}{\varepsilon ^2}\mathbb E\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\), either. We use the \(L^\infty ({\Omega };H^s)\) condition to take \(\Vert u_0\Vert ^2_{H^{s}}\) out of the expectation \(\frac{1}{\varepsilon ^2}\mathbb E\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\). In the deterministic case, no expectation is involved, and \(\frac{1}{\varepsilon ^2}\Vert u_0- v_0\Vert ^2_{H^{s'}}\Vert u_0\Vert ^2_{H^{s}}\) can be controlled by \(\Vert u_0- v_0\Vert ^2_{H^{s}}\).
-
Roughly speaking, (iii) in Theorem 1.1 means that for any fixed \(u_0\in L^\infty ({\Omega };H^s)\) and any \(T>0\), if \(\Vert u_0-v_0\Vert _{L^\infty ({\Omega };H^s)}\rightarrow 0\), then
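that is, in view of (1.14), for some almost surely positive stopping time \(\tau \) depending on \(u_0\) and T,

$$\begin{aligned} \mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}\rightarrow 0, \end{aligned}$$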
where u, v are the solutions corresponding to \(u_0\), \(v_0\), respectively. Below we study this issue quantitatively. The next result gives an at least partially negative answer.
Theorem 1.2
(Weak instability) Let \(s>5/2\) and \(k\ge 1\). If h satisfies Hypothesis \(H_2\), then at least one of the following properties holds true:
-
(i)
For any \(R\gg 1\), the R-exiting time is not strongly stable for the zero solution to (1.4) in the sense of Definition 1.2;
-
(ii)
There is a \(T>0\) such that the solution map \(u_0\mapsto u\) defined by (1.4) is not uniformly continuous as a map from \(L^\infty ({\Omega };H^s)\) into \(L^1({\Omega };C([0,T];H^s))\). More precisely, there exist two sequences of solutions \(u^{1,n}\) and \(u^{2,n}\), and two sequences of stopping times \(\tau _{1,n}\) and \(\tau _{2,n}\), such that
-
For \(i=1,2\), \(\mathbb P\{\tau _{i,n}>0\}=1\) for each \(n>1\). Besides,
$$\begin{aligned} \lim _{n\rightarrow \infty }\tau _{1,n}=\lim _{n\rightarrow \infty }\tau _{2,n}=\infty \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$(1.16) -
For \(i=1,2\), \(u^{i,n}\in C([0,\tau _{i,n}];H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\), and
$$\begin{aligned} \mathbb E\left( \sup _{t\in [0,\tau _{1,n}]}\Vert u^{1,n}(t)\Vert _{H^s}+\sup _{t\in [0,\tau _{2,n}]}\Vert u^{2,n}(t)\Vert _{H^s}\right) \lesssim 1. \end{aligned}$$(1.17) -
At initial time \(t=0\), for any \(p\in [1,\infty ]\),
$$\begin{aligned} \lim _{n\rightarrow \infty }\Vert u^{1,n}(0)-u^{2,n}(0)\Vert _{L^p({\Omega };H^s)}=0. \end{aligned}$$(1.18) -
When \(t>0\),
$$\begin{aligned} \liminf _{n\rightarrow \infty }\mathbb E\sup _{t\in [0,T\wedge \tau _{1,n}\wedge \tau _{2,n}]} \Vert u^{1,n}(t)&-u^{2,n}(t)\Vert _{H^s}\nonumber \\ \gtrsim&\left\{ \begin{aligned}&\sup _{t\in [0,T]}|\sin (t)|,\ \text {if}\ k\ \text {is odd},\\&\sup _{t\in [0,T]}\big |\sin \big (\frac{t}{2}\big )\big |,\ \text {if}\ k\ \text {is even}. \end{aligned}\right. \end{aligned}$$(1.19)
-
Remark 1.3
We first briefly outline the main difficulties encountered in the proof of Theorem 1.2 and the main strategies we use.
-
(1).
Because we cannot get an explicit expression of the solution to (1.4), to obtain (1.19), we will construct two sequences of approximative solutions \(\{u_{m,n}\}\) (\(m\in \{1,2\}\)) such that the actual solutions \(\{u^{m,n}\}\) with \(u^{m,n}(0)=u_{m,n}(0)\) satisfy
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb E\sup _{[0,\tau _{m,n}]}\Vert u^{m,n}-u_{m,n}\Vert _{H^s} =0, \end{aligned}$$(1.20), where \(u^{m,n}\) exists at least on \([0,\tau _{m,n}]\). Then one can establish (1.19) by estimating \(\{u_{m,n}\}\) rather than \(\{u^{m,n}\}\). We also remark that the construction of the approximative solution \(u_{m,n}\) for \(x\in \mathbb R\) is more difficult than for \(x\in \mathbb T\) (see [46]), since the approximative solution involves both high- and low-frequency parts (the high-frequency part alone is already sufficient for the case \(x\in \mathbb T\), cf. [46, 55]). The key point is that we need to guarantee \(\inf _{n}\tau _{m,n}>0\) almost surely in dealing with (1.20). Hence we are again confronted with a difficulty common in SPDEs, namely the lack of a lifespan estimate. In the deterministic case, one easily obtains such an estimate, which makes it possible to find a common interval [0, T] on which all actual solutions exist (see for example Lemma 4.1). In the stochastic case, so far we have not been able to prove this.
(2).
To settle the above difficulty, we observe that the bound \(\inf _{n}\tau _{m,n}>0\) can be connected to the stability property of the exiting time (see Definition 1.2). The condition that the \(R_0\)-exiting time is strongly stable at the zero solution will be used to provide a common existence time \(T>0\) such that for all n, \(u^{m,n} \) exists up to T (see Lemma 4.4 below). Therefore, to prove Theorem 1.2, we will show that, if the \(R_0\)-exiting time is strongly stable at the zero solution for some \(R_0\gg 1\), then the solution map \(u_0\mapsto u\) defined by (1.4) cannot be uniformly continuous. To obtain (1.20), we estimate the error in \(H^{2s-\rho _0}\) and in \(H^{\rho _0}\), where \(\rho _0\) is given in \(H_2\); (1.20) then follows by interpolation. We remark that (1.18) holds because the approximative solutions are constructed deterministically.
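The interpolation step just mentioned is the standard one for the Sobolev scale; since \(s\) is the midpoint of \(\rho_0\) and \(2s-\rho_0\), the elementary inequality (a standard fact, recorded here for orientation) reads:

```latex
\|v\|_{H^{s}} \;\le\; \|v\|_{H^{\rho_0}}^{1/2}\,\|v\|_{H^{2s-\rho_0}}^{1/2},
\qquad s=\tfrac{1}{2}\big(\rho_0+(2s-\rho_0)\big).
```

Thus an error that decays in \(H^{\rho_0}\) while remaining bounded in \(H^{2s-\rho_0}\) still decays in \(H^{s}\), which is exactly what (1.20) requires.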
Remark 1.4
With regard to similar results in the literature and further hypotheses, we give some more remarks on Theorem 1.2.
(1).
In deterministic cases, the issue of the (optimal) initial-data dependence of solutions has been extensively investigated for various nonlinear dispersive and integrable equations. We refer to [35] for the inviscid Burgers equation and to [37] for the Benjamin–Ono equation. For the CH equation we refer the readers to [29, 30] concerning the non-uniform dependence on initial data in Sobolev spaces \(H^s\) with \(s>3/2\). For the first results of this type in Besov spaces, we refer to [50, 56]. In particular, non-uniform dependence on initial data in critical Besov spaces first appeared in [51, 52]. In this work, Theorem 1.2 and (iii) in Theorem 1.1 demonstrate that the continuity of the solution map \(u_{0}\mapsto u\) is almost optimal in the sense that, when the growth of the noise coefficient satisfies certain conditions (cf. Hypothesis \(H_2\)), the map \(u_{0}\mapsto u\) is continuous, but one cannot simultaneously improve the stability of the exiting time and the continuity of the map \(u_{0}\mapsto u\). To the best of our knowledge, results of this type for SPDEs first appeared in [46, 49]. We also refer to [3, 43, 55] for recent developments.
(2).
It is worthwhile mentioning that, as noted in (1) of Remark 1.3, the strong stability of exiting times is used as a technical “assumption” to handle the lower bound of a sequence of stopping times. So far we have not been able to verify the non-emptiness of this strong stability assumption for the current model. However, if the transport noise \(u_x\circ \, \textrm{d}W\) is considered (where W is a standard 1-D Brownian motion and \(\circ \, \textrm{d}W\) denotes the Stratonovich stochastic differential), we conjecture that either the notion of strong stability of exiting times can be captured, or the solution map \(u_0\mapsto u\) becomes more regular than merely continuous. Indeed, if \(h(t,u) \, \textrm{d}\mathcal W\) is replaced by \(u_x\circ \, \textrm{d}W\) in (1.4), one can rewrite the equation in Itô form with an additional viscous term \(-\frac{1}{2}u_{xx}\) on the left-hand side of the equation. Therefore, it is reasonable to expect that in this case either the strong stability of exiting times or the continuity of the solution map \(u_0\mapsto u\) can be improved. We refer to [31] and [27] for deterministic examples on the continuity of the solution map.
Theorem 1.3
(Noise prevents blow-up) Let \(s>5/2\), \(k\ge 1\) and \(u_0\in H^s\) be an \(\mathcal {F}_0\)-measurable random variable with \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). If Hypothesis \(H_3\) holds true, then the corresponding maximal solution \((u,\tau ^*)\) to (1.6) satisfies
Remark 1.5
We notice that many of the existing results on regularization effects of noise are essentially restricted to linear equations or to noise of linear growth. In Theorem 1.3, both the drift and the diffusion terms are nonlinear. We also remark that blow-up can actually occur in the deterministic counterpart of (1.6). For example, when \(k=1\), blow-up (in the form of wave breaking) of solutions to the CH equation can be found in [16]. Therefore, Theorem 1.3 demonstrates that sufficiently strong noise can prevent singularities. Indeed, \(H_3(3)\) means that the growth of \(u^ku_x+F(u)\) can be controlled provided that the noise grows fast enough in terms of a Lyapunov-type function V. In contrast to \(H_1(2)\) and \(H_1(3)\), we require \(s>3/2\) in both \(H_3(2)\) and \(H_3(3)\). As is stated in Hypothesis \(H_1\), \(H_3(2)\) implies that uniqueness holds true for solutions in \(H^s\) with \(s>5/2\). It seems that one can require \(s>1/2\) in \(H_3(2)\) to guarantee uniqueness in \(H^\rho \) with \(\rho >3/2\), but at present we can only construct examples for the case that \(s>3/2\) is required in both \(H_3(2)\) and \(H_3(3)\).
We outline the remainder of the paper. In Sect. 2, we study the cut-off version of (1.4); we then remove the cut-off and prove Theorem 1.1 in Sect. 3. We prove Theorem 1.2 in Sect. 4. Concerning the interplay between noise and blow-up, we prove Theorem 1.3 in Sect. 5.
2 Cut-off version: regular solutions
We first consider a cut-off version of (1.4). To this end, for any \(R>1\), we let \(\chi _R(x):[0,\infty )\rightarrow [0,1]\) be a \(C^{\infty }\)-function such that \(\chi _R(x)=1\) for \(x\in [0,R]\) and \(\chi _R(x)=0\) for \(x>2R\). Then we consider the following cut-off problem
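Such a cut-off function can be realized explicitly by the classical bump construction. The following minimal Python sketch (the names `f` and `make_chi` are ours, not from the paper) builds a \(C^\infty\) function equal to 1 on \([0,R]\) and 0 on \([2R,\infty)\):

```python
import math

def f(t):
    """Smooth transition kernel: exp(-1/t) for t > 0, else 0; C-infinity at t = 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def make_chi(R):
    """Return a smooth cut-off chi_R with chi_R = 1 on [0, R] and chi_R = 0 on [2R, inf).

    chi_R(x) = f(2R - x) / (f(2R - x) + f(x - R)).  The denominator never vanishes,
    since 2R - x <= 0 and x - R <= 0 cannot hold at the same time for R > 0.
    """
    def chi(x):
        a, b = f(2.0 * R - x), f(x - R)
        return a / (a + b)
    return chi

chi = make_chi(2.0)
print(chi(1.0), chi(3.0), chi(5.0))  # prints: 1.0 0.5 0.0
```

By symmetry of the construction, \(\chi_R\) takes the value \(1/2\) at the midpoint \(x=3R/2\) and decreases monotonically on \((R,2R)\).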
In this section, we aim at proving the following result:
Proposition 2.1
Let \(s>3\), \(k\ge 1\), \(R>1\) and Hypothesis \(H_1\) be satisfied. Assume that \(u_0\in L^2({\Omega };H^{s})\) is an \(H^{s}\)-valued \(\mathcal {F}_0\)-measurable random variable. Then, for any \(T>0\), (2.1) has a solution \(u \in L^2\left( {\Omega }; C\left( [0,T];H^{s}\right) \right) \). More precisely, there is a constant \(C(R,T,u_0)>0\) such that
The proof for Proposition 2.1 is given in the following subsections.
2.1 The approximation scheme
The first step is to construct a suitable approximation scheme. From Lemma A.5, we see that the nonlinear term F(u) preserves the \(H^s\)-regularity of \(u\in H^s\) for any \(s>3/2\). However, to apply the theory of SDEs in Hilbert spaces to (2.1), we will have to mollify the transport term \(u^k\partial _xu\), since the product \(u^k\partial _xu\) loses one order of regularity. To this end, we consider the following approximation scheme:
where \(J_{\varepsilon }\) is the Friedrichs mollifier defined in Appendix A. After mollifying the transport term \(u^k\partial _xu\), it follows from \(H_1(2)\) and Lemmas A.1 and A.5 that for any \(\varepsilon \in (0,1)\), \(H_{1,\varepsilon }(\cdot )\) and \(H_{2}(t,\cdot )\) are locally Lipschitz continuous in \(H^s\) with \(s>\frac{3}{2}\). Besides, we notice that the cut-off function \(\chi _R(\Vert \cdot \Vert _{W^{1,\infty }})\) guarantees the linear growth condition (cf. Lemma A.5 and \(H_1(1)\)). Thus, for fixed \(({\Omega }, \{\mathcal {F}_t\}_{t\ge 0},\mathbb P, \mathcal W)\) and for \(u_0\in L^2({ \Omega };H^s)\) with \(s>3/2\), the existence theory for SDEs in Hilbert spaces (see for example [23]) implies that (2.3) admits a unique solution \(u_{\varepsilon }\in C([0,\infty );H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\)
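For orientation, we record the standard Friedrichs mollifier facts implicitly used here (sketches of the properties collected in Lemma A.1, with \(\rho\in C_c^\infty(\mathbb R)\), \(\rho\ge 0\), \(\int_{\mathbb R}\rho\,\mathrm dx=1\)):

```latex
J_\varepsilon u := \rho_\varepsilon * u, \qquad \rho_\varepsilon(x):=\varepsilon^{-1}\rho(x/\varepsilon),
\\
\|J_\varepsilon u\|_{H^s}\lesssim \|u\|_{H^s},\qquad
\lim_{\varepsilon\to 0}\|J_\varepsilon u-u\|_{H^s}=0,\qquad
\|J_\varepsilon u\|_{H^{s+1}}\lesssim \varepsilon^{-1}\|u\|_{H^s}.
```

In particular, mollifying \(u^k\partial_x u\) trades the loss of one derivative for a factor \(\varepsilon^{-1}\) that is not uniform in \(\varepsilon\), which is why the estimates of Sect. 2.2 must be uniform in \(\varepsilon\).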
2.2 Uniform estimates
Now we establish some uniform-in-\(\varepsilon \) estimates for \(u_\varepsilon \).
Lemma 2.1
Let \(k\ge 1\), \(s>3/2\), \(R>1\) and \(\varepsilon \in (0,1)\). Assume that h satisfies Hypothesis \(H_1\) and \(u_0\in L^2({\Omega };H^s)\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable. Let \(u_{\varepsilon }\in C([0,\infty );H^{s})\) be the unique solution to (2.3). Then for any \(T>0\), there is a constant \(C=C(R,T,u_0)>0\) such that
Proof
Using the Itô formula for \(\Vert u_\varepsilon \Vert ^2_{H^s}\), we have that for any \(t>0\),
On account of Lemmas A.1 and A.3, we derive
Therefore, one can infer from the BDG inequality, \(H_1(1)\), Lemma A.5 and the above estimate that
which implies
Using Grönwall’s inequality in (2.5) implies (2.4). \(\square \)
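The chain of estimates in this proof rests on two standard tools; for the reader's convenience, schematic forms of the Burkholder–Davis–Gundy (BDG) inequality and Grönwall's inequality as they are typically applied here (sketches, not the exact statements from the appendix):

```latex
% BDG inequality (first-moment form, C a universal constant):
\mathbb{E}\sup_{t\in[0,T]}\Big|\int_0^t g\,\mathrm{d}\mathcal{W}\Big|
  \;\le\; C\,\mathbb{E}\Big(\int_0^T \|g\|_{\mathcal{L}_2}^2\,\mathrm{d}t\Big)^{1/2};
\\
% Gronwall's inequality: if  y(t) \le a + C \int_0^t y(r)\,\mathrm{d}r  on [0,T], then
y(t)\;\le\; a\,e^{Ct},\qquad t\in[0,T].
```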
2.3 Convergence of approximative solutions
Now we are going to show that the family \(\{u_\varepsilon \}\) contains a convergent subsequence. For different layers \(u_\varepsilon \) and \(u_\eta \), we see that \(v_{\varepsilon ,\eta }:=u_\varepsilon -u_\eta \) satisfies the following problem:
where
Lemma 2.2
Let \(s>3\) and \(k\ge 1\) and let \(\mathcal {G}(x):=x^{2k+2}+1\). For any \(\varepsilon ,\eta \in (0,1)\), we find a constant \(C>0\) such that
Proof
Using Lemmas A.1, A.3 and A.5, the mean value theorem for \(\chi _R(\cdot )\), and the embedding \(H^{s-\frac{3}{2}}\hookrightarrow W^{1,\infty }\), we have that for some \(C>0\),
and
Using Lemma A.1, we see that
For \(q_6\), using Lemma A.1 and then integrating by parts, we have
Via the embedding \(H^{s-\frac{3}{2}}\hookrightarrow W^{1,\infty }\) and Lemmas A.1 and A.3, we obtain
Therefore, we can put this all together to find
which gives rise to the desired estimate. \(\square \)
Lemma 2.3
Let \(s>3\), \(R>1\) and \(\varepsilon \in (0,1)\). For any \(T>0\) and \(K>1\), we define
Then we have
Proof
Applying the BDG inequality to (2.6), for some constant \(C>0\), we arrive at
For \(q_9\) and \(q_{10}\), we use (2.7), the mean value theorem for \(\chi _R(\cdot )\), \(H_1(1)\) and \(H_1(2)\) to find a constant \(C=C(K)>0\) such that
On account of Lemma 2.2 and the above estimate, we find
Therefore, (2.8) holds true. \(\square \)
Lemma 2.4
For any fixed \(s>3\) and \(T>0\), there is an \(\{\mathcal {F}_t\}_{t\ge 0}\)-progressively measurable \(H^{s-3/2}\)-valued process u and a countable subsequence of \(\{u_\varepsilon \}\) (still denoted as \(\{u_\varepsilon \})\) such that
Proof
We first let \(\varepsilon \) be discrete, i.e., \(\varepsilon =\varepsilon _n\) (\(n\ge 1\)) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \). In this way, for all n, \(u_{\varepsilon _n}\) can be defined on the same set \(\widetilde{{\Omega }}\) with \(\mathbb P\{\widetilde{{\Omega }}\}=1\). For brevity, \(u_{\varepsilon _n}\) is still denoted by \(u_\varepsilon \). For any \(\epsilon >0\), by using (2.7), Lemma 2.1 and Chebyshev’s inequality, we see that
Now (2.8) clearly forces
Letting \(K\rightarrow \infty \), we see that \(u_\varepsilon \) converges in probability in \(C\left( [0,T];H^{s-\frac{3}{2}}\right) \). Therefore, up to a further subsequence, (2.9) holds. \(\square \)
2.4 Proof for Proposition 2.1
By (2.9), since each \(u_\varepsilon \), \(\varepsilon \in (0,1)\), is \(\{\mathcal {F}_t\}_{t\ge 0}\)-progressively measurable, so is u. Notice that \(H^{s-3/2}\hookrightarrow W^{1,\infty }\). Then one can send \(\varepsilon \rightarrow 0\) in (2.3) to prove that u solves (2.1). Furthermore, it follows from Lemma 2.1 and Fatou’s lemma that
With (2.10), to prove (2.2), we only need to prove \(u\in C([0,T];H^{s})\), \({\mathbb P}\text {-}\mathrm{a.s.}\) Due to Lemma 2.4 and (1.11), \(u\in C([0,T];H^{s-3/2})\cap L^\infty \left( 0,T;H^s\right) \) almost surely. Since \(H^s\) is dense in \(H^{s-3/2}\), we see that \(u\in C_w\left( [0,T];H^s\right) \) (\(C_w\left( [0,T];H^s\right) \) is the set of weakly continuous functions with values in \(H^s\)). Therefore, we only need to prove the continuity of \([0,T]\ni t\mapsto \Vert u(t)\Vert _{H^s}\). As is mentioned in Remark 1.1, we first consider the following mollified version with \(J_\varepsilon \) being defined in (A.1):
By (2.10),
Then we only need to prove the continuity up to time \(\tau _N\wedge T\) for each \(N\ge 1\). Let \([t_2,t_1] \subset [0,T]\) with \(t_1-t_2<1\). We use Lemma A.6, the BDG inequality, Hypothesis \(H_1\) and (2.12) to find
We notice that for any \(T>0\), \(J_\varepsilon u\) tends to u in \(C\left( [0,T];H^{s}\right) \) as \(\varepsilon \rightarrow 0\). This, together with Fatou’s lemma, implies
This and Kolmogorov’s continuity theorem ensure the continuity of \(t\mapsto \Vert u(t\wedge \tau _N)\Vert _{H^{s}}\).
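The appeal to Kolmogorov's continuity theorem is in its standard form: if a real-valued process \(X\) satisfies, for some \(p\ge 1\), \(\beta>0\) and \(C>0\),

```latex
\mathbb{E}\,\big|X_{t_1}-X_{t_2}\big|^{p}\;\le\; C\,|t_1-t_2|^{1+\beta}
\qquad\text{for all } t_1,t_2\in[0,T],
```

then \(X\) admits a modification whose paths are \(\alpha\)-Hölder continuous for every \(\alpha\in(0,\beta/p)\). Combined with \(u\in C_w([0,T];H^s)\) and the Hilbert-space fact that weak convergence together with convergence of norms implies strong convergence, the continuity of \(t\mapsto\Vert u(t\wedge\tau_N)\Vert_{H^s}\) upgrades weak continuity to \(u\in C([0,T];H^s)\).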
3 Proof for Theorem 1.1
Now we can prove Theorem 1.1. For the sake of clarity, we provide the proof in several subsections.
3.1 Proof for (i) in Theorem 1.1: Existence and uniqueness
3.1.1 Uniqueness
Before we prove the existence of a solution in \(H^s\) with \(s>3/2\), we first prove uniqueness since some estimates here will be used later.
Lemma 3.1
Let \(s>3/2\), \(k\ge 1\), and Hypothesis \(H_1\) hold. Suppose that \(u_0\) and \(v_0\) are two \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables satisfying \( u_0,v_0 \in L^2({\Omega };H^s)\). Let \((u,\tau _1)\) and \((v,\tau _2)\) be two local solutions to (1.4) in the sense of Definition 1.1 such that \(u(0)=u_0\), \(v(0)=v_0\) almost surely. For any \(N>0\) and \(T>0\), we denote
and \(\tau ^T_{u,v}:=\tau _{u}\wedge \tau _{v}\wedge T\). Then for \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \), we have that
Proof
Let \(w(t)=u(t)-v(t)\) for \(t\in [0,\tau _1\wedge \tau _2]\). We have
Then we use the Itô formula for \(\Vert w\Vert ^2_{H^{s'}}\) with \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \) to find that
where \(P_k=u^{k}+u^{k-1}v+\cdots +u v^{k-1}+v^k\). Taking the supremum over \(t\in [0,\tau ^T_{u,v}]\) and using the BDG inequality, \(H_1(3)\) and the Cauchy–Schwarz inequality yield
Using Lemma A.4, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\), we have
Therefore, for some constant \(C(N)>0\), we have that
Similarly, Lemma A.5 and \(H_1(3)\) yield
Therefore, we combine the above estimates to find
Using the Grönwall inequality in the above estimate leads to (3.1). \(\square \)
Similarly, one can obtain the following uniqueness result for the original problem (1.4), and we omit the details for simplicity.
Lemma 3.2
Let \(s>3/2\), and let Hypothesis \(H_1\) be true. Let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(u_0\in L^2({\Omega };H^s)\). If \((u_1,\tau _1)\) and \((u_2,\tau _2)\) are two local solutions to (1.4) satisfying \(u_i(\cdot \wedge \tau _i)\in L^2\left( {\Omega };C([0,\infty );H^s)\right) \) for \(i=1,2\) and \(\mathbb P\{u_1(0)=u_2(0)=u_0(x)\}=1\), then
3.1.2 The case \(s>3\)
To begin with, we first state the following existence and uniqueness results in \(H^s\) with \(s>3\) for the Cauchy problem (1.4):
Proposition 3.1
Let \(s>3\), \(k\ge 1\), and h(t, u) satisfy Hypothesis \(H_1\). If \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable satisfying \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \), then there is a unique local solution \((u,\tau )\) to (1.4) in the sense of Definition 1.1 with
Proof
Since uniqueness has been obtained in Lemma 3.2, via Proposition 2.1, we only need to remove the cut-off function. For \(u_0(\omega ,x)\in L^2({\Omega }; H^s)\), we let
Let \(u_{0,m}:=u_0{\textbf {1}}_{\{m-1\le \Vert u_0\Vert _{H^s}<m\}}\). For any \(R>0\), on account of Proposition 2.1, we let \(u_{m,R}\) be the global solution to the cut-off problem (2.1) with initial value \(u_{0,m}\) and cut-off function \(\chi _R(\cdot )\). Define
Then for any \(R>0\) and \(m\in \mathbb N\), it follows from the time continuity of the solution that \(\mathbb P\{\tau _{m,R}>0\}=1\). Particularly, for any \(m\in \mathbb N\), we assign \(R=R_m\) such that \(R^2_m>c^2m^2+2c^2\), where \(c>0\) is the embedding constant such that \(\Vert \cdot \Vert _{W^{1,\infty }} \le c \Vert \cdot \Vert _{H^s}\) for \(s>3\). For simplicity, we denote \((u_m,\tau _m):=(u_{m,R_m},\tau _{m,R_m})\). Then we have
which means \(\mathbb P\left\{ \chi _{R_m}(\Vert u_m\Vert _{W^{1,\infty }})=1,\ t\in [0,\tau _m],\ m\ge 1\right\} =1.\) Therefore, \((u_m,\tau _m)\) is the solution to (1.4) with initial value \(u_{0,m}\). Since \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \), the condition (A.5) is satisfied with \(I=\mathbb N^+\). Applying Lemma A.7 means that
is a solution to (1.4) corresponding to the initial condition \(u_0\). Besides,
Taking expectation gives rise to (3.2). \(\square \)
3.1.3 The case \(s>3/2\)
When \(s>3/2\), we first consider the following problem
where \(J_\varepsilon \) is the mollifier defined in (A.1). Proposition 3.1 implies that for each \(\varepsilon \in (0,1)\), (3.3) has a local pathwise solution \((u_\varepsilon ,\tau _\varepsilon )\) such that \(u_\varepsilon \in L^2\left( {\Omega }; C\left( [0,\tau _\varepsilon ];H^{s}\right) \right) \).
Lemma 3.3
Assume \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\Vert u_0\Vert _{H^s}\le M\) for some \(M>0\). For any \(T>0\) and \(s>3/2\), we define
Let \(K\ge 2M+5\) be fixed and let \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \). Then, there is a constant \(C(K,T)>0\) such that \(w_{\varepsilon ,\eta }=u_\varepsilon -u_\eta \) satisfies
Proof
To start with, we notice that Lemma A.1 implies
Since (3.4) and (3.6) are used frequently in the following, they will be used without further notice. Let
Applying the Itô formula to \(\Vert w_{\varepsilon ,\eta }\Vert ^2_{H^s}\) gives rise to
Since \(H^{s'}\hookrightarrow L^\infty \) and \(H^{s}\hookrightarrow W^{1,\infty }\), we can use Lemmas A.3 and A.5 to find
and
The above estimates and \(H_1(2)\) imply that there is a constant \(C(K)>0\) such that
For \(Q_{1,s}\), applying the BDG inequality and \(H_1(2)\), we derive
Summarizing the above estimates and then using Grönwall’s inequality, we find some constant \(C=C(K,T)>0\) such that
Now we estimate \(\mathbb E\sup _{t\in [0,\tau ^{T}_{\varepsilon ,\eta }]}\Vert w_{\varepsilon ,\eta }(t)\Vert ^2_{H^{s'}}\Vert u_\varepsilon (t)\Vert ^2_{H^{s+1}}\). To this end, we first recall (1.7) and then apply the Itô formula to deduce that for any \(\rho >0\),
In the same way, we also rewrite \(Q_{1,s}\) in (3.8) as
With the summation form (3.11) at hand, applying the Itô product rule to (3.8) and (3.10), we derive
We first notice that
where \(P_{k}\) is defined by (3.7). As a result, Lemma A.4, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\) give rise to
Using Lemma A.3, Hypothesis \(H_1\), Lemma A.5 as well as the embedding of \(H^s\hookrightarrow W^{1,\infty }\) for \(s>3/2\), we obtain that for some \(C(K)>0\),
Then one can infer from the above three inequalities, the BDG inequality and Hypothesis \(H_1\) that for some constant \(C(K)>0\),
For the last term, we proceed as follows:
Consequently, (3.12) reduces to
which means that for some \(C(K,T)>0\),
Combining (3.9) and (3.13), we obtain (3.5). \(\square \)
To proceed further, we state the following lemma from [25] in a form convenient for our purposes.
Lemma 3.4
(Lemma 5.1, [25]) Let all the conditions in Lemma 3.3 hold true. Assume
and
hold true. Then we have:
(a)
There exists a sequence of stopping times \(\xi _{\varepsilon _n}\), for some countable sequence \(\{\varepsilon _n\}\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \), and a stopping time \(\tau \) such that
$$\begin{aligned} \xi _{\varepsilon _n}\le \tau ^T_{\varepsilon _n},\ \ \lim _{n\rightarrow \infty }\xi _{\varepsilon _n}=\tau \in (0,T] \ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(b)
There is a process \(u\in C([0,\tau ];H^s)\) such that
$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}-u\Vert _{H^s}=0,\ \sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}\end{aligned}$$
(c)
There is a sequence of sets \({\Omega }_n \uparrow {\Omega }\) such that for any \(p\in [1,\infty )\),
$$\begin{aligned} \textbf{1}_{{\Omega }_n }\sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2\ \ {\mathbb P}\text {-}\mathrm{a.s.}, \ \text {and}\ \sup _{n}\mathbb E\left( \textbf{1}_{{\Omega }_n } \sup _{t\in [0,\tau ]}\Vert u_{\varepsilon _n}\Vert ^p_{H^s}\right) <\infty . \end{aligned}$$
Remark 3.1
In the original form of [25, Lemma 5.1], the authors only emphasize the existence of a stopping time \(\tau \in (0,T]\) such that (b) and (c) in Lemma 3.4 hold true. However, here we point out that they obtained such a \(\tau \) by constructing stopping times \(\xi _{\varepsilon _n}\). We refer to (5.2), (5.12), (5.15), (5.20) and (5.24) in [25] for the details. The properties (a) and (c) in Lemma 3.4 will be used in the proof of (iii) in Theorem 1.1.
Proposition 3.2
Let Hypothesis \(H_1\) hold. Assume that \(s>3/2\), \(k\ge 1\), and let \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable such that \(\Vert u_0\Vert _{H^s}\le M\) for some \(M>0\). Then (1.4) has a unique pathwise solution \((u,\tau )\) in the sense of Definition 1.1 such that
Proof
We first prove that \(\{u_\varepsilon \}\) satisfies the estimates (3.14) and (3.15).
(i) (3.14) is satisfied. Lemma A.1 tells us that
Since \(\Vert J_\varepsilon u_0\Vert _{H^s}\le M\), as in Lemma 3.1, we have
Moreover, it follows from Lemma A.1 that
Summarizing (3.16), (3.17), (3.18) and Lemma 3.3, (3.14) holds true.
(ii) (3.15) is satisfied. Recall (3.10) and let \(a>0\). We have
which clearly forces that
Due to the Chebyshev inequality, Lemmas A.3 and A.5, Hypothesis \(H_1\), the embedding of \(H^s\hookrightarrow W^{1,\infty }\) for \(s>3/2\), (3.4) and (3.6), we have
Then we can infer from Doob’s maximal inequality and the Itô isometry that
Hence we have
which gives (3.15).
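Step (ii) above rests on two classical facts about the stochastic integral; writing \(M_t=\int_0^t g\,\mathrm d\mathcal W\) for a square-integrable integrand, schematically:

```latex
\mathbb{E}\sup_{t\in[0,T]}|M_t|^{2}\;\le\; 4\,\mathbb{E}\,|M_T|^{2}
\qquad\text{(Doob's $L^2$ maximal inequality)},
\\
\mathbb{E}\,|M_T|^{2}\;=\;\mathbb{E}\int_0^T\|g\|_{\mathcal{L}_2}^{2}\,\mathrm{d}t
\qquad\text{(It\^o isometry)}.
```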
(iii) Applying Lemma 3.4. By Lemma 3.4, we can pass to the limit along a subsequence of \(\{u_{\varepsilon _n}\}\) to obtain a solution u to (1.4) such that \(u\in C([0,\tau ];H^s)\) and \(\sup _{t\in [0,\tau ]}\Vert u\Vert _{H^s}\le \Vert u_0\Vert _{H^s}+2.\) Uniqueness is a direct corollary of Lemma 3.2. \(\square \)
Now we can finish the proof for (i) in Theorem 1.1.
Proof for (i) in Theorem 1.1. As in Proposition 3.1, we let
For each \(m\ge 1\), we can infer from Proposition 3.2 that (1.4) has a unique solution \((u_m,\tau _m)\) with \(u_m(0)=u_{0,m}\) almost surely. Furthermore, \(\sup _{t\in [0,\tau _m]}\Vert u_m\Vert _{H^s}\le \Vert u_{0,m}\Vert _{H^s}+2\) \({\mathbb P}\text {-}\mathrm{a.s.}\) Using Lemma A.7 in a similar way as in Proposition 3.2, we find that
is a solution to (1.4) satisfying (1.11) and \(u(0)=u_0\) almost surely. Uniqueness is given by Lemma 3.2. \(\square \)
3.2 Proof for (ii) in Theorem 1.1: Blow-up criterion
With a local solution \((u,\tau )\) at hand, one may pass from \((u,\tau )\) to the maximal solution \((u,\tau ^*)\) as in [5, 26]. In the periodic setting, i.e., \(x\in \mathbb T=\mathbb R/2\pi \mathbb Z\), the blow-up criterion (1.12) for a maximal solution has been proved in [46] by using energy estimates and stopping-time techniques. When \(x\in \mathbb R\), (1.12) can also be obtained in the same way, and we omit the details for brevity.
3.3 Proof for (iii) in Theorem 1.1: Stability
Let \(u_0, v_0\in L^\infty ({\Omega };H^s)\) be two \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variables. Let u and v be the corresponding solutions with initial conditions \(u_0\) and \(v_0\). To prove (iii) in Theorem 1.1, for any \(\epsilon >0\) and \(T>0\), we need to find a \(\delta =\delta (\epsilon ,u_0,T)>0\) and a \(\tau \in (0,T]\) \({\mathbb P}\text {-}\mathrm{a.s.}\) such that (1.14) holds true as long as (1.13) is satisfied. Without loss of generality, by (1.13), we can first assume
From now on \(\epsilon >0\) and \(T>0\) are given.
However, as is mentioned in Remark 1.1, the term \(u^ku_x\) loses one order of regularity, and the estimate for \(\mathbb E\sup _{t\in [0,\tau ]}\Vert u(t)-v(t)\Vert ^2_{H^s}\) will involve \(\Vert u\Vert _{H^{s+1}}\) or \(\Vert v\Vert _{H^{s+1}}\), which might be infinite since we only know \(u,v\in H^s\). To overcome this difficulty, we will consider (3.3). Let \(\varepsilon \in (0, 1)\). By (i) in Theorem 1.1, there is a unique solution \(u_{\varepsilon }\) (resp. \(v_{\varepsilon }\)) to the problem (3.3) with initial data \(J_\varepsilon u_0\) (resp. \(J_\varepsilon v_0\)). Then the \(H^{s+1}\)-norm is well defined for the smooth solutions \(u_\varepsilon \) and \(v_\varepsilon \). Similar to (3.4), for any \(T>0\), we define
Recalling the analysis in Lemma 3.3 and Proposition 3.2 (for the case \(f=v\), we notice (3.19)), we can use Lemma 3.4 to find a common subsequence \(\{\varepsilon _n\}\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \) such that for \(f\in \{u,v\}\), there is a sequence of stopping times \(\xi ^f_{\varepsilon _n}\) and a stopping time \(\tau ^f\) satisfying
and
Moreover, for \(f\in \{u,v\}\), there exists \({\Omega }^f_n\uparrow {\Omega }\) such that
Next, we let \({\Omega }_n:={\Omega }^u_n\, \cap \, {\Omega }^v_n.\) Then \({\Omega }_n\uparrow {\Omega }\). This, (3.22), (3.23) and Lebesgue’s dominated convergence theorem yield
Therefore, we have, when n is large enough, that
Now we consider \(\mathbb E\sup _{t\in [0,\tau ^u\wedge \tau ^v]} \Vert \textbf{1}_{{\Omega }_n}u_{\varepsilon _n}-\textbf{1}_{{\Omega }_n}v_{\varepsilon _n}\Vert ^2_{H^s}\). It follows from (3.21) that for all \(n\ge 1\),
By (3.23),
Consequently, by Lebesgue’s dominated convergence theorem and (3.21), we have for \(n\gg 1\) that
Now we estimate \(\mathbb E\sup _{t\in [0,\tau ^{u,T}_{\varepsilon _n}\wedge \tau ^{v,T}_{\varepsilon _n}]}\Vert u_{\varepsilon _n}(t)-v_{\varepsilon _n}(t)\Vert ^2_{H^s}\). Similar to (3.5), by using (3.19), one can show that for \(s'\in \left( \frac{1}{2},\min \left\{ s-1,\frac{3}{2}\right\} \right) \),
where \(C=C\left( \Vert u_0\Vert _{L^\infty ({\Omega };H^s)},T\right) \) and Lemma A.1 is used in the last step. Since \(u_0\in L^\infty ({\Omega };H^s)\), by Lemmas 3.1 and A.1 again, we have
where \(C=C\left( \Vert u_0\Vert _{L^\infty ({\Omega };H^s)},T\right) \) as before. Fix \(n=n_0\gg 1\) such that (3.24) and (3.26) are satisfied, i.e.,
Then, for (3.28) with \(n=n_0\), we can find a \(\delta =\delta (\epsilon ,u_0,T)\in (0,1)\) such that (3.19) is satisfied and
As a result, for (3.25) with fixed \(n=n_0\), we use (3.29)\(_2\) and (3.30) to derive that
This inequality and (3.29)\(_1\) yield that
Hence we obtain (1.14) with \(\tau =\tau ^u\wedge \tau ^v\). Due to (3.21), \(\tau \in (0,T]\) almost surely.
Remark 3.2
Here we remark that the restriction \(\textbf{1}_{{\Omega }_{n}}\) is needed to estimate
for \(f\in \{u,v\}\). This is because we only have \(\lim _{n\rightarrow \infty }\sup _{t\in [0,\tau ^f]}\Vert f-f_{\varepsilon _n}\Vert _{H^s}=0\) \({\mathbb P}\text {-}\mathrm{a.s.}\) (cf. (b) in Lemma 3.4), and we need to interchange limit and expectation. By (c) in Lemma 3.4,
Hence Lebesgue’s dominated convergence theorem can be used. In the deterministic case, one can directly consider \(\Vert f-f_{\varepsilon _{n}}\Vert ^2_{H^s}.\)
4 Weak instability
Now we prove Theorem 1.2. As is mentioned in Remark 1.3, since we cannot obtain an explicit expression for the solution to (1.4), we start with constructing some approximative solutions from which (1.19) can be established.
4.1 Approximative solutions and actual solutions
Following the approach in [28, 46], now we construct the approximative solutions. We fix two functions \(\phi ,\tilde{\phi }\in C_c^{\infty }\) such that
Let \(k\ge 1\) and
Then we consider the following sequence of approximative solutions
where \(u_h= u_{h,m,n}\) is the high-frequency part defined by
and \(u_l=u_{l,m,n}\) is the low-frequency part constructed such that \(u_l\) is the solution to the following problem:
The parameter \(\delta >0\) in (4.4) and (4.5) will be determined later for different \(k\ge 1\). Particularly, when \(m=0\), we have \(u_l=0\). In this case the approximative solution \(u_{0,n}\) has no low-frequency part and
Next, we consider the problem (1.4) with initial data \(u_{m,n}(0,x)\), i.e.,
where \(F(\cdot )\) is defined by (1.5). Since h satisfies \(H_2(1)\), similar to the proof for Theorem 1.1, we see that for each fixed \(n\in \mathbb N\), (4.6) has a unique solution \((u^{m,n},\tau ^{m,n})\) such that \(u^{m,n}\in C\left( [0,\tau ^{m,n}];H^s\right) \) \({\mathbb P}\text {-}\mathrm{a.s.}\) with \(s>5/2\).
4.2 Estimates on the errors
Substituting (4.3) into (1.4), we define the error \(\mathcal E(\omega ,t,x)\) as
For simplicity, we let
where \(C_{q}^{j}\) is the binomial coefficient. By using (4.3), (4.5) and (4.7), \(\mathcal E(\omega ,t,x)\) can be reformulated as
4.2.1 Estimates on the low-frequency part
The following lemma gives a decay estimate for the low-frequency part of \(u_{m,n}\), that is, \(u_l\).
Lemma 4.1
Let \(k\ge 1\), \(|m|=1\) or \(m=0\), \(s>3/2\), \(\delta \in (0,2/k)\) and \(n\gg 1\). Then there is a \(T_l>0\) such that for all \(n\gg 1\), the initial value problem (4.5) has a unique smooth solution \(u_l=u_{l,m,n}\in C([0,T_l];H^s)\) such that \(T_l\) does not depend on n. Besides, for all \(r>0\), there is a constant \(C=C_{r,\tilde{\phi },T_l}>0\) such that \(u_l\) satisfies
Proof
When \(m=0\), as is mentioned above, \(u_l\equiv 0\) for all \(t\ge 0\). It remains to prove the case \(|m|=1\). For any fixed \(n\ge 1\), since \(u_l(0,x)\in H^\infty \), by applying Theorem 1.1 with \(h=0\) and deterministic initial data, we see that for any \(s>3/2\), (4.5) has a unique (deterministic) solution \(u_l=u_{l,m,n}\in C\left( [0,T_{m,n}];H^s\right) \). In contrast to the stochastic case, here we will show that there is a lower bound on the existence time, i.e., there is a \(T_l>0\) such that for all \(n\gg 1\), \(u_l=u_{l,m,n}\) exists on \([0,T_l]\) and satisfies (4.9).
Step 1: Estimate \(\Vert u_l(0,x)\Vert _{H^r}\). When \(n\gg 1\), we have
for some constant \(C=C_{r,\tilde{\phi }}>0\). As a result, we have
Step 2: Prove (4.9) for \(r>3/2\). In this case, we apply Lemmas A.3 and A.5, \(H^r\hookrightarrow W^{1,\infty }\) to find
Solving the above inequality gives
Therefore, we arrive at
By Step 1, we have \(T_{m,n}\gtrsim \frac{1}{2Ckn^{k(\frac{\delta }{2}-\frac{1}{k})}}\rightarrow \infty ,\ \text {as}\ n\rightarrow \infty \). Consequently, we can find a common time interval \([0,T_{l}]\) such that
which is (4.9).
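The lifespan bound in Step 2 follows from the standard solution of a Riccati-type differential inequality. Assuming the energy estimate takes the usual form \(\frac{\mathrm d}{\mathrm dt}y\le C\,y^{k+1}\) with \(y(t)=\Vert u_l(t)\Vert_{H^r}\) (a sketch; the exact inequality is the one displayed above), one obtains:

```latex
y'(t)\le C\,y^{k+1}(t)
\;\Longrightarrow\;
y(t)\le \frac{y(0)}{\big(1-Ck\,y^{k}(0)\,t\big)^{1/k}},
\qquad 0\le t<\frac{1}{Ck\,y^{k}(0)}.
```

With \(y(0)\lesssim n^{\frac{\delta}{2}-\frac{1}{k}}\) from Step 1, the guaranteed lifespan is of order \(n^{-k(\frac{\delta}{2}-\frac{1}{k})}=n^{1-\frac{k\delta}{2}}\rightarrow\infty\) since \(\delta<2/k\), matching the lower bound on \(T_{m,n}\) stated above.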
Step 3: Prove (4.9) for \(0<r\le 3/2\). Similarly, by applying Lemmas A.3 and A.5, we have
It follows from the embedding \(H^{r+\frac{3}{2}}\hookrightarrow H^{r+1}\) and \(H^{r+\frac{3}{2}}\hookrightarrow W^{1,\infty }\) that
Using the conclusion of Step 2 for \(r+\frac{3}{2}>\frac{3}{2}\), we have
and hence
Applying Grönwall’s inequality to the above inequality, we have
Since \(\delta \in (0,2/k)\), we can infer from Step 1 that \(\exp \left\{ \Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k}T_l\right\} <C(T_l)\) for some constant \(C(T_l)>0\) and \(\Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}^{k+1}\le \Vert u_l(0)\Vert _{H^{r+\frac{3}{2}}}\le C|m|n^{\frac{\delta }{2}-\frac{1}{k}}\). Hence we see that there is a constant \(C=C_{r,\tilde{\phi },T_l}>0\) such that
which is (4.9). \(\square \)
Recall the approximative solution defined by (4.3). The above result means that the \(H^s\)-norm of the low-frequency part \(u_l\) is decaying. For the high-frequency part \(u_h\), as in Lemma A.8, its \(H^s\)-norm is bounded.
4.2.2 Estimate on \(\mathcal E\)
Recall the error \(\mathcal E\) given in (4.8). By using (4.1) and (4.2), we have \(m=m^k\) and \(\phi =\tilde{\phi }^k\phi \) for all \(k\ge 1\). Then by (4.4) and \(u_l(0,x)\) in (4.5), we see that as long as \(m\ne 0\),
If \(m=0\), then \(u_l=0\) and we also have
To sum up, we find that for all \(k\ge 1\), m satisfying (4.2), \(u_h\) given by (4.4) and \(u_l(0,x)\) in (4.5),
On the other hand, for all \(k\ge 1\),
Substituting (4.11), (4.12) and (1.5) into (4.8) yields
where
We remark here that \(E_4\) disappears when \(k=1\). Recalling \(\rho _0\in (1/2,1)\) in Hypothesis \(H_2\), now we shall estimate the \(H^{\rho _0}\)-norm of the error \(\mathcal E\). Actually, we will show that the \(H^{\rho _0}\)-norm of \(\mathcal E\) is decaying.
Lemma 4.2
Let \(T_l>0\) be given in Lemma 4.1, and \(\rho _0\in (1/2,1)\) be given in \(H_2\). Let \(n\gg 1\), \(s>5/2\). Let
and
Then the error \(\mathcal E\) given by (4.13) satisfies that for some \(C=C(T_l)>0\),
Proof
The proof is technical and it is given in Appendix B. \(\square \)
4.2.3 Estimate on \(u_{m,n}-u^{m,n}\)
Recall the approximate solutions \(u_{m,n}\) given by (4.3). We have the following estimates on the difference between the actual solutions and the approximate solutions.
Lemma 4.3
Let \(k\ge 1\), \(s>5/2\) and \(\rho _0\) be given in \(H_2\). Let (4.14) hold true and \(r_s<0\) be given in (4.15). For any \(R>1\), we define
Then for \(n\gg 1\),
where \(T_l>0\) is given in Lemma 4.1 and \(C=C(R,T_l)>0\).
Proof
Let \(v=v_{m,n}=u_{m,n}-u^{m,n}\). Then v satisfies \(v(0)=0\) and
where
On \([0,T_l]\), by the Itô formula, we have that
Taking the supremum over \(t\in [0,T_l\wedge \tau ^{m,n}_R]\) and then using the BDG inequality yields
It follows from Lemmas A.8 and 4.1 that \(\Vert u_{m,n}\Vert _{H^{s}}\lesssim 1\) on \([0,\tau ^{m,n}_R\wedge T_l]\). Hence we can infer from Hypothesis \(H_2\) that
where \(g_3(\cdot )\) is given in \(H_2(2)\). As a result, for any fixed \(s>5/2\), by applying Lemmas A.8 and 4.1 again, we can pick \(N=N(s,k)\gg 1\) to derive
Consequently, we can infer from the above inequalities that
Via Lemma 4.2, we have
Using Lemma A.4 and integration by parts, we obtain that
Then, we use Lemma A.5 to find that
To sum up, by (4.16), Lemmas 4.1 and A.8, we arrive at
Using the Grönwall inequality, we obtain (4.17).
Now we prove (4.18). Since \(2s-\rho _0>s>\frac{5}{2}\) and \(u^{m,n}\) is the unique solution to (4.6), similarly to (2.5), we can use (4.16) and \(H_2(1)\) to find, for each fixed \(n\in \mathbb N\), that
Using the Grönwall inequality and Lemmas 4.1 and A.8, we find a constant \(C=C(R,T_l)\) such that for all \(n\ge 1\),
Hence, by Lemmas 4.1 and A.8 again, we arrive at
The proof is therefore completed. \(\square \)
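For reference, the Burkholder–Davis–Gundy inequality used above (and again in Appendix B) is applied in its Hilbert-space form; schematically, for an \(H\)-valued stochastic integral against the cylindrical Wiener process \(\mathcal W\):

```latex
% BDG inequality (first-moment form, as typically used after taking suprema):
\mathbb{E}\left[\sup_{t \le \tau}\left\| \int_0^{t} G(\sigma)\,\mathrm{d}\mathcal{W}(\sigma) \right\|_{H}\right]
\;\le\; C\, \mathbb{E}\left[\int_0^{\tau} \| G(\sigma) \|^{2}_{\mathcal{L}_2(\mathfrak{U};H)}\,\mathrm{d}\sigma\right]^{1/2}.
```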
4.3 Completing the proof of Theorem 1.2
To begin with, we observe the following property:
Lemma 4.4
Let \(H_2(1)\) hold true. Suppose that for some \(R_0\gg 1\), the \(R_0\)-exiting time of the zero solution to (1.4) is strongly stable. Then we have
Proof
By \(H_2(1)\), the unique solution to (1.4) with zero initial data is zero. On the other hand, we notice that for all \(s'<s\), \(\lim _{n\rightarrow \infty }\Vert u_{m,n}(0)\Vert _{H^{s'}}=\lim _{n\rightarrow \infty }\Vert u_{m,n}(0)-0\Vert _{H^{s'}}=0\). Since the \(R_0\)-exiting time of the zero solution is \(\infty \) and, by assumption, strongly stable, (4.19) follows. \(\square \)
Proof for Theorem 1.2
Our strategy is to show that if the \(R_0\)-exiting time is strongly stable at the zero solution for some \(R_0\gg 1\), then \(\{u^{-1,n}\}\) and \(\{u^{1,n}\}\) (if k is odd) or \(\{u^{0,n}\}\) and \(\{u^{1,n}\}\) (if k is even) are two sequences of solutions such that (1.16), (1.17), (1.18) and (1.19) are satisfied.
For each \(n>1\) and for fixed \(R_0\gg 1\), Lemmas 4.1, A.8 and (4.16) give \(\mathbb P\{\tau ^{m,n}_{R_0}>0\}=1\), and Lemma 4.4 implies (1.16). Then, it follows from (4.16) that \(u^{m,n}\in C([0,\tau ^{m,n}_{R_0}];H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and (1.17) holds true. Next, we check (1.18). By interpolation, we have
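The interpolation inequality in question is presumably the standard one on the Sobolev scale between \(H^{\rho_0}\) and \(H^{2s-\rho_0}\); a sketch, consistent with the definition of \(r'_s\):

```latex
% Interpolation between H^{\rho_0} and H^{2s-\rho_0}, with s the midpoint exponent:
\| v \|_{H^{s}} \;\le\; \| v \|_{H^{\rho_0}}^{1/2}\, \| v \|_{H^{2s-\rho_0}}^{1/2},
\qquad s = \tfrac{1}{2}\,\rho_0 + \tfrac{1}{2}\,(2s - \rho_0),
% which, combined with bounds of order n^{r_s} in H^{\rho_0} and n^{s-\rho_0}
% in H^{2s-\rho_0} from Lemma 4.3, produces r'_s = \frac{1}{2} r_s + \frac{1}{2}(s-\rho_0).
```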
For \(T_l>0\), combining Lemma 4.3 and the above estimate yields
where \(r_s\) is defined by (4.14) and \( r'_s=r_s\cdot \frac{1}{2}+(s-\rho _0)\cdot \frac{1}{2}=\frac{k\delta -1}{2}<0. \) Consequently, we can deduce that
When k is odd,
When k is even,
The above two estimates imply that (1.18) holds true.
Now we prove (1.19). Let \(T_l>0\) be given in Lemma 4.1. When k is odd, we use (4.20) to derive
It follows from the construction of \(u_{m,n}\), Fatou’s lemma, Lemmas 4.1, A.8 and 4.4 that
which is (1.19) in the case that k is odd. When k is even, one has
Similar to (4.21), we can also obtain (1.19) in the case that k is even. The proof is completed. \(\square \)
4.4 Example
Now we give an example of a noise structure satisfying Hypothesis \(H_2\). For simplicity, we consider the case \(h(t,u) \, \textrm{d}\mathcal W=b(t,u) \, \textrm{d}W\), where W is a standard 1-D Brownian motion. Let \(m\ge 1\) and let \(f(\cdot )\) be a continuous and bounded function; then
satisfies Hypothesis \(H_2\).
5 Noise prevents blow-up
5.1 Proof for Theorem 1.3
Our approach is motivated by [6, 45]. Let \(s>5/2\) and \(u_0\) be an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable with \(\mathbb E\Vert u_0\Vert ^2_{H^s}<\infty \). With \(H_3(1)\) and \(H_3(2)\) at hand, one can follow the steps in the proof for Theorem 1.1 to obtain a unique solution u to (1.6) such that \(u\in C([0,\tau ^*);H^s)\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and
Here we remark that \(H_3(2)\) is a local Lipschitz condition in \(H^{\sigma }\) with \(\sigma >3/2\); hence uniqueness can only be considered for solutions in \(H^s\) with \(s>5/2\). This is because, if two solutions to (1.6) belong to \(H^s\), the difference between them can only be estimated in \(H^{s'}\) for \(s'\le s-1\) (recall (3.9), where the \(H^{s+1}\)-norm appears).
Define
Due to (5.1), we have \(\tau _{m}<\widetilde{\tau ^*}=\tau ^*\) \({\mathbb P}\text {-}\mathrm{a.s.}\) and hence we only need to show
For \(V\in \mathcal {V}\), applying the Itô formula to \(\Vert u(t)\Vert ^2_{H^{s-1}}\) and then to \(V(\Vert u\Vert ^2_{H^{s-1}})\), we find
Next, we recall \(\tau _{m}<\widetilde{\tau ^*}=\tau ^*\) and \(s-1>3/2\), take expectation and then use Hypothesis \(H_3\) and Lemma A.6 to find that
where \(\mathcal {H}_{\sigma }(t,u)\) (\(u\in H^{\sigma }\) and \(\sigma >3/2\)) is defined in Hypothesis \(H_3(3)\). Then we can infer from the above estimate that there is a constant \(C(u_0,N_1,N_2,t)>0\) such that
Next, for any \(T>0\), it follows from the BDG inequality that
Thus we use (5.3) to obtain
As a result, for all \(m\ge 1\),
Since \(\mathbb P\{\widetilde{\tau ^*}<T\}\) does not depend on m, sending \(m\rightarrow \infty \) gives rise to \(\mathbb P\{\tau ^*<T\}=0\). Since \(T>0\) is arbitrary, we obtain (5.2), which completes the proof for Theorem 1.3.
5.2 Example
By (1.12), the \(H^s\)-norm of the solution to (1.4) blows up if and only if its \(W^{1,\infty }\)-norm blows up. On the other hand, \(H_3(3)\) means that the growth of \(2\lambda _s\Vert u\Vert ^k_{W^{1,\infty }}\Vert u\Vert ^2_{H^s}\) can be canceled by \(2V''(\Vert u\Vert ^2_{H^s})|(q(t,u), u)_{H^s}|\). Motivated by these two observations, we consider the following examples, in which the \({W^{1,\infty }}\)-norm of u is involved; that is,
where \(\beta (t,x)\) satisfies the following conditions:
Hypothesis \({\textbf {H}}_{\textbf {4}}\) We assume that
-
The function \(\beta (t,x)\in C\left( [0,\infty )\times [0,\infty )\right) \) is such that for any \(x\ge 0\), \(\beta (\cdot ,x)\) is bounded as a function of t, and for all \(t\ge 0\), \(\beta (t,\cdot )\) is locally Lipschitz continuous as a function of x;
-
The function \(\beta (t,x)\ne 0\) for all \((t,x)\in [0,\infty )\times [0,\infty )\), and \(\limsup _{x\rightarrow +\infty }\frac{2\lambda _s x^k}{\beta ^2(t,x)}<1\) for all \(t\ge 0\), where \(\lambda _s>0\) is given in Lemma A.6.
Now we give a concrete example of \(\beta (t,x)\) satisfying Hypothesis \(H_4\). Let \(b:[0,\infty )\rightarrow [0,\infty )\) be a continuous function for which there are constants \(b_*,b^*>0\) such that \(b_*\le b^2(t)\le b^*<\infty \) for all t. For all \(k\ge 1\), if
then \(\beta (t,x)=b(t)(1+x)^\theta \) satisfies Hypothesis \(H_4\). Moreover, by the following two lemmas, we will see that \(q(t,u)=b(t)(1+\Vert u\Vert _{W^{1,\infty }})^\theta u\) satisfies Hypothesis \(H_3\).
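To see heuristically how the power \(\theta \) interacts with Hypothesis \(H_4(2)\) (a hedged computation; the precise admissible range of \(\theta \) is the omitted condition above), compute with \(\beta (t,x)=b(t)(1+x)^\theta \):

```latex
\limsup_{x \to +\infty} \frac{2\lambda_s\, x^{k}}{\beta^{2}(t,x)}
  = \limsup_{x \to +\infty} \frac{2\lambda_s\, x^{k}}{b^{2}(t)\,(1+x)^{2\theta}}
  = \begin{cases}
      0, & 2\theta > k, \\[4pt]
      \dfrac{2\lambda_s}{b^{2}(t)} \le \dfrac{2\lambda_s}{b_{*}}, & 2\theta = k,
    \end{cases}
```

so \(H_4(2)\) holds for \(\theta >k/2\), and also for \(\theta =k/2\) provided \(b_*>2\lambda _s\).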
Lemma 5.1
Let \(\lambda _s\) be given in Lemma A.6. Let \(K>0\). If Hypothesis \(H_4\) holds true, then there is an \(M_1>0\) such that for any \(M_2>0\) and all \(0<x\le Ky<\infty \),
Proof
By Hypothesis \(H_4\), we have
which implies (5.5). \(\square \)
Lemma 5.2
If \(\beta (t,x)\) satisfies Hypothesis \(H_4\), then q(t, u) defined by (5.4) satisfies Hypothesis \(H_3\).
Proof
It follows from Lemma 5.1 that \(H_3(3)\) holds true with the choice \(V(x)=\log (1+x)\in \mathcal {V}\). Since \(H^s\hookrightarrow W^{1,\infty }\) with \(s>3/2\), it is obvious that the other requirements in Hypothesis \(H_3\) are verified. \(\square \)
References
Albeverio, S., Brzeźniak, Z., Daletskii, A.: Stochastic Camassa–Holm equation with convection type noise. J. Differ. Equ. 276, 404–432 (2021)
Alonso-Orán, D., Bethencourt de León, A., Takao, S.: The Burgers’ equation with stochastic transport: shock formation, local and global existence of smooth solutions. NoDEA Nonlinear Differ. Equ. Appl. 26(6), Paper No. 57, 33 pp. (2019)
Alonso-Orán, D., Miao, Y., Tang, H.: Global existence, blow-up and stability for a stochastic transport equation with non-local velocity. J. Differ. Equ. 335, 244–293 (2022)
Alonso-Orán, D., Rohde, C., Tang, H.: A local-in-time theory for singular SDEs with applications to fluid models with transport noise. J. Nonlinear Sci. 31(6), Paper No. 98, 55 pp. (2021)
Breit, D., Feireisl, E., Hofmanová, M.: Stochastically forced compressible fluid flows. De Gruyter Series in Applied and Numerical Mathematics, vol. 3. De Gruyter, Berlin (2018)
Brzeźniak, Z., Maslowski, B., Seidler, J.: Stochastic nonlinear beam equations. Probab. Theory Relat. Fields 132(1), 119–149 (2005)
Brzeźniak, Z., Motyl, E.: Existence of a martingale solution of the stochastic Navier–Stokes equations in unbounded 2D and 3D domains. J. Differ. Equ. 254(4), 1627–1685 (2013)
Brzeźniak, Z., Motyl, E.: Fractionally dissipative stochastic quasi-geostrophic type equations on \(\mathbb{R} ^d\). SIAM J. Math. Anal. 51(3), 2306–2358 (2019)
Brzeźniak, Z., Ondreját, M.: Strong solutions to stochastic wave equations with values in Riemannian manifolds. J. Funct. Anal. 253(2), 449–481 (2007)
Camassa, R., Holm, D.D.: An integrable shallow water equation with peaked solitons. Phys. Rev. Lett. 71(11), 1661–1664 (1993)
Chen, G.-Q.G., Pang, P.H.C.: Nonlinear anisotropic degenerate parabolic–hyperbolic equations with stochastic forcing. J. Funct. Anal. 281(12), Paper No. 109222, 48 pp. (2021)
Chen, Y., Duan, J., Gao, H.: Global well-posedness of the stochastic Camassa–Holm equation. Commun. Math. Sci. 19(3), 607–627 (2021)
Chen, Y., Duan, J., Gao, H.: Wave-breaking and moderate deviations of the stochastic Camassa–Holm equation with pure jump noise. Phys. D 424, Paper No. 132944, 12 pp. (2021)
Chen, Y., Gao, H.: Well-posedness and large deviations of the stochastic modified Camassa–Holm equation. Potential Anal. 45(2), 331–354 (2016)
Chen, Y., Miao, Y., Shi, S.: Global existence and wave breaking for a stochastic two-component Camassa–Holm system. J. Math. Phys. 64(1), Paper No. 011505, 28 pp. (2023)
Constantin, A., Escher, J.: Wave breaking for nonlinear nonlocal shallow water equations. Acta Math. 181(2), 229–243 (1998)
de Bouard, A., Debussche, A.: A stochastic nonlinear Schrödinger equation with multiplicative noise. Commun. Math. Phys. 205(1), 161–181 (1999)
Debussche, A., Glatt-Holtz, N.E., Temam, R.: Local martingale and pathwise solutions for an abstract fluids model. Physica D 240(14–15), 1123–1144 (2011)
Fedrizzi, E., Flandoli, F.: Pathwise uniqueness and continuous dependence of SDEs with non-regular drift. Stochastics 83(3), 241–257 (2011)
Flandoli, F., Gubinelli, M., Priola, E.: Well-posedness of the transport equation by stochastic perturbation. Invent. Math. 180(1), 1–53 (2010)
Fuchssteiner, B., Fokas, A.S.: Symplectic structures, their Bäcklund transformations and hereditary symmetries. Physica D 4(1), 47–66 (1981/82)
Galimberti, L., Holden, H., Karlsen, K.H., Pang, P.H.C.: Global existence of dissipative solutions to the Camassa–Holm equation with transport noise (2022)
Gawarecki, L., Mandrekar, V.: Stochastic Differential Equations in Infinite Dimensions with Applications to Stochastic Partial Differential Equations. Probability and Its Applications (New York). Springer, Heidelberg (2011)
Geng, X., Xue, B.: An extension of integrable Peakon equations with cubic nonlinearity. Nonlinearity 22(8), 1847–1856 (2009)
Glatt-Holtz, N., Ziane, M.: Strong pathwise solutions of the stochastic Navier–Stokes system. Adv. Differ. Equ. 14(5–6), 567–600 (2009)
Glatt-Holtz, N.E., Vicol, V.C.: Local and global existence of smooth solutions for the stochastic Euler equations with multiplicative noise. Ann. Probab. 42(1), 80–145 (2014)
Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer, Berlin (1981)
Himonas, A.A., Holliman, C.: The Cauchy problem for a generalized Camassa–Holm equation. Adv. Differ. Equ. 19(1–2), 161–200 (2014)
Himonas, A.A., Kenig, C.: Non-uniform dependence on initial data for the CH equation on the line. Differ. Integral Equ. 22(3–4), 201–224 (2009)
Himonas, A.A., Kenig, C., Misiołek, G.: Non-uniform dependence for the periodic CH equation. Commun. Partial Differ. Equ. 35(6), 1145–1162 (2010)
Himonas, A.A., Misiołek, G.: Non-uniform dependence on initial data of solutions to the Euler equations of hydrodynamics. Commun. Math. Phys. 296(1), 285–301 (2010)
Holden, H., Karlsen, K.H., Pang, P.H.C.: The Hunter–Saxton equation with noise. J. Differ. Equ. 270, 725–786 (2021)
Holden, H., Karlsen, K.H., Pang, P.H.C.: Global well-posedness of the viscous Camassa–Holm equation with gradient noise. Discrete Contin. Dyn. Syst. 43(2), 568–618 (2023)
Hone, A.N.W., Lundmark, H., Szmigielski, J.: Explicit multipeakon solutions of Novikov’s cubically nonlinear integrable Camassa–Holm type equation. Dyn. Partial Differ. Equ. 6(3), 253–289 (2009)
Kato, T.: The Cauchy problem for quasi-linear symmetric hyperbolic systems. Arch. Ration. Mech. Anal. 58(3), 181–205 (1975)
Kato, T., Ponce, G.: Commutator estimates and the Euler and Navier–Stokes equations. Commun. Pure Appl. Math. 41(7), 891–907 (1988)
Koch, H., Tzvetkov, N.: Nonlinear wave interactions for the Benjamin–Ono equation. Int. Math. Res. Not. 30, 1833–1847 (2005)
Kröker, I., Rohde, C.: Finite volume schemes for hyperbolic balance laws with multiplicative noise. Appl. Numer. Math. 62(4), 441–456 (2012)
Krylov, N.V., Rozovskiĭ, B.L.: Stochastic evolution equations. In: Current problems in mathematics, Vol. 14 (Russian), pp. 71–147, 256. Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Informatsii, Moscow (1979)
Lenells, J., Wunsch, M.: On the weakly dissipative Camassa–Holm, Degasperis–Procesi, and Novikov equations. J. Differ. Equ. 255(3), 441–448 (2013)
Li, J., Liu, H., Tang, H.: Stochastic MHD equations with fractional kinematic dissipation and partial magnetic diffusion in \({\mathbb{R} }^2\). Stoch. Process. Appl. 135, 139–182 (2021)
Marinelli, C., Prévôt, C., Röckner, M.: Regular dependence on initial data for stochastic evolution equations with multiplicative Poisson noise. J. Funct. Anal. 258(2), 616–649 (2010)
Miao, Y., Wang, Z., Zhao, Y.: Noise effect in a stochastic generalized Camassa–Holm equation. Commun. Pure Appl. Anal. 21(10), 3529–3558 (2022)
Novikov, V.: Generalizations of the Camassa–Holm equation. J. Phys. A 42(34), Paper No. 342002, 14 pp. (2009)
Ren, P., Tang, H., Wang, F.-Y.: Distribution-path dependent nonlinear SPDEs with application to stochastic transport type equations. arXiv:2007.09188 (2020)
Rohde, C., Tang, H.: On a stochastic Camassa–Holm type equation with higher order nonlinearities. J. Dyn. Differ. Equ. 33(4), 1823–1852 (2021)
Rohde, C., Tang, H.: On the stochastic Dullin–Gottwald–Holm equation: global existence and wave-breaking phenomena. NoDEA Nonlinear Differ. Equ. Appl. 28(1), Paper No. 5, 34 pp. (2021)
Tang, H.: On the pathwise solutions to the Camassa–Holm equation with multiplicative noise. SIAM J. Math. Anal. 50(1), 1322–1366 (2018)
Tang, H.: On stochastic Euler–Poincaré equations driven by pseudo-differential/multiplicative noise (2022). arXiv:2002.08719v4
Tang, H., Liu, Z.: Continuous properties of the solution map for the Euler equations. J. Math. Phys. 55(3), Paper No. 031504, 10 pp. (2014)
Tang, H., Liu, Z.: Well-posedness of the modified Camassa–Holm equation in Besov spaces. Z. Angew. Math. Phys. 66(4), 1559–1580 (2015)
Tang, H., Shi, S., Liu, Z.: The dependences on initial data for the b-family equation in critical Besov space. Monatsh. Math. 177(3), 471–492 (2015)
Tang, H., Wang, F.-Y.: A general framework for solving singular spdes with applications to fluid models driven by pseudo-differential noise (2022). arXiv:2208.08312
Tang, H., Wang, Z.-A.: Strong solutions to a nonlinear stochastic aggregation–diffusion equation. Commun. Contemp. Math. (2023). https://doi.org/10.1142/S0219199722500730
Tang, H., Yang, A.: Noise effects in some stochastic evolution equations: global existence and dependence on initial data. Ann. Inst. Henri Poincaré Probab. Stat. 59(1), 378–410 (2023)
Tang, H., Zhao, Y., Liu, Z.: A note on the solution map for the periodic Camassa–Holm equation. Appl. Anal. 93(8), 1745–1760 (2014)
Taylor, M.: Commutator estimates. Proc. Am. Math. Soc. 131(5), 1501–1507 (2003)
Taylor, M.: Partial Differential Equations III. Applied Mathematical Sciences, vol. 117. Springer, New York (2011)
Wu, S., Yin, Z.: Blow-up and decay of the solution of the weakly dissipative Degasperis–Procesi equation. SIAM J. Math. Anal. 40(2), 475–490 (2008)
Zhou, S., Mu, C.: The properties of solutions for a generalized \(b\)-family equation with Peakons. J. Nonlinear Sci. 23(5), 863–889 (2013)
Acknowledgements
The authors would like to express their gratitude to the anonymous referee for the valuable suggestions, which have led to substantial improvements in this paper. H.T would also like to record his indebtedness to Professor Giulia Di Nunno.
Funding
Open access funding provided by University of Oslo (incl Oslo University Hospital)
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest and data sharing is not applicable to this article since no datasets were generated or analyzed during the current study.
C. R. is funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy–EXC 2075–390740016. The major part of this work was carried out when H. T. was supported by the Alexander von Humboldt Foundation.
Appendices
Appendix: Auxiliary results
In this appendix we formulate and prove some estimates employed in the above proofs. We first recall the Friedrichs mollifier \(J_{\varepsilon }\) defined as
where \(\star \) stands for convolution, \(j_{\varepsilon }(x)=\frac{1}{\varepsilon }j(\frac{x}{\varepsilon })\) and j(x) is a Schwartz function whose Fourier transform satisfies \(\widehat{j}(\xi )\in [0,1]\) for all \(\xi \in \mathbb R\) and \(\widehat{j}(\xi )=1\) for \(\xi \in [-1,1]\). From the above construction, we have
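On the Fourier side, the construction (A.1) is a smooth low-pass cut-off; schematically:

```latex
% Friedrichs mollifier: J_\varepsilon u = j_\varepsilon \star u acts as a Fourier multiplier,
\widehat{J_\varepsilon u}(\xi) = \widehat{j}(\varepsilon \xi)\, \widehat{u}(\xi),
\qquad \widehat{j}(\varepsilon \xi) = 1 \ \text{for}\ |\xi| \le \tfrac{1}{\varepsilon},
% so that, since 0 \le \widehat{j} \le 1, we have
% \|J_\varepsilon u\|_{H^s} \le \|u\|_{H^s} uniformly in \varepsilon \in (0,1).
```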
Lemma A.1
[41, 48] For all \(\varepsilon \in (0,1)\), \(s,r\in \mathbb R\) and \(u\in H^s\), \(J_\varepsilon \) constructed in (A.1) satisfies
and
where \(\mathcal L(\mathcal X;\mathcal Y)\) is the space of bounded linear operators from \(\mathcal X\) to \(\mathcal Y\).
Lemma A.2
[58] Let f, g be two functions such that \(g\in W^{1,\infty }\) and \(f\in L^2\). Then for some \(C>0\),
Lemma A.3
[36] If \(f\in H^s\bigcap W^{1,\infty },\ g\in H^{s-1}\bigcap L^{\infty }\) for \(s>0\), then there exists a constant \(C_s>0\) such that
Besides, if \(s>0\), then we have for all \(f,g \in H^s\bigcap L^{\infty }\),
Lemma A.4
(Proposition 4.2, [57]) Let \(\rho >3/2\) and \(0\le \eta +1\le \rho \). We have for some \(c>0\),
Lemma A.5
For \(F(\cdot )\) defined in (1.5), we have for all \(k\ge 1\) the following estimates:
Proof
We only estimate \(\Vert F(v)\Vert _{H^s}\) for \(0<s\le 3/2\) since the other cases can be found in [46, 52, 56]. When \(s>0\), by using (1.5) and Lemma A.3, we derive
When \(k\ge 2\), we have
When \(k=1\), \(F_2(v)=\frac{1}{2}(1-\partial _{x}^2)^{-1}\partial _x\left( v_x^2\right) \) and hence
Combining the above two cases for \(F_2\), we arrive at
Now we consider \(F_3\). When \(k\ge 3\), we have
When \(k=2\), we have \(F_3(v)=\frac{1}{2}(1-\partial _{x}^2)^{-1}\left( v_x^3\right) \) and then
Combining the above two cases for \(F_3\), and noting that \(F_3=0\) for \(k=1\), we find
Then the desired estimate is a consequence of (A.2), (A.3) and (A.4). \(\square \)
Lemma A.6
Let \(s>3/2\), \(k\ge 1\), \(F(\cdot )\) be given in (1.5) and \(J_\varepsilon \) be the mollifier defined in (A.1). There exists a constant \(\lambda _{s}>0\) such that for all \(\varepsilon >0\),
If \(u\in H^{s+1}\), then \(u^ku_x\in H^{s}\), and the above estimate also holds true without \(J_\varepsilon \).
Proof
We only prove the case that \(u\in H^s\). It follows from Lemmas A.1, A.2 and A.3, integration by parts and \(H^s\hookrightarrow W^{1,\infty }\) that
From Lemma A.5, we also have
Combining the above two inequalities gives rise to the desired estimate of the lemma. \(\square \)
The following technique has been used in [4, 5, 25]. Here we formulate it as an abstract result.
Lemma A.7
Suppose \(u_0\) is an \(H^s\)-valued \(\mathcal {F}_0\)-measurable random variable, and suppose \(H_1(1)\) holds true. Let I be a countable index set and let \(\{{\Omega }_i\}_{i\in I}\) satisfy
If \((u_i,\tau _i)\) with \(i\in I\) is a solution to (1.4) with initial value \({\textbf {1}}_{{\Omega }_i}u_0\), then
is a solution to (1.4) with initial data \(u_0\).
Proof
Since \((u_i,\tau _i)\) is a solution to (1.4) with initial value \(u_0{\textbf {1}}_{{\Omega }_i}\), we find
Therefore, we restrict the above equation to \({\Omega }_i\) and we obtain
It is clear that almost surely,
By \(H_1(1)\), we have \(\Vert h(t,\textbf{0})\Vert _{\mathcal L_2(\mathfrak U;H^s)}<\infty \). Then, from the above three equations, we have that almost surely
which means that \(({\textbf {1}}_{{\Omega }_i}u_i,{\textbf {1}}_{{ \Omega }_i}\tau _i)\) also solves (1.4) with initial data \({\textbf {1}}_{{\Omega }_i}u_{0}\). Summing both sides of the above equation over \(i\in I\) and noting (A.5), we derive that (A.6) is a solution to (1.4) with initial data \(u_0\) almost surely. Indeed, for the initial data, we have \(u_0=\sum _{i\in I}{} {\textbf {1}}_{{\Omega }_i}u_0\) \({\mathbb P}\text {-}\mathrm{a.s.}\) For the nonlinear term \(u^k\partial _x u\), by (A.5), we have that \({\mathbb P}\text {-}\mathrm{a.s.}\),
The other terms can be justified in the same way; we omit the details. \(\square \)
Finally, we recall the following estimate on the product of a Schwartz function and a trigonometric function.
Lemma A.8
[29, 37] Let \(\mathscr {S}(\mathbb R)\) be the set of Schwartz functions. Let \(\delta >0\) and \(\alpha \in \mathbb R\). Then for any \(r\ge 0\) and \(\psi \in \mathscr {S}(\mathbb R)\), we have that
Relation (A.7) is also true if \(\cos \) is replaced by \(\sin \).
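The estimate (A.7), in the form established in [29, 37] and used throughout Section 4, is an exact asymptotic for the \(H^r\)-norm of the modulated profile; schematically (the normalization constant is the one usually quoted, stated here as an assumption):

```latex
% Asymptotics behind (A.7), following [29, 37] (normalization assumed):
\lim_{n \to \infty} n^{-\frac{\delta}{2} - r}
  \left\| \psi\!\left( \frac{x}{n^{\delta}} \right) \cos\!\left( n x - \alpha \right) \right\|_{H^r}
  = \frac{1}{\sqrt{2}}\, \| \psi \|_{L^2},
\qquad \psi \in \mathscr{S}(\mathbb{R}),\ r \ge 0 .
```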
Appendix: Proof for Lemma 4.2
As \(u_{m,n}=u_l+u_h\) is explicitly given, we first estimate \(E_i\) (\(i=1,2,3,4\)). Let \(T_l>0\) be given in Lemma 4.1 such that \(u_l\) exists on \([0,T_l]\) for all \(n\gg 1\) and (4.9) is satisfied.
(i) Estimating \(\Vert E_1\Vert _{H^{\rho _0}}\). We apply the embedding \(H^{\rho _0}\hookrightarrow L^\infty \), Lemmas 4.1 and A.8 to obtain
Next, we estimate \(\Vert u_l^k(0)-u_l^k(t)\Vert _{H^{\rho _0}}\). Using the fundamental theorem of calculus and the algebra property, we have that for all \(k\ge 1\) and \(t\in [0,T_l]\),
Using (4.5) with \(t\in [0,T_l]\), (1.5), Lemmas A.5 and 4.1 and the embedding \(H^{\rho _0+1}\hookrightarrow W^{1,\infty }\), we get
which implies
Again, applying the algebra property and using Lemmas 4.1, A.8 and (4.7), we have that for all \(k\ge 1\) and \(t\in [0,T_l]\),
Here we used the facts that \(-s+\rho _0-\frac{\delta }{2}+\frac{1}{k}<-s+1-\frac{\delta }{2}+1=-s+2-\frac{\delta }{2}<-\frac{1}{2}-\frac{\delta }{2}<0\) for all \(k\ge 1\), which means that the term corresponding to \(j=1\) dominates.
For the last term \(\mathcal Z_k\partial _xu_h\), by using (4.4), (4.7), Lemma A.3, Lemmas 4.1 and A.8, we obtain that
When \(k\ge 1\), \(-s+\rho _0-\frac{\delta }{2}+\frac{1}{k}<0\) and \(-s-\delta +\frac{1}{k}<0\), therefore, both sums are bounded by \(n^{-2s+\rho _0+\frac{1}{k}+(k-2)\frac{\delta }{2}}\). Furthermore, when \(k\ge 1\), \(-2\,s+\rho _0+\frac{1}{k}+(k-2)\frac{\delta }{2}\le r_s\), which means
Finally, inserting (B.2), (B.3) and (B.4) into (B.1), we arrive at
(ii) Estimating \(\Vert E_2\Vert _{H^{\rho _0}}\). For \(E_2\), we first recall (4.7). Applying the embedding \(H^{\rho _0}\hookrightarrow L^\infty \) and Lemma 4.1, and then taking the dominated term \(j=1\), we find that for all \(k\ge 1\) and \(t\in [0,T_l]\),
(iii) Estimating \(\Vert E_3\Vert _{H^{\rho _0}}\). As in the estimate for (B.4), we have obtained that
When \(k=1\), \(Z_{k-1}=0\) and then we find
Since \(\delta >0\), \(-2s+2-\frac{\delta }{2}-r_s=-s+3-\rho _0-\frac{3}{2}\delta<-\frac{5}{2}+3-\frac{1}{2}-\frac{3}{2}\delta =-\frac{3}{2}\delta <0\), hence
When \(k\ge 2\), we can use the above estimate, Lemma 4.1, the facts \(\Vert f\Vert _{H^{\rho _0-1}}\le \Vert f\Vert _{L^2}\) and \(\Vert fg\Vert _{L^2}\le \Vert f\Vert _{L^2}\Vert g\Vert _{L^{\infty }}\), and take the dominant term \(j=1\) to obtain
Combining (B.8) and (B.7), we have the following conclusion for \(E_3\):
(iv) Estimating \(\Vert E_4\Vert _{H^{\rho _0}}\). For \(k=1\), \(E_4=0\) since \(F_3\) disappears. When \(k=2\), \(\mathcal Z_{k-2}=0\) and then
Finally, for \(k\ge 3\) and \(t\in [0,T_l]\),
Repeating the analysis in (B.8), one has that for \(k\ge 3\) and \(t\in [0,T_l]\),
and therefore it suffices to estimate the different terms, which are
Combining the above estimates, we get
(v) Estimating \(\Vert \mathcal E\Vert _{H^{\rho _0}}\). Let \(T_l>0\) be given in Lemma 4.1 such that \(T_l\) does not depend on n. Let \(t\in [0,T_l]\), by virtue of the Itô formula and (4.13), we derive that
Taking the supremum over \(t\in [0,T_l]\) and then using the BDG inequality gives
For any fixed \(s>\frac{5}{2}\), since \(\Vert u_{m,n}\Vert _{H^s}\lesssim 1\), on account of (1.10), Lemmas A.8 and 4.1, we can pick \(N=N(s,k)\gg 1\) such that
This, (B.5), (B.6), (B.9) and (B.10) yield
Obviously, for each \(n\ge 1\), \(\mathbb E\sup _{t\in [0,T_l]}\Vert \mathcal E(t)\Vert _{H^{\rho _0}}^2\) is finite. Then by the Grönwall inequality, we have
Miao, Y., Rohde, C. & Tang, H. Well-posedness for a stochastic Camassa–Holm type equation with higher order nonlinearities. Stoch PDE: Anal Comp 12, 614–674 (2024). https://doi.org/10.1007/s40072-023-00291-z
Keywords
- Stochastic generalized Camassa–Holm equation
- Noise effect
- Exiting time
- Dependence on initial data
- Global existence