Abstract
In this paper, we present a globalization argument for stochastic nonlinear dispersive PDEs with additive noises by adapting the I-method (= the method of almost conservation laws) to the stochastic setting. As a model example, we consider the defocusing stochastic cubic nonlinear Schrödinger equation (SNLS) on \({\mathbb {R}}^3\) with additive stochastic forcing, white in time and correlated in space, such that the noise lies below the energy space. By combining the I-method with Ito's lemma and a stopping time argument, we construct global-in-time dynamics for SNLS below the energy space.
Introduction
Stochastic nonlinear Schrödinger equation
We consider the Cauchy problem for the stochastic nonlinear Schrödinger equation (SNLS) with an additive noise:
$$\begin{aligned} {\left\{ \begin{array}{l} i \partial _tu + \Delta u = \pm |u|^{p-1} u + \phi \xi \\ u|_{t = 0} = u_0, \end{array}\right. } \qquad (t, x) \in {\mathbb {R}}\times {\mathbb {R}}^d, \end{aligned}$$(1.1)
where \(\xi (t,x)\) denotes a (Gaussian) space–time white noise on \({\mathbb {R}}\times {\mathbb {R}}^d\) and \(\phi \) is a bounded operator on \(L^2({\mathbb {R}}^d)\). In this paper, we restrict our attention to the defocusing case. Our main goal is to establish global well-posedness of (1.1) in the energy-subcritical case with a rough noise, namely, with a noise not belonging to the energy space \(H^1({\mathbb {R}}^d)\). Here, the energy-subcriticality refers to the following range of p: (i) \(1< p < 1 + \frac{4}{d-2}\) for \(d \ge 3\) and (ii) \(1< p < \infty \) for \(d = 1, 2\). In terms of the scaling-critical regularity \(s_\mathrm{crit}\) defined by
$$\begin{aligned} s_\mathrm{crit} = \frac{d}{2} - \frac{2}{p-1}, \end{aligned}$$
the energy-subcriticality is equivalent to the condition \( s_\mathrm{crit} < 1\).
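For instance, in the three-dimensional cubic case studied below (\(d = 3\), \(p = 3\)), the standard formula \(s_\mathrm{crit} = \frac{d}{2} - \frac{2}{p-1}\) gives
$$\begin{aligned} s_\mathrm{crit} = \frac{3}{2} - 1 = \frac{1}{2} < 1, \end{aligned}$$
so that this case is indeed energy-subcritical.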
We say that u is a solution to (1.1) on a given time interval \([-T, T]\) if it satisfies the following Duhamel formulation (= mild formulation):
$$\begin{aligned} u(t) = S(t) u_0 - i \int _0^t S(t - t') |u|^{p-1} u (t') \, dt' - i \int _0^t S(t - t') \phi \, dW(t') \end{aligned}$$(1.2)
in \(C([-T, T]; B({\mathbb {R}}^d))\), where \(S(t) = e^{it \Delta }\) denotes the linear Schrödinger propagator and \(B({\mathbb {R}}^d)\) is a suitable Banach space of functions on \({\mathbb {R}}^d\). In this paper, we take \(B({\mathbb {R}}^d)\) to be the \(L^2\)-based Sobolev space \(H^s({\mathbb {R}}^d)\) for some suitable \(s \in {\mathbb {R}}\). We say that u is a global solution to (1.1) if (1.2) holds in \(C([-T, T]; B({\mathbb {R}}^d))\) for any \(T> 0\). We often construct a solution u belonging to \(C([-T, T]; B({\mathbb {R}}^d))\cap X([-T, T])\), where \(X([-T, T])\) denotes some auxiliary function space such as the Strichartz spaces \(L^q([-T, T]; W^{s, r}({\mathbb {R}}^d))\); see [15, 32]. For our purpose, we take this auxiliary function space \(X([-T, T])\) to be (the local-in-time version of) the Fourier restriction norm space (namely, the \(X^{s, b}\)-space defined in (2.1)).
The last term on the right-hand side of (1.2) represents the effect of the stochastic forcing and is called the stochastic convolution, which we denote by \(\Psi \):
$$\begin{aligned} \Psi (t) = - i \int _0^t S(t - t') \phi \, dW(t'). \end{aligned}$$(1.3)
See Sect. 2.3 for the precise meaning of the definition (1.3); see (2.8) and (2.9). In the following, we assume that \(\phi \in {HS }(L^2; H^s)\) for appropriate values of \(s \ge 0\), namely, \(\phi \) is taken to be a Hilbert–Schmidt operator from \(L^2({\mathbb {R}}^d)\) to \(H^s({\mathbb {R}}^d)\). It is easy to see that \(\phi \in {HS }(L^2; H^s)\) implies \(\Psi \in C({\mathbb {R}}; H^s({\mathbb {R}}^d))\) almost surely; see [14]. Our main interest is to study (1.1) when \(\phi \in {HS }(L^2; H^s)\) for \(s < 1\) such that the stochastic convolution does not belong to the energy space \(H^1({\mathbb {R}}^d)\).
When \(\phi = 0\), Eq. (1.1) reduces to the (deterministic) defocusing nonlinear Schrödinger equation (NLS):
$$\begin{aligned} i \partial _tu + \Delta u = |u|^{p-1} u. \end{aligned}$$(1.4)
A standard contraction argument with the Strichartz estimates (see (2.5) below) yields local well-posedness of (1.4) in \(H^s({\mathbb {R}}^d)\) when \(s \ge \max (s_\mathrm{crit}, 0)\); see [5, 18, 25, 38].^{Footnote 1} On the other hand, (1.4) is known to be ill-posed in the scaling-supercritical regime: \(s < s_\mathrm{crit}\). See [6, 28, 30]. In the energy-subcritical case, global well-posedness of (1.4) in \(H^1({\mathbb {R}}^d)\) easily follows from iterating the local-in-time argument in view of the following conservation laws for (1.4):
$$\begin{aligned} M(u)(t)&= \int _{{\mathbb {R}}^d} |u(t, x)|^2 \, dx,\\ E(u)(t)&= \frac{1}{2} \int _{{\mathbb {R}}^d} |\nabla u(t, x)|^2 \, dx + \frac{1}{p+1} \int _{{\mathbb {R}}^d} |u(t, x)|^{p+1} \, dx, \end{aligned}$$(1.5)
providing a global-in-time a priori control on the \(H^1\)-norm of a solution to (1.4).
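Indeed, in the defocusing case both terms in the energy are non-negative, and hence the conservation of the mass M(u) and the energy E(u) immediately yields
$$\begin{aligned} \Vert u(t) \Vert _{H^1}^2 \le M(u_0) + 2 E(u_0) \end{aligned}$$
for all t in the lifespan of the solution, which allows one to iterate the local theory indefinitely.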
There are analogues of these well-posedness results in the context of SNLS (1.1). de Bouard and Debussche [15] studied (1.1) in the energy-subcritical setting, assuming that \(\phi \in {HS }(L^2;H^1)\). By using the Strichartz estimates, they showed that the stochastic convolution \(\Psi \) almost surely belongs to a right Strichartz space, which allowed them to prove local well-posedness of (1.1) in \(H^1({\mathbb {R}}^d)\). When \(s \ge \max (s_\mathrm{crit}, 0)\), a slight modification of the argument in [15] and the improved space–time regularity of the stochastic convolution (see Lemma 2.2 below) yield local well-posedness of (1.1) in \(H^s({\mathbb {R}}^d)\), provided that \(\phi \in {HS }(L^2; H^s)\). In the energy-subcritical case, one can adapt the globalization argument for the deterministic NLS (1.4), based on the conservation laws (1.5), to the stochastic setting with a sufficiently regular noise. More precisely, assuming \(\phi \in {HS }(L^2; H^1)\), de Bouard and Debussche [15] proved global well-posedness of (1.1) in \(H^1({\mathbb {R}}^d)\) by applying Ito's lemma to the mass M(u) and the energy E(u) in (1.5) and establishing an a priori \(H^1\)-bound on solutions to (1.1). In this paper, we also consider the energy-subcritical case but we treat a rougher noise: \(\phi \in {HS }(L^2; H^s)\) for \(s < 1\).
In the deterministic setting, Colliander et al. [8] introduced the so-called I-method (also known as the method of almost conservation laws) and proved global well-posedness of the energy-subcritical defocusing cubic NLS ((1.4) with \(p = 3\)) on \({\mathbb {R}}^d\), \(d = 2, 3\), below the energy space. Since then, the I-method has been applied to a wide class of dispersive models to establish global well-posedness below the energy spaces (or, more generally, below the regularities associated with conservation laws), where there is no a priori bound on the relevant norms (for iterating a local-in-time argument) directly given by a conservation law. Our strategy for proving global well-posedness of SNLS (1.1) when \(\phi \in {HS }(L^2; H^s)\), \(s < 1\), is to implement the I-method in the stochastic PDE setting. This will provide a general framework for establishing global well-posedness of stochastic dispersive equations with additive noises below energy spaces.
Main result
For the sake of concreteness, we consider SNLS (1.1) in the three-dimensional cubic case (\(d = 3\) and \(p = 3\)):
$$\begin{aligned} i \partial _tu + \Delta u = |u|^{2} u + \phi \xi , \qquad (t, x) \in {\mathbb {R}}\times {\mathbb {R}}^3. \end{aligned}$$(1.6)
We point out, however, that our implementation of the I-method in the stochastic PDE setting is sufficiently general and can be easily adapted to other dispersive models with rough additive stochastic forcing. We now state our main result.
Theorem 1.1
Let \( d=3 \). Suppose that \(\phi \in {HS }(L^2; H^s)\) for some \(s>\frac{5}{6}\). Then, the defocusing stochastic cubic NLS (1.6) on \({\mathbb {R}}^3\) is globally well-posed in \(H^s({\mathbb {R}}^3)\).
Our main goal is to present an argument which combines the I-method in [8] with the Ito calculus approach in [15]. Note that the regularity range \(s > \frac{5}{6}\) in Theorem 1.1 agrees with the regularity range in the deterministic case [8]. We expect that this regularity range may be improved by employing more sophisticated tools such as the resonant decomposition [11]; see Remark 1.5. In view of the global well-posedness result in \(H^1({\mathbb {R}}^3)\) by de Bouard and Debussche [15], we only consider \(\frac{5}{6}< s < 1\) in the following.
Let us first go over the main idea of the I-method argument in [8] applied to the deterministic cubic NLS on \({\mathbb {R}}^3\), i.e., (1.6) with \(\phi = 0\). Fix \(u_0 \in H^s({\mathbb {R}}^3)\) for some \(\frac{5}{6} < s \le 1\). Then, the standard Strichartz theory yields local well-posedness of (1.4) with \(u|_{t= 0} = u_0\) in the subcritical sense, namely, the time of local existence depends only on the \(H^s\)-norm of the initial data \(u_0\). Hence, once we obtain an a priori control of the \(H^s\)-norm of the solution, we can iterate the local-in-time argument and prove global existence. When \(s = 1\), the conservation of the mass and energy in (1.5) provides a global-in-time a priori control of the \(H^1\)-norm of the solution. When \(\frac{5}{6}< s < 1\), the conservation of the energy E(u) is no longer available (since \(E(u) = \infty \) in general), while the mass M(u) is still finite and conserved. Therefore, the main goal is to control the growth of the homogeneous Sobolev \(\dot{H}^s\)-norm of the solution.
Unlike the \(s = 1\) case, we do not aim to obtain a global-in-time boundedness of the \(\dot{H}^s\)-norm of the solution. Instead, the goal is to show that, given any large target time \(T\gg 1\), the \(\dot{H}^s\)-norm of the solution remains finite on the time interval [0, T], with a bound depending on T. The main idea of the I-method is to introduce a smoothing operator \(I = I_N\), known as the I-operator, mapping \(H^s({\mathbb {R}}^3)\) into \(H^1({\mathbb {R}}^3)\). Here, the I-operator depends on a parameter \(N = N(T) \gg 1\) (to be chosen later) such that \(I_N \) acts essentially as the identity operator on low frequencies \(\{|\xi | \lesssim N\}\) and as a fractional integration operator of order \(1 - s\) on high frequencies \(\{|\xi | \gg N\}\); see Sect. 3 for the precise definition. Thanks to the smoothing of the I-operator, the modified energy:
$$\begin{aligned} E(I_N u) = \frac{1}{2} \int _{{\mathbb {R}}^3} |\nabla I_N u|^2 \, dx + \frac{1}{4} \int _{{\mathbb {R}}^3} |I_N u|^{4} \, dx \end{aligned}$$
is finite for \(u \in H^s({\mathbb {R}}^3)\). Moreover, the modified energy \(E(I_N u)\) controls \(\Vert u\Vert _{\dot{H}^s}^2\). See (3.3). Hence, the main task is reduced to controlling the growth of the modified energy \(E(I_N u)\).
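Let us briefly indicate how this control works. In the defocusing case, both terms in the modified energy are non-negative, and hence, together with the conservation of the mass, we (formally) have a bound of the form
$$\begin{aligned} \Vert u(t) \Vert _{H^s}^2 \lesssim \Vert I_N u(t) \Vert _{H^1}^2 \le M(u(t)) + 2 E(I_N u)(t), \end{aligned}$$
where the first inequality follows from comparing the Fourier multipliers of \(|\nabla |^s\) and of \(\langle \nabla \rangle I_N\) (see Sect. 3 for the multiplier defining \(I_N\)).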
While the energy E(u) is conserved for (smooth) solutions to NLS (1.4), the modified energy \(E(I_N u)\) is no longer conserved since \(I_N u\) does not satisfy the original equation. Instead, \(I_N u\) satisfies the following I-NLS:
$$\begin{aligned} i \partial _tI_N u + \Delta I_N u = I_N {\mathcal {N}}(u), \end{aligned}$$(1.7)
where \({\mathcal {N}}(u) = |u|^2 u\) denotes the cubic nonlinearity. The commutator term
$$\begin{aligned} \, [I_N, {\mathcal {N}}](u) = I_N {\mathcal {N}}(u) - {\mathcal {N}}(I_N u) \end{aligned}$$(1.8)
is the source of non-conservation of the modified energy \(E(I_N u)\). A direct computation shows
$$\begin{aligned} \frac{d}{dt} E(I_N u)(t) = {\text {Re}} \int _{{\mathbb {R}}^3} \overline{\partial _tI_N u} \, \big ( |I_N u|^2 I_N u - I_N (|u|^2 u) \big ) \, dx. \end{aligned}$$
See (5.1). Thanks to the commutator structure, it is possible to obtain a good estimate (with a decay in the large parameter N) for \(\partial _tE(I_N u)\) on each local time interval (see Proposition 4.1 in [8]). Then, by using a scaling argument (with a parameter \(\lambda = \lambda (T) \gg 1 \), depending on the target time T), we (i) first reduce the situation to the small data setting, (ii) then iterate the local-in-time argument with a good bound on \(\partial _tE(I_N u^\lambda )\) for the scaled solution \(u^\lambda \), and (iii) choose \(N = N(T) \gg 1\) sufficiently large such that the scaled target time \(\lambda ^2 T\) is (at most) the doubling time for the modified energy \(E(I_N u^\lambda )\). This yields the regularity restriction \(s > \frac{5}{6}\) in [8].
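For the reader's orientation, let us sketch the numerology behind the restriction \(s > \frac{5}{6}\) (with the exponents as in [8]; we suppress small corrections of size \(N^{0+}\)). The almost conservation law yields an energy increment of size \(O(N^{-1})\) on each time interval of local existence, while the scaling \(u \mapsto u^\lambda \) and the smoothing bound \(\Vert I_N f \Vert _{\dot{H}^1} \lesssim N^{1-s} \Vert f \Vert _{\dot{H}^s}\) show that the initial modified energy is O(1) provided
$$\begin{aligned} N^{1-s} \lambda ^{\frac{1}{2} - s} \sim 1, \qquad \text {i.e.} \qquad \lambda \sim N^{\frac{1-s}{s - \frac{1}{2}}}. \end{aligned}$$
Iterating over \(\sim \lambda ^2 T\) intervals of length O(1), the total energy increment remains O(1) as long as \(\lambda ^2 T \cdot N^{-1} \lesssim 1\), which can be achieved by taking \(N = N(T) \gg 1\) precisely when \(\frac{2(1-s)}{s - \frac{1}{2}} < 1\), namely, when \(s > \frac{5}{6}\).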
Let us turn to the case of the stochastic NLS (1.6). In proceeding with the I-method, we need to estimate the growth of the modified energy \(E(I_Nu)\). In this stochastic setting, we have two sources of non-conservation of \(E(I_N u)\). The first one is the commutator term \([I_N, {\mathcal {N}}](u)\) in (1.8), as in the deterministic case described above. This term can be handled almost in the same manner as in [8], but some care must be taken due to a weaker regularity in time (\(b < \frac{1}{2}\)). See Proposition 4.1. The second source of non-conservation of \(E(I_N u)\) is the stochastic forcing. In particular, in estimating the growth of the modified energy \(E(I_N u)\), we need to apply Ito's lemma to \(E(I_N u)\), which introduces several correction terms.
In the deterministic case [8], one iteratively applies the local-in-time argument and estimates the energy increment on each local time interval. A naive adaptation of this argument to the stochastic setting would lead to iterative applications of Ito's lemma to estimate the growth of the modified energy \(E(I_N u)\). In controlling an expression of the form
we need to apply the Burkholder–Davis–Gundy inequality, which introduces a multiplicative constant \(C>1\). See Lemma 5.1. Namely, if we were to apply Ito's lemma iteratively on each time interval of local existence, then this would lead to an exponential growth of the constant in front of the modified energy, causing the iteration argument to break down.
We instead apply Ito’s lemma only once on the global time interval \([0, \lambda ^2T]\). At the same time, we estimate the contribution from the commutator term iteratively on each local time interval. Note that this latter task requires a small data assumption, which we handle by introducing a suitable stopping time and iteratively verifying such a small data assumption. See Sect. 6.
As in the deterministic setting, we employ a scaling argument to reduce the problem to the small data regime. In the stochastic setting, we need to proceed with care in applying a scaling to the noise \(\phi \xi \) since we need to apply Ito’s lemma after scaling. Namely, we need to express the scaled noise as \(\phi ^\lambda \xi ^\lambda \), where \( \xi ^\lambda \) is another space–time white noise (defined by the white noise scaling; see (3.18)) such that Ito calculus can be applied. This forces us to study the scaled Hilbert–Schmidt operator \(\phi ^\lambda \). In the application of Ito’s lemma, there are correction terms due to \(I_N \phi ^\lambda \) besides the commutator term \([I_N, {\mathcal {N}}](u^\lambda )\). In order to carry out an iterative procedure, we need to make sure that the contribution from the correction terms involving \(I_N \phi ^\lambda \) is negligible as compared to that from the commutator term. See Sects. 3.3 and 6. As a result, the regularity restriction \(s > \frac{5}{6}\) comes from the commutator term as in the deterministic case.
We conclude this introduction with several remarks.
Remark 1.2
In this paper, we implement the I-method in the stochastic PDE setting. There is a recent work [23] by Gubinelli, Koch, Tolomeo, and the third author, establishing global well-posedness of the (renormalized) defocusing stochastic cubic nonlinear wave equation on the two-dimensional torus \({\mathbb {T}}^2\), forced by space–time white noise. The I-method was also employed in [23]. We point out that our argument in this paper is a genuine extension of the I-method to the stochastic setting, which can be applied to a wide class of stochastic dispersive equations. On the other hand, in [23], the I-method was applied to the residual term \(v = u - \Psi _{\text {wave}}\) in the Da Prato–Debussche trick [13], where \(\Psi _{\text {wave}}\) denotes the stochastic convolution in the wave setting. Furthermore, the I-method argument in [23] is pathwise, namely, entirely deterministic once we take the pathwise regularity of \(\Psi _{\text {wave}}\) (and its Wick powers) from [21].
Remark 1.3
As in the usual application of the I-method in the deterministic setting, our implementation of the I-method in the stochastic setting yields a polynomial-in-time growth bound on the \(H^s\)-norm of a solution. See Remark 6.2. We point out that the I-method approach to the singular stochastic nonlinear wave equation on \({\mathbb {T}}^2\) with the defocusing cubic nonlinearity in [23] yields a much worse double exponential growth bound.
Remark 1.4
In a recent paper [31], the third author and Okamoto studied SNLS (1.1) with an additive noise in the mass-critical case (\(p = 1+ \frac{4}{d}\)) and the energy-critical case (\(p = 1+ \frac{4}{d-2}\), \(d \ge 3\)). By adapting the recent deterministic mass-critical and energy-critical global theory, they proved global well-posedness of (1.1) in the critical spaces. In particular, when \(d = 2\) and \(p = 3\), this yields global well-posedness of the two-dimensional defocusing stochastic cubic NLS in \(L^2({\mathbb {R}}^2)\). This is the reason why we only consider the three-dimensional case in Theorem 1.1: our I-method argument would yield global well-posedness only for \(s > \frac{4}{7}\) in the two-dimensional cubic case (just as in the deterministic case [8]), which is subsumed by the aforementioned global well-posedness result in [31].
Remark 1.5
In an application of the I-method, it is possible to introduce a correction term (away from a nearly resonant part) and improve the regularity range. See [11]. It would be of interest to implement such an argument in the stochastic PDE setting since a computation of a correction term would involve Ito's lemma.
Remark 1.6
We mentioned that our implementation of the I-method in the stochastic PDE setting is sufficiently general and is applicable to other dispersive equations forced by an additive noise. This is conditional on the assumption that a commutator term can be treated with a weaker temporal regularity \(b < \frac{1}{2}\). In the case of SNLS, this can be achieved by a simple interpolation argument, at a slight loss of spatial regularity. See Sect. 4. See also [7] for an analogous argument in the periodic case. In this regard, it is of interest to study the stochastic KdV equation in negative Sobolev spaces since the crucial estimates for KdV require the temporal regularity to be \(b = \frac{1}{2}\). See [3, 10, 26].
This paper is organized as follows. In Sect. 2, we go over the preliminary materials from deterministic and stochastic analysis. We then reduce the proof of Theorem 1.1 to controlling the homogeneous \(\dot{H}^s\)-norm of a solution (Remark 2.4). In Sect. 3, we introduce the I-operator and go over local well-posedness of I-SNLS (3.4). Then, we discuss the scaling properties of I-SNLS in Sect. 3.3. In Sect. 4, we briefly go over the nonlinear estimates, indicating the required modifications from [8]. In Sect. 5, we apply Ito calculus to bound the modified energy in terms of a term involving the commutator \([I_N, {\mathcal {N}}]\). Lastly, we put all the ingredients together and present a proof of Theorem 1.1 in Sect. 6.
Preliminaries
In this section, we first introduce notations and function spaces along with the relevant linear estimates. We also go over preliminary lemmas from stochastic analysis. We then discuss a reduction of the proof of Theorem 1.1; see Remark 2.4.
Notations
For simplicity, we drop the factors of \( 2\pi \) in dealing with the Fourier transforms. We first recall the Fourier restriction norm spaces \( X^{s,b}({\mathbb {R}}\times {\mathbb {R}}^d) \) introduced by Bourgain [2]. The \(X^{s, b}\)-space is defined by the norm:
$$\begin{aligned} \Vert u \Vert _{X^{s, b}} = \big \Vert \langle \xi \rangle ^s \langle \tau + |\xi |^2 \rangle ^b \widehat{u}(\tau , \xi ) \big \Vert _{L^2_{\tau , \xi }({\mathbb {R}}\times {\mathbb {R}}^d)}, \end{aligned}$$(2.1)
where \( \langle \,\cdot \, \rangle = (1 + |\cdot |^2)^\frac{1}{2} \). When \(b > \frac{1}{2}\), we have the following embedding:
$$\begin{aligned} X^{s, b}({\mathbb {R}}\times {\mathbb {R}}^d) \subset C({\mathbb {R}}; H^s({\mathbb {R}}^d)). \end{aligned}$$(2.2)
Given \(\delta > 0\), we define the local-in-time version \( X^{s,b}_{\delta } \) on \( [0,\delta ] \times {\mathbb {R}}^d \) by
$$\begin{aligned} \Vert u \Vert _{X^{s, b}_{\delta }} = \inf \big \{ \Vert v \Vert _{X^{s, b}} : v|_{[0, \delta ] \times {\mathbb {R}}^d} = u \big \}. \end{aligned}$$(2.3)
Given a time interval \(J \subset {\mathbb {R}}\), we also define the local-in-time version \( X^{s,b}(J)\) in an analogous manner.
When we work with space–time function spaces, we use shorthand notations such as \(C_T H^s_x = C([0, T]; H^s({\mathbb {R}}^d))\).
We write \( A \lesssim B \) to denote an estimate of the form \( A \le CB \). Similarly, we write \( A \sim B \) to denote \( A \lesssim B \) and \( B \lesssim A \), and use \( A \ll B \) when we have \(A \le c B\) for small \(c > 0\). We may use subscripts to denote dependence on external parameters; for example, \(A\lesssim _{p, q} B\) means \(A\le C(p, q) B\), where the constant C(p, q) depends on the parameters p and q. We also use \( a+ \) (and \( a- \)) to mean \( a + \varepsilon \) (and \( a-\varepsilon \), respectively) for arbitrarily small \( \varepsilon >0 \). As is common in probability theory, we use \(A\wedge B\) to denote \(\min (A, B)\).
Given dyadic \(M \ge 1\), we use \({\mathbf {P}}_M\) to denote the Littlewood–Paley projector onto the frequencies^{Footnote 2}\(\{|\xi | \sim M\}\) such that
$$\begin{aligned} f = \sum _{\begin{array}{c} M \ge 1 \\ \text {dyadic} \end{array}} {\mathbf {P}}_M f. \end{aligned}$$
In view of the time reversibility of the problem, we only consider positive times in the following.
Linear estimates
We first recall the Strichartz estimates. We say that a pair of indices (q, r) is Strichartz admissible if \(2\le q, r \le \infty \), \((q, r, d) \ne (2, \infty , 2)\), and
$$\begin{aligned} \frac{2}{q} + \frac{d}{r} = \frac{d}{2}. \end{aligned}$$
Then, given any admissible pairs (q, r) and \((\tilde{q}, \tilde{r})\), the following Strichartz estimates are known to hold:
$$\begin{aligned} \Vert S(t) f \Vert _{L^q_tL^r_x({\mathbb {R}}\times {\mathbb {R}}^d)}&\lesssim \Vert f \Vert _{L^2_x},\\ \bigg \Vert \int _0^t S(t - t') F(t') \, dt' \bigg \Vert _{L^q_tL^r_x({\mathbb {R}}\times {\mathbb {R}}^d)}&\lesssim \Vert F \Vert _{L^{\tilde{q}'}_tL^{\tilde{r}'}_x({\mathbb {R}}\times {\mathbb {R}}^d)}, \end{aligned}$$(2.5)
where \(\tilde{q}'\) and \(\tilde{r}'\) denote the Hölder conjugates of \(\tilde{q}\) and \(\tilde{r}\).
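In the case \(d = 3\) relevant to our discussion, examples of admissible pairs include
$$\begin{aligned} (q, r) = (\infty , 2), \quad (2, 6), \quad \text {and} \quad \Big ( \frac{10}{3}, \frac{10}{3} \Big ), \end{aligned}$$
each of which satisfies \(\frac{2}{q} + \frac{3}{r} = \frac{3}{2}\); the last (diagonal) pair is the one used in the proof of Lemma 2.1 (iv) below.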
Next, we recall the standard linear estimates for the \(X^{s, b}\)spaces. See, for example, [17, 36] for the proofs of (i) and (ii).
Lemma 2.1

(i)
(homogeneous linear estimate). Given \(s, b \in {\mathbb {R}}\), we have
$$\begin{aligned} \Vert S(t)f\Vert _{X^{s,b}_{T}}\lesssim \Vert f\Vert _{H^s} \end{aligned}$$for any \(0 < T \le 1\). Moreover, we have \(S(t) f \in C([0, T]; H^s({\mathbb {R}}^d))\) for \(f \in H^s({\mathbb {R}}^d)\).

(ii)
(nonhomogeneous linear estimate). Given \(s \in {\mathbb {R}}\), \(b > \frac{1}{2}\) sufficiently close to \(\frac{1}{2}\), and small \(\theta > 0\), we have
$$\begin{aligned} \bigg \Vert \int ^{t}_{0}S(tt')F(t')dt'\bigg \Vert _{X^{s, b }_T}\lesssim T^{\theta }\Vert F\Vert _{X^{s, b  1 + \theta }_T} \end{aligned}$$for any \(0 < T \le 1\).

(iii)
(transference principle). Let (q, r) be Strichartz admissible. Then, for any \(b > \frac{1}{2} \), we have
$$\begin{aligned} \Vert u \Vert _{L^q_tL^r_x} \lesssim \Vert u \Vert _{X^{0, b}}.\end{aligned}$$ 
(iv)
Let \( d=3 \). Then, given any \(2\le p <\frac{10}{3}\), there exists small \(\varepsilon > 0\) such that
$$\begin{aligned} \Vert u\Vert _{L_{t,x}^{p}({\mathbb {R}}\times {\mathbb {R}}^3)} \lesssim \Vert u\Vert _{X^{0,\frac{1}{2}\varepsilon }}. \end{aligned}$$(2.6)
Proof
As for (iii), see, for example, Lemma 2.9 in [36]. In the following, we only discuss Part (iv). Noting that \((\frac{10}{3}, \frac{10}{3})\) is Strichartz admissible when \( d= 3\), it follows from the transference principle and the Strichartz estimate (2.5) that
$$\begin{aligned} \Vert u \Vert _{L^{\frac{10}{3}}_{t,x}({\mathbb {R}}\times {\mathbb {R}}^3)} \lesssim \Vert u \Vert _{X^{0, b}} \end{aligned}$$
for \( b > \frac{1}{2} \). Interpolating this with the trivial bound: \(\Vert u\Vert _{L_{t,x}^{2}} = \Vert u\Vert _{X^{0,0}}\), we obtain the desired estimate (2.6). \(\square \)
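To make the interpolation step explicit: given \(2 \le p < \frac{10}{3}\), write \(\frac{1}{p} = \frac{\theta }{2} + (1 - \theta ) \frac{3}{10}\) for some \(0 < \theta \le 1\). Then, interpolation between the two bounds above gives
$$\begin{aligned} \Vert u \Vert _{L^p_{t, x}} \lesssim \Vert u \Vert _{X^{0, (1 - \theta ) b}}, \end{aligned}$$
and, by taking \(b > \frac{1}{2}\) sufficiently close to \(\frac{1}{2}\), we have \((1 - \theta ) b = \frac{1}{2} - \varepsilon \) for some \(\varepsilon > 0\) depending on p.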
Tools from stochastic analysis
Lastly, we go over basic tools from stochastic analysis and then provide some reduction for the proof of Theorem 1.1.
We first recall the regularity properties of the stochastic convolution \(\Psi \) defined in (1.3). Given two separable Hilbert spaces H and K, we denote by \({HS }(H;K)\) the space of Hilbert–Schmidt operators \(\phi \) from H to K, endowed with the norm:
$$\begin{aligned} \Vert \phi \Vert _{{HS }(H; K)} = \bigg ( \sum _{n \in {\mathbb {N}}} \Vert \phi f_n \Vert _{K}^2 \bigg )^{\frac{1}{2}}, \end{aligned}$$(2.7)
where \(\{ f_n \}_{n \in {\mathbb {N}}}\) is an orthonormal basis of H. Recall that the Hilbert–Schmidt norm of \(\phi \) is independent of the choice of an orthonormal basis of H.
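As a simple example, given an orthonormal basis \(\{f_n\}_{n \in {\mathbb {N}}}\) of \(L^2({\mathbb {R}}^d)\), any finite-rank operator of the form \(\phi f = \sum _{n = 1}^{K} \langle f, f_n \rangle _{L^2} \, g_n\) with \(g_n \in H^s({\mathbb {R}}^d)\) belongs to \({HS }(L^2; H^s)\) with
$$\begin{aligned} \Vert \phi \Vert _{{HS }(L^2; H^s)}^2 = \sum _{n = 1}^{K} \Vert g_n \Vert _{H^s}^2, \end{aligned}$$
as can be seen by testing \(\phi \) against the basis \(\{f_n\}_{n \in {\mathbb {N}}}\).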
Next, recall the definition of a cylindrical Wiener process W on \( L^2({\mathbb {R}}^d) \). Let \((\Omega , {\mathcal {F}}, P)\) be a probability space endowed with a filtration \(\{ {\mathcal {F}}_t \}_{t \ge 0}\). Fix an orthonormal basis \(\{ e_n \}_{n \in {\mathbb {N}}}\) of \(L^2({\mathbb {R}}^d)\). We define an \(L^2 ({\mathbb {R}}^d)\)-cylindrical Wiener process W by
$$\begin{aligned} W(t) = \sum _{n \in {\mathbb {N}}} \beta _n(t) e_n, \end{aligned}$$(2.8)
where \(\{ \beta _n \}_{n \in {\mathbb {N}}}\) is a family of mutually independent complex-valued Brownian motions^{Footnote 3} associated with the filtration \(\{ {\mathcal {F}}_t \}_{t \ge 0}\). Note that a space–time white noise \(\xi \) is given by a distributional derivative (in time) of W. Hence, we can express the stochastic convolution \(\Psi \) in (1.3) as
$$\begin{aligned} \Psi (t) = -i \sum _{n \in {\mathbb {N}}} \int _0^t S(t - t') \phi e_n \, d\beta _n(t'). \end{aligned}$$(2.9)
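Expanding \(\Psi \) over the basis \(\{e_n\}_{n \in {\mathbb {N}}}\) in this manner, the basic \(H^s\)-bound on \(\Psi \) follows from a direct computation; with the normalization \({\mathbb {E}}[|\beta _n(t)|^2] = t\) (an assumption on the convention for the complex-valued Brownian motions), the Ito isometry and the unitarity of S(t) on \(H^s({\mathbb {R}}^d)\) formally yield
$$\begin{aligned} {\mathbb {E}}\big [ \Vert \Psi (t) \Vert _{H^s}^2 \big ] = \sum _{n \in {\mathbb {N}}} \int _0^t \Vert S(t - t') \phi e_n \Vert _{H^s}^2 \, dt' = t \, \Vert \phi \Vert _{{HS }(L^2; H^s)}^2, \end{aligned}$$
which is consistent with the regularity statements in Lemma 2.2.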
The next lemma summarizes the regularity properties of the stochastic convolution.
Lemma 2.2
Let \( d \ge 1 \), \( T > 0 \), and \( s \in {\mathbb {R}}\). Suppose that \( \phi \in {HS }(L^2;H^s) \).

(i)
We have \( \Psi \in C([0,T];H^s({\mathbb {R}}^d)) \) almost surely.

(ii)
Given any \(1 \le q < \infty \) and finite \(r \ge 2\) such that \( r \le \frac{2d}{d2}\) when \(d \ge 3\), we have \(\Psi \in L^q([0, T]; W^{s, r}({\mathbb {R}}^d))\) almost surely.

(iii)
Given \( b < \frac{1}{2}\), we have \(\Psi \in X^{s,b}([0,T])\) almost surely. Moreover, there exists \(\theta > 0\) such that
$$\begin{aligned} {\mathbb {E}}\Big [ \Vert \Psi \Vert ^{p}_{X^{s,b}([0,T])} \Big ] \lesssim p^\frac{p}{2} \langle T \rangle ^{\theta p} \Vert \phi \Vert ^{p}_{{HS }(L^2;H^s)} \end{aligned}$$(2.10)for any finite \(p \ge 1\).
Regarding the proof of Lemma 2.2, see [14] for (i) and [15, 32] for (ii). As for (iii), see [16, Proposition 2.1], [29, Proposition 4.1], and [7, Lemma 3.3] for the proofs of the \(X^{s, b}\)regularity of the stochastic convolution. The works [7, 16, 29] treat a different equation (the KdV equation) and/or a different setting (on the circle), but the proofs can be easily adapted to our context. For example, one can follow the argument in the proof of Proposition 2.1 in [16] to obtain (2.10) for \(p = 2\). Then, by noting from (2.9) that the stochastic convolution \(\Psi \) is nothing but (a limit of) a linear combination of Wiener integrals, the general case follows from the \(p =2 \) case and the Wiener chaos estimate;^{Footnote 4} see, for example, [22, Lemma 2.5]. See also [34, Theorem I.22] and [37, Proposition 2.4].
Once we have Lemma 2.2, we can use the Strichartz estimates (2.5) (without the \(X^{s, b}\)-spaces) to prove local well-posedness of SNLS (1.6) in \(H^s({\mathbb {R}}^3)\) for \(s \ge s_\mathrm{crit} = \frac{1}{2}\), provided that \(\phi \in HS(L^2; H^s)\). See [15, 31]. In particular, for the subcritical range \(s > \frac{1}{2}\), the random time \(\delta = \delta (\omega ) \) of local existence, starting from \(t = t_0\), satisfies
for some \(\theta > 0\), where \(C_{t_0}(\Psi )>0\) denotes certain Strichartz norms of the stochastic convolution \(\Psi \), restricted to a time interval \([t_0, t_0 + 1]\). Given \(T > 0\), it follows from Lemma 2.2 that \(C_{t_0}(\Psi )\) remains finite almost surely for any \(t_0 \in [0, T]\). Therefore, Theorem 1.1 follows once we show that \(\sup _{t \in [0, T]} \Vert u(t) \Vert _{H^s}\) remains finite almost surely for any \(T>0\) (with a bound depending on \(T>0\)).
Lastly, we recall the a priori mass control from [15], whose proof follows from Ito's lemma applied to the mass M(u) in (1.5) and the Burkholder–Davis–Gundy inequality (see [14, Theorem 4.36]).
Lemma 2.3
Assume \( \phi \in {HS (L^2;L^2)}\) and \( u_0\in L^2({\mathbb {R}}^3) \). Let u be the solution to SNLS (1.6) with \(u|_{t = 0}=u_0\) and let \(T^{*} = T^{*}_{\omega } (u_0)\) be the forward maximal time of existence. Then, given \(T>0\), there exists \(C_1 = C_1 (M(u_0), T, \Vert \phi \Vert _{{HS }(L^2; L^2)})>0\) such that for any stopping time \(\tau \) with \(0<\tau < \min (T^{*}, T)\) almost surely, we have
Remark 2.4
In view of Lemma 2.3, given finite \(T > 0\), the \(L^2\)norm of the solution remains bounded almost surely on \([0, T^*_\omega \wedge T]\), where \(T^*_\omega \) is the forward maximal time of existence.^{Footnote 5} Therefore, it follows from the discussion above that, in order to prove Theorem 1.1, it suffices to show that the homogeneous Sobolev norm \( \Vert u(t) \Vert _{\dot{H}^s}\) remains finite almost surely on each bounded time interval [0, T]. In the following, our analysis involves only homogeneous Sobolev spaces.
I-operator, I-SNLS, and their scaling properties
I-operator
Bourgain [4] introduced the so-called high-low method in establishing global well-posedness of the defocusing cubic NLS on \({\mathbb {R}}^2\) below the energy space. The high-low method is based on truncating the dynamics with a sharp frequency cutoff and separately studying the low-frequency and high-frequency dynamics. Colliander et al. [8] proposed to use a smooth positive frequency multiplier instead.
Let \(0< s < 1\). Given \(N \ge 1\), we define a smooth, radially symmetric, non-increasing (in \(|\xi |\)) multiplier \(m_N\), satisfying
$$\begin{aligned} m_N(\xi ) = {\left\{ \begin{array}{ll} 1, &{} |\xi | \le N,\\ \Big ( \frac{N}{|\xi |} \Big )^{1-s}, &{} |\xi | \ge 2N. \end{array}\right. } \end{aligned}$$(3.1)
Since \(m_N\) is radial, with a slight abuse of notation, we may use the notation \(m_N(\xi )\) by viewing \(m_N\) as a function on \([0, \infty )\).
We then define the I-operator \(I = I_N\) to be the Fourier multiplier operator with the multiplier \(m_N\):
$$\begin{aligned} \widehat{I_N f}(\xi ) = m_N(\xi ) \widehat{f}(\xi ). \end{aligned}$$(3.2)
As mentioned in the introduction, \(I_N\) acts as the identity operator on low frequencies \(\{ |\xi | \le N\}\), while it acts as a fractional integration operator of order \(1-s\) on high frequencies \(\{ |\xi | \ge 2N\}\). As a result, we have the following bound:
$$\begin{aligned} \Vert I_N f \Vert _{\dot{H}^1} \lesssim N^{1-s} \Vert f \Vert _{\dot{H}^s}. \end{aligned}$$(3.3)
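This smoothing bound reduces to an elementary pointwise inequality for the multiplier: with \(m_N(\xi ) = 1\) for \(|\xi | \le N\) and \(m_N(\xi ) = (N/|\xi |)^{1-s}\) for \(|\xi | \ge 2N\), we have
$$\begin{aligned} |\xi | \, m_N(\xi ) \lesssim N^{1-s} |\xi |^s \qquad \text {for all } \xi \in {\mathbb {R}}^3, \end{aligned}$$
since \(|\xi | = |\xi |^{1-s} |\xi |^s \le N^{1-s} |\xi |^s\) on \(\{|\xi | \le N\}\), while \(|\xi | \, (N/|\xi |)^{1-s} = N^{1-s} |\xi |^s\) on \(\{|\xi | \ge 2N\}\) (the intermediate region being handled by the monotonicity of \(m_N\)).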
I-SNLS
By applying the I-operator to SNLS (1.6), we obtain the following I-SNLS:
$$\begin{aligned} i \partial _tI_N u + \Delta I_N u = I_N (|u|^2 u) + I_N \phi \xi . \end{aligned}$$(3.4)
In this subsection, we study local well-posedness of the Cauchy problem (3.4). A similar local well-posedness result for the (deterministic) I-NLS (namely, (3.4) with \(\phi = 0\)) was studied in [8, Proposition 4.2]. In order to capture the temporal regularity of the stochastic convolution (Lemma 2.2), we need to work with the \(X^{s, b}\)-space with \(b < \frac{1}{2}\) and hence need to establish a trilinear estimate in this setting. See Lemma 3.2. The following proposition allows us to avoid using the \( L^2\)-norm, which is supercritical with respect to scaling (as in [8]).
Proposition 3.1
Let \(\frac{1}{2}<s<1\), \( \phi \in {HS }(L^2;{\dot{H}}^s) \), and \(u_0 \in \dot{H}^s({\mathbb {R}}^3)\). Then, there exist an almost surely positive stopping time
and a unique local-in-time solution \( I_N u\in C( [0,\delta ];{\dot{H}}^1({\mathbb {R}}^3) ) \) to I-SNLS (3.4). Furthermore, if \( T^*=T^*_{\omega } \) denotes the forward maximal time of existence, the following blow-up alternative holds:
Proposition 3.1 follows from a standard contraction argument once we prove the following trilinear estimate.
Lemma 3.2
Let \(\frac{1}{2}<s<1\). Then, there exists small \(\varepsilon > 0\) such that
for any \(0 \le T \le 1\), where the implicit constant is independent of \( N \ge 1\).
As compared to Proposition 4.2 in [8], we need to work with a slightly weaker temporal regularity on the right-hand side of (3.6).
Before going over a proof of Lemma 3.2, let us briefly discuss a proof of Proposition 3.1. By writing (3.4) in the Duhamel formulation, we have
where \(\Phi = \Phi _{I_N u_0, I_N \phi }\) and we interpreted the nonlinearity as a function of \(I_Nu\):
Fix small \(\varepsilon > 0\). Then, by Lemmas 2.1 and 2.2 followed by Lemma 3.2, we have
for an almost surely finite random constant \(C_\omega >0\) and for any \(0 \le \delta \le 1\). Similarly, we have
From (3.7) and (3.8), we conclude that \(\Phi \) is almost surely a contraction on the ball of radius
in \(\nabla ^{-1} X^{0, \frac{1}{2} - \varepsilon }\) by choosing \(\delta = \delta _\omega (R) >0\) sufficiently small. Moreover, from (2.2) and Lemmas 2.1 and 2.2 with (3.7), we also conclude that \( I_N u \in C( [0,\delta ];{\dot{H}}^1({\mathbb {R}}^3) )\). This proves Proposition 3.1. The following remark plays an important role in iteratively applying the local-in-time argument in Sect. 6.
Remark 3.3
The argument above shows that there exist small \(\eta _0, \eta _1 > 0\) such that if, for a given interval \( J = [t_0, t_0 + 1] \subset [0, \infty )\) of length 1 and \( \omega \in \Omega \), we have
then a solution \(I_N u \) to I-SNLS (3.4) exists on the interval J with the bound:
for some absolute constant \(C_0\), uniformly in \( N \ge 1. \)
We now present a proof of Lemma 3.2.
Proof of Lemma 3.2
By the interpolation lemma ([9, Lemma 12.1]), it suffices to prove (3.6) for \(N = 1\). Let \(I = I_1\). By the definition (2.3) of the time restriction norm, duality, and Leibniz rule for \( \nabla I \), it suffices to show that^{Footnote 6}
For \(j\in \{2,3\}\), we split the functions \(u_j\) into high and low frequency components:
where the spatial Fourier supports of \(u_j^{hi }\) and \(u_j^{low }\) are contained in \(\{|\xi | \ge \frac{1}{2} \}\) and \(\{|\xi | \le 1 \}\), respectively.
By noting \( u_j^{low }=Iu_j^{low }\) and Sobolev’s inequality (both in space and time), we have
As for \(u_j^{hi }\), we claim
Since \(N = 1\), we have \(I\sim \nabla ^{s-1}\). Then, by Sobolev's inequality and the transference principle (Lemma 2.1 (iii)) with an admissible pair \((q, r ) =\big (5+, \frac{30}{11}\big )\), we have
provided that \(s > \frac{1}{2}\). On the other hand, by Sobolev’s inequality, we have
By interpolating (3.14) and (3.15), we obtain (3.13).
We now estimate (3.10) by expanding \(u_j\), \(j = 2, 3\), as \(u_j^{hi }+u_j^{{low }}\). For \(j = 2, 3\), let \(p_j = 6\) in treating \(u_j^{low }\) and \(p_j = 5+\) in treating \(u_j^{hi }\). Then, the claimed estimate (3.10) follows from the \( L^{\frac{10}{3}}_{t,x}, L^{p_2}_{t,x}, L^{p_3}_{t,x}, L^{p_4}_{t,x}\)-Hölder inequality, Lemma 2.1 (iv), (3.12), and (3.13), where \(p_4\) is defined by \(\frac{1}{p_4} = 1 - \big (\frac{3}{10}\big ) - \frac{1}{p_2} - \frac{1}{p_3}\) such that \( 2 \le p_4 < \frac{10}{3}\). \(\square \)
Scaling property
In this subsection, we discuss the scaling properties of SNLS (1.6) and ISNLS (3.4). Before doing so, we first recall the scaling property of the (deterministic) cubic NLS:
This equation enjoys the following scaling invariance: if u is a solution to (3.16), then the scaled function
also satisfies Eq. (3.16) with the scaled initial data. In the application of the I-method in the deterministic case (as in [8]), we apply this scaling first and then apply the I-operator to obtain INLS (1.7) (with \(u^\lambda \) in place of u).
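For the reader’s convenience, the scaling can be sketched explicitly. The normalization below is the standard one for the cubic NLS on \({\mathbb {R}}^3\) and is consistent with the factor \(\lambda ^{-3}\) appearing in the scaled noise later in this subsection; the precise display (3.17) is assumed to match this convention:

```latex
% If u solves  i \partial_t u + \Delta u = |u|^2 u  on \mathbb{R} \times \mathbb{R}^3, set
\[ u^\lambda (t, x) := \lambda^{-1} \, u\big( \lambda^{-2} t, \lambda^{-1} x \big). \]
% Each term in the equation then picks up the same factor \lambda^{-3}:
\begin{align*}
i \partial_t u^\lambda &= \lambda^{-3} \, (i \partial_t u)\big( \lambda^{-2} t, \lambda^{-1} x \big), \\
\Delta u^\lambda &= \lambda^{-3} \, (\Delta u)\big( \lambda^{-2} t, \lambda^{-1} x \big), \\
|u^\lambda|^2 u^\lambda &= \lambda^{-3} \, \big( |u|^2 u \big)\big( \lambda^{-2} t, \lambda^{-1} x \big),
\end{align*}
% so u^\lambda solves (3.16) with the scaled initial data
% u^\lambda(0, x) = \lambda^{-1} u_0(\lambda^{-1} x).
```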
In our current stochastic setting, when we apply the scaling, we also need to scale the noise \(\phi \xi \). In order to apply Ito calculus to the scaled noise, we need to make sure that the scaled noise is given by another space–time white noise \(\xi ^\lambda \) (with a scaled Hilbert–Schmidt operator \(\phi ^\lambda \)). For this purpose, we first recall the scaling property of a space–time white noise. Given a space–time white noise \(\xi \) on \({\mathbb {R}}\times {\mathbb {R}}^d\), it is well known that the scaled noise \(\xi ^\lambda \) defined by^{Footnote 7}
is also a space–time white noise for any \(a_1, a_2 \in {\mathbb {R}}\).
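This fact follows from a covariance computation. A sketch, in the heuristic pointwise notation of Footnote 7 (the exact exponents in the definition (3.18) of \(\xi ^\lambda \) are not reproduced here; one standard normalization is shown):

```latex
% Space-time white noise on \mathbb{R} \times \mathbb{R}^d is characterized by its covariance:
\[ \mathbb{E}\big[ \xi(t, x) \, \xi(s, y) \big] = \delta(t - s) \, \delta(x - y). \]
% For the scaled noise with the standard normalization
\[ \xi^\lambda (t, x) := \lambda^{\frac{a_1 + d a_2}{2}} \, \xi\big( \lambda^{a_1} t, \lambda^{a_2} x \big), \]
% the scaling identities \delta(\lambda^{a_1} s) = \lambda^{-a_1} \delta(s) on \mathbb{R}
% and \delta(\lambda^{a_2} y) = \lambda^{-d a_2} \delta(y) on \mathbb{R}^d give
\[ \mathbb{E}\big[ \xi^\lambda(t, x) \, \xi^\lambda(s, y) \big]
   = \lambda^{a_1 + d a_2} \cdot \lambda^{-a_1} \delta(t - s) \cdot \lambda^{-d a_2} \delta(x - y)
   = \delta(t - s) \, \delta(x - y), \]
% so \xi^\lambda has the same covariance and is again a space-time white noise (in law).
```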
Next, let us study the scaling property of the Hilbert–Schmidt operator \(\phi \) via its kernel representation. Recall from [33, Theorem VI.23] that a bounded linear operator \(\phi \) on \(L^2({\mathbb {R}}^3)\) is Hilbert–Schmidt if and only if it is represented as an integral operator with a kernel \(k \in L^2({\mathbb {R}}^3 \times {\mathbb {R}}^3)\):
with \( \Vert \phi \Vert _{{HS }(L^2; L^2)} = \Vert k \Vert _{L^2_{x, y}}.\) More generally, we have
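The identities in play here can be sketched as follows; the displayed generalization (3.19) is not reproduced, and the final line below is our hedged reading of it (derivatives falling on the kernel in its first variable):

```latex
% Kernel representation of a Hilbert-Schmidt operator \phi on L^2(\mathbb{R}^3):
\[ \phi f (x) = \int_{\mathbb{R}^3} k(x, y) \, f(y) \, dy,
   \qquad k \in L^2(\mathbb{R}^3 \times \mathbb{R}^3). \]
% With any orthonormal basis \{ e_n \}_{n \in \mathbb{N}} of L^2(\mathbb{R}^3),
\[ \| \phi \|_{HS(L^2; L^2)}^2 = \sum_{n \in \mathbb{N}} \| \phi e_n \|_{L^2}^2
   = \| k \|_{L^2_{x, y}}^2. \]
% Presumably, the more general identity replaces the target L^2 by a Sobolev space:
\[ \| \phi \|_{HS(L^2; \dot{H}^\sigma)} = \big\| |\nabla_x|^\sigma k \big\|_{L^2_{x, y}}. \]
```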
With this in mind, let us evaluate \(\phi \xi \) at \((\frac{t}{\lambda ^2}, \frac{x}{\lambda })\) with a factor of \(\lambda ^{-3}\). By a change of variables and (3.18) with \((a_1, a_2) = (-2, 0)\), we have
This motivates us to define the scaled kernel \(k^\lambda \) by
and the associated Hilbert–Schmidt operator \(\phi ^\lambda \) with an integral kernel \(k^\lambda \). Then, it follows from (3.20) and (3.21) that
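A sketch of how such a scaled kernel can arise, under the assumption that the normalization \((a_1, a_2) = (-2, 0)\) above fixes \(\xi ^\lambda (t, x) = \lambda ^{-1} \xi (\lambda ^{-2} t, x)\); the precise definition in (3.21) may differ by convention:

```latex
% With \xi(\lambda^{-2} t, y) = \lambda \, \xi^\lambda(t, y), a change of variables gives
\[ \lambda^{-3} \, (\phi \xi)\Big( \frac{t}{\lambda^2}, \frac{x}{\lambda} \Big)
   = \lambda^{-3} \int_{\mathbb{R}^3} k\Big( \frac{x}{\lambda}, y \Big) \,
     \xi\Big( \frac{t}{\lambda^2}, y \Big) \, dy
   = \int_{\mathbb{R}^3} \lambda^{-2} \, k\Big( \frac{x}{\lambda}, y \Big) \,
     \xi^\lambda(t, y) \, dy, \]
% which suggests the scaled kernel
\[ k^\lambda(x, y) := \lambda^{-2} \, k\Big( \frac{x}{\lambda}, y \Big),
   \qquad \text{so that} \quad
   \lambda^{-3} \, (\phi \xi)\Big( \tfrac{t}{\lambda^2}, \tfrac{x}{\lambda} \Big)
   = (\phi^\lambda \xi^\lambda)(t, x). \]
```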
Therefore, by applying the scaling (3.17) with (3.22) to SNLS (1.6) and then applying the Ioperator, we obtain
In the following lemma, we record the scaling property of the Hilbert–Schmidt norm of \(I_N \phi ^\lambda \).
Lemma 3.4
Let \(d = 3\), \( 0<s<1 \), and \( \phi \in {HS }(L^2,{\dot{H}}^s)\). Then, we have
As a consequence, given any \(\varepsilon > 0\), there exists \(\theta > 0\) such that
for any finite \( p\ge 1 \) and \(T > 0\), where \(\Psi ^\lambda \) is the stochastic convolution corresponding to the scaled noise \(\phi ^\lambda \xi ^\lambda \).
Furthermore, if we assume \( \phi \in {HS }(L^2,{\dot{H}}^\frac{3}{4})\), then we have
uniformly in \(N \ge 1\).
Proof
From (3.19), (3.3), and (3.21), we have
The second estimate (3.25) follows from Lemma 2.2 and (3.24). The last claim (3.26) follows from proceeding as in (3.27) but using the uniform bound \(m_N(\xi ) \le 1\) in the second step. \(\square \)
Remark 3.5
(i) It is easy to check that Lemma 3.4 remains true even if we proceed with a scaling argument in (3.20) for any \(a_2 \in {\mathbb {R}}\).
(ii) Lemma 3.4 states that by choosing \(\lambda = \lambda (N) \gg 1\), we can make the Hilbert–Schmidt norm of \(I_N \phi ^\lambda \) arbitrarily small (even after multiplying by \(\lambda T^\frac{1}{2}\); see (6.7) and (6.8)).
We conclude this section by going over the scaling of the modified energy. Let \(u_0^\lambda = u^\lambda (0)\). Then, from (3.3), the Hörmander–Mikhlin multiplier theorem [20], (3.17), and Sobolev’s inequality, we have
Hence, for \(\frac{1}{2}< s < 1\), by choosing \(\lambda = \lambda (N) \gg 1\), we can make the modified energy \(E(I_Nu_0^\lambda )\) of the scaled initial data arbitrarily small.
On the commutator estimates
In this section, we go over the commutator estimates (Proposition 4.1), corresponding to the deterministic component in our application of the I-method.
Proposition 4.1
Let \( \frac{5}{6}<s<1 \). Then, given \(\beta > 0\), there exists small \(\varepsilon > 0\) such that
for any \(N\ge 1\) and any interval \(J \subset [0, \infty )\), where \({\mathcal {N}}(u) = u^2 u\) and the implicit constants are independent of \(N \ge 1\) and \(J \subset [0, \infty )\).
The estimates (4.1) and (4.2) are essentially the same as those appearing in the proof of Proposition 4.1 in [8]. The difference appears in the temporal regularity; on the right-hand sides of (4.1) and (4.2), we have \(b = \frac{1}{2} - \varepsilon \), whereas the temporal regularity in [8] was \(b = \frac{1}{2} + \varepsilon \). The desired estimates in Proposition 4.1 follow from the corresponding estimates in [8] and an interpolation argument.
Lemma 4.2
(i) Let u be a function on \({\mathbb {R}}\times {\mathbb {R}}^3\) with the spatial frequency support in \(\{\xi \sim M\}\) for some dyadic \(M\ge 1\). Then, there exists \( \theta > 0\) such that
for any interval \(J \subset [0, \infty )\), where the implicit constants are independent of \(N \ge 1\) and \(J \subset [0, \infty )\).
(ii) Let \( \frac{2}{3}<s<1 \). Then, the following trilinear estimate holds:
for any \(N\ge 1\) and any interval \(J \subset [0, \infty )\), where the implicit constants are independent of \(N \ge 1\) and \(J \subset [0, \infty )\).
Note that \(\frac{2}{3} < \frac{5}{6}\), so that Lemma 4.2 (ii) holds in a wider range of s than is needed for Proposition 4.1.
Proof
Part (i) follows from (4.19), (4.20), and (4.21) in [8], which show the corresponding estimates with \(b = \frac{1}{2} + \varepsilon \) on the right-hand sides, combined with a simple interpolation argument. From Sobolev’s inequality and Lemma 2.1 (iii), we have
Then, (4.3) with the \(+\) sign follows from interpolating (4.7) and
As for (4.3) with the − sign, interpolate (4.7) with
As for (4.4), we interpolate (2.7) with
while (4.5) follows from Sobolev’s inequality and (4.4).
Part (ii) corresponds to Lemma 4.3 in [8] but with \(b = \frac{1}{2}\). By the interpolation lemma [9, Lemma 12.1], we may assume \(N = 1\). As in (3.11), write \(u_j=u_j^{hi }+u_j^{{low }}\). As for \(u_j^{low }\), we have
As for \(u_j^{hi }\), noting \(I\sim \nabla ^{s-1}\), we have
provided that \(s > \frac{2}{3}\), where we used Lemma 2.1 (iii) in the last step. Interpolating this with
we obtain
Then, (4.6) follows from the boundedness of \(m_1(\xi )\) and the \(L^6_{t, x}, L^6_{t, x}, L^6_{t, x}\)-Hölder’s inequality with (4.8) and (4.9). \(\square \)
We now briefly discuss a proof of Proposition 4.1.
Proof of Proposition 4.1
The estimates (4.1) and (4.2) (with \(b = \frac{1}{2}-\)) follow from a small modification of the proof of Proposition 4.1 in [8] (with \(b = \frac{1}{2}+\)), using Lemma 4.2. In the following, we present the proof of the second estimate (4.2). As for the first estimate (4.1), we briefly discuss the required modifications at the end of the proof. In the following, we fix N and drop the subscript N from \(I_N\) and \(m_N\).
From the definition (3.2) of the I-operator, we can rewrite the left-hand side of (4.2) as
where the multiplier \(M_N ({{\bar{\xi }}})\) is given by
We suppress the tdependence in the following.
With the Littlewood–Paley projector \({\mathbf {P}}_{N_j}\) in (2.4), we have
where \(U_1= \overline{ {\mathbf {P}}_{N_1} I {\mathcal {N}}(u)}\), \(u_j = {\mathbf {P}}_{N_j} I u\), \(j = 2, 4\), and \(u_3 = \overline{{\mathbf {P}}_{N_3}I u}\). Without loss of generality, we assume that \(N_2 \ge N_3 \ge N_4\). Note that we have \(N_1 \lesssim N_2\) under \(\xi _1+\xi _2+\xi _3+\xi _4 = 0\) and \(|\xi _j| \sim N_j\). Thus, if we have \(N_2 \ll N\) in addition, then it follows from (3.1) and (4.11) that \(M_N({{\bar{\xi }}}) = 0\). Therefore, we assume that \(N_2 \gtrsim N\) in the following. We also note that under \(N_2 \gtrsim N_1\) and \(N_2 \ge N_3 \ge N_4\), we have
Then, from (4.11) and (4.13) with (3.1), we have
for any sufficiently small \(\varepsilon > 0\), provided that \(s > \frac{2 + 2\varepsilon }{3}\).
Case 1: \(N_j \gtrsim 1\), \(j = 1, \dots , 4\).
In this case, by applying the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10}_{x}, L^{10}_{t} L^{10}_{x}\)-Hölder’s inequality^{Footnote 8} and (4.14) to (4.12) and then applying Lemma 4.2, we obtain
Case 2: \(N_1 \gtrsim 1 \gg N_4\).
We proceed as in Case 1 but we place \(u_j\), \(j = 3, 4\), in the \( L^{10}_{t} L^{10+}_{x}\)-norm when \(N_j \ll 1\). In view of Lemma 4.2 (i), this creates a small positive power of \(N_j\), allowing us to sum over dyadic \(N_j \ll 1\).
For \(N_3 \gtrsim 1\), we apply the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality and proceed as in Case 1. For \(N_3 \ll 1\), we apply the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}}_{x},L^{10}_{t} L^{10+}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality and proceed as in Case 1.
Case 3: \(N_1 \ll 1\).
In this case, we use
to gain a small power of \(N_1\). For \(N_4 \gtrsim 1\), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x},L^{10}_{t} L^{10}_{x}, L^{10}_{t} L^{10}_{x}\)-Hölder’s inequality and proceed as in Case 1 with Lemma 4.2 and (4.15).
For \(N_3 \gtrsim 1\gg N_4\), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality. For \(N_3, N_4 \ll 1 \), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}}_{x}, L^{10}_{t} L^{10+}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality. Then, proceeding as in Case 1 with Lemma 4.2 and (4.15), we obtain the desired estimate (4.2).
As for the first estimate (4.1), we can simply repeat the argument in [8] with Lemma 4.2 (i) in place of [8, (4.19), (4.20), and (4.21)] and replacing \(L^{\frac{10}{3}}_{t, x}\) by \(L^{\frac{10}{3}}_{t}L^{\frac{10}{3}}_{x}\) (so that (4.4) with \(b = \frac{1}{2}\) is applicable). We point out that the regularity restriction \(s > \frac{5}{6}\) comes from this part. \(\square \)
On growth of the modified energy
In this section, we use stochastic analysis to study the growth of the modified energy \(E(I_Nu)\) associated with ISNLS (3.4). Before doing so, we first go over the deterministic setting. Given a smooth solution u to the cubic NLS (3.16), we can verify the conservation of the energy \( E(u)=\frac{1}{2}\int _{{\mathbb {R}}^3} |\nabla u|^2\, dx +\frac{1}{4}\int _{{\mathbb {R}}^3}|u|^4\, dx \) by simply differentiating in time and using the Eq. (3.16):
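The computation referred to here is the standard one:

```latex
\begin{align*}
\frac{d}{dt} E(u(t))
&= \operatorname{Re} \int_{\mathbb{R}^3} \partial_t \bar{u} \,
   \big( - \Delta u + |u|^2 u \big) \, dx
   && \text{(integrating by parts in the kinetic term)} \\
&= \operatorname{Re} \int_{\mathbb{R}^3} \partial_t \bar{u} \cdot i \partial_t u \, dx
   && \text{(by (3.16): } i \partial_t u = - \Delta u + |u|^2 u \text{)} \\
&= \operatorname{Re} \Big( i \int_{\mathbb{R}^3} | \partial_t u |^2 \, dx \Big) = 0.
\end{align*}
```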
In a similar manner, given a smooth solution u to the cubic NLS (3.16), the time derivative of the modified energy \(E(I_Nu)\) is given by
Then, the fundamental theorem of calculus yields
for any \(t_1, t_2 \in {\mathbb {R}}\), where the righthand side can be estimated by the commutator estimate; see [8, Proposition 4.1].
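A sketch of the identities behind (5.1) and (5.2), in the form standard for the I-method; the signs and normalization are assumptions matching the energy computation above, and the paper’s displays may differ by convention:

```latex
% Applying I = I_N to (3.16) gives  i \partial_t I u + \Delta I u = I(|u|^2 u),
% so the same computation as for E(u) now leaves a commutator-type remainder:
\[ \frac{d}{dt} E(I u)
   = \operatorname{Re} \int_{\mathbb{R}^3} \partial_t \overline{I u} \,
     \big( |I u|^2 I u - I(|u|^2 u) \big) \, dx. \]
% The fundamental theorem of calculus then yields
\[ E(I u(t_2)) - E(I u(t_1))
   = \operatorname{Re} \int_{t_1}^{t_2} \int_{\mathbb{R}^3}
     \partial_t \overline{I u} \, \big( |I u|^2 I u - I(|u|^2 u) \big) \, dx \, dt. \]
```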
For our problem, we need to estimate the growth of the modified energy \(E(I_N u)\), where u is now a solution to SNLS (1.6) with a stochastic forcing.^{Footnote 9} As such, we need to proceed with Ito’s lemma.
Lemma 5.1
Given \(N \ge 1\), let \(I_Nu\) be the solution to ISNLS (3.4) with \(I_N u|_{t = 0} = I_N u_0\), where \(\phi \) and \(u_0\) are as in Proposition 3.1. Moreover, given \(T >0 \), let \(\tau \) be a stopping time with \( 0<\tau <\min (T^{*}, T) \) almost surely, where \( T^* = T^*_\omega \) is the (random) forward maximal time of existence for ISNLS (3.4), satisfying (3.5). Then, we have
Furthermore, if we assume that \(I_N \phi \in {HS }(L^2;{\dot{H}}^\frac{3}{4})\) in addition, then there exists \( C>0 \) such that
Remark 5.2
(i) The second term on the right-hand side of (5.3) corresponds to the contribution from the commutator term \([I_N, {\mathcal {N}}]\), which is also present in the deterministic case. The remaining terms are additional terms arising from the application of Ito’s lemma. Part (ii) follows from (5.3) and the Burkholder–Davis–Gundy inequality. We need to absorb some terms on the right-hand side of (5.3) into the left-hand side, which results in the factor of 2 appearing in the first term on the right-hand side of (5.4).
In the deterministic setting, one can apply the commutator estimate to (5.2) on each local-in-time interval. In the current stochastic setting, however, it is not possible to apply the estimate (5.4) (and the commutator estimates in Proposition 4.1) on each local-in-time interval, since the factor of 2 on \(E(I_Nu_0)\) in (5.4) would lead to an exponential growth of the constant when iterating the local-in-time argument.
(ii) Our convention states that \(\beta _n\) in (2.8) is a complexvalued Brownian motion. This is the reason we do not have a factor \(\frac{1}{2}\) on the last term in (5.3).
(iii) In controlling the fourth and fifth terms on the right-hand side of (5.3), we need to use the \({HS }(L^2;{\dot{H}}^\frac{3}{4})\)-norm of \(I_N \phi \) in (5.4).
Proof
A formal application of Ito’s lemma yields (5.3). The computation can be justified by inserting suitable truncations and invoking the local well-posedness argument. See [15] for details in the case without the I-operator.
Let us now turn to Part (ii). By Burkholder–Davis–Gundy inequality ([24, Theorem 3.28 on p. 166]), Cauchy–Schwarz inequality, and Cauchy’s inequality, we estimate the third term on the right-hand side of (5.3) as
By Burkholder–Davis–Gundy inequality and Sobolev’s inequality \(\dot{H}^\frac{3}{4}({\mathbb {R}}^3) \subset L^4({\mathbb {R}}^3)\), the fourth term is estimated as
By Sobolev’s inequality, we estimate the fifth term as
Finally, the desired estimate (5.4) follows from (5.3), (5.5), (5.6), and (5.7). \(\square \)
Proof of Theorem 1.1
In this section, we present global well-posedness of SNLS (1.6) (Theorem 1.1). In the current stochastic setting, it suffices to prove the following “almost” almost sure global well-posedness result.
Proposition 6.1
Given \(\frac{5}{6}< s < 1\), let \(u_0 \in H^s({\mathbb {R}}^3)\) and \(\phi \in {HS }(L^2; H^s)\). Then, given any \(T, \varepsilon > 0\), there exists a set \( \Omega _{T, \varepsilon }\subset \Omega \) such that

(i) \(P( \Omega _{T, \varepsilon }^c) < \varepsilon \).

(ii) For each \(\omega \in \Omega _{T, \varepsilon }\), there exists a (unique) solution u to SNLS (1.6) in \(C([0,T];H^s({\mathbb {R}}^3)) \) with \(u|_{t = 0} = u_0\) and the noise given by \(\phi \xi = \phi \xi (\omega )\).
Once we prove Proposition 6.1, Theorem 1.1 follows from the Borel–Cantelli lemma. See, for example, [1, 12]. See also Remark 6.2. Hence, in the remaining part of this paper, we focus on proving Proposition 6.1.
Proof of Proposition 6.1
As in the deterministic setting [8], we first apply the scaling (3.17), where \(\lambda = \lambda (N) = \lambda (T, \varepsilon )\gg 1 \) is to be chosen later. Note that, given \(\omega \in \Omega \), \( u = u(\omega ) \) solves (1.6) on [0, T] if and only if the scaled function \( u^\lambda = u^\lambda (\omega )\) solves (1.6) on \([0, \lambda ^2 T]\) with the scaled initial data \( u_0^\lambda = u^\lambda (0)\). We then apply the I-operator to the scaled function \(u^\lambda \). In the following, we focus on studying the scaled ISNLS (3.23). In view of Remark 2.4 and (3.3) with Lemma 2.3, it suffices to show that \( \Vert I_N u^\lambda \Vert _{{\dot{H}}^1} \) remains finite on \([0, \lambda ^2 T]\) with large probability.
Fix \(\frac{5}{6}< s < 1\) and \(u_0 \in H^s({\mathbb {R}}^3)\). Given large \(T \gg 1 \) and small \(\varepsilon > 0\), fix \(N= N(T, \varepsilon ) \gg 1 \) (to be chosen later). We now choose \(\lambda = \lambda (N) \gg 1\) such that
for some small \(\theta >0\). More precisely, we can choose
under the condition that \(\frac{1}{2}< s < 1\). Then, from the scaling property (3.28) of the modified energy and (6.1), we have
by choosing \(N = N(\varepsilon ) \gg 1\), where \(\eta _0\) is as in (3.9).
Let \(\Psi ^\lambda \) denote the stochastic convolution corresponding to the scaled noise \(\phi ^\lambda \xi ^\lambda \). By Lemma 3.4 with \(p \gg 1\), (6.1), (6.2), and choosing \(N = N(T, \varepsilon ) \gg 1\), we have, for \(p \ge 2\),
uniformly in \(j \in {\mathbb {N}}\cup \{0\}\), where \(\eta _1\) is as in (3.9). Note that by choosing \(p \gg 1 \), (6.4) imposes only a mild condition \(N \ge (\varepsilon ^{-1} T)^{0+}\). For \(j \in {\mathbb {N}}\cup \{0\}\), define \(A^j_\varepsilon \subset \Omega \) by
Now, we define \(\Omega _{T, \varepsilon }^{(1)}\) by setting
where \([ \lambda ^2T ]\) denotes the integer part of \(\lambda ^2 T\). Then, it follows from Chebyshev’s inequality and (6.4) that
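Schematically, this step reads as follows; the norm inside the expectation is the one controlled by (6.4) and (6.5), whose precise displays are not reproduced here, so this is only a sketch of the Chebyshev-plus-union-bound structure:

```latex
% Chebyshev's inequality on each unit interval [j, j+1], with the p-th moment bound (6.4):
\[ P\big( (A^j_\varepsilon)^c \big)
   \le \eta_1^{-p} \, \mathbb{E} \Big[ \sup_{j \le t \le j+1}
       \big\| \Psi^\lambda(t) \big\|^p \Big]
   \lesssim \frac{\varepsilon}{\lambda^2 T + 1}, \]
% followed by a union bound over j = 0, 1, ..., [\lambda^2 T]:
\[ P\big( ( \Omega^{(1)}_{T, \varepsilon} )^c \big)
   \le \sum_{j = 0}^{[\lambda^2 T]} P\big( (A^j_\varepsilon)^c \big)
   \lesssim \varepsilon. \]
```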
Lastly, note that from Lemma 3.4 with (6.1) and (6.2), we have
and
by choosing \(N = N(T, \varepsilon ) \gg 1\) sufficiently large, where \(\gamma \) is given by
Now, we define a stopping time \(\tau \) by
where \(\eta _0\) is as in (3.9). Note that in view of the blowup alternative stated in Proposition 3.1, the condition (6.9) guarantees that the solution \(I_N u^\lambda \) to the scaled ISNLS (3.23) exists on \([0, \tau ]\). Then, set
and
We claim that
Then, it follows from (6.11) with (6.6) and (6.12) that
In the following, we prove (6.12). Let \( \omega \in \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }\). Then, from Remark 3.3 with (6.5) and (6.9), we have
for any \(j \in {\mathbb {N}}\cup \{0\}\) such that \(j + 1\le \tau \). Hence, it follows from Proposition 4.1 with (3.23) that
Then, from Lemma 5.1 and (6.14), one can write (5.4) as:
On the other hand, from (6.9) and the continuity of the modified energy (in time), we have
for any \( \omega \in \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }\). Hence, from (6.15) and (6.16) with (6.3), (6.7), and (6.8), we have
As in the deterministic case [8], we can make the last term on the right-hand side of (6.17) small, provided that \(s > \frac{5}{6}\). In fact, with (6.2), we can choose \(N = N(T, \varepsilon ) \gg 1\) such that
guaranteeing
Note that (6.18) is possible only when \(6s > 5 + 4\theta \), which can be satisfied when \(s > \frac{5}{6}\) by choosing \(\theta = \theta (s) > 0\) sufficiently small.
Therefore, the desired bound (6.12) follows from (6.17) and (6.19), and thus, (6.13) holds by choosing \(N = N(T, \varepsilon ) \gg 1\) such that (6.3), (6.4), (6.7), (6.8), and (6.18) are satisfied.
By the definition (6.11), for any \(\omega \in \Omega _{T, \varepsilon }\), the solution \(I_Nu^\lambda = I_N u^\lambda (\omega )\) to the scaled ISNLS (3.23) exists on the time interval \([0, \lambda ^2T]\). Together with (6.13), this proves Proposition 6.1. \(\square \)
Remark 6.2
As in the usual application of the I-method in the deterministic setting, our proof of Proposition 6.1 yields a polynomial growth bound on the \(H^s\)-norm of a solution.
From the scaling (3.17), we have
Thus, given \(T>0\), from (6.20), (6.9), (6.10), (6.11), (6.2), and (6.18), we have
for \(\omega \in \Omega _{T, \varepsilon }\), where the implicit constant depends on \(u_0\) and \(\Omega _{T, \varepsilon }\). On the other hand, from (3.3), we have
Lastly, it follows from Remark 2.4 (in particular, Footnote 5) that
Therefore, from (6.21), (6.22), and (6.23), we conclude that
for any \(0 \le t \le T\) and \(\omega \in \Omega _{T, \varepsilon }\).
Let \(u_0 \in H^s({\mathbb {R}}^3)\) for some \(s > \frac{5}{6}\). Given small \(\varepsilon > 0\), we apply Proposition 6.1 and construct a set \(\Omega _{2^j, 2^{-j}\varepsilon }\) for each \(j \in {\mathbb {N}}\). Now, set \(\Sigma = \bigcup _{0 < \varepsilon \ll 1} \bigcap _{j \in {\mathbb {N}}} \Omega _{2^j, 2^{-j}\varepsilon }\). Then, for each \(\omega \in \Sigma \), there exists \(\varepsilon > 0\) such that \(\omega \in \bigcap _{j \in {\mathbb {N}}} \Omega _{2^j, 2^{-j}\varepsilon }\). In particular, the corresponding solution u to (1.1) exists globally in time. Furthermore, from (6.24), we have
for any \(t > 0\).
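The measure count in this globalization step can be sketched as follows, writing \(\Omega _{2^j, 2^{-j}\varepsilon }\) for the sets constructed via Proposition 6.1 with \(P(\Omega _{2^j, 2^{-j}\varepsilon }^c) < 2^{-j}\varepsilon \):

```latex
\[ P \Big( \Big( \bigcap_{j \in \mathbb{N}} \Omega_{2^j, 2^{-j} \varepsilon} \Big)^c \Big)
   \le \sum_{j \in \mathbb{N}} P \big( \Omega_{2^j, 2^{-j} \varepsilon}^c \big)
   \le \sum_{j \in \mathbb{N}} 2^{-j} \varepsilon = \varepsilon. \]
% Hence P(\Sigma) \ge 1 - \varepsilon for every small \varepsilon > 0, i.e. P(\Sigma) = 1.
% On each \bigcap_j \Omega_{2^j, 2^{-j}\varepsilon}, the solution exists on [0, 2^j]
% for every j \in \mathbb{N}, and thus globally in time.
```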
Notes
 1.
When p is not an odd integer, we may need to impose an extra assumption due to the nonsmoothness of the nonlinearity. A similar comment applies to the case of SNLS.
 2.
When \(M = 1\), \({\mathbf {P}}_1\) is a smooth projector onto the frequencies \(\{|\xi| \lesssim 1\}\).
 3.
Namely, the real and imaginary parts of \(\beta _n\) are independent (realvalued) Brownian motions.
 4.
The proof of Proposition 2.1 in [16] reduces to bounding the second moment of the \(H^b\)norm of a certain (scalar) Wiener integral. Similarly, the proof of Lemma 2.2 (iii) for general finite \(p \ge 1\) reduces to bounding the pth moment of the \(H^b\)norm of the same Wiener integral. By the Wiener chaos estimate, this can be further reduced to bounding the second moment.
 5.
 6.
Here, we are essentially using the triangle inequality \( \langle \xi _1+\cdots + \xi _4\rangle ^s\lesssim \langle \xi _1 \rangle ^s +\cdots + \langle \xi _4 \rangle ^s \) for \(s\ge 0\) and the fact that \(X^{s, b}\) is a Fourier lattice.
 7.
Since \(\xi \) is merely a distribution, a pointwise evaluation does not quite make sense. Strictly speaking, we need to apply the (inverse) scaling to test functions. For simplicity, however, we use this slightly abusive notation.
 8.
It is understood that the time integrations are restricted to the interval J. The same comment applies in the following.
 9.
As we see in Sect. 6, we in fact apply the argument in this section after applying a suitable scaling.
References
 1.
Á. Bényi, T. Oh, O. Pocovnicu, On the probabilistic Cauchy theory of the cubic nonlinear Schrödinger equation on \({\mathbb{R}}^d\), \(d \ge 3\), Trans. Amer. Math. Soc. Ser. B 2 (2015), 1–50.
 2.
J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations, Geom. Funct. Anal. 3 (1993), no. 2, 107–156.
 3.
J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. II. The KdV equation, Geom. Funct. Anal. 3 (1993), no. 3, 209–262.
 4.
J. Bourgain, Refinements of Strichartz’s inequality and applications to 2D-NLS with critical nonlinearity, Int. Math. Res. Not., 1998, 253–283.
 5.
T. Cazenave, F. Weissler, Some remarks on the nonlinear Schrödinger equation in the critical case, Nonlinear semigroups, partial differential equations and attractors (Washington, DC, 1987), 18–29, Lecture Notes in Math., 1394.
 6.
M. Christ, J. Colliander, T. Tao, Ill-posedness for nonlinear Schrödinger and wave equations, arXiv:math/0311048 [math.AP].
 7.
K. Cheung, R. Mosincat, Stochastic nonlinear Schrödinger equations on tori, Stoch. PDE: Anal. Comp. 7 (2019), no. 2, 169–208.
 8.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Almost conservation laws and global rough solutions to a nonlinear Schrödinger equation, Math. Res. Lett. 9 (2002), no. 5-6, 659–682.
 9.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Multilinear estimates for periodic KdV equations, and applications, J. Funct. Anal. 211 (2004), no. 1, 173–218.
 10.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Sharp global well-posedness for KdV and modified KdV on \( {\mathbb{R}} \) and \( {\mathbb{T}} \), J. Amer. Math. Soc. 16 (2003), no. 3, 705–749.
 11.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Resonant decompositions and the \(I\)-method for the cubic nonlinear Schrödinger equation on \({\mathbb{R}}^2\), Discrete Contin. Dyn. Syst. 21 (2008), no. 3, 665–686.
 12.
J. Colliander, T. Oh, Almost sure well-posedness of the cubic nonlinear Schrödinger equation below \( L^2({\mathbb{T}}) \), Duke Math. J. 161 (2012), no. 3, 367–414.
 13.
G. Da Prato, A. Debussche, Strong solutions to the stochastic quantization equations, Ann. Probab. 31 (2003), no. 4, 1900–1916.
 14.
G. Da Prato, J. Zabczyk, Stochastic equations in infinite dimensions, Second edition. Encyclopedia of Mathematics and its Applications, 152. Cambridge University Press, Cambridge, 2014. xviii+493 pp.
 15.
A. de Bouard, A. Debussche, The stochastic nonlinear Schrödinger equation in \(H^1\), Stochastic Anal. Appl. 21 (2003), no. 1, 97–126.
 16.
A. de Bouard, A. Debussche, Y. Tsutsumi, White noise driven Korteweg–de Vries equation, J. Funct. Anal. 169 (1999), no. 2, 532–558.
 17.
J. Ginibre, Y. Tsutsumi, G. Velo, On the Cauchy problem for the Zakharov system, J. Funct. Anal. 151 (1997), no. 2, 384–436.
 18.
J. Ginibre, G. Velo, On a class of nonlinear Schrödinger equations. I. The Cauchy problem, general case, J. Funct. Anal. 32 (1979), no. 1, 1–32.
 19.
J. Ginibre, G. Velo, Smoothing properties and retarded estimates for some dispersive evolution equations, Comm. Math. Phys. 144 (1992), no. 1, 163–188.
 20.
L. Grafakos, Classical Fourier analysis, Third edition. Graduate Texts in Mathematics, 249. Springer, New York, 2014. xviii+638 pp.
 21.
M. Gubinelli, H. Koch, T. Oh, Renormalization of the two-dimensional stochastic nonlinear wave equations, Trans. Amer. Math. Soc. 370 (2018), no. 10, 7335–7359.
 22.
M. Gubinelli, H. Koch, T. Oh, Paracontrolled approach to the three-dimensional stochastic nonlinear wave equation with quadratic nonlinearity, arXiv:1811.07808 [math.AP].
 23.
M. Gubinelli, H. Koch, T. Oh, L. Tolomeo, Global dynamics for the two-dimensional stochastic nonlinear wave equations, arXiv:2005.10570 [math.AP].
 24.
I. Karatzas, S. Shreve, Brownian motion and stochastic calculus, Second edition. Graduate Texts in Mathematics, 113. SpringerVerlag, New York, 1991. xxiv+470 pp.
 25.
T. Kato, On nonlinear Schrödinger equations, Ann. Inst. H. Poincaré Phys. Théor. 46 (1987), no. 1, 113–129.
 26.
C. Kenig, G. Ponce, L. Vega, A bilinear estimate with applications to the KdV equation, J. Amer. Math. Soc. 9 (1996), no. 2, 573–603.
 27.
M. Keel, T. Tao, Endpoint Strichartz estimates, Amer. J. Math. 120 (1998), no. 5, 955–980.
 28.
N. Kishimoto, A remark on norm inflation for nonlinear Schrödinger equations, Commun. Pure Appl. Anal. 18 (2019), no. 3, 1375–1402.
 29.
T. Oh, Periodic stochastic Korteweg–de Vries equation with additive space–time white noise, Anal. PDE 2 (2009), no. 3, 281–304.
 30.
T. Oh, A remark on norm inflation with general initial data for the cubic nonlinear Schrödinger equations in negative Sobolev spaces, Funkcial. Ekvac. 60 (2017), 259–277.
 31.
T. Oh, M. Okamoto, On the stochastic nonlinear Schrödinger equations at critical regularities, Stoch. Partial Differ. Equ. Anal. Comput. 8 (2020), no. 4, 869–894.
 32.
T. Oh, O. Pocovnicu, Y. Wang, On the stochastic nonlinear Schrödinger equations with nonsmooth additive noise, Kyoto J. Math. 60 (2020), no. 4, 1227–1243.
 33.
M. Reed, B. Simon, Methods of modern mathematical physics. I. Functional analysis, Second edition. Academic Press, Inc., New York, 1980. xv+400 pp.
 34.
B. Simon, The \(P(\varphi )_2\) Euclidean (quantum) field theory, Princeton Series in Physics. Princeton University Press, Princeton, N.J., 1974. xx+392 pp.
 35.
R.S. Strichartz, Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations, Duke Math. J. 44 (1977), no. 3, 705–714.
 36.
T. Tao, Nonlinear dispersive equations. Local and global analysis, CBMS Regional Conference Series in Mathematics, 106. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2006. xvi+373 pp.
 37.
L. Thomann, N. Tzvetkov, Gibbs measure for the periodic derivative nonlinear Schrödinger equation, Nonlinearity 23 (2010), no. 11, 2771–2791.
 38.
Y. Tsutsumi, \(L^2\)solutions for nonlinear Schrödinger equations and nonlinear groups, Funkcial. Ekvac. 30 (1987), no. 1, 115–125.
 39.
K. Yajima, Existence of solutions for Schrödinger evolution equations, Comm. Math. Phys. 110 (1987), no. 3, 415–426.
Acknowledgements
K.C. and G.L. were supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (Grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh. K.C. and G.L. also acknowledge support from the European Research Council (grant no. 637995 “ProbDynDispEq”). T.O. was supported by the European Research Council (grant no. 637995 “ProbDynDispEq” and grant no. 864138 “SingStochDispDyn”). The authors would like to thank the anonymous referee for helpful comments.
Cite this article
Cheung, K., Li, G. & Oh, T. Almost conservation laws for stochastic nonlinear Schrödinger equations. J. Evol. Equ. (2021). https://doi.org/10.1007/s0002802000659x
Keywords
 Stochastic nonlinear Schrödinger equation
 Global well-posedness
 I-method
 Almost conservation law
Mathematics Subject Classification
 35Q55
 60H15