1 Introduction

1.1 Stochastic nonlinear Schrödinger equation

We consider the Cauchy problem for the stochastic nonlinear Schrödinger equation (SNLS) with an additive noise:

$$\begin{aligned} {\left\{ \begin{array}{ll} i \partial _tu + \Delta u = |u|^{p-1} u + \phi \xi \\ u|_{t = 0} = u_0, \end{array}\right. } \qquad (t, x) \in {\mathbb {R}}\times {\mathbb {R}}^d, \end{aligned}$$
(1.1)

where \(\xi (t,x)\) denotes a (Gaussian) space–time white noise on \({\mathbb {R}}\times {\mathbb {R}}^d\) and \(\phi \) is a bounded operator on \(L^2({\mathbb {R}}^d)\). In this paper, we restrict our attention to the defocusing case. Our main goal is to establish global well-posedness of (1.1) in the energy-subcritical case with a rough noise, namely, with a noise not belonging to the energy space \(H^1({\mathbb {R}}^d)\). Here, the energy-subcriticality refers to the following range of p: (i) \(1< p < 1 + \frac{4}{d-2}\) for \(d \ge 3\) and (ii) \(1< p < \infty \) for \(d = 1, 2\). In terms of the scaling-critical regularity \(s_\mathrm{crit}\) defined by

$$\begin{aligned} s_\mathrm{crit} = \frac{d}{2} - \frac{ 2}{p-1}, \end{aligned}$$

the energy-subcriticality is equivalent to the condition \( s_\mathrm{crit} < 1\).
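For completeness, we recall the standard scaling computation behind the definition of \(s_\mathrm{crit}\) (a formal computation for the deterministic equation, namely (1.1) with \(\phi = 0\)):

```latex
% If u solves i u_t + \Delta u = |u|^{p-1} u, then so does the rescaled function
%   u_\lambda(t, x) = \lambda^{\frac{2}{p-1}} u(\lambda^2 t, \lambda x), \quad \lambda > 0.
% Since \| f(\lambda \cdot) \|_{\dot{H}^s} = \lambda^{s - \frac{d}{2}} \| f \|_{\dot{H}^s},
% the initial data scales as
\[
  \| u_\lambda(0) \|_{\dot{H}^s}
    = \lambda^{\frac{2}{p-1} + s - \frac{d}{2}} \| u_0 \|_{\dot{H}^s}
    = \lambda^{\, s - s_{\mathrm{crit}}} \| u_0 \|_{\dot{H}^s},
\]
% so the \dot{H}^{s_{\mathrm{crit}}}-norm is invariant under this scaling,
% and the energy-subcritical range of p corresponds exactly to s_{\mathrm{crit}} < 1.
```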

We say that u is a solution to (1.1) on a given time interval \([-T, T]\) if it satisfies the following Duhamel formulation (= mild formulation):

$$\begin{aligned} u(t) = S(t) u_0 -i \int _0^t S(t-t') |u|^{p-1}u(t') dt' -i \int _0^t S(t-t') \phi \xi (dt') \end{aligned}$$
(1.2)

in \(C([-T, T]; B({\mathbb {R}}^d))\), where \(S(t) = e^{it \Delta }\) denotes the linear Schrödinger propagator and \(B({\mathbb {R}}^d)\) is a suitable Banach space of functions on \({\mathbb {R}}^d\). In this paper, we take \(B({\mathbb {R}}^d)\) to be the \(L^2\)-based Sobolev space \(H^s({\mathbb {R}}^d)\) for some suitable \(s \in {\mathbb {R}}\). We say that u is a global solution to (1.1) if (1.2) holds in \(C([-T, T]; B({\mathbb {R}}^d))\) for any \(T> 0\). We often construct a solution u belonging to \(C([-T, T]; B({\mathbb {R}}^d))\cap X([-T, T])\), where \(X([-T, T])\) denotes some auxiliary function space such as the Strichartz spaces \(L^q([-T, T]; W^{s, r}({\mathbb {R}}^d))\); see [15, 32]. For our purpose, we take this auxiliary function space \(X([-T, T])\) to be the (local-in-time version of the) Fourier restriction norm space (namely, the \(X^{s, b}\)-space defined in (2.1)).

The last term on the right-hand side of (1.2) represents the effect of the stochastic forcing and is called the stochastic convolution, which we denote by \(\Psi \):

$$\begin{aligned} \Psi (t) = - i \int _0^t S(t - t') \phi \xi (dt'). \end{aligned}$$
(1.3)

See Sect. 2.3 for the precise meaning of the definition (1.3); see (2.8) and (2.9). In the following, we assume that \(\phi \in {HS }(L^2; H^s)\) for appropriate values of \(s \ge 0\), namely, \(\phi \) is taken to be a Hilbert–Schmidt operator from \(L^2({\mathbb {R}}^d)\) to \(H^s({\mathbb {R}}^d)\). It is easy to see that \(\phi \in {HS }(L^2; H^s)\) implies \(\Psi \in C({\mathbb {R}}; H^s({\mathbb {R}}^d))\) almost surely; see [14]. Our main interest is to study (1.1) when \(\phi \in {HS }(L^2; H^s)\) for \(s < 1\) such that the stochastic convolution does not belong to the energy space \(H^1({\mathbb {R}}^d)\).

When \(\phi = 0\), Eq. (1.1) reduces to the (deterministic) defocusing nonlinear Schrödinger equation (NLS):

$$\begin{aligned} i \partial _t u + \Delta u = |u|^{p-1} u. \end{aligned}$$
(1.4)

A standard contraction argument with the Strichartz estimates (see (2.5) below) yields local well-posedness of (1.4) in \(H^s({\mathbb {R}}^d)\) when \(s \ge \max (s_\mathrm{crit}, 0)\); see [5, 18, 25, 38]. On the other hand, (1.4) is known to be ill-posed in the scaling supercritical regime: \(s < s_\mathrm{crit}\). See [6, 28, 30]. In the energy-subcritical case, global well-posedness of (1.4) in \(H^1({\mathbb {R}}^d)\) easily follows from iterating the local-in-time argument in view of the following conservation laws for (1.4):

$$\begin{aligned} \begin{aligned} \text {Mass: }&M(u(t)) = \int _{{\mathbb {R}}^d} |u(t, x)|^2 dx,\\ \text {Energy: }&E(u(t)) = \frac{1}{2} \int _{{\mathbb {R}}^d} |\nabla u(t, x)|^2 dx + \frac{1}{p+1} \int _{{\mathbb {R}}^d} |u(t, x)|^{p+1} dx, \end{aligned} \end{aligned}$$
(1.5)

providing a global-in-time a priori control on the \(H^1\)-norm of a solution to (1.4).
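Indeed, in the defocusing case both terms in \(E(u)\) are non-negative, so the mass and the energy together control the full \(H^1\)-norm (a one-line computation):

```latex
\[
  \| u(t) \|_{H^1}^2
    = \| u(t) \|_{L^2}^2 + \| \nabla u(t) \|_{L^2}^2
    \le M(u(t)) + 2 E(u(t))
    = M(u_0) + 2 E(u_0),
\]
% where the last equality uses the conservation of M(u) and E(u) in (1.5)
% for (smooth) solutions to (1.4).
```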

There are analogues of these well-posedness results in the context of SNLS (1.1). de Bouard and Debussche [15] studied (1.1) in the energy-subcritical setting, assuming that \(\phi \in {HS }(L^2;H^1)\). By using the Strichartz estimates, they showed that the stochastic convolution \(\Psi \) almost surely belongs to suitable Strichartz spaces, which allowed them to prove local well-posedness of (1.1) in \(H^1({\mathbb {R}}^d)\). When \(s \ge \max (s_\mathrm{crit}, 0)\), a slight modification of the argument in [15] and the improved space–time regularity of the stochastic convolution (see Lemma 2.2 below) yield local well-posedness of (1.1) in \(H^s({\mathbb {R}}^d)\), provided that \(\phi \in {HS }(L^2; H^s)\). In the energy-subcritical case, one can adapt the globalization argument for the deterministic NLS (1.4), based on the conservation laws (1.5), to the stochastic setting with a sufficiently regular noise. More precisely, assuming \(\phi \in {HS }(L^2; H^1)\), de Bouard and Debussche [15] proved global well-posedness of (1.1) in \(H^1({\mathbb {R}}^d)\) by applying Ito’s lemma to the mass M(u) and the energy E(u) in (1.5) and establishing an a priori \(H^1\)-bound on solutions to (1.1). In this paper, we also consider the energy-subcritical case but we treat a rougher noise: \(\phi \in {HS }(L^2; H^s)\) for \(s < 1\).

In the deterministic setting, Colliander et al. [8] introduced the so-called I-method (also known as the method of almost conservation laws) and proved global well-posedness of the energy-subcritical defocusing cubic NLS ((1.4) with \(p = 3\)) on \({\mathbb {R}}^d\), \(d = 2, 3\), below the energy space. Since then, the I-method has been applied to a wide class of dispersive models in establishing global well-posedness below the energy spaces (or more generally below regularities associated with conservation laws), where there is no a priori bound on relevant norms (for iterating a local-in-time argument) directly given by a conservation law. Our strategy for proving global well-posedness of SNLS (1.1) when \(\phi \in {HS }(L^2; H^s)\), \(s < 1\), is to implement the I-method in the stochastic PDE setting. This will provide a general framework for establishing global well-posedness of stochastic dispersive equations with additive noises below energy spaces.

1.2 Main result

For the sake of concreteness, we consider SNLS (1.1) in the three-dimensional cubic case (\(d = 3\) and \(p = 3\)):

$$\begin{aligned} {\left\{ \begin{array}{ll} i \partial _tu + \Delta u =|u|^2 u +\phi \xi \\ u|_{t=0}=u_0\in H^s({\mathbb {R}}^3), \end{array}\right. } \qquad (t, x) \in {\mathbb {R}}\times {\mathbb {R}}^3. \end{aligned}$$
(1.6)

We point out, however, that our implementation of the I-method in the stochastic PDE setting is sufficiently general and can be easily adapted to other dispersive models with rough additive stochastic forcing. We now state our main result.

Theorem 1.1

Let \( d=3 \). Suppose that \(\phi \in {HS }(L^2; H^s)\) for some \(s>\frac{5}{6}\). Then, the defocusing stochastic cubic NLS (1.6) on \({\mathbb {R}}^3\) is globally well-posed in \(H^s({\mathbb {R}}^3)\).

Our main goal is to present an argument which combines the I-method in [8] with the Ito calculus approach in [15]. Note that the regularity range \(s > \frac{5}{6}\) in Theorem 1.1 agrees with the regularity range in the deterministic case [8]. We expect that this regularity range may be improved by employing more sophisticated tools such as the resonant decomposition [11]; see Remark 1.5. In view of the global well-posedness result in \(H^1({\mathbb {R}}^3)\) by de Bouard and Debussche [15], we only consider \(\frac{5}{6}< s < 1\) in the following.

Let us first go over the main idea of the I-method argument in [8] applied to the deterministic cubic NLS on \({\mathbb {R}}^3\), i.e., (1.6) with \(\phi = 0\). Fix \(u_0 \in H^s({\mathbb {R}}^3)\) for some \(\frac{5}{6} < s \le 1\). Then, the standard Strichartz theory yields local well-posedness of (1.4) with \(u|_{ t= 0} = u_0\) in the subcritical sense, namely, the time of local existence depends only on the \(H^s\)-norm of the initial data \(u_0\). Hence, once we obtain an a priori control of the \(H^s\)-norm of the solution, we can iterate the local-in-time argument and prove global existence. When \(s = 1\), the conservation of the mass and energy in (1.5) provides a global-in-time a priori control of the \(H^1\)-norm of the solution. When \(\frac{5}{6}< s < 1\), the conservation of the energy E(u) is no longer available (since \(E(u) = \infty \) in general), while the mass M(u) is still finite and conserved. Therefore, the main goal is to control the growth of the homogeneous Sobolev \(\dot{H}^s\)-norm of the solution.

Unlike the \(s = 1\) case, we do not aim to obtain a global-in-time boundedness of the \(\dot{H}^s\)-norm of the solution. Instead, the goal is to show that, given any large target time \(T\gg 1\), the \(\dot{H}^s\)-norm of the solution remains finite on the time interval [0, T], with a bound depending on T. The main idea of the I-method is to introduce a smoothing operator \(I = I_N\), known as the I-operator, mapping \(H^s({\mathbb {R}}^3)\) into \(H^1({\mathbb {R}}^3)\). Here, the I-operator depends on a parameter \(N = N(T) \gg 1\) (to be chosen later) such that \(I_N \) acts essentially as the identity operator on low frequencies \(\{|\xi |\lesssim N\}\) and as a fractional integration operator of order \(1 - s\) on high frequencies \(\{|\xi | \gg N\}\); see Sect. 3 for the precise definition. Thanks to the smoothing of the I-operator, the modified energy:

$$\begin{aligned} E(I_N u) =\frac{1}{2}\int _{{\mathbb {R}}^3}|\nabla I_N u|^2 dx+\frac{1}{4}\int _{{\mathbb {R}}^3}|I_N u|^4dx \end{aligned}$$

is finite for \(u \in H^s({\mathbb {R}}^3)\). Moreover, the modified energy \(E(I_N u)\) controls \(\Vert u\Vert _{\dot{H}^s}^2\). See (3.3). Hence, the main task is reduced to controlling the growth of the modified energy \(E(I_N u)\).

While the energy E(u) is conserved for (smooth) solutions to NLS (1.4), the modified energy \(E(I_N u)\) is no longer conserved since \(I_N u\) does not satisfy the original equation. Instead, \(I_N u\) satisfies the following I-NLS:

$$\begin{aligned} \begin{aligned} i\partial _tI_N u + \Delta I_N u&= I_N (|u|^2u) \\&= |I_N u|^2 I_N u + \big \{I_N (|u|^2u) - |I_N u|^2I_N u\big \}\\&=: {\mathcal {N}}(I_N u) + [I_N, {\mathcal {N}}](u), \end{aligned} \end{aligned}$$
(1.7)

where \({\mathcal {N}}(u) = |u|^2 u\) denotes the cubic nonlinearity. The commutator term

$$\begin{aligned}{}[I_N, {\mathcal {N}}](u) = I_N (|u|^2u) - |I_N u|^2I_N u \end{aligned}$$
(1.8)

is the source of non-conservation of the modified energy \(E(I_N u)\). A direct computation shows

$$\begin{aligned} \partial _tE(I_N u) = - {{\,\mathrm{Re}\,}}\int _{{\mathbb {R}}^3} \overline{\partial _tI_N u }\, [I_N, {\mathcal {N}}](u) dx. \end{aligned}$$

See (5.1). Thanks to the commutator structure, it is possible to obtain a good estimate (with a decay in the large parameter N) for \(\partial _tE(I_N u)\) on each local time interval (see Proposition 4.1 in [8]). Then, by using a scaling argument (with a parameter \(\lambda = \lambda (T) \gg 1 \), depending on the target time T), we (i) first reduce the situation to the small data setting, (ii) then iterate the local-in-time argument with a good bound on \(\partial _tE(I_N u^\lambda )\) for the scaled solution \(u^\lambda \), and (iii) choose \(N = N(T) \gg 1\) sufficiently large such that the scaled target time \(\lambda ^2 T\) is (at most) the doubling time for the modified energy \(E(I_N u^\lambda )\). This yields the regularity restriction \(s > \frac{5}{6}\) in [8].
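For the reader's convenience, here is a formal derivation of the energy identity above (for smooth solutions), writing \(v = I_N u\) and using (1.7):

```latex
% With E(v) = \tfrac{1}{2} \int |\nabla v|^2 \, dx + \tfrac{1}{4} \int |v|^4 \, dx,
% differentiating in time and integrating by parts gives
\[
  \partial_t E(v)
    = \mathrm{Re} \int_{\mathbb{R}^3} \overline{\partial_t v}\,
        \big( - \Delta v + |v|^2 v \big)\, dx.
\]
% By (1.7), we have -\Delta v + |v|^2 v = i \partial_t v - [I_N, \mathcal{N}](u),
% while
%   \mathrm{Re} \int \overline{\partial_t v}\, (i \partial_t v)\, dx
%     = \mathrm{Re}\big( i \| \partial_t v \|_{L^2}^2 \big) = 0,
% which leaves only the commutator contribution:
\[
  \partial_t E(I_N u)
    = - \mathrm{Re} \int_{\mathbb{R}^3} \overline{\partial_t I_N u}\,
        [I_N, \mathcal{N}](u)\, dx.
\]
```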

Let us turn to the case of the stochastic NLS (1.6). In proceeding with the I-method, we need to estimate the growth of the modified energy \(E(I_Nu)\). In this stochastic setting, we have two sources for non-conservation of \(E(I_N u)\). The first one is the commutator term \([I_N, {\mathcal {N}}](u)\) in (1.8) as in the deterministic case described above. This term can be handled almost in the same manner as in [8], but some care must be taken due to a weaker regularity in time (\(b < \frac{1}{2}\)). See Proposition 4.1. The second source for non-conservation of \(E(I_N u)\) is the stochastic forcing. In particular, in estimating the growth of the modified energy \(E(I_N u)\), we need to apply Ito’s lemma to \(E(I_N u)\), which introduces several correction terms.

In the deterministic case [8], one iteratively applies the local-in-time argument and estimates the energy increment on each local time interval. A naive adaptation of this argument to the stochastic setting would lead to iterative applications of Ito’s lemma to estimate the growth of the modified energy \(E(I_N u)\). In controlling an expression of the form

$$\begin{aligned} {\mathbb {E}}\bigg [ \sup _{ 0 \le t \le t_0} E(I_N u)\bigg ], \end{aligned}$$

we need to apply the Burkholder–Davis–Gundy inequality, which introduces a multiplicative constant \(C>1\). See Lemma 5.1. Namely, if we were to apply Ito’s lemma iteratively on each time interval of local existence, then this would lead to an exponential growth of the constant in front of the modified energy. This causes the iteration argument to break down.

We instead apply Ito’s lemma only once on the global time interval \([0, \lambda ^2T]\). At the same time, we estimate the contribution from the commutator term iteratively on each local time interval. Note that this latter task requires a small data assumption, which we handle by introducing a suitable stopping time and iteratively verifying such a small data assumption. See Sect. 6.

As in the deterministic setting, we employ a scaling argument to reduce the problem to the small data regime. In the stochastic setting, we need to proceed with care in applying a scaling to the noise \(\phi \xi \) since we need to apply Ito’s lemma after scaling. Namely, we need to express the scaled noise as \(\phi ^\lambda \xi ^\lambda \), where \( \xi ^\lambda \) is another space–time white noise (defined by the white noise scaling; see (3.18)) such that Ito calculus can be applied. This forces us to study the scaled Hilbert–Schmidt operator \(\phi ^\lambda \). In the application of Ito’s lemma, there are correction terms due to \(I_N \phi ^\lambda \) besides the commutator term \([I_N, {\mathcal {N}}](u^\lambda )\). In order to carry out an iterative procedure, we need to make sure that the contribution from the correction terms involving \(I_N \phi ^\lambda \) is negligible as compared to that from the commutator term. See Sects. 3.3 and 6. As a result, the regularity restriction \(s > \frac{5}{6}\) comes from the commutator term as in the deterministic case.

We conclude this introduction by several remarks.

Remark 1.2

In this paper, we implement the I-method in the stochastic PDE setting. There is a recent work [23] by Gubinelli, Koch, Tolomeo, and the third author, establishing global well-posedness of the (renormalized) defocusing stochastic cubic nonlinear wave equation on the two-dimensional torus \({\mathbb {T}}^2\), forced by space–time white noise. The I-method was also employed in [23]. We point out that our argument in this paper is a genuine extension of the I-method to the stochastic setting, which can be applied to a wide class of stochastic dispersive equations. On the other hand, in [23], the I-method was applied to the residual term \(v = u - \Psi _{\text {wave}}\) in the Da Prato–Debussche trick [13], where \(\Psi _{\text {wave}}\) denotes the stochastic convolution in the wave setting. Furthermore, the I-method argument in [23] is pathwise, namely, it is entirely deterministic once the pathwise regularity of \(\Psi _{\text {wave}}\) (and its Wick powers) from [21] is taken as input.

Remark 1.3

As in the usual application of the I-method in the deterministic setting, our implementation of the I-method in the stochastic setting yields a polynomial-in-time growth bound of the \(H^s\)-norm of a solution. See Remark 6.2. We point out that the I-method approach to the singular stochastic nonlinear wave equation on \({\mathbb {T}}^2\) with the defocusing cubic nonlinearity in [23] yields a much worse double exponential growth bound.

Remark 1.4

In a recent paper [31], the third author and Okamoto studied SNLS (1.1) with an additive noise in the mass-critical case (\(p = 1+ \frac{4}{d}\)) and the energy-critical case (\(p = 1+ \frac{4}{d-2}\), \(d \ge 3\)). By adapting the recent deterministic mass-critical and energy-critical global theory, they proved global well-posedness of (1.1) in the critical spaces. In particular, when \(d = 2\) and \(p = 3\), this yields global well-posedness of the two-dimensional defocusing stochastic cubic NLS in \(L^2({\mathbb {R}}^2)\). This is why we only consider the three-dimensional case in Theorem 1.1: our I-method argument would yield global well-posedness only for \(s > \frac{4}{7}\) in the two-dimensional cubic case (just as in the deterministic case [8]), which is subsumed by the aforementioned global well-posedness result in [31].

Remark 1.5

In an application of the I-method, it is possible to introduce a correction term (away from a nearly resonant part) and improve the regularity range. See [11]. It would be of interest to implement such an argument in the stochastic PDE setting, since a computation of a correction term would involve Ito’s lemma.

Remark 1.6

We mentioned that our implementation of the I-method in the stochastic PDE setting is sufficiently general and is applicable to other dispersive equations forced by additive noise. This is conditional on the assumption that the commutator term can be treated with a weaker temporal regularity \(b < \frac{1}{2}\). In the case of SNLS, this can be achieved by a simple interpolation argument, at a slight loss of spatial regularity. See Sect. 4. See also [7] for an analogous argument in the periodic case. In this regard, it is of interest to study the stochastic KdV equation in negative Sobolev spaces since crucial estimates for KdV require the temporal regularity to be \(b = \frac{1}{2}\). See [3, 10, 26].

This paper is organized as follows. In Sect. 2, we go over the preliminary materials from deterministic and stochastic analysis. We then reduce a proof of Theorem 1.1 to controlling the homogeneous \(\dot{H}^s\)-norm of a solution (Remark 2.4). In Sect. 3, we introduce the I-operator and go over local well-posedness of I-SNLS (3.4). Then, we discuss the scaling properties of I-SNLS in Sect. 3.3. In Sect. 4, we briefly go over the nonlinear estimates, indicating required modifications from [8]. In Sect. 5, we apply Ito calculus to bound the modified energy in terms of a term involving the commutator \([I_N, {\mathcal {N}}]\). Lastly, we put all the ingredients together and present a proof of Theorem 1.1 in Sect. 6.

2 Preliminaries

In this section, we first introduce notations and function spaces along with the relevant linear estimates. We also go over preliminary lemmas from stochastic analysis. We then discuss a reduction of the proof of Theorem 1.1; see Remark 2.4.

2.1 Notations

For simplicity, we drop \( 2\pi \) in dealing with the Fourier transforms. We first recall the Fourier restriction norm spaces \( X^{s,b}({\mathbb {R}}\times {\mathbb {R}}^d) \) introduced by Bourgain [2]. The \(X^{s, b}\)-space is defined by the norm:

$$\begin{aligned} \Vert u\Vert _{X^{s,b}}=\Vert \langle \xi \rangle ^{s}\langle \tau +|\xi |^2 \rangle ^b \widehat{u}(\tau ,\xi )\Vert _{L^2_{\tau }L^2_{\xi }({\mathbb {R}}\times {\mathbb {R}}^d)}, \end{aligned}$$
(2.1)

where \( \langle \,\cdot \, \rangle = (1 +|\cdot |^2)^\frac{1}{2} \). When \(b > \frac{1}{2}\), we have the following embedding:

$$\begin{aligned} X^{s, b}({\mathbb {R}}\times {\mathbb {R}}^d ) \subset C({\mathbb {R}}; H^s({\mathbb {R}}^d)). \end{aligned}$$
(2.2)
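The embedding (2.2) follows from a standard Cauchy–Schwarz argument in the \(\tau\)-variable, which we sketch:

```latex
% Writing u(t) via the inverse temporal Fourier transform and applying
% Cauchy--Schwarz in \tau with the weight \langle \tau + |\xi|^2 \rangle^{2b},
\[
  \langle \xi \rangle^s \, |\widehat{u(t)}(\xi)|
    \le \int_{\mathbb{R}} \langle \xi \rangle^s |\widehat{u}(\tau, \xi)| \, d\tau
    \le \Big( \int_{\mathbb{R}} \langle \tau + |\xi|^2 \rangle^{-2b} d\tau \Big)^{\frac12}
        \Big( \int_{\mathbb{R}} \langle \tau + |\xi|^2 \rangle^{2b}
              \langle \xi \rangle^{2s} |\widehat{u}(\tau, \xi)|^2 d\tau \Big)^{\frac12}.
\]
% The first factor is finite, uniformly in \xi, precisely when 2b > 1.
% Taking the L^2_\xi-norm gives \sup_t \| u(t) \|_{H^s} \lesssim_b \| u \|_{X^{s,b}},
% and continuity in t follows by dominated convergence.
```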

Given \(\delta > 0\), we define the local-in-time version \( X^{s,b}_{\delta } \) on \( [0,\delta ] \times {\mathbb {R}}^d \) by

$$\begin{aligned} \Vert u\Vert _{X^{s,b}_{\delta }}:=\inf \big \{ \Vert v\Vert _{X^{s,b}({\mathbb {R}}\times {\mathbb {R}}^d)}: v|_{[0,\delta ]}=u \big \}. \end{aligned}$$
(2.3)

Given a time interval \(J \subset {\mathbb {R}}\), we also define the local-in-time version \( X^{s,b}(J)\) in an analogous manner.

When we work with space–time function spaces, we use short-hand notations such as \(C_T H^s_x = C([0, T]; H^s({\mathbb {R}}^d))\).

We write \( A \lesssim B \) to denote an estimate of the form \( A \le CB \). Similarly, we write \( A \sim B \) to denote \( A \lesssim B \) and \( B \lesssim A \) and use \( A \ll B \) when we have \(A \le c B\) for small \(c > 0\). We may use subscripts to denote dependence on external parameters; for example, \(A\lesssim _{p, q} B\) means \(A\le C(p, q) B\), where the constant \(C(p, q)\) depends on the parameters p and q. We also use \( a+ \) (and \( a- \)) to mean \( a + \varepsilon \) (and \( a-\varepsilon \), respectively) for arbitrarily small \( \varepsilon >0 \). As is common in probability theory, we use \(A\wedge B\) to denote \(\min (A, B)\).

Given dyadic \(M \ge 1\), we use \({\mathbf {P}}_M\) to denote the Littlewood–Paley projector onto the frequencies \(\{|\xi |\sim M\}\) such that

$$\begin{aligned} f = \sum _{\begin{array}{c} M\ge 1\\ \text {dyadic} \end{array}} {\mathbf {P}}_M f. \end{aligned}$$
(2.4)

In view of the time reversibility of the problem, we only consider positive times in the following.

2.2 Linear estimates

We first recall the Strichartz estimate. We say that a pair of indices \((q, r)\) is Strichartz admissible if \(2\le q, r \le \infty \), \((q, r, d) \ne (2, \infty , 2)\), and

$$\begin{aligned} \frac{2}{q} + \frac{d}{r} = \frac{d}{2}. \end{aligned}$$

Then, given any admissible pair \((q, r)\), the following Strichartz estimates are known to hold:

$$\begin{aligned} \Vert S(t)f\Vert _{L_t^qW_x^{s, r}({\mathbb {R}}\times {\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{H^s}. \end{aligned}$$
(2.5)

See [19, 27, 35, 39].

Next, we recall the standard linear estimates for the \(X^{s, b}\)-spaces. See, for example, [17, 36] for the proofs of (i) and (ii).

Lemma 2.1

  1. (i)

    (homogeneous linear estimate). Given \(s, b \in {\mathbb {R}}\), we have

    $$\begin{aligned} \Vert S(t)f\Vert _{X^{s,b}_{T}}\lesssim \Vert f\Vert _{H^s} \end{aligned}$$

    for any \(0 < T \le 1\). Moreover, we have \(S(t) f \in C([0, T]; H^s({\mathbb {R}}^d))\) for \(f \in H^s({\mathbb {R}}^d)\).

  2. (ii)

    (nonhomogeneous linear estimate). Given \(s \in {\mathbb {R}}\), \(b > \frac{1}{2}\) sufficiently close to \(\frac{1}{2}\), and small \(\theta > 0\), we have

    $$\begin{aligned} \bigg \Vert \int ^{t}_{0}S(t-t')F(t')dt'\bigg \Vert _{X^{s, b }_T}\lesssim T^{\theta }\Vert F\Vert _{X^{s, b - 1 + \theta }_T} \end{aligned}$$

    for any \(0 < T \le 1\).

  3. (iii)

    (transference principle). Let \((q, r)\) be Strichartz admissible. Then, for any \(b > \frac{1}{2} \), we have

    $$\begin{aligned} \Vert u \Vert _{L^q_tL^r_x} \lesssim \Vert u \Vert _{X^{0, b}}.\end{aligned}$$
  4. (iv)

    Let \( d=3 \). Then, given any \(2\le p <\frac{10}{3}\), there exists small \(\varepsilon > 0\) such that

    $$\begin{aligned} \Vert u\Vert _{L_{t,x}^{p}({\mathbb {R}}\times {\mathbb {R}}^3)} \lesssim \Vert u\Vert _{X^{0,\frac{1}{2}-\varepsilon }}. \end{aligned}$$
    (2.6)

Proof

As for (iii), see, for example, Lemma 2.9 in [36]. In the following, we only discuss Part (iv). Noting that \((\frac{10}{3}, \frac{10}{3})\) is Strichartz admissible when \( d= 3\), it follows from the transference principle and the Strichartz estimate (2.5) that

$$\begin{aligned} \Vert u\Vert _{L_{t, x}^{\frac{10}{3}}}&\lesssim \Vert u\Vert _{X^{0,b}} \end{aligned}$$
(2.7)

for \( b > \frac{1}{2} \). Interpolating this with the trivial bound: \(\Vert u\Vert _{L_{t,x}^{2}} = \Vert u\Vert _{X^{0,0}}\), we obtain the desired estimate (2.6). \(\square \)
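The interpolation step in the proof above can be made explicit (a standard computation). For \(2 \le p < \frac{10}{3}\), choose \(\theta \in (0, 1]\) by

```latex
\[
  \frac{1}{p} = \frac{\theta}{2} + \frac{3(1 - \theta)}{10},
  \qquad \text{i.e.,} \qquad \theta = \frac{10 - 3p}{2p} \in (0, 1].
\]
% Interpolating \| u \|_{L^2_{t,x}} = \| u \|_{X^{0,0}} with (2.7) gives
\[
  \| u \|_{L^p_{t,x}} \lesssim \| u \|_{X^{0, (1-\theta) b}}.
\]
% Since \theta > 0 is fixed, taking b > 1/2 sufficiently close to 1/2 ensures
% (1 - \theta) b < 1/2, so that (2.6) holds with
% \varepsilon = \tfrac12 - (1 - \theta) b > 0.
```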

2.3 Tools from stochastic analysis

Lastly, we go over basic tools from stochastic analysis and then provide some reduction for the proof of Theorem 1.1.

We first recall the regularity properties of the stochastic convolution \(\Psi \) defined in (1.3). Given two separable Hilbert spaces H and K, we denote by \({HS }(H;K)\) the space of Hilbert–Schmidt operators \(\phi \) from H to K, endowed with the norm:

$$\begin{aligned} \Vert \phi \Vert _{{HS }(H;K)} = \bigg ( \sum _{n \in {\mathbb {N}}} \Vert \phi f_n \Vert _K^2 \bigg )^{\frac{1}{2}}, \end{aligned}$$

where \(\{ f_n \}_{n \in {\mathbb {N}}}\) is an orthonormal basis of H. Recall that the Hilbert–Schmidt norm of \(\phi \) is independent of the choice of an orthonormal basis of H.
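This independence is seen by expanding against an orthonormal basis \(\{g_m\}_{m \in {\mathbb {N}}}\) of K (a standard computation via Parseval's identity):

```latex
\[
  \sum_{n} \| \phi f_n \|_K^2
    = \sum_{n} \sum_{m} |\langle \phi f_n, g_m \rangle_K|^2
    = \sum_{m} \sum_{n} |\langle f_n, \phi^* g_m \rangle_H|^2
    = \sum_{m} \| \phi^* g_m \|_H^2.
\]
% The right-hand side does not involve \{f_n\}, and by symmetry
% (applying the same computation to \phi^*) neither sum depends on
% the choice of orthonormal basis.
```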

Next, recall the definition of a cylindrical Wiener process W on \( L^2({\mathbb {R}}^d) \). Let \((\Omega , {\mathcal {F}}, P)\) be a probability space endowed with a filtration \(\{ {\mathcal {F}}_t \}_{t \ge 0}\). Fix an orthonormal basis \(\{ e_n \}_{n \in {\mathbb {N}}}\) of \(L^2({\mathbb {R}}^d)\). We define an \(L^2 ({\mathbb {R}}^d)\)-cylindrical Wiener process W by

$$\begin{aligned} W(t) = \sum _{n \in {\mathbb {N}}} \beta _n (t) e_n, \end{aligned}$$
(2.8)

where \(\{ \beta _n \}_{n \in {\mathbb {N}}}\) is a family of mutually independent complex-valued Brownian motions associated with the filtration \(\{ {\mathcal {F}}_t \}_{t \ge 0}\). Note that a space–time white noise \(\xi \) is given by a distributional derivative (in time) of W. Hence, we can express the stochastic convolution \(\Psi \) in (1.3) as

$$\begin{aligned} \begin{aligned} \Psi (t)&= - i \int _0^t S(t - t') \phi dW (t')\\&= -i \sum _{n \in {\mathbb {N}}} \int _0^t S(t-t') \phi e_n \, d \beta _n (t'). \end{aligned} \end{aligned}$$
(2.9)

The next lemma summarizes the regularity properties of the stochastic convolution.

Lemma 2.2

Let \( d \ge 1 \), \( T > 0 \), and \( s \in {\mathbb {R}}\). Suppose that \( \phi \in {HS }(L^2;H^s) \).

  1. (i)

    We have \( \Psi \in C([0,T];H^s({\mathbb {R}}^d)) \) almost surely.

  2. (ii)

    Given any \(1 \le q < \infty \) and finite \(r \ge 2\) such that \( r \le \frac{2d}{d-2}\) when \(d \ge 3\), we have \(\Psi \in L^q([0, T]; W^{s, r}({\mathbb {R}}^d))\) almost surely.

  3. (iii)

    Given \( b < \frac{1}{2}\), we have \(\Psi \in X^{s,b}([0,T])\) almost surely. Moreover, there exists \(\theta > 0\) such that

    $$\begin{aligned} {\mathbb {E}}\Big [ \Vert \Psi \Vert ^{p}_{X^{s,b}([0,T])} \Big ] \lesssim p^\frac{p}{2} \langle T \rangle ^{\theta p} \Vert \phi \Vert ^{p}_{{HS }(L^2;H^s)} \end{aligned}$$
    (2.10)

    for any finite \(p \ge 1\).

Regarding the proof of Lemma 2.2, see [14] for (i) and [15, 32] for (ii). As for (iii), see [16, Proposition 2.1], [29, Proposition 4.1], and [7, Lemma 3.3] for the proofs of the \(X^{s, b}\)-regularity of the stochastic convolution. The works [7, 16, 29] treat a different equation (the KdV equation) and/or a different setting (on the circle), but the proofs can be easily adapted to our context. For example, one can follow the argument in the proof of Proposition 2.1 in [16] to obtain (2.10) for \(p = 2\). Then, by noting from (2.9) that the stochastic convolution \(\Psi \) is nothing but (a limit of) a linear combination of Wiener integrals, the general case follows from the \(p =2 \) case and the Wiener chaos estimate; see, for example, [22, Lemma 2.5]. See also [34, Theorem I.22] and [37, Proposition 2.4].

Once we have Lemma 2.2, we can use the Strichartz estimates (2.5) (without the \(X^{s, b}\)-spaces) to prove local well-posedness of SNLS (1.6) in \(H^s({\mathbb {R}}^3)\) for \(s \ge s_\mathrm{crit} = \frac{1}{2}\), provided that \(\phi \in HS(L^2; H^s)\). See [15, 31]. In particular, for the subcritical range \(s > \frac{1}{2}\), the random time \(\delta = \delta (\omega ) \) of local existence, starting from \(t = t_0\), satisfies

$$\begin{aligned} \delta \gtrsim \Big ( \Vert u(t_0)\Vert _{H^s} + C_{t_0}(\Psi ) \Big )^{-\theta } \end{aligned}$$

for some \(\theta > 0\), where \(C_{t_0}(\Psi )>0\) denotes certain Strichartz norms of the stochastic convolution \(\Psi \), restricted to a time interval \([t_0, t_0 + 1]\). Given \(T > 0\), it follows from Lemma 2.2 that \(C_{t_0}(\Psi )\) remains finite almost surely for any \(t_0 \in [0, T]\). Therefore, Theorem 1.1 follows once we show that \(\sup _{t \in [0, T]} \Vert u(t) \Vert _{H^s}\) remains finite almost surely for any \(T>0\) (with a bound depending on \(T>0\)).

Lastly, we recall the a priori mass control from [15], whose proof follows from Ito’s lemma applied to the mass M(u) in (1.5) and the Burkholder–Davis–Gundy inequality (see [14, Theorem 4.36]).

Lemma 2.3

Assume \( \phi \in {HS (L^2;L^2)}\) and \( u_0\in L^2({\mathbb {R}}^3) \). Let u be the solution to SNLS (1.6) with \(u|_{t = 0}=u_0\) and \(T^{*} = T^{*}_{\omega } (u_0)\) be the forward maximal time of existence. Then, given \(T>0\), there exists \(C_1 = C_1 (M(u_0), T, \Vert \phi \Vert _{{HS }(L^2; L^2)})>0\) such that for any stopping time \(\tau \) with \(0<\tau < \min (T^{*}, T)\) almost surely, we have

$$\begin{aligned} {\mathbb {E}}\bigg [ \sup _{0\le t \le \tau } M(u(t)) \bigg ] \le C_1. \end{aligned}$$
(2.11)

Remark 2.4

In view of Lemma 2.3, given finite \(T > 0\), the \(L^2\)-norm of the solution remains bounded almost surely on \([0, T^*_\omega \wedge T]\), where \(T^*_\omega \) is the forward maximal time of existence. Therefore, it follows from the discussion above that, in order to prove Theorem 1.1, it suffices to show that the homogeneous Sobolev norm \( \Vert u(t) \Vert _{\dot{H}^s}\) remains finite almost surely on each bounded time interval [0, T]. In the following, our analysis involves only homogeneous Sobolev spaces.

3 I-operator, I-SNLS, and their scaling properties

3.1 I-operator

Bourgain [4] introduced the so-called high-low method in establishing global well-posedness of the defocusing cubic NLS on \({\mathbb {R}}^2\) below the energy space. The high-low method is based on truncating the dynamics by a sharp frequency cutoff and separately studying the low-frequency and high-frequency dynamics. Colliander et al. [8] proposed to use a smooth positive frequency multiplier instead.

Let \(0< s < 1\). Given \(N \ge 1\), we define a smooth, radially symmetric, non-increasing (in \(|\xi |\)) multiplier \(m_N\), satisfying

$$\begin{aligned} m_N(\xi )= \left\{ \begin{array}{ll} 1, &{} \text {for } |\xi |\le N, \\ \big (\frac{N}{|\xi |}\big )^{1-s}, &{} \text {for } |\xi |\ge 2N. \end{array}\right. \end{aligned}$$
(3.1)

Since \(m_N\) is radial, with a slight abuse of notation, we may use the notation \(m_N(|\xi |)\) by viewing \(m_N\) as a function on \([0, \infty )\).

We then define the I-operator \(I = I_N\) to be the Fourier multiplier operator with the multiplier \(m_N\):

$$\begin{aligned} \widehat{I_Nf}(\xi )=m_N(\xi )\widehat{f}(\xi ). \end{aligned}$$
(3.2)

As mentioned in the introduction, \(I_N\) acts as the identity operator on low frequencies \(\{ |\xi | \le N\}\), while it acts as a fractional integration operator of order \(1-s\) on high frequencies \(\{ |\xi | \ge 2N\}\). As a result, we have the following bound:

$$\begin{aligned} \Vert f\Vert _{\dot{H}^s}\lesssim \Vert f\Vert _{L^2} + \Vert I_N f\Vert _{\dot{H}^1} \qquad \text {and} \qquad \Vert I_N f\Vert _{\dot{H}^1}\lesssim N^{1-s}\Vert f\Vert _{\dot{H}^s}. \end{aligned}$$
(3.3)
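The bounds in (3.3) follow directly from Plancherel's theorem and the definition (3.1) of \(m_N\); we sketch the second one:

```latex
% On |\xi| \le 2N:  m_N(\xi) \le 1, so
%   |\xi|^2 m_N(\xi)^2 \le |\xi|^{2s} |\xi|^{2 - 2s} \le (2N)^{2 - 2s} |\xi|^{2s}.
% On |\xi| \ge 2N:  m_N(\xi) = (N/|\xi|)^{1-s}, so
%   |\xi|^2 m_N(\xi)^2 = N^{2 - 2s} |\xi|^{2s}.
% Hence, by Plancherel's theorem,
\[
  \| I_N f \|_{\dot{H}^1}^2
    = \int_{\mathbb{R}^3} |\xi|^2 \, m_N(\xi)^2 \, |\widehat{f}(\xi)|^2 \, d\xi
    \lesssim N^{2 - 2s} \| f \|_{\dot{H}^s}^2.
\]
% The first bound in (3.3) follows similarly, estimating |\xi|^{2s} by
% |\xi|^2 m_N(\xi)^2 for |\xi| \ge 1 (using N \ge 1 and m_N \gtrsim 1 on
% |\xi| \le 2N) and by 1 for |\xi| \le 1, where the L^2-norm is used.
```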

3.2 I-SNLS

By applying the I-operator to SNLS (1.6), we obtain the following I-SNLS:

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _tI_Nu + \Delta I_Nu=I_N(|u|^2u)+I_N\phi \xi \\ I_Nu|_{t=0}=I_Nu_0\in H^1({\mathbb {R}}^3). \end{array}\right. } \end{aligned}$$
(3.4)

In this subsection, we study local well-posedness of the Cauchy problem (3.4). A similar local well-posedness result for the (deterministic) I-NLS (namely, (3.4) with \(\phi = 0\)) was established in [8, Proposition 4.2]. In order to capture the temporal regularity of the stochastic convolution (Lemma 2.2), we need to work with the \(X^{s, b}\)-space with \(b < \frac{1}{2}\) and hence need to establish a trilinear estimate in this setting. See Lemma 3.2. Unlike [8], the following proposition avoids the use of the \( L^2\)-norm, which is supercritical with respect to scaling.

Proposition 3.1

Let \(\frac{1}{2}<s<1\), \( \phi \in {HS }(L^2;{\dot{H}}^s) \), and \(u_0 \in \dot{H}^s({\mathbb {R}}^3)\). Then, there exist an almost surely positive stopping time

$$\begin{aligned} \delta =\delta _\omega \big (\Vert I_N u_0\Vert _{\dot{H}^1}, \Vert I_N \phi \Vert _{{HS }(L^2; \dot{H}^1)}\big ) \end{aligned}$$

and a unique local-in-time solution \( I_N u\in C( [0,\delta ];{\dot{H}}^1({\mathbb {R}}^3) ) \) to I-SNLS (3.4). Furthermore, if \( T^*=T^*_{\omega } \) denotes the forward maximal time of existence, the following blowup alternative holds:

$$\begin{aligned} T^*=\infty \qquad \text {or} \qquad \lim _{T\nearrow T^*} \Vert I_N u\Vert _{L^\infty _{T}{\dot{H}}^1_x}=\infty . \end{aligned}$$
(3.5)

Proposition 3.1 follows from a standard contraction argument once we prove the following trilinear estimate.

Lemma 3.2

Let \(\frac{1}{2}<s<1\). Then, there exists small \(\varepsilon > 0\) such that

$$\begin{aligned} \Vert \nabla I_N(u_1\overline{u_2}u_3)\Vert _{X^{0,-\frac{1}{2}+2\varepsilon }_T} \lesssim \prod _{j=1}^{3}\Vert \nabla I_N u_j\Vert _{X^{0,\frac{1}{2}-\varepsilon }_T} \end{aligned}$$
(3.6)

for any \(0 \le T \le 1\), where the implicit constant is independent of \( N \ge 1\).

Compared with Proposition 4.2 in [8], we need to work with slightly weaker temporal regularity on the right-hand side of (3.6).

Before going over a proof of Lemma 3.2, let us briefly discuss a proof of Proposition 3.1. By writing (3.4) in the Duhamel formulation, we have

$$\begin{aligned} \begin{aligned} I_N u(t)&= \Phi (I_Nu) \\&:= S(t) I_N u_0 - i \int _0^t S(t - t') I_N(|u|^2 u)(t') dt' + I_N \Psi (t), \end{aligned} \end{aligned}$$

where \(\Phi = \Phi _{I_N u_0, I_N \phi }\) and we interpreted the nonlinearity as a function of \(I_Nu\):

$$\begin{aligned} I_N(|u|^2 u) = I_N(|I_N^{-1} (I_N u)|^2 I_N^{-1} (I_N u)).\end{aligned}$$

Fix small \(\varepsilon > 0\). Then, by Lemmas 2.1 and 2.2 followed by Lemma 3.2, we have

$$\begin{aligned} \begin{aligned} \Vert \nabla \Phi (I_Nu) \Vert _{X^{0, \frac{1}{2}-\varepsilon }_\delta } \le&\Vert \nabla S(t) I_N u_0 \Vert _{X^{0, \frac{1}{2}-\varepsilon }_\delta }\\&+ \bigg \Vert \nabla \int _0^t S(t - t') I_N(|u|^2 u)(t') dt' \bigg \Vert _{X^{0, \frac{1}{2}+\varepsilon }_\delta } + \Vert \nabla I_N \Psi \Vert _{X^{0, \frac{1}{2}-\varepsilon }_\delta }\\ \lesssim&\Vert I_N u_0 \Vert _{\dot{H}^1} + \delta ^\varepsilon \Vert \nabla I_N(|u|^2 u)\Vert _{X^{0, - \frac{1}{2} + 2 \varepsilon }_\delta } + C_\omega \Vert I_N \phi \Vert _{{HS }(L^2; \dot{H}^1)}\\ \lesssim&\Vert I_N u_0 \Vert _{\dot{H}^1} + C_\omega \Vert I_N \phi \Vert _{{HS }(L^2; \dot{H}^1)} + \delta ^\varepsilon \Vert \nabla I_N u\Vert _{X^{0, \frac{1}{2} - \varepsilon }_\delta }^3 \end{aligned} \end{aligned}$$
(3.7)

for an almost surely finite random constant \(C_\omega >0\) and for any \(0 \le \delta \le 1\). Similarly, we have

$$\begin{aligned} \begin{aligned}&\Vert \nabla (\Phi ( I_Nu) - \Phi (I_Nv)) \Vert _{X^{0, \frac{1}{2} - \varepsilon }_\delta }\\&\quad \lesssim \delta ^\varepsilon \Big (\Vert \nabla I_N u\Vert _{X^{0, \frac{1}{2} - \varepsilon }_\delta }^2 +\Vert \nabla I_N v\Vert _{X^{0, \frac{1}{2} - \varepsilon }_\delta }^2 \Big ) \Vert \nabla (I_N u - I_N v)\Vert _{X^{0, \frac{1}{2} - \varepsilon }_\delta }. \end{aligned} \end{aligned}$$
(3.8)

From (3.7) and (3.8), we conclude that \(\Phi \) is almost surely a contraction on the ball of radius

$$\begin{aligned} R = 2\Big ( \Vert I_N u_0 \Vert _{\dot{H}^1} + C_\omega \Vert I_N \phi \Vert _{{HS }(L^2; \dot{H}^1)}\Big ) \end{aligned}$$

in \(\nabla ^{-1} X^{0, \frac{1}{2} - \varepsilon }\) by choosing \(\delta = \delta _\omega (R) >0\) sufficiently small. Moreover, from (2.2) and Lemmas 2.1 and 2.2 with (3.7), we also conclude that \( I_N u \in C( [0,\delta ];{\dot{H}}^1({\mathbb {R}}^3) )\). This proves Proposition 3.1. The following remark plays an important role in iteratively applying the local-in-time argument in Sect. 6.
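For the reader's convenience, the smallness condition behind "sufficiently small \(\delta \)" can be made explicit. Writing \(C\) for the implicit constant in (3.7) and (3.8) (our notation; the constants below are schematic), the map \(\Phi \) sends the ball of radius \(R\) into itself and contracts as soon as

```latex
% schematic closing of the fixed point argument
\begin{aligned}
\|\nabla \Phi(I_N u)\|_{X^{0,\frac12-\varepsilon}_\delta}
   \le \frac{R}{2} + C\,\delta^{\varepsilon} R^{3} \le R
\qquad\text{and}\qquad
2C\,\delta^{\varepsilon} R^{2} \le \frac12,
\end{aligned}
```

both of which hold once \(\delta ^{\varepsilon } \le \frac{1}{4C(1+R^{2})}\). Since \(R\) depends on \(\omega \) through \(C_\omega \), so does \(\delta = \delta _\omega (R)\), which is why \(\delta \) is a stopping time rather than a deterministic time.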

Remark 3.3

The argument above shows that there exist small \(\eta _0, \eta _1 > 0\) such that if, for a given interval \( J = [t_0, t_0 + 1] \subset [0, \infty )\) of length 1 and \( \omega \in \Omega \), we have

$$\begin{aligned} E(I_N u(t_0)) \le \eta _0 \qquad \text {and} \qquad \Vert \nabla I_N \Psi (\omega )\Vert _{X^{0,\frac{1}{2}-\varepsilon }(J)}\le \eta _1, \end{aligned}$$
(3.9)

then a solution \(I_N u \) to I-SNLS (3.4) exists on the interval J with the bound:

$$\begin{aligned} \Vert \nabla I_N u\Vert _{X^{0,\frac{1}{2}-\varepsilon }(J)}\le C_0 \end{aligned}$$

for some absolute constant \(C_0\), uniformly in \( N \ge 1. \)

We now present a proof of Lemma 3.2.

Proof of Lemma 3.2

By the interpolation lemma ([9, Lemma 12.1]), it suffices to prove (3.6) for \(N = 1\). Let \(I = I_1\). By the definition (2.3) of the time restriction norm, duality, and Leibniz rule for \( \nabla I \), it suffices to show thatFootnote 6

$$\begin{aligned} \bigg |\iint _{{\mathbb {R}}\times {\mathbb {R}}^3}\big (\nabla I u_1\big )\overline{u_2}u_3u_4 \,dx dt\bigg | \lesssim \prod _{j=1}^{3}\Vert \nabla Iu_j\Vert _{X^{0,\frac{1}{2}-\varepsilon }} \Vert u_4\Vert _{X^{0,\frac{1}{2}-2\varepsilon }}. \end{aligned}$$
(3.10)

For \(j\in \{2,3\}\), we split the functions \(u_j\) into high and low frequency components:

$$\begin{aligned} u_j=u_j^{hi }+u_j^{{low }}, \end{aligned}$$
(3.11)

where the spatial Fourier supports of \(u_j^{hi }\) and \(u_j^{low }\) are contained in \(\{|\xi |\ge \frac{1}{2} \}\) and \(\{|\xi |\le 1 \}\), respectively.

Noting that \( u_j^{low }=Iu_j^{low }\) and using Sobolev's inequality (in both space and time), we have

$$\begin{aligned} \Vert u_j^{low }\Vert _{L^6_{t,x}}\lesssim \Vert \nabla I u_j\Vert _{X^{0,\frac{1}{2}-}}. \end{aligned}$$
(3.12)

As for \(u_j^{hi }\), we claim

$$\begin{aligned} \Vert {u}_j^{hi }\Vert _{L^{5+}_{t,x}}\lesssim \Vert \nabla I u_j\Vert _{X^{0,\frac{1}{2}-}}. \end{aligned}$$
(3.13)

Since \(N = 1\), we have \(I\sim |\nabla |^{s-1}\). Then, by Sobolev’s inequality and the transference principle (Lemma 2.1 (iii)) with an admissible pair \((q, r ) =\big (5+, \frac{30}{11}-\big )\), we have

$$\begin{aligned} \begin{aligned} \Vert u_j^{hi }\Vert _{L^{5+}_{t,x}}&= \big \Vert |\nabla |^{1-s} I u_j^{hi }\big \Vert _{L^{5+}_{t,x}} \lesssim \big \Vert \langle \nabla \rangle ^{s-} |\nabla |^{1-s} Iu_j^{hi }\big \Vert _{L^{5+}_{t}L^{\frac{30}{11}-}_x}\\&\lesssim \big \Vert |\nabla |^{1-} Iu_j^{{hi }}\big \Vert _{X^{0,\frac{1}{2}+}}, \end{aligned} \end{aligned}$$
(3.14)

provided that \(s > \frac{1}{2}\). On the other hand, by Sobolev’s inequality, we have

$$\begin{aligned} \big \Vert | \nabla |^{1-s} Iu_j^{hi }\big \Vert _{L^{5+}_{t, x}} \lesssim \big \Vert |\nabla |^{\frac{19}{10} - s+} Iu_j^{hi }\big \Vert _{X^{0,\frac{3}{10}+}}. \end{aligned}$$
(3.15)

By interpolating (3.14) and (3.15), we obtain (3.13).

We now estimate (3.10) by expanding \(u_j\), \(j = 2, 3\), as \(u_j^{hi }+u_j^{{low }}\). For \(j = 2, 3\), let \(p_j = 6\) in treating \(u_j^{low }\) and \(p_j = 5+\) in treating \(u_j^{hi }\). Then, the claimed estimate (3.10) follows from \( L^{\frac{10}{3}-}_{t,x}, L^{p_2}_{t,x}, L^{p_3}_{t,x}, L^{p_4}_{t,x}\)-Hölder’s inequality, Lemma 2.1 (iv), (3.12), and (3.13), where \(p_4\) is defined by \(\frac{1}{p_4} = 1 - \big (\frac{3}{10-}\big ) - \frac{1}{p_2} - \frac{1}{p_3}\) such that \( 2 \le p_4 < \frac{10}{3}\). \(\square \)

3.3 Scaling property

In this subsection, we discuss the scaling properties of SNLS (1.6) and I-SNLS (3.4). Before doing so, we first recall the scaling property of the (deterministic) cubic NLS:

$$\begin{aligned} i\partial _tu +\Delta u =|u|^2u. \end{aligned}$$
(3.16)

This equation enjoys the following scaling invariance; if u is a solution to (3.16), then the scaled function

$$\begin{aligned} u^\lambda (t,x):= \lambda ^{-1} u(\lambda ^{-2}t,\lambda ^{-1} x) \end{aligned}$$
(3.17)

also satisfies Eq. (3.16) with the scaled initial data. In the application of the I-method in the deterministic case (as in [8]), we apply this scaling first and then apply the I-operator to obtain I-NLS (1.7) (with \(u^\lambda \) in place of u).
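The scaling (3.17) changes homogeneous Sobolev norms by an exact power of \(\lambda \): at \(t = 0\), \(\Vert u^\lambda (0)\Vert _{\dot{H}^s} = \lambda ^{\frac{1-2s}{2}} \Vert u_0\Vert _{\dot{H}^s}\), which is used in (3.28) below. As a quick sanity check, one can compute this norm on the Fourier side for a model radial profile; the Gaussian profile, the helper `hs_norm_sq`, and all numerical parameters are our choices for illustration.

```python
import numpy as np

def hs_norm_sq(lam, s, n=200_000):
    # Squared \dot H^s(R^3) norm of the scaled data u^lam(0), computed
    # radially on the Fourier side for the model profile
    # \hat{u_0}(xi) = exp(-|xi|^2), using (cf. (3.17))
    # \hat{u^lam(0)}(xi) = lam^2 \hat{u_0}(lam xi).
    rho = np.linspace(1e-8, 40.0 / lam, n)
    integrand = 4 * np.pi * rho ** (2 * s + 2) \
        * (lam ** 2 * np.exp(-(lam * rho) ** 2)) ** 2
    return float(np.sum(integrand) * (rho[1] - rho[0]))  # Riemann sum

s, lam = 0.9, 8.0
ratio = hs_norm_sq(lam, s) / hs_norm_sq(1.0, s)
# expect exactly lam^{1-2s}, up to quadrature error
assert abs(ratio / lam ** (1 - 2 * s) - 1) < 1e-3
print("dot-H^s scaling verified:", ratio, lam ** (1 - 2 * s))
```

In particular, for \(s > \frac{1}{2}\) the exponent \(\frac{1-2s}{2}\) is negative, so the \(\dot{H}^s\)-norm of the data shrinks as \(\lambda \to \infty \).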

In our current stochastic setting, when we apply the scaling, we also need to scale the noise \(\phi \xi \). In order to apply Ito calculus to the scaled noise, we need to make sure that the scaled noise is given by another space–time white noise \(\xi ^\lambda \) (with a scaled Hilbert–Schmidt operator \(\phi ^\lambda \)). For this purpose, we first recall the scaling property of a space–time white noise. Given a space–time white noise \(\xi \) on \({\mathbb {R}}\times {\mathbb {R}}^d\), it is well known that the scaled noise \(\xi ^\lambda \) defined byFootnote 7

$$\begin{aligned} \xi ^\lambda (t,x)=\xi _{a_1,a_2}^\lambda (t, x):= \lambda ^{-\frac{a_1+da_2}{2}} \xi (\lambda ^{-a_1}t,\lambda ^{-a_2}x) \end{aligned}$$
(3.18)

is also a space–time white noise for any \(a_1, a_2 \in {\mathbb {R}}\).
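One way to verify this claim is through the covariance. Since \({\mathbb {E}}[\xi (t,x)\overline{\xi (s,y)}] = \delta (t-s)\delta (x-y)\), the scaling identity \(\delta (c\,\cdot ) = |c|^{-1}\delta (\cdot )\) in each variable gives the following (formal) computation:

```latex
\begin{aligned}
{\mathbb E}\big[\xi^\lambda(t,x)\overline{\xi^\lambda(s,y)}\big]
 &= \lambda^{-a_1 - d a_2}\,
    \delta\big(\lambda^{-a_1}(t-s)\big)\,\delta\big(\lambda^{-a_2}(x-y)\big) \\
 &= \lambda^{-a_1 - d a_2}\cdot \lambda^{a_1}\,\delta(t-s)\cdot
    \lambda^{d a_2}\,\delta(x-y)
  = \delta(t-s)\,\delta(x-y).
\end{aligned}
```

The prefactor \(\lambda ^{-\frac{a_1+da_2}{2}}\) in (3.18) is exactly what makes the two powers of \(\lambda \) cancel, so \(\xi ^\lambda \) has the covariance of a space–time white noise.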

Next, let us study the scaling property of the Hilbert–Schmidt operator \(\phi \) via its kernel representation. Recall from [33, Theorem VI.23] that a bounded linear operator \(\phi \) on \(L^2({\mathbb {R}}^3)\) is Hilbert–Schmidt if and only if it is represented as an integral operator with a kernel \(k \in L^2({\mathbb {R}}^3 \times {\mathbb {R}}^3)\):

$$\begin{aligned} (\phi f)(x) = \int _{{\mathbb {R}}^3} k(x, y) f(y) dy \end{aligned}$$

with \( \Vert \phi \Vert _{{HS }(L^2; L^2)} = \Vert k \Vert _{L^2_{x, y}}.\) More generally, we have

$$\begin{aligned} \Vert \phi \Vert _{{HS }(L^2; {\dot{H}}^s)} = \Vert k \Vert _{{\dot{H}}^s_xL^2_y}. \end{aligned}$$
(3.19)

With this in mind, let us evaluate \(\phi \xi \) at \((\frac{t}{\lambda ^2}, \frac{x}{\lambda })\) with a factor of \(\lambda ^{-3}\). By a change of variables and (3.18) with \((a_1, a_2) = (2, 0)\), we have

$$\begin{aligned} \begin{aligned} \lambda ^{-3}\phi \xi \big (\tfrac{t}{\lambda ^2}, \tfrac{x}{\lambda }\big )&= \lambda ^{-3 } \int _{{\mathbb {R}}^3} k\big (\tfrac{x}{\lambda }, y \big ) \xi \big (\tfrac{t}{\lambda ^2}, y \big ) dy\\&= \lambda ^{-2 } \int _{{\mathbb {R}}^3} k\big (\tfrac{x}{\lambda }, y \big ) \xi ^\lambda (t, y) dy. \end{aligned} \end{aligned}$$
(3.20)

This motivates us to define the scaled kernel \(k^\lambda \) by

$$\begin{aligned} k^{\lambda }(x, y ) = \lambda ^{-2 } k(\lambda ^{-1}x, y ) \end{aligned}$$
(3.21)

and the associated Hilbert–Schmidt operator \(\phi ^\lambda \) with an integral kernel \(k^\lambda \). Then, it follows from (3.20) and (3.21) that

$$\begin{aligned} \lambda ^{-3}\phi \xi \big (\tfrac{t}{\lambda ^2}, \tfrac{x}{\lambda }\big ) = \phi ^\lambda \xi ^\lambda (t, x). \end{aligned}$$
(3.22)

Therefore, by applying the scaling (3.17) with (3.22) to SNLS (1.6) and then applying the I-operator, we obtain

$$\begin{aligned} i \partial _tI_N u^\lambda + \Delta I_N u^\lambda = I_N (|u^\lambda |^2 u^\lambda ) + I_N \phi ^\lambda \xi ^\lambda . \end{aligned}$$
(3.23)

In the following lemma, we record the scaling property of the Hilbert–Schmidt norm of \(I_N \phi ^\lambda \).

Lemma 3.4

Let \(d = 3\), \( 0<s<1 \), and \( \phi \in {HS }(L^2;{\dot{H}}^s)\). Then, we have

$$\begin{aligned} \Vert I_N \phi ^\lambda \Vert _{{HS }(L^2;{\dot{H}}^1)}\lesssim N^{1-s}\lambda ^{-\frac{1}{2} - s} \Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^s)}. \end{aligned}$$
(3.24)

As a consequence, given any \(\varepsilon > 0\), there exists \(\theta > 0\) such that

$$\begin{aligned} \Big \Vert \Vert \nabla I_N \Psi ^\lambda \Vert _{{X^{0,\frac{1}{2}-\varepsilon }_T}}\Big \Vert _{L^{p}(\Omega )} \le C_p \langle T \rangle ^\theta N^{1-s}\lambda ^{-\frac{1}{2} - s}\Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^s)} \end{aligned}$$
(3.25)

for any finite \( p\ge 1 \) and \(T > 0\), where \(\Psi ^\lambda \) is the stochastic convolution corresponding to the scaled noise \(\phi ^\lambda \xi ^\lambda \).

Furthermore, if we assume \( \phi \in {HS }(L^2;{\dot{H}}^\frac{3}{4})\), then we have

$$\begin{aligned} \Vert I_N \phi ^\lambda \Vert _{{HS }(L^2;{\dot{H}}^\frac{3}{4})}\lesssim \lambda ^{-\frac{5}{4}} \Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^\frac{3}{4})}, \end{aligned}$$
(3.26)

uniformly in \(N \ge 1\).

Proof

From (3.19), (3.3), and (3.21), we have

$$\begin{aligned} \begin{aligned} \Vert I_N \phi ^\lambda \Vert _{{HS }(L^2;{\dot{H}}^1)}&= \Vert I_N k^\lambda \Vert _{{\dot{H}}^1_xL^2_y} \lesssim N^{1-s} \Vert \lambda ^{-2} k(\lambda ^{-1} x,y )\Vert _{{\dot{H}}^s_xL^2_y}\\&= N^{1-s}\lambda ^{-\frac{1}{2} - s} \Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^s)}. \end{aligned} \end{aligned}$$
(3.27)

The second estimate (3.25) follows from Lemma 2.2 and (3.24). The last claim (3.26) follows from proceeding as in (3.27) but using the uniform bound \(|m_N(\xi )| \le 1\) in the second step. \(\square \)
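For completeness, the second step of (3.27) is the following change of variables on the Fourier side (in the x-variable), where \(\widehat{k}\) denotes the Fourier transform of k in its first argument:

```latex
\begin{aligned}
\big\| \lambda^{-2} k(\lambda^{-1}x, y) \big\|_{\dot H^s_x L^2_y}^2
 &= \lambda^{-4} \int_{{\mathbb R}^3}\int_{{\mathbb R}^3}
     |\xi|^{2s}\, \big|\lambda^{3}\,\widehat k(\lambda \xi, y)\big|^2\, d\xi\, dy \\
 &= \lambda^{-4+6-2s-3} \,\| k \|_{\dot H^s_x L^2_y}^2
  = \lambda^{-1-2s}\,\| k \|_{\dot H^s_x L^2_y}^2.
\end{aligned}
```

Taking square roots and invoking (3.19) recovers the factor \(\lambda ^{-\frac{1}{2}-s}\) in (3.24).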

Remark 3.5

(i) It is easy to check that Lemma 3.4 remains true even if we proceed with a scaling argument in (3.20) for any \(a_2 \in {\mathbb {R}}\).

(ii) Lemma 3.4 states that by choosing \(\lambda = \lambda (N) \gg 1\), we can make the Hilbert–Schmidt norm of \(I_N \phi ^\lambda \) arbitrarily small (even after multiplying by \(\lambda T^\frac{1}{2}\); see (6.7) and (6.8)).

We conclude this section by going over the scaling of the modified energy. Let \(u_0^\lambda = u^\lambda (0)\). Then, from (3.3), the Hörmander–Mikhlin multiplier theorem [20], (3.17), and Sobolev's inequality, we have

$$\begin{aligned} \begin{aligned} E(I_Nu_{0}^{\lambda })&=\frac{1}{2}\Vert \nabla I_N u^\lambda _0\Vert ^2_{L^2}+\frac{1}{4}\Vert I_N u_0^\lambda \Vert _{L^4}^4 \\&\lesssim N^{2-2s}\Vert u_0^\lambda \Vert ^2_{{\dot{H}}^s}+\lambda ^{-1} \Vert u_0\Vert ^4_{L^4} \\&\lesssim N^{2-2s}\lambda ^{1-2s}\Vert u_0\Vert ^2_{{\dot{H}}^s}+\lambda ^{-1}\Vert u_0\Vert ^4_{H^s} \\&\le C_1N^{2-2s}\lambda ^{1-2s}\big (1+\Vert u_0\Vert _{H^s}\big )^4. \end{aligned} \end{aligned}$$
(3.28)

Hence, for \(\frac{1}{2}< s < 1\), by choosing \(\lambda = \lambda (N) \gg 1\), we can make the modified energy \(E(I_Nu_0^\lambda )\) of the scaled initial data arbitrarily small.
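Concretely, ignoring the noise for a moment, balancing the factor \(N^{2-2s}\lambda ^{1-2s}\) in (3.28) suggests the choice

```latex
\lambda = \lambda(N) \sim N^{\frac{2-2s}{2s-1}},
\qquad\text{so that}\qquad
N^{2-2s}\,\lambda^{1-2s} \sim 1,
```

which is admissible precisely because \(2s-1 > 0\) for \(s > \frac{1}{2}\); taking \(\lambda \) a sufficiently large constant multiple of this power then makes \(E(I_Nu_0^\lambda )\) as small as desired. We stress that this is only a heuristic guide: the actual choice of \(\lambda (N)\) in Sect. 6 must also make the scaled noise small via (3.24) and (3.25).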

4 On the commutator estimates

In this section, we go over the commutator estimates (Proposition 4.1), corresponding to the deterministic component in our application of the I-method.

Proposition 4.1

Let \( \frac{5}{6}<s<1 \). Then, given \(\beta > 0\), there exists small \(\varepsilon > 0\) such that

$$\begin{aligned} \bigg |\int _J\int _{{\mathbb {R}}^3}\overline{\Delta I_N u} [I_N, {\mathcal {N}}](u)dxdt\bigg |&\lesssim N^{-1+\beta } \Vert \nabla I_N u\Vert _{X^{0,\frac{1}{2}-\varepsilon }(J)}^4, \end{aligned}$$
(4.1)
$$\begin{aligned} \bigg |\int _J\int _{{\mathbb {R}}^3}\overline{I_N {\mathcal {N}}(u)} [I_N, {\mathcal {N}}](u) dxdt\bigg |&\lesssim N^{-1+\beta } \Vert \nabla I_N u\Vert _{X^{0,\frac{1}{2}-\varepsilon }(J)}^6 \end{aligned}$$
(4.2)

for any \(N\ge 1\) and any interval \(J \subset [0, \infty )\), where \({\mathcal {N}}(u) = |u|^2 u\) and the implicit constants are independent of \(N \ge 1\) and \(J \subset [0, \infty )\).

The estimates (4.1) and (4.2) are essentially the same as those appearing in the proof of Proposition 4.1 in [8]. The difference appears in the temporal regularity; on the right-hand sides of (4.1) and (4.2), we have \(b = \frac{1}{2} - \varepsilon \), whereas the temporal regularity in [8] was \(b = \frac{1}{2} + \varepsilon \). The desired estimates in Proposition 4.1 follow from the corresponding estimates in [8] and an interpolation argument.

Lemma 4.2

(i) Let u be a function on \({\mathbb {R}}\times {\mathbb {R}}^3\) with the spatial frequency support in \(\{|\xi |\sim M\}\) for some dyadic \(M\ge 1\). Then, there exists \( \theta > 0\) such that

$$\begin{aligned} \Vert u\Vert _{L^{10}_tL^{10\pm }_x(J\times {\mathbb {R}}^3)}&\lesssim M^{\pm \theta }\Vert \nabla u\Vert _{X^{0,\frac{1}{2}-}(J)}, \end{aligned}$$
(4.3)
$$\begin{aligned} \Vert u\Vert _{L^{\frac{10}{3}}_tL^{\frac{10}{3}-}_x(J\times {\mathbb {R}}^3)}&\lesssim \Vert u\Vert _{X^{0,\frac{1}{2}-}(J)}, \end{aligned}$$
(4.4)
$$\begin{aligned} \Vert u\Vert _{L^{\frac{10}{3}}_tL^{\frac{10}{3}+}_x(J\times {\mathbb {R}}^3)}&\lesssim M^{\theta }\Vert u\Vert _{X^{0,\frac{1}{2}-}(J)}, \end{aligned}$$
(4.5)

for any interval \(J \subset [0, \infty )\), where the implicit constants are independent of \(M \ge 1\) and \(J \subset [0, \infty )\).

(ii) Let \( \frac{2}{3}<s<1 \). Then, the following trilinear estimate holds:

$$\begin{aligned} \Vert I_N (u_1 u_2 u_3)\Vert _{L^2_{t,x}(J\times {\mathbb {R}}^3)} \lesssim \prod _{j=1}^{3}\Vert \nabla I_N u_j\Vert _{X^{0,\frac{1}{2}-}(J)} \end{aligned}$$
(4.6)

for any \(N\ge 1\) and any interval \(J \subset [0, \infty )\), where the implicit constants are independent of \(N \ge 1\) and \(J \subset [0, \infty )\).

Note that \(\frac{2}{3} < \frac{5}{6}\).

Proof

Part (i) follows from (4.19), (4.20), and (4.21) in [8], which show the corresponding estimates with \(b = \frac{1}{2} + \varepsilon \) on the right-hand sides, together with a simple interpolation argument. From Sobolev's inequality and Lemma 2.1 (iii), we have

$$\begin{aligned} \Vert u \Vert _{L^{10}_t L^{10}_x(J \times {\mathbb {R}}^3)} \lesssim \Vert \nabla u \Vert _{L^{10}_t L^\frac{30}{13}_x(J \times {\mathbb {R}}^3)} \lesssim \Vert \nabla u \Vert _{X^{0, \frac{1}{2} +}}. \end{aligned}$$
(4.7)

Then, (4.3) with the \(+\) sign follows from interpolating (4.7) and

$$\begin{aligned} \Vert u \Vert _{L^{10}_t L^{10+}_x(J \times {\mathbb {R}}^3)} \lesssim M^{\frac{1}{5}+} \Vert \nabla u \Vert _{X^{0, \frac{2}{5}}}.\end{aligned}$$

As for (4.3) with the − sign, interpolate (4.7) with

$$\begin{aligned} \Vert u \Vert _{L^{10}_t L^{2}_x(J \times {\mathbb {R}}^3)} \lesssim M^{-1} \Vert \nabla u \Vert _{X^{0, \frac{2}{5}}}.\end{aligned}$$

As for (4.4), we interpolate (2.7) with

$$\begin{aligned} \Vert u \Vert _{L^{\frac{10}{3}}_t L^{2}_x(J \times {\mathbb {R}}^3)} \lesssim \Vert u \Vert _{X^{0, \frac{1}{5}}}, \end{aligned}$$

while (4.5) follows from Sobolev’s inequality and (4.4).

Part (ii) corresponds to Lemma 4.3 in [8] but with \(b = \frac{1}{2}-\). By the interpolation lemma [9, Lemma 12.1], we may assume \(N = 1\). As in (3.11), write \(u_j=u_j^{hi }+u_j^{{low }}\). As for \(u_j^{low }\), we have

$$\begin{aligned} \Vert u^{low }_j \Vert _{L^6_{t, x}} \lesssim \Vert \nabla I u^{low }_j \Vert _{L^6_{t} L^2_x} \lesssim \Vert \nabla I u^{low }_j \Vert _{X^{0, \frac{1}{3}}}. \end{aligned}$$
(4.8)

As for \(u_j^{hi }\), noting \(I\sim |\nabla |^{s-1}\), we have

$$\begin{aligned} \Vert u^{hi }_j \Vert _{L^6_{t, x}}&\sim \big \Vert |\nabla |^{1-s} I u^{hi }_j \big \Vert _{L^6_{t, x}} \lesssim \big \Vert |\nabla |^{\frac{5}{3}-s} I u^{hi }_j \big \Vert _{L^6_{t}L^\frac{18}{7}_x}\\&\lesssim \big \Vert \langle \nabla \rangle ^{1-} I u^{hi }_j \big \Vert _{X^{0, \frac{1}{2}+}}, \end{aligned}$$

provided that \(s > \frac{2}{3}\), where we used Lemma 2.1 (iii) in the last step. Interpolating this with

$$\begin{aligned} \Vert u^{hi }_j \Vert _{L^6_{t, x}}&\sim \big \Vert |\nabla |^{1-s} I u^{hi }_j \big \Vert _{L^6_{t, x}} \lesssim \big \Vert |\nabla |^{2-s} I u^{hi }_j \big \Vert _{L^6_{t}L^2_x}\\&\lesssim \big \Vert \langle \nabla \rangle ^{2-s} I u^{hi }_j \big \Vert _{X^{0, \frac{1}{3}}}, \end{aligned}$$

we obtain

$$\begin{aligned} \Vert u^{hi }_j \Vert _{L^6_{t, x}}&\lesssim \big \Vert \langle \nabla \rangle I u^{hi }_j \big \Vert _{X^{0, \frac{1}{2}-}} \sim \Vert \nabla I u^{hi }_j \Vert _{X^{0, \frac{1}{2}-}}. \end{aligned}$$
(4.9)

Then, (4.6) follows from the boundedness of \(m_1(\xi )\) and \(L^6_{t, x}, L^6_{t, x}, L^6_{t, x}\)-Hölder’s inequality with (4.8) and (4.9). \(\square \)

We now briefly discuss a proof of Proposition 4.1.

Proof of Proposition 4.1

The estimates (4.1) and (4.2) (with \(b = \frac{1}{2}-\)) follow from a small modification of the proof of Proposition 4.1 in [8] (with \(b = \frac{1}{2}+\)), using Lemma 4.2. In the following, we present the proof of the second estimate (4.2). As for the first estimate (4.1), we briefly discuss the required modifications at the end of the proof. In the following, we fix N and drop the subscript N from \(I_N\) and \(m_N\).

From the definition (3.2) of the I-operator, we can rewrite the left-hand side of (4.2) as

$$\begin{aligned} \bigg | \int _J {{\,\mathrm{\int }\,}}_{\xi _1+\xi _2+ \xi _3+\xi _4= 0} M_N({{\bar{\xi }}}) \widehat{\overline{I {\mathcal {N}}(u)} } (\xi _{1}, t) \widehat{Iu}(\xi _2, t)\widehat{\overline{Iu}}(\xi _3, t)\widehat{Iu}(\xi _4, t) d\xi _1d\xi _2d\xi _3 dt \bigg |, \end{aligned}$$
(4.10)

where the multiplier \(M_N ({{\bar{\xi }}})\) is given by

$$\begin{aligned} M_N({{\bar{\xi }}}) = M_N(\xi _1, \xi _2, \xi _3, \xi _4) = \frac{m(\xi _{1})}{m(\xi _2)m(\xi _3)m(\xi _4)} - 1. \end{aligned}$$
(4.11)

We suppress the t-dependence in the following.

With the Littlewood–Paley projector \({\mathbf {P}}_{N_j}\) in (2.4), summing over dyadic \(N_1, \dots , N_4\), we have

$$\begin{aligned} (4.10) \lesssim \sum _{N_1, N_2, N_3, N_4} \bigg | \int _J \int _{{\mathbb {R}}^3} M_N(N_1, N_2, N_3, N_4)\, U_1 u_2 u_3 u_4 \, dx dt \bigg |, \end{aligned}$$
(4.12)

where \(U_1= \overline{ {\mathbf {P}}_{N_1} I {\mathcal {N}}(u)}\), \(u_j = {\mathbf {P}}_{N_j} I u\), \(j = 2, 4\), and \(u_3 = \overline{{\mathbf {P}}_{N_3}I u}\). Without loss of generality, we assume that \(N_2 \ge N_3 \ge N_4\). Note that we have \(N_1 \lesssim N_2\) under \(\xi _1+\xi _2+\xi _3+\xi _4 = 0\) and \(|\xi _j| \sim N_j\). Thus, if we have \(N_2 \ll N\) in addition, then it follows from (3.1) and (4.11) that \(M_N({{\bar{\xi }}}) = 0\). Therefore, we assume that \(N_2 \gtrsim N\) in the following. We also note that under \(N_2 \gtrsim N_1\) and \(N_2 \ge N_3 \ge N_4\), we have

$$\begin{aligned} m(N_2) m(N_3) m(N_4) \le m(N_2) \lesssim m(N_1). \end{aligned}$$
(4.13)

Then, from (4.11) and (4.13) with (3.1), we have

$$\begin{aligned} \begin{aligned} \frac{M_N(N_1, N_2, N_3, N_4)}{N_2^{1-\varepsilon }}&\sim \frac{m(N_1)}{N_2^{1-\varepsilon } \prod _{j = 2}^4 m(N_j)} \le \frac{1}{N_2^{1-\varepsilon } (m(N_2))^3}\\&\le N^{-1+2\varepsilon } N_2^{-\varepsilon } \bigg (\frac{N}{N_2}\bigg )^{1-2\varepsilon - 3(1-s)}\\&\lesssim N^{-1+2\varepsilon } N_2^{-\varepsilon } \end{aligned} \end{aligned}$$
(4.14)

for any sufficiently small \(\varepsilon > 0\), provided that \(s > \frac{2 + 2\varepsilon }{3}\).

Case 1: \(N_j \gtrsim 1\), \(j = 1, \dots , 4\).

In this case, by applying the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10-}_{x}, L^{10}_{t} L^{10-}_{x}\)-Hölder's inequalityFootnote 8 and (4.14) to (4.12) and then applying Lemma 4.2 (with Lemma 4.2 (ii) controlling \(\Vert U_1 \Vert _{L^2_{t,x}}\)), we obtain the contribution of this case to the desired bound (4.2); the factor \(N_2^{-\varepsilon }\) in (4.14) allows us to sum over the dyadic \(N_j\), \(j = 1, \dots , 4\).

Case 2: \(N_1 \gtrsim 1 \gg N_4\).

We proceed as in Case 1 but we place \(u_j\), \(j = 3, 4\), in the \( L^{10}_{t} L^{10+}_{x}\)-norm when \(N_j \ll 1\). In view of Lemma 4.2 (i), this creates a small positive power of \(N_j\), allowing us to sum over dyadic \(N_j \ll 1\).

For \(N_3 \gtrsim 1\), we apply the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10-}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality and proceed as in Case 1. For \(N_3 \ll 1\), we apply the \(L^2_{t, x}, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}-}_{x},L^{10}_{t} L^{10+}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality and proceed as in Case 1.

Case 3: \(N_1 \ll 1\).

In this case, we use

$$\begin{aligned} \Vert U_1 \Vert _{L^2_t L^{2+}_x} \lesssim N_1^{0+} \Vert U_1 \Vert _{L^2_{t, x}} \end{aligned}$$
(4.15)

to gain a small power of \(N_1\). For \(N_4 \gtrsim 1\), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x},L^{10}_{t} L^{10-}_{x}, L^{10}_{t} L^{10-}_{x}\)-Hölder’s inequality and proceed as in Case 1 with Lemma 4.2 and (4.15).

For \(N_3 \gtrsim 1\gg N_4\), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}+}_{x}, L^{10}_{t} L^{10-}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality. For \(N_3, N_4 \ll 1 \), we apply the \(L^2_{t}L^{2+}_x, L^{\frac{10}{3}}_{t} L^{\frac{10}{3}-}_{x}, L^{10}_{t} L^{10+}_{x}, L^{10}_{t} L^{10+}_{x}\)-Hölder’s inequality. Then, proceeding as in Case 1 with Lemma 4.2 and (4.15), we obtain the desired estimate (4.2).

As for the first estimate (4.1), we can simply repeat the argument in [8] with Lemma 4.2 (i) in place of [8, (4.19), (4.20), and (4.21)] and replacing \(L^{\frac{10}{3}}_{t, x}\) by \(L^{\frac{10}{3}}_{t}L^{\frac{10}{3}-}_{x}\) (so that (4.4) with \(b = \frac{1}{2}-\) is applicable). We point out that the regularity restriction \(s > \frac{5}{6}\) comes from this part. \(\square \)
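The multiplier bound (4.14) is elementary but easy to get wrong. The following brute-force check over dyadic frequencies uses the same C^0 caricature of \(m_N\) as in Sect. 3.1 (our construction, not the paper's smooth multiplier) with the sample values \(s = 0.9\) and \(\varepsilon = 0.01\), which satisfy \(s > \frac{2+2\varepsilon }{3}\).

```python
import itertools

def m(r, N, s):
    # C^0 caricature of (3.1): 1 for r <= N, (N/r)^{1-s} for r >= N
    return 1.0 if r <= N else (N / r) ** (1.0 - s)

s, eps = 0.9, 0.01                        # note s > (2 + 2*eps)/3
dyadic = [2.0 ** k for k in range(0, 12)]
for N in [2.0 ** k for k in range(1, 7)]:
    for N2 in dyadic:
        if N2 < N:                        # M_N vanishes unless N2 >~ N
            continue
        for N3, N4 in itertools.product(dyadic, repeat=2):
            if not N2 >= N3 >= N4:        # the assumed ordering
                continue
            for N1 in dyadic:
                if N1 > 4 * N2:           # impossible under xi1+...+xi4 = 0
                    continue
                lhs = m(N1, N, s) / (N2 ** (1 - eps)
                                     * m(N2, N, s) * m(N3, N, s) * m(N4, N, s))
                rhs = N ** (-1 + 2 * eps) * N2 ** (-eps)
                assert lhs <= 1.01 * rhs, (N, N1, N2, N3, N4)
print("multiplier bound (4.14) verified on dyadic samples")
```

The assertion encodes exactly the chain in (4.14): \(m(N_1) \le 1\), \(m(N_3), m(N_4) \ge m(N_2)\), and \(N_2 \gtrsim N\) together force the quotient below \(N^{-1+2\varepsilon } N_2^{-\varepsilon }\) (up to a constant).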

5 On growth of the modified energy

In this section, we use stochastic analysis to study the growth of the modified energy \(E(I_Nu)\) associated with I-SNLS (3.4). Before doing so, we first go over the deterministic setting. Given a smooth solution u to the cubic NLS (3.16), we can verify the conservation of the energy \( E(u)=\frac{1}{2}\int _{{\mathbb {R}}^3} |\nabla u|^2 dx +\frac{1}{4}\int _{{\mathbb {R}}^3}|u|^4 dx \) by differentiating in time and using Eq. (3.16):

$$\begin{aligned} \partial _tE(u(t))&= {{\,\mathrm{Re}\,}}\int _{{\mathbb {R}}^3} \overline{ \partial _tu} (|u|^2u-\Delta u) dx \\&={{\,\mathrm{Re}\,}}\int _{{\mathbb {R}}^3} \overline{ \partial _tu} (|u|^2u-\Delta u- i\partial _tu) dx =0. \end{aligned}$$

In a similar manner, given a smooth solution u to the cubic NLS (3.16), the time derivative of the modified energy \(E(I_Nu)\) is given by

$$\begin{aligned} \begin{aligned} \partial _tE(I_N u (t) )&= {{\,\mathrm{Re}\,}}\int _{{\mathbb {R}}^3} \overline{ \partial _tI_N u} \big (|I_N u|^2 I_N u - I_N(|u|^2u)\big ) dx \\&= - {{\,\mathrm{Re}\,}}\int _{{\mathbb {R}}^3} \overline{\partial _tI_N u} [I_N, {\mathcal {N}}](u) dx. \end{aligned} \end{aligned}$$
(5.1)

Then, the fundamental theorem of calculus yields

$$\begin{aligned} | E(I_N u(t_2))- E(I_N u(t_1)) |=\bigg | {{\,\mathrm{Re}\,}}\int _{t_1}^{t_2} \int _{{\mathbb {R}}^3} \overline{\partial _tI_N u} [I_N, {\mathcal {N}}](u)dxdt \bigg | \end{aligned}$$
(5.2)

for any \(t_1, t_2 \in {\mathbb {R}}\), where the right-hand side can be estimated by the commutator estimate; see [8, Proposition 4.1].

For our problem, we need to estimate the growth of the modified energy \(E(I_N u)\), where u is now a solution to SNLS (1.6) with a stochastic forcing.Footnote 9 As such, we need to proceed with Ito's lemma.

Lemma 5.1

Given \(N \ge 1\), let \(I_Nu\) be the solution to I-SNLS (3.4) with \(I_N u|_{t = 0} = I_N u_0\), where \(\phi \) and \(u_0\) are as in Proposition 3.1. Moreover, given \(T >0 \), let \(\tau \) be a stopping time with \( 0<\tau <\min (T^{*}, T) \) almost surely, where \( T^* = T^*_\omega \) is the (random) forward maximal time of existence for I-SNLS (3.4), satisfying (3.5). Then, we have

$$\begin{aligned} \begin{aligned} E(I_N u(\tau ))&=E(I_N u_0)\\&\quad -{{\,\mathrm{Im}\,}}\int _{0}^{\tau }\int _{{\mathbb {R}}^3} \overline{ \Delta I_N u - I_N {\mathcal {N}}(u)} \, [I_N, {\mathcal {N}}](u)dxdt \\&\quad -{{\,\mathrm{Im}\,}}\sum _{n \in {\mathbb {N}}}\int _{0}^{\tau }\int _{{\mathbb {R}}^3} \overline{\Delta I_N u}I_N \phi e_ndxd\beta _n(t) \\&\quad + {{\,\mathrm{Im}\,}}\sum _{n \in {\mathbb {N}}}\int _0^{\tau }\int _{{\mathbb {R}}^3} \overline{{\mathcal {N}}(I_N u)} I_N \phi e_n dxd\beta _n(t) \\&\quad +2 \sum _{n \in {\mathbb {N}}} \int _0^{\tau }\int _{{\mathbb {R}}^3}|I_N u I_N \phi e_n|^2dxdt \\&\quad + \tau \Vert I_N \phi \Vert _{{HS (L^2;{\dot{H}}^1)}}^2. \end{aligned} \end{aligned}$$
(5.3)

Furthermore, if we assume that \(I_N \phi \in {HS }(L^2;{\dot{H}}^\frac{3}{4})\) in addition, then there exists \( C>0 \) such that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\bigg [ \sup _{0\le t\le \tau } E(I_N u(t)) \bigg ] \le 2E(I_N u_0) +CT\Vert I_N \phi \Vert ^{2}_{{HS }(L^2;{\dot{H}}^1)} + C T^2 \Vert I_N \phi \Vert ^4_{{HS }(L^2;{\dot{H}}^\frac{3}{4})} \\&\quad +C {\mathbb {E}}\Bigg [ \sup _{0\le t\le \tau } \bigg |{{\,\mathrm{Im}\,}}\int _{0}^t\int _{{\mathbb {R}}^3} \overline{ \Delta I_N u - I_N {\mathcal {N}}(u)} \, [I_N, {\mathcal {N}}](u)dxdt' \bigg |\Bigg ]. \end{aligned} \end{aligned}$$
(5.4)

Remark 5.2

(i) The second term on the right-hand side of (5.3) corresponds to the contribution from the commutator term \([I_N, {\mathcal {N}}]\), also present in the deterministic case. The remaining terms are the additional terms arising from the application of Ito's lemma. Part (ii) follows from Part (i) (namely, (5.3)) and the Burkholder–Davis–Gundy inequality. We need to absorb some terms on the right-hand side of (5.3) into the left-hand side, which is how the factor of 2 on the first term on the right-hand side of (5.4) appears.

In the deterministic setting, one can apply the commutator estimate to (5.2) on each local-in-time interval. In the current stochastic setting, however, it is not possible to apply the estimate (5.4) (and the commutator estimates in Proposition 4.1) on each local-in-time interval, since the factor of 2 on \(E(I_Nu_0)\) in (5.4) would lead to exponential growth of the constant in iterating the local-in-time argument.

(ii) Recall from our convention that \(\beta _n\) in (2.8) is a complex-valued Brownian motion; this is why the last term in (5.3) does not carry a factor of \(\frac{1}{2}\).

(iii) In controlling the fourth and fifth terms on the right-hand side of (5.3), we need to use the \({HS }(L^2;{\dot{H}}^\frac{3}{4})\)-norm of \(I_N \phi \) in (5.4).

Proof

A formal application of Ito's lemma yields (5.3). One can justify the computation by inserting suitable truncations and using the local well-posedness argument. See [15] for details in the case without the I-operator.

Let us now turn to Part (ii). By the Burkholder–Davis–Gundy inequality ([24, Theorem 3.28 on p. 166]), the Cauchy–Schwarz inequality, and Cauchy's inequality, we estimate the third term on the right-hand side of (5.3) as

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\Bigg [&\sup _{0\le t\le \tau }\bigg ({{\,\mathrm{Im}\,}}\sum _{n\in {\mathbb {N}}}\int _{0}^{t}\int _{{\mathbb {R}}^3}\overline{\Delta I_N u} I_N \phi e_n dx d\beta _n(t')\bigg )\Bigg ]\\&\le C \, {\mathbb {E}}\Bigg [\bigg (\sum _{n\in {\mathbb {N}}}\int _{0}^{\tau }\bigg |\int _{{\mathbb {R}}^3}\nabla I_N u\cdot \nabla I_N \phi e_ndx\bigg |^2dt \bigg )^\frac{1}{2}\Bigg ]\\&\le C T^\frac{1}{2}\Vert I_N \phi \Vert _{{HS (L^2;{\dot{H}}^1)}} {\mathbb {E}}\bigg [ \sup _{0\le t\le \tau }\Vert \nabla I_N u(t) \Vert _{L^2} \bigg ]\\&\le C T \Vert I_N \phi \Vert ^2_{{HS }(L^2;{\dot{H}}^1)}+\frac{1}{8}{\mathbb {E}}\bigg [\sup _{0\le t\le \tau }E(I_N u(t))\bigg ]. \end{aligned} \end{aligned}$$
(5.5)
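The last step of (5.5) is Cauchy's inequality \(ab \le \gamma a^2 + \frac{1}{4\gamma } b^2\) (with \(\gamma = 4\); the specific constants here are illustrative), combined with \(\Vert \nabla I_N u \Vert _{L^2}^2 \le 2 E(I_N u)\) and Jensen's inequality:

```latex
\begin{aligned}
C T^{\frac12} \Vert I_N \phi\Vert_{HS(L^2;\dot H^1)}
  \cdot {\mathbb E}\Big[\sup_{0\le t\le\tau}\Vert \nabla I_N u(t)\Vert_{L^2}\Big]
 &\le 4 C^2 T \Vert I_N \phi\Vert_{HS(L^2;\dot H^1)}^2
   + \frac{1}{16}\,{\mathbb E}\Big[\sup_{0\le t\le\tau}
       \Vert \nabla I_N u(t)\Vert_{L^2}\Big]^2 \\
 &\le 4 C^2 T \Vert I_N \phi\Vert_{HS(L^2;\dot H^1)}^2
   + \frac{1}{8}\,{\mathbb E}\Big[\sup_{0\le t\le\tau} E(I_N u(t))\Big].
\end{aligned}
```

The same scheme produces the final lines of (5.6) and (5.7) below.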

By the Burkholder–Davis–Gundy inequality and Sobolev's inequality \(\dot{H}^\frac{3}{4}({\mathbb {R}}^3) \subset L^4({\mathbb {R}}^3)\), the fourth term is estimated as

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\Bigg [&\sup _{0\le t\le \tau }\bigg ({{\,\mathrm{Im}\,}}\sum _{n\in {\mathbb {N}}}\int _{0}^{t}\int _{{\mathbb {R}}^3}|I_N u|^2\overline{I_N u} I_N \phi e_ndxd\beta _n(t')\bigg )\Bigg ]\\&\le C{\mathbb {E}}\Bigg [\bigg (\sum _{n\in {\mathbb {N}}}\int _{0}^{\tau }\bigg |\int _{{\mathbb {R}}^3}|I_N u|^2\overline{I_N u} I_N \phi e_ndx\bigg |^2dt\bigg )^{\frac{1}{2}}\Bigg ]\\&\le C T^\frac{1}{2}\Vert I_N \phi \Vert _{{HS }(L^2;\dot{H}^\frac{3}{4})} {\mathbb {E}}\bigg [ \sup _{0\le t\le \tau }\Vert I_N u(t) \Vert _{L^4}^3 \bigg ]\\&\le C T^2 \Vert I_N \phi \Vert ^4_{{HS }(L^2;{\dot{H}}^\frac{3}{4})}+\frac{1}{8}{\mathbb {E}}\bigg [\sup _{0\le t\le \tau }E(I_N u(t))\bigg ]. \end{aligned} \end{aligned}$$
(5.6)

By Sobolev’s inequality, we estimate the fifth term as

$$\begin{aligned} \begin{aligned} 2 {\mathbb {E}}\bigg [&\sum _{n\in {\mathbb {N}}}\int _0^\tau \int _{{\mathbb {R}}^3}|I_N u I_N \phi e_n|^2dxdt\bigg ]\\&\le C T \Vert I_N \phi \Vert _{{HS }(L^2;\dot{H}^\frac{3}{4})}^2 {\mathbb {E}}\bigg [ \sup _{0\le t\le \tau }\Vert I_N u(t) \Vert _{L^4}^2 \bigg ]\\&\le C T^2 \Vert I_N \phi \Vert ^4_{{HS }(L^2;{\dot{H}}^\frac{3}{4})}+\frac{1}{8}{\mathbb {E}}\bigg [\sup _{0\le t\le \tau }E(I_N u(t))\bigg ]. \end{aligned} \end{aligned}$$
(5.7)

Finally, the desired estimate (5.4) follows from (5.3), (5.5), (5.6), and (5.7). \(\square \)

6 Proof of Theorem 1.1

In this section, we prove global well-posedness of SNLS (1.6) (Theorem 1.1). In the current stochastic setting, it suffices to prove the following “almost” almost sure global well-posedness.

Proposition 6.1

Given \(\frac{5}{6}< s < 1\), let \(u_0 \in H^s({\mathbb {R}}^3)\) and \(\phi \in {HS }(L^2; H^s)\). Then, given any \(T, \varepsilon > 0\), there exists a set \( \Omega _{T, \varepsilon }\subset \Omega \) such that

(i) \(P( \Omega _{T, \varepsilon }^c) < \varepsilon \).

(ii) For each \(\omega \in \Omega _{T, \varepsilon }\), there exists a (unique) solution u to SNLS (1.6) in \(C([0,T];H^s({\mathbb {R}}^3)) \) with \(u|_{t = 0} = u_0\) and the noise given by \(\phi \xi = \phi \xi (\omega )\).

Once we prove Proposition 6.1, Theorem 1.1 follows from the Borel–Cantelli lemma. See, for example, [1, 12]. See also Remark 6.2. Hence, in the remaining part of this paper, we focus on proving Proposition 6.1.

Proof of Proposition 6.1

As in the deterministic setting [8], we first apply the scaling (3.17), where \(\lambda = \lambda (N) = \lambda (T, \varepsilon )\gg 1 \) is to be chosen later. Note that, given \(\omega \in \Omega \), \( u = u(\omega ) \) solves (1.6) on [0, T] if and only if the scaled function \( u^\lambda = u^\lambda (\omega )\) solves (1.6) on \([0, \lambda ^2 T]\) with the scaled initial data \( u_0^\lambda = u^\lambda (0)\). We then apply the I-operator to the scaled function \(u^\lambda \). In the following, we focus on studying the scaled I-SNLS (3.23). In view of Remark 2.4 and (3.3) with Lemma 2.3, it suffices to show that \( \Vert I_N u^\lambda \Vert _{{\dot{H}}^1} \) remains finite on \([0, \lambda ^2 T]\) with large probability.

Fix \(\frac{5}{6}< s < 1\) and \(u_0 \in H^s({\mathbb {R}}^3)\). Given large \(T \gg 1 \) and small \(\varepsilon > 0\), fix \(N= N(T, \varepsilon ) \gg 1 \) (to be chosen later). We now choose \(\lambda = \lambda (N) \gg 1\) such that

$$\begin{aligned} N^{2-2s}\lambda ^{1-2s} \ll N^{-2\theta } \end{aligned}$$
(6.1)

for some small \(\theta >0\). More precisely, we can choose

$$\begin{aligned} \lambda \sim N^\frac{2- 2s+2\theta }{2s-1} \gg 1 \end{aligned}$$
(6.2)

under the condition that \(\frac{1}{2}< s < 1\). Then, from the scaling property (3.28) of the modified energy and (6.1), we have

$$\begin{aligned} E(I_N u_{0}^{\lambda }) \le C(u_0) N^{2-2s}\lambda ^{1-2s} \ll N^{-2\theta } \eta _0 \ll \varepsilon \eta _0 \end{aligned}$$
(6.3)

by choosing \(N = N(\varepsilon ) \gg 1\), where \(\eta _0\) is as in (3.9).
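Indeed, with \(\lambda \) as in (6.2), a direct computation verifies (6.1):

$$\begin{aligned} N^{2-2s}\lambda ^{1-2s} \sim N^{2-2s}\cdot N^{(1-2s)\cdot \frac{2-2s+2\theta }{2s-1}} = N^{(2-2s)-(2-2s+2\theta )} = N^{-2\theta }, \end{aligned}$$

so that (6.1) holds once the implicit proportionality constant in (6.2) is taken sufficiently large.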

Let \(\Psi ^\lambda \) denote the stochastic convolution corresponding to the scaled noise \(\phi ^\lambda \xi ^\lambda \). By Lemma 3.4, (6.1), and (6.2), and by choosing \(N = N(T, \varepsilon ) \gg 1\), we have, for any finite \(p \ge 2\),

$$\begin{aligned} \begin{aligned} \Big \Vert \Vert \nabla I_N \Psi ^\lambda \Vert _{X^{0,\frac{1}{2}-}{([j,j+1])}} \Big \Vert _{L^p(\Omega )}&\lesssim N^{ - \theta } \lambda ^{-1} \Vert \phi \Vert _{{HS }(L^2; \dot{H}^s)}\\&\ll (\varepsilon \lambda ^{-2} T^{-1})^\frac{1}{p} \eta _1, \end{aligned} \end{aligned}$$
(6.4)

uniformly in \(j \in {\mathbb {N}}\cup \{0\}\), where \(\eta _1\) is as in (3.9). Note that, by choosing \(p \gg 1 \), (6.4) imposes only the mild condition \(N \ge (\varepsilon ^{-1} T)^{0+}\). For \(j \in {\mathbb {N}}\cup \{0\}\), define \(A^j_\varepsilon \subset \Omega \) by

$$\begin{aligned} A^j_\varepsilon =\bigg \{ \omega \in \Omega : \Vert \nabla I_N \Psi ^\lambda \Vert _{X^{0,\frac{1}{2}-}([j,j+1])}\le \eta _1 \bigg \}. \end{aligned}$$
(6.5)

Now, define \(\Omega _{T, \varepsilon }^{(1)}\) by

$$\begin{aligned} \Omega ^{(1)}_{T, \varepsilon }=\bigcap _{j=0}^{[ \lambda ^2T ]} A^j_\varepsilon , \end{aligned}$$

where \([ \lambda ^2T ]\) denotes the integer part of \(\lambda ^2 T\). Then, it follows from Chebyshev’s inequality and (6.4) that

$$\begin{aligned} P\big ( \Omega \setminus \Omega ^{(1)}_{T, \varepsilon } \big ) < \frac{\varepsilon }{2}. \end{aligned}$$
(6.6)
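More precisely, Chebyshev’s inequality with (6.4) gives \(P\big ((A^j_\varepsilon )^c\big ) \le \eta _1^{-p}\, {\mathbb {E}}\Big [ \Vert \nabla I_N \Psi ^\lambda \Vert _{X^{0,\frac{1}{2}-}([j,j+1])}^p\Big ] \ll \varepsilon \lambda ^{-2} T^{-1}\), uniformly in \(j\), and hence

$$\begin{aligned} P\big ( \Omega \setminus \Omega ^{(1)}_{T, \varepsilon } \big ) \le \sum _{j=0}^{[\lambda ^2T]} P\big ((A^j_\varepsilon )^c\big ) \ll \big ([\lambda ^2T]+1\big )\cdot \varepsilon \lambda ^{-2} T^{-1} \lesssim \varepsilon , \end{aligned}$$

which yields (6.6) after adjusting the implicit constants.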

Lastly, note that from Lemma 3.4 with (6.1) and (6.2), we have

$$\begin{aligned} \begin{aligned} \lambda T^\frac{1}{2} \Vert I_N \phi ^\lambda \Vert _{{HS }(L^2;{\dot{H}}^1)}&\lesssim N^{1-s}\lambda ^{\frac{1}{2} - s} T^\frac{1}{2}\Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^s)} \ll N^{-\theta } T^\frac{1}{2}\Vert \phi \Vert _{{HS }(L^2;H^s)}\\&\ll \varepsilon ^\frac{1}{2} \Vert \phi \Vert _{{HS }(L^2;H^s)} \end{aligned} \end{aligned}$$
(6.7)

and

$$\begin{aligned} \begin{aligned} \lambda T^\frac{1}{2} \Vert I_N \phi ^\lambda \Vert _{{HS }(L^2;{\dot{H}}^\frac{3}{4})}&\lesssim \lambda ^{- \frac{1}{4}} T^\frac{1}{2}\Vert \phi \Vert _{{HS }(L^2;{\dot{H}}^\frac{3}{4})} \ll N^{-\gamma } T^\frac{1}{2}\Vert \phi \Vert _{{HS }(L^2;H^s)}\\&\ll \varepsilon ^\frac{1}{4} \Vert \phi \Vert _{{HS }(L^2;H^s)} \end{aligned} \end{aligned}$$
(6.8)

by choosing \(N = N(T, \varepsilon ) \gg 1\) sufficiently large, where \(\gamma \) is given by

$$\begin{aligned} \gamma = \frac{1-s+\theta }{4s-2}>0. \end{aligned}$$
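Indeed, the exponent \(\gamma \) arises from (6.2) via

$$\begin{aligned} \lambda ^{-\frac{1}{4}} \sim N^{-\frac{1}{4}\cdot \frac{2-2s+2\theta }{2s-1}} = N^{-\frac{1-s+\theta }{4s-2}} = N^{-\gamma }. \end{aligned}$$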

Now, we define a stopping time \(\tau \) by

$$\begin{aligned} \tau =\tau _\omega (I_N u^\lambda _0, I_N \phi ^\lambda ): =\inf \Big \{ t: \sup _{0\le t' \le t} E(I_N u^\lambda (t'))\ge \eta _0 \Big \}, \end{aligned}$$
(6.9)

where \(\eta _0\) is as in (3.9). Note that in view of the blowup alternative stated in Proposition 3.1, the condition (6.9) guarantees that the solution \(I_N u^\lambda \) to the scaled I-SNLS (3.23) exists on \([0, \tau ]\). Then, set

$$\begin{aligned} \Omega ^{(2)}_{T, \varepsilon } = \Big \{ \omega \in \Omega : \tau \ge \lambda ^2 T \Big \} \end{aligned}$$
(6.10)

and

$$\begin{aligned} \Omega _{T, \varepsilon } = \Omega ^{(1)}_{T, \varepsilon } \cap \Omega ^{(2)}_{T, \varepsilon }. \end{aligned}$$
(6.11)

We claim that

$$\begin{aligned} P\big ( \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon } \big ) < \frac{\varepsilon }{2}. \end{aligned}$$
(6.12)

Then, it follows from (6.11) with (6.6) and (6.12) that

$$\begin{aligned} P( \Omega _{T, \varepsilon }^c) < \varepsilon . \end{aligned}$$
(6.13)

In the following, we prove (6.12). Let \( \omega \in \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }\). Then, from Remark 3.3 with (6.5) and (6.9), we have

$$\begin{aligned} \Vert \nabla I_N u^\lambda \Vert _{X^{0,\frac{1}{2}-}([j,j+1])} \le C_0 \end{aligned}$$

for any \(j \in {\mathbb {N}}\cup \{0\}\) such that \(j + 1\le \tau \). Hence, it follows from Proposition 4.1 with (3.23) that

$$\begin{aligned} \bigg | {{\,\mathrm{Im}\,}}\int _{j}^{j+1}\int _{{\mathbb {R}}^3} \overline{ \Delta I_N u^\lambda - I_N {\mathcal {N}}( u^\lambda )} [I_N, {\mathcal {N}}](u^\lambda )dxdt \bigg | \lesssim N^{-1+}. \end{aligned}$$
(6.14)

Then, from Lemma 5.1 and (6.14), we can write (5.4) as

$$\begin{aligned} \begin{aligned} {\mathbb {E}}&\bigg [\sup _{0\le t\le \tau \wedge \lambda ^2T} E(I_N u^\lambda (t))\bigg ] \\&\le 2E(I_N u^\lambda _0) +C \lambda ^2T\Vert I_N \phi ^\lambda \Vert ^2_{{HS }(L^2;{\dot{H}}^1)} + C \lambda ^4T^2 \Vert I_N \phi ^\lambda \Vert ^4_{{HS }(L^2;{\dot{H}}^\frac{3}{4})} \\&\quad +C \lambda ^2 TN^{-1+}. \end{aligned} \end{aligned}$$
(6.15)

On the other hand, from (6.9) and the continuity of the modified energy (in time), we have

$$\begin{aligned} \sup _{0\le t\le \tau \wedge \lambda ^2 T} E(I_N u^\lambda (t;\omega )) =\eta _0 \end{aligned}$$
(6.16)

for any \( \omega \in \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }\). Hence, from (6.15) and (6.16) with (6.3), (6.7), and (6.8), we have

$$\begin{aligned} \begin{aligned} P\big ( \Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon } \big )&= {\mathbb {E}}\Big [ {\mathbf {1}}_{\Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }}\Big ]\\&= \eta _0^{-1} {\mathbb {E}}\bigg [ {\mathbf {1}}_{\Omega _{T, \varepsilon }^{(1)} \setminus \Omega ^{(2)}_{T, \varepsilon }}\cdot \sup _{0\le t\le \tau \wedge \lambda ^2T} E(I_N u^\lambda (t))\bigg ] \\&\le \eta _0^{-1} {\mathbb {E}}\bigg [\sup _{0\le t\le \tau \wedge \lambda ^2T} E(I_N u^\lambda (t))\bigg ] \\&\le 2\eta _0^{-1} E(I_N u^\lambda _0)+C\eta _0^{-1} \lambda ^2T\Vert I_N \phi ^\lambda \Vert ^2_{{HS }(L^2;{\dot{H}}^1)} \\&\quad + C \eta _0^{-1} \lambda ^4T^2 \Vert I_N \phi ^\lambda \Vert ^4_{{HS }(L^2;{\dot{H}}^\frac{3}{4})} +C \eta _0^{-1} \lambda ^2 TN^{-1+}\\&\le \frac{\varepsilon }{4} +C \eta _0^{-1} \lambda ^2 TN^{-1+}. \end{aligned} \end{aligned}$$
(6.17)

As in the deterministic case [8], we can make the last term on the right-hand side of (6.17) small, provided that \(s > \frac{5}{6}\). In fact, with (6.2), we can choose \(N = N(T, \varepsilon ) \gg 1\) such that

$$\begin{aligned} T\lesssim \varepsilon \lambda ^{-2}N^{1-}&\sim \varepsilon N^{\frac{6s-5 -4\theta -}{2s-1}}, \end{aligned}$$
(6.18)

guaranteeing

$$\begin{aligned} C \eta _0^{-1} \lambda ^2 TN^{-1+} < \frac{\varepsilon }{4}. \end{aligned}$$
(6.19)

Note that (6.18) is possible only when \(6s > 5 + 4\theta \), which can be satisfied when \(s > \frac{5}{6}\) by choosing \(\theta = \theta (s) > 0\) sufficiently small.
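For the reader’s convenience, we record the arithmetic behind the exponent in (6.18). By (6.2), we have

$$\begin{aligned} \varepsilon \lambda ^{-2}N^{1-} \sim \varepsilon N^{1-\frac{2(2-2s+2\theta )}{2s-1}-} = \varepsilon N^{\frac{(2s-1)-(4-4s+4\theta )}{2s-1}-} = \varepsilon N^{\frac{6s-5-4\theta -}{2s-1}}, \end{aligned}$$

and the exponent of N is positive precisely when \(6s > 5+4\theta \), up to the arbitrarily small loss indicated by the minus sign.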

Therefore, the desired bound (6.12) follows from (6.17) and (6.19), and thus, (6.13) holds by choosing \(N = N(T, \varepsilon ) \gg 1\) such that (6.3), (6.4), (6.7), (6.8), and (6.18) are satisfied.

By the definition (6.11), for any \(\omega \in \Omega _{T, \varepsilon }\), the solution \(I_Nu^\lambda = I_N u^\lambda (\omega )\) to the scaled I-SNLS (3.23) exists on the time interval \([0, \lambda ^2T]\). Together with (6.13), this proves Proposition 6.1. \(\square \)

Remark 6.2

As in the usual application of the I-method in the deterministic setting, our proof of Proposition 6.1 yields a polynomial growth bound on the \(H^s\)-norm of a solution.

From the scaling (3.17), we have

$$\begin{aligned} E(I_N u^\lambda (\lambda ^2t))=\lambda ^{-1} E(I_N u(t)). \end{aligned}$$
(6.20)

Thus, given \(T>0\), from (6.20), (6.9), (6.10), (6.11), (6.2), and (6.18), we have

$$\begin{aligned} \begin{aligned} E(I_N u(T))&= \lambda E(I_N u^\lambda (\lambda ^2T)) \le \lambda \eta _0 \lesssim \lambda \\&\lesssim N^{\frac{2-2s+2\theta }{2s-1}}\\&\lesssim T^{\frac{2-2s+2\theta }{6s-5-4\theta -}}=T^{\frac{1-s+\theta }{3(s-\frac{5}{6})-2\theta -}} \end{aligned} \end{aligned}$$
(6.21)

for \(\omega \in \Omega _{T, \varepsilon }\), where the implicit constant depends on \(u_0\) and \(\Omega _{T, \varepsilon }\). On the other hand, from (3.3), we have

$$\begin{aligned} \Vert u(t)\Vert ^2_{H^s}&\lesssim E(I_N u(t))+\Vert u(t)\Vert ^2_{L^2}. \end{aligned}$$
(6.22)

Lastly, it follows from Remark 2.4 (in particular, Footnote 5) that

$$\begin{aligned} \sup _{0\le t \le T}\Vert u(t)\Vert ^2_{L^2} \le C(u_0, \phi , \omega ) T. \end{aligned}$$
(6.23)

Therefore, from (6.21), (6.22), and (6.23), we conclude that

$$\begin{aligned} \Vert u(t)\Vert ^2_{H^s}\le C(u_0, \phi , \omega ) \max \Big (T^{\frac{1-s+\theta }{3(s-\frac{5}{6})-2\theta -}}, T\Big ) \end{aligned}$$
(6.24)

for any \(0 \le t \le T\) and \(\omega \in \Omega _{T, \varepsilon }\).

Let \(u_0 \in H^s({\mathbb {R}}^3)\) for some \(s > \frac{5}{6}\). Given small \(\varepsilon > 0\), we apply Proposition 6.1 and construct a set \(\Omega _{2^j, 2^{-j}\varepsilon }\) for each \(j \in {\mathbb {N}}\). Now, set \(\Sigma = \bigcup _{0 < \varepsilon \ll 1} \bigcap _{j \in {\mathbb {N}}} \Omega _{2^j, 2^{-j}\varepsilon }\). Then, for each \(\omega \in \Sigma \), there exists \(\varepsilon > 0\) such that \(\omega \in \bigcap _{j \in {\mathbb {N}}} \Omega _{2^j, 2^{-j}\varepsilon }\). In particular, the corresponding solution u to SNLS (1.6) exists globally in time. Furthermore, from (6.24), we have

$$\begin{aligned} \Vert u(t)\Vert ^2_{H^s}\le C(u_0, \phi , \omega ) \max \Big (t^{\frac{1-s+\theta }{3(s-\frac{5}{6})-2\theta -}}, t\Big ) \end{aligned}$$

for any \(t > 0\).
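We briefly indicate the measure estimate behind this construction. Since \(P(\Omega _{2^j, 2^{-j}\varepsilon }^c) < 2^{-j}\varepsilon \) by Proposition 6.1, a union bound yields

$$\begin{aligned} P\bigg (\bigcap _{j\in {\mathbb {N}}}\Omega _{2^j, 2^{-j}\varepsilon }\bigg ) \ge 1 - \sum _{j\in {\mathbb {N}}}P\big (\Omega _{2^j, 2^{-j}\varepsilon }^c\big ) > 1 - \sum _{j\in {\mathbb {N}}}2^{-j}\varepsilon = 1-\varepsilon . \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\), we conclude that \(P(\Sigma ) = 1\).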