1 Introduction

1.1 Stochastic nonlinear Schrödinger equations

In this paper, we study the following Cauchy problem associated to a stochastic nonlinear Schrödinger equation (SNLS) of the form:

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _t u - \Delta u \pm |u|^{2k}u = F(u,\phi \xi )\\ u|_{t=0}=u_0 \in H^s(\mathbb {T}^d) \end{array}\right. } \ (t,x)\in (0,\infty )\times \mathbb {T}^d, \end{aligned}$$
(1.1)

where \(k,d\ge 1\) are integers, \(\mathbb {T}^d~:=~ \mathbb {R}^d/\mathbb {Z}^d\), and \(u:[0,\infty )\times \mathbb {T}^d\rightarrow \mathbb {C}\) is the unknown stochastic process. The term \(F(u,\phi \xi )\) is a stochastic forcing and in this paper we treat the following cases: the additive noise, i.e.

$$\begin{aligned} F(u,\phi \xi ) = \phi \xi \end{aligned}$$
(1.2)

and the (linear) multiplicative noise, i.e.

$$\begin{aligned} F(u,\phi \xi ) = u\cdot \phi \xi , \end{aligned}$$
(1.3)

where the right-hand side of (1.3) is understood as an Itô product. Here, \(\xi \) is a space-time white noise, i.e. a Gaussian stochastic process with correlation function \(\mathbb {E}[\xi (t,x)\xi (s,y)] = \delta (t-s) \delta (x-y)\), where \(\delta \) denotes the Dirac delta function. We recall that white noise is very rough: the spatial regularity of \(\xi \) is less than \(-\frac{d}{2}\). Since the linear Schrödinger equation does not provide any smoothing properties, we consider instead a spatially smoothed out version \(\phi \xi \), where \(\phi \) is a linear operator from \(L^2(\mathbb {T}^d)\) into \(H^s(\mathbb {T}^d)\), on which we make certain assumptions depending on whether we are working with (1.2) or (1.3).
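To make this roughness claim concrete, here is a standard heuristic computation; it uses the representation \(W(t) = \sum _{n\in \mathbb {Z}^d}\beta _n(t) e_n\) from (1.9) below and assumes the Brownian motions are normalized so that \(\mathbb {E}|\beta _n(t)|^2 = t\). For fixed \(t\ge 0\) and \(\sigma \in \mathbb {R}\),

$$\begin{aligned} \mathbb {E}\Vert W(t)\Vert _{H^\sigma (\mathbb {T}^d)}^2 = \sum _{n\in \mathbb {Z}^d} \langle n \rangle ^{2\sigma } \, \mathbb {E}|\beta _n(t)|^2 = t \sum _{n\in \mathbb {Z}^d} \langle n \rangle ^{2\sigma }, \end{aligned}$$

which is finite if and only if \(2\sigma <-d\), i.e. \(\sigma <-\frac{d}{2}\). The noise \(\xi =\partial _t W\) is rougher still in time, but has the same spatial regularity.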

Our main goal in this paper is to prove local well-posedness of SNLS with either additive or multiplicative noise in the Sobolev space \(H^s(\mathbb {T}^d)\), for any subcritical non-negative regularity s (see below for the meaning of “subcritical”). In this work, solutions to (1.1) are understood as solutions to the mild formulation

$$\begin{aligned} u(t) = S(t) u_0 \pm i\int _0^t S(t-t') (|u|^{2k}u)(t')\,dt' - i \Psi (t),\ t\ge 0, \end{aligned}$$
(1.4)

where \(S(t) :=e^{-it\Delta }\) is the linear Schrödinger propagator. The term \(\Psi (t)\) is a stochastic convolution corresponding to the stochastic forcing \(F(u,\phi \xi )\), see (1.11) and (1.12) below. Our local-in-time argument uses the Fourier restriction norm method introduced by Bourgain [6] and Klainerman and Machedon [26], as well as the periodic Strichartz estimates proved by Bourgain and Demeter [5]. In establishing local well-posedness for the multiplicative SNLS, we also have to combine these tools with the truncation method used by de Bouard and Debussche [15,16,17]. Moreover, by proving probabilistic a priori bounds on the mass and energy of solutions, we establish global well-posedness in (i) \(L^2(\mathbb {T})\) for cubic nonlinearities (i.e. \(k=1\)) when \(d=1\), and (ii) \(H^1(\mathbb {T}^d)\) for all defocusing energy-subcritical nonlinearities—see Theorem 1.5 and the preceding discussion for more details.

Previously, de Bouard and Debussche [15, 16] studied SNLS on \(\mathbb {R}^d\). They considered noise \(\phi \xi \) that is white in time but correlated in space, where \(\phi \) is a smoothing operator from \(L^2(\mathbb {R}^d)\) to \(H^s(\mathbb {R}^d)\). They proved global existence and uniqueness of mild solutions in (i) \(L^2(\mathbb {R})\) for the one-dimensional cubic SNLS and (ii) \(H^1(\mathbb {R}^d)\) for defocusing energy-subcritical SNLS. Other works related to SNLS on \(\mathbb {R}^d\) include the works by Barbu, Röckner, and Zhang [1, 2] and by Hornung [23]. More recently, by using the dispersive estimate for the linear Schrödinger operator, Oh, Pocovnicu and Wang [32] treated the additive SNLS with noise rougher than that considered in [16] (see also Remark 1.9).

In the \(\mathbb {R}^d\) setting, the arguments given in [15, 16] use fixed point arguments in the space \(C_tH_x^1 \cap L_t^pW_x^{1,q}([0,T]\times \mathbb {R}^d)\), for some \(T>0\) and some suitable \(p,q\ge 1\). In particular, they use the (deterministic) Strichartz estimates:

$$\begin{aligned} \Vert S(t)f\Vert _{L_t^pL_x^q(\mathbb {R}\times \mathbb {R}^d)} \le C_{p,q} \Vert f\Vert _{L^2_x(\mathbb {R}^d)}, \end{aligned}$$
(1.5)

where the pair \((p,q)\) is admissible, i.e. \(\frac{2}{p}+ \frac{d}{q} = \frac{d}{2}\), \(2\le p,q\le \infty \), and \((p,q,d)\ne (2,\infty ,2)\). On \(\mathbb {T}^d\), Bourgain and Demeter [5] proved the \(\ell ^2\)-decoupling conjecture, and as a corollary, the following periodic Strichartz estimates:

$$\begin{aligned} \big \Vert S(t)P_{\le N} f\big \Vert _{L_{t,x}^p([0,T]\times \mathbb {T}^d)} \le C_{p,T,\varepsilon } N^{\frac{d}{2}- \frac{d+2}{p} +\varepsilon } \Vert f\Vert _{L^2_x(\mathbb {T}^d)}. \end{aligned}$$
(1.6)

Here, \(P_{\le N}\) is the Littlewood-Paley projection onto frequencies \(\{n \in \mathbb {Z}^d : |n|\le N\}\), \(p\ge \frac{2(d+2)}{d}\), and \(\varepsilon >0\) is an arbitrarily small quantity. However, such Strichartz estimates are not strong enough for a fixed point argument in mixed Lebesgue spaces for the deterministic NLS on \(\mathbb {T}^d\). To overcome this problem, we shall employ the Fourier restriction norm method by means of \(X^{s,b}\)-spaces defined via the norms

$$\begin{aligned} \left\| u\right\| _{X^{s,b}} :=\big \Vert \langle n \rangle ^s\langle \tau -|n|^2 \rangle ^b\mathcal {F}_{t,x}(u)(\tau ,n)\big \Vert _{L^2_\tau \ell ^2_n(\mathbb {R}\times \mathbb {Z}^d)}. \end{aligned}$$
(1.7)

The indices \(s,b\in \mathbb {R}\) measure the spatial and temporal regularities of functions \(u\in X^{s,b}\), and \({\mathcal {F}}_{t,x}\) denotes the Fourier transform of functions defined on \(\mathbb {R}\times \mathbb {T}^d\). This harmonic analytic method was introduced by Bourgain [6] for the deterministic nonlinear Schrödinger equation (NLS):

$$\begin{aligned} i\partial _t u - \Delta u \pm |u|^{2k}u =0 . \end{aligned}$$
(1.8)

Independently, Klainerman and Machedon [26] used the same method in the context of nonlinear wave equations.

1.2 Main results

We now state more precisely the problems considered here. Let \((\Omega , {\mathcal {A}},\{{\mathcal {A}}_t\}_{t\ge 0}, {\mathbb {P}})\) be a filtered probability space. Let W be the \(L^2(\mathbb {T}^d)\)-cylindrical Wiener process given by

$$\begin{aligned} W(t,x,\omega ) := \sum _{n\in \mathbb {Z}^d} \beta _n(t,\omega ) e_n(x), \end{aligned}$$
(1.9)

where \(\{\beta _n\}_{n\in \mathbb {Z}^d}\) is a family of independent complex-valued Brownian motions associated with the filtration \(\{{\mathcal {A}}_t\}_{t\ge 0}\) and \(e_n(x):= \exp ( 2\pi i n\cdot x)\), \(n\in \mathbb {Z}^d\). The space-time white noise \(\xi \) is given by the (distributional) time derivative of W, i.e. \(\xi =\frac{\partial W}{\partial t}\). Since the spatial regularity of W is too low (more precisely, for each fixed \(t\ge 0\), \(W(t)\in H^{-\frac{d}{2}-\varepsilon }(\mathbb {T}^d)\) almost surely for any \( \varepsilon >0\)), we consider a smoothed out version \(\phi W\) as follows. Recall that a bounded linear operator \(\phi \) from a separable Hilbert space H to a Hilbert space K is Hilbert-Schmidt if

$$\begin{aligned} \Vert \phi \Vert _{\mathcal {L}^2(H;K)}^2 := \sum _{n\in \mathbb {Z}^d} \Vert \phi h_n\Vert _{K}^2 <\infty , \end{aligned}$$
(1.10)

where \(\{h_n\}_{n\in \mathbb {Z}^d}\) is an orthonormal basis of H (recall that \(\Vert \cdot \Vert _{\mathcal {L}^2(H;K)}\) does not depend on the choice of \(\{h_n\}_{n\in \mathbb {Z}^d}\)). Throughout this work, we assume \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\) for appropriate \(s\ge 0\). In this case, \(\phi W\) is a Wiener process with sample paths in \(H^s(\mathbb {T}^d)\) and its time derivative \(\phi \xi \) corresponds to a noise which is white in time and correlated in space (with correlation function depending on \(\phi \)). We can now define the stochastic convolution \(\Psi (t)\) from (1.4) for (i) the additive noise (1.2):

$$\begin{aligned} \Psi (t) :=\int _0^t S(t-t') \phi \,dW(t') \end{aligned}$$
(1.11)

and (ii) the multiplicative noise (1.3):

$$\begin{aligned} \Psi (t):= \Psi [u](t) := \int _0^t S(t-t') u(t') \phi \,dW(t'). \end{aligned}$$
(1.12)

For the deterministic nonlinear Schrödinger equation (NLS), i.e. \(F\equiv 0\) in (1.1), there is a so-called scaling-critical regularity, given by

$$\begin{aligned} s_{crit } := \frac{d}{2}-\frac{1}{k}. \end{aligned}$$
(1.13)
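
The value (1.13) comes from the scaling symmetry of the deterministic NLS (with \(F\equiv 0\)) on \(\mathbb {R}^d\); the following is the standard heuristic, which is only formal in the periodic setting. If u solves the equation on \(\mathbb {R}^d\), then so does \(u_\lambda (t,x):=\lambda ^{\frac{1}{k}}u(\lambda ^2 t,\lambda x)\) for any \(\lambda >0\), and

$$\begin{aligned} \Vert u_\lambda (0)\Vert _{\dot{H}^s(\mathbb {R}^d)} = \lambda ^{s+\frac{1}{k}-\frac{d}{2}} \Vert u(0)\Vert _{\dot{H}^s(\mathbb {R}^d)}, \end{aligned}$$

so the \(\dot{H}^s\)-norm of the initial data is invariant under this scaling exactly when \(s=\frac{d}{2}-\frac{1}{k}=s_{crit }\).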

See [35, Section 3.1]. In this paper, we consider the Cauchy problem (1.1) (with either (1.2) or (1.3)) posed in \(H^s(\mathbb {T}^d)\) with \(s>s_{crit }\), i.e. we consider the scaling-subcritical Cauchy problem. For the one-dimensional cubic problems (i.e. when \((d,k)=(1,1)\)), we also require \(s\ge 0\). We are now ready to state our first result.

Theorem 1.1

(Local well-posedness for additive SNLS) Given non-negative \(s>s_{crit }\), let \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\) and \(F(u,\phi \xi )=\phi \xi \). Then for any \(u_0\in H^s(\mathbb {T}^d)\), there exist a stopping time T that is almost surely positive and a unique solution \(u \in C([0,T];H^s(\mathbb {T}^d)) \cap X^{s,\frac{1}{2}-\varepsilon }([0,T])\) to SNLS with additive noise, for some \(\varepsilon >0\).

Here, \(X^{s,b}([0,T])\) is a time restricted version of the \(X^{s,b}\)-space, see (2.5) below. The proof of this result relies on a fixed point argument for (1.4) in a closed subset of \(X^{s,b}([0,T])\). We are required to use \(b=\frac{1}{2}-\varepsilon \) in order to capture the temporal regularity of \(\Psi \). Since \(X^{s,b}([0,T])\) does not embed into \(C([0,T];H^s)\) when \(b<\frac{1}{2}\), we need to prove the continuity in time of solutions a posteriori. Our local well-posedness result above (as well as Theorem 1.6 below) covers all non-negative subcritical regularities.

Remark 1.2

We point out that \(s_{crit }\) is negative only for the one-dimensional cubic NLS, i.e. \((d,k)=(1,1)\) for which \(s_{crit }=-\frac{1}{2}\). Below \(L^2(\mathbb {T})\), the deterministic cubic NLS on \(\mathbb {T}\) was shown to be ill-posed. Indeed, Christ, Colliander and Tao [11] and Molinet [29] showed that the solution map \(u_0\in H^s(\mathbb {T}) \mapsto u(t)\in H^s(\mathbb {T})\) is discontinuous whenever \(s<0\). More recently, Guo and Oh [19] showed an even stronger ill-posedness result, in the sense that for any \(u_0\in H^s(\mathbb {T})\), \(s\in (-\frac{1}{8},0)\), there is no distributional solution u that is also a limit of smooth solutions in \(C([-T,T]; H^s(\mathbb {T}))\). In the (super)critical regime, i.e. for \(s\le -\frac{1}{2} =s_{crit }\), Kishimoto [25] showed a norm inflation phenomenon at any \(u_0\in H^s(\mathbb {T})\): for any \(\varepsilon >0\) and \(u_0\in H^s(\mathbb {T})\), there exists a solution \(u^{\varepsilon }\) to NLS such that \(\Vert u^{\varepsilon }(0)-u_0\Vert _{H^s(\mathbb {T})}<\varepsilon \) and \(\Vert u^{\varepsilon }(t)\Vert _{H^s(\mathbb {T})}>\varepsilon ^{-1}\) for some \(t\in (0, \varepsilon )\). See also [31, 33].

Regarding the one-dimensional cubic SNLS on \(\mathbb {T}\), we point out that recently Forlano, Oh and Wang [18] studied a renormalized (Wick ordered, see also [12]) additive SNLS with a weaker assumption than that of Theorem 1.1 above. While we assume that \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}); L^2(\mathbb {T}))\), the work of [18] assumes that \(\phi \) is \(\gamma \)-radonifying from \(L^2(\mathbb {T})\) into the Fourier-Lebesgue space \(\mathcal {F}L^{s,p}(\mathbb {T})\) with \(s>0\) and \(1<p<\infty \). In particular, this allows them to handle almost space-time white noise, namely \(\phi =\langle \partial _x \rangle ^{-\alpha }\) with \(\alpha >0\) arbitrarily small.

Remark 1.3

Although we present our results for SNLS on the standard torus \(\mathbb {T}^d=\mathbb {R}^d/\mathbb {Z}^d\), our arguments hold on any torus \(\mathbb {T}_{{\varvec{\alpha }}}^d=\prod _{j=1}^d\mathbb {R}/{\alpha _j}\mathbb {Z}\,\), where \(\varvec{\alpha }=(\alpha _1, \ldots , \alpha _d)\in (0,\infty )^d\). This is because the periodic Strichartz estimates (1.6) of Bourgain and Demeter [5] hold for irrational tori (\(\mathbb {T}_{{\varvec{\alpha }}}^d\) is irrational if there is no \(\varvec{\gamma }\in {\mathbb {Q}}^d\setminus \{0\}\) such that \(\varvec{\gamma }\cdot \varvec{\alpha }=0\)). Prior to [5], Strichartz estimates were harder to establish on irrational tori—see [20] and references therein.

Remark 1.4

The deterministic NLS is locally well-posed in the critical space \(H^{s_{crit }}(\mathbb {T}^d)\) for almost all pairs \((d,k)\), except for the cases (1, 2), (2, 1), (3, 1), which are still open—see [7, 21, 22, 36]. In these papers, the authors employ the critical spaces \(X^s, Y^s\) based on the spaces \(U^2\), \(V^2\) of Koch and Tataru [27]. We point out that Brownian motions belong almost surely to \(V^p\) for \(p>2\), but not to \(V^2\) (hence not to \(U^2\) either). Consequently, the spaces \(X^s, Y^s\) are not suitable for obtaining local well-posedness of SNLS.

Now let us recall the following conservation laws for the deterministic NLS:

$$\begin{aligned} M(u(t))&:= \frac{1}{2}\int _{\mathbb {T}^d} |u(t,x)|^2\,dx \end{aligned}$$
(1.14)
$$\begin{aligned} E(u(t))&:= \frac{1}{2} \int _{\mathbb {T}^d} |\nabla _x u(t,x)|^2 \pm \frac{1}{2k+2} \int _{\mathbb {T}^d} |u(t,x)|^{2k+2} \,dx, \end{aligned}$$
(1.15)

where the sign ± in (1.15) matches that in (1.1) and (1.4). Recall that SNLS (1.1) with the \(+\) sign is called defocusing (and focusing for the − sign). We say that SNLS is energy-subcritical if \(s_{crit }<1\) (i.e. for any \(k\ge 1\) when \(d=1,2\) and for \(k=1\) when \(d=3\)).
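
For the reader's convenience, the case enumeration in the definition above follows from a one-line computation:

$$\begin{aligned} s_{crit }<1 \iff \frac{d}{2}-\frac{1}{k}<1 \iff k(d-2)<2, \end{aligned}$$

which holds for every \(k\ge 1\) when \(d\in \{1,2\}\), only for \(k=1\) when \(d=3\), and for no \(k\ge 1\) when \(d\ge 4\).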

For solutions of SNLS, these quantities are no longer necessarily conserved. However, Itô's lemma allows us to bound them probabilistically, similarly to de Bouard and Debussche [15, 16]. Therefore, we obtain the following:

Theorem 1.5

(Global well-posedness for additive SNLS) Let \(s\ge 0\). Given \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\), let \(F(u,\phi \xi )=\phi \xi \) and \(u_0\in H^s(\mathbb {T}^d)\). Then the \(H^s\)-valued solutions of Theorem 1.1 extend globally in time almost surely in the following cases:

  (i) the (focusing or defocusing) one-dimensional cubic SNLS for all \(s\ge 0\);

  (ii) the defocusing energy-subcritical SNLS for all \(s\ge 1\).

We now move onto the problem with multiplicative noise, i.e. SNLS with (1.3). For this case, we need a stronger assumption on \(\phi \). By a slight abuse of notation, for a bounded linear operator \(\phi \) from \(L^2(\mathbb {T}^d)\) to a Banach space B, we say that \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); B)\) if

$$\begin{aligned} \Vert \phi \Vert _{\mathcal {L}^2(L^2(\mathbb {T}^d);B)}^2 := \sum _{n\in \mathbb {Z}^d} \Vert \phi e_n\Vert _{B}^2 <\infty . \end{aligned}$$

For \(s\in \mathbb {R}\) and \(r\ge 1\), we also define the Fourier-Lebesgue space \(\mathcal {F}L^{s,r}(\mathbb {T}^d)\) via the norm

$$\begin{aligned} \left\| f\right\| _{\mathcal {F}L^{s,r}(\mathbb {T}^d)}:=\big \Vert \langle n \rangle ^s\widehat{f}(n)\big \Vert _{\ell ^r_n(\mathbb {Z}^d)}. \end{aligned}$$

Clearly, when \(r=2\) we have \(\mathcal {F}L^{s,r}(\mathbb {T}^d) = H^s(\mathbb {T}^d)\) and for \(s_1\le s_2\) and \(r_1\le r_2\) we have \(\mathcal {F}L^{s_2,r_1}(\mathbb {T}^d)\subset \mathcal {F}L^{s_1,r_2}(\mathbb {T}^d)\). We now state our local well-posedness result for the multiplicative SNLS.

Theorem 1.6

(Local well-posedness for multiplicative SNLS) Given non-negative \(s>s_{crit }\), let \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\). If \(s\le \frac{d}{2}\), we further impose that

$$\begin{aligned} \phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); \mathcal {F}L^{s,r}(\mathbb {T}^d)) \end{aligned}$$
(1.16)

for some \(r\in \big [1,\frac{d}{d-s}\big )\) when \(s>0\) and \(r=1\) when \(s=0\). Let \(F(u,\phi \xi )=u\cdot \phi \xi \). Then for any \(u_0\in H^s(\mathbb {T}^d)\), there exist a stopping time T that is almost surely positive and a unique solution \(u \in C([0,T];H^s(\mathbb {T}^d)) \cap X^{s,\frac{1}{2}-\varepsilon }([0,T])\) to SNLS with multiplicative noise, for some \(\varepsilon >0\).

Remark 1.7

If \(\phi \xi \) is a spatially homogeneous noise, i.e. \(\phi \) is translation invariant, then the extra assumption (1.16) is superfluous. Indeed, if \(\widehat{\phi e_n}(m) =0\) for all \(m,n\in \mathbb {Z}^d\) with \(m\ne n\), and \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\), then \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); \mathcal {F}L^{s,r}(\mathbb {T}^d))\) for any \(r\ge 1\).

We point out that an extra condition in the multiplicative case was also used by de Bouard and Debussche [16] in their study of SNLS in \(H^1(\mathbb {R}^d)\), namely they required that \(\phi \) is a \(\gamma \)-radonifying operator from \(L^2(\mathbb {R}^d)\) into \(W^{1,\alpha }(\mathbb {R}^d)\) for some appropriate \(\alpha \), as compared to the requirement that \(\phi \) is Hilbert-Schmidt from \(L^2(\mathbb {R}^d)\) into \(H^s(\mathbb {R}^d)\) in the additive case.

In the multiplicative case, the stochastic convolution depends on the solution u, and this forces us to work in the space \(L^2\big (\Omega ; C([0,T];H^s(\mathbb {T}^d)) \cap X^{s,\frac{1}{2}-\varepsilon }([0,T]) \big )\). In order to control the nonlinearity in this space, we use a truncation method which has been used for SNLS on \(\mathbb {R}^d\) by de Bouard and Debussche [15, 16]. Moreover, we combine this method with the use of \(X^{s,b}\)-spaces in a manner similar to [17], where the same authors studied the stochastic KdV equation with low regularity initial data on \(\mathbb {R}\). This introduces some technical difficulties which did not appear with the more classical Strichartz spaces used in [15, 16].

Next, we prove global well-posedness of SNLS (1.1) with multiplicative noise. Similarly to the additive case, the main ingredient is the probabilistic a priori bound on the mass and energy of a local solution u. However, we further need to obtain uniform control on the \(X^{s,b}\)-norms for solutions to truncated versions of (1.4).

Theorem 1.8

(Global well-posedness for multiplicative SNLS) Let \(s\ge 0\). Given \(\phi \) with the same assumptions as in Theorem 1.6, let \(F(u,\phi \xi )=u\cdot \phi \xi \) and \(u_0\in H^s(\mathbb {T}^d)\). Then the \(H^s\)-valued solutions of Theorem 1.6 extend globally in time in the following cases:

  (i) the (focusing or defocusing) one-dimensional cubic SNLS for all \(s\ge 0\);

  (ii) the defocusing energy-subcritical SNLS for all \(s\ge 1\).

Before concluding this introduction, let us state two remarks.

Remark 1.9

We point out that Theorems 1.1 and 1.6 are almost optimal for handling the regularity of initial data since the deterministic NLS is ill-posed for \(s<s_{crit }\) (see Remark 1.2). In terms of the regularity of the noise, at least in the additive noise case, it is possible to consider rougher noise by employing the Da Prato-Debussche trick, namely by writing a solution u to (1.4) as \(u= v+\Psi \) and considering the equation for the residual part v. In general, this procedure allows one to treat rougher noise, see for example [3, 4, 12] where they treat NLS with rough random initial data and more recently [32] where they handled supercritical noise for the additive SNLS on \(\mathbb {R}^d\). In the periodic setting however, the argument gets more complicated (see for example [3, 4] on \(\mathbb {R}^d\) vs. [12, 30] on \(\mathbb {T}^d\)). The actual implementation of the aforementioned trick requires cumbersome case-by-case analysis where the number of cases grows exponentially in k. Even for the cubic case on \(\mathbb {T}^d\) the analysis is involved, whereas on \(\mathbb {R}^d\) one can use bilinear Strichartz estimates which are not available on \(\mathbb {T}^d\).

Remark 1.10

In the multiplicative noise case, there are well-posedness results on a general compact Riemannian manifold M without boundary. In [9], Brzeźniak and Millet use the Strichartz estimates of [10] and the standard space-time Lebesgue spaces (i.e. without the Fourier restriction norm method). For \(M=\mathbb {T}^d\), Theorem 1.6 improves the result in [9] since it requires less regularity of the noise and initial data. In [8], Brzeźniak, Hornung, and Weiss construct martingale solutions in \(H^1(M)\) for the multiplicative SNLS with energy-subcritical defocusing nonlinearities and mass-subcritical focusing nonlinearities.

Organization of the paper In Sect. 2, we provide some preliminaries for the Fourier restriction norm method and prove the multilinear estimates necessary for the local well-posedness results. In Sect. 3, we prove some properties of the stochastic convolutions \(\Psi \) and \(\Psi [u]\) given respectively by (1.11) and (1.12). We prove Theorems 1.1 and 1.6 in Sect. 4. Finally, in Sect. 5 we prove the global results Theorems 1.5 and 1.8.

Notations Given \(A,B\in \mathbb {R}\), we use the notation \(A\lesssim B\) to mean \(A\le CB\) for some constant \(C\in (0,\infty )\) and write \(A\sim B\) to mean \(A\lesssim B\) and \(B\lesssim A\). We sometimes emphasize any dependencies of the implicit constant as subscripts on \(\lesssim \), \(\gtrsim \), and \(\sim \); e.g. \(A\lesssim _{p} B\) means \(A\le C B\) for some constant \(C=C(p)\in (0,\infty )\) that depends on the parameter p. We denote by \(A\wedge B\) and \(A\vee B\) the minimum and maximum between the two quantities respectively. Also, \(\lceil A\rceil \) denotes the smallest integer greater than or equal to A, while \(\lfloor A\rfloor \) denotes the largest integer less than or equal to A.

Given a function \(g:U\rightarrow \mathbb {C}\), where U is either \(\mathbb {T}^d\) or \(\mathbb {R}\), our convention of the Fourier transform of g is given by

$$\begin{aligned} \widehat{g}(\xi )=\int _U e^{2\pi i \xi \cdot x}g(x)\,dx, \end{aligned}$$

where \(\xi \) is either an element of \(\mathbb {Z}^d\) (if \(U=\mathbb {T}^d\)) or an element of \(\mathbb {R}\) (if \(U=\mathbb {R}\)). For the sake of convenience, we shall omit the \(2\pi \) from our writing since it does not play any role in our arguments.

For \(c\in \mathbb {R}\), we sometimes write \(c+\) to denote \(c+\varepsilon \) for sufficiently small \(\varepsilon >0\), and write \(c-\) for the analogous meaning. For example, the statement ‘\(u\in X^{s,\frac{1}{2}-}\)’ should be read as ‘\(u\in X^{s,\frac{1}{2}-\varepsilon }\) for sufficiently small \(\varepsilon >0\)’.

For the sake of readability, in the proofs we sometimes omit the underlying domain \(\mathbb {T}^d\) from various norms, e.g. we write \(\Vert f\Vert _{H^s}\) instead of \(\Vert f\Vert _{H^s(\mathbb {T}^d)}\) and \(\Vert \phi \Vert _{\mathcal {L}^2(L^2;H^s)}\) instead of \(\Vert \phi \Vert _{\mathcal {L}^2(L^2(\mathbb {T}^d);H^s(\mathbb {T}^d))}\).

2 Fourier restriction norm method

Let \(s,b\in \mathbb {R}\). The Fourier restriction norm space \(X^{s,b}\) adapted to the Schrödinger equation on \(\mathbb {T}^d\) is the space of tempered distributions u on \(\mathbb {R}\times \mathbb {T}^d\) such that the norm

$$\begin{aligned} \left\| u\right\| _{X^{s,b}} :=\left\| \langle n \rangle ^s\langle \tau -|n|^2 \rangle ^b\mathcal {F}_{t,x}(u)(\tau ,n)\right\| _{\ell ^2_nL^2_\tau (\mathbb {Z}^d\times \mathbb {R})} \end{aligned}$$

is finite. Equivalently, the \(X^{s,b}\)-norm can be written in its interaction representation form:

$$\begin{aligned} \left\| u\right\| _{X^{s,b}}=\left\| \langle n \rangle ^s\langle \tau \rangle ^b\mathcal {F}_{t,x}\left( S(-t)u(t)\right) (n,\tau )\right\| _{\ell ^2_nL^2_\tau (\mathbb {Z}^d\times \mathbb {R})}, \end{aligned}$$
(2.1)

where \(S(t)=e^{-it\Delta }\) is the linear Schrödinger propagator. We now state some facts on \(X^{s,b}\)-spaces. The interested reader can find the proof of these and further properties in [35]. Firstly, we have the following continuous embeddings

$$\begin{aligned} X^{s,b}&\hookrightarrow C(\mathbb {R}; H_x^s(\mathbb {T}^d)), \quad \text {for } b>\frac{1}{2}, \end{aligned}$$
(2.2)
$$\begin{aligned} X^{s',b'}&\hookrightarrow X^{s,b}, \quad \text {for } s'\ge s \text { and } b'\ge b. \end{aligned}$$
(2.3)

We have the duality relation

$$\begin{aligned} \left\| u\right\| _{X^{s,b}}=\sup _{\left\| v\right\| _{X^{-s,-b}}\le 1}\left| \int _{\mathbb {R}\times \mathbb {T}^d} u(t,x) \overline{v(t,x)}\,dt\,dx\right| . \end{aligned}$$
(2.4)

Lemma 2.1

(Transference principle, [35, Lemma 2.9]) Let Y be a Banach space of functions on \(\mathbb {R}\times \mathbb {T}^d\) such that

$$\begin{aligned} \Vert e^{it\lambda } e^{\pm it\Delta } f\Vert _Y \lesssim \Vert f\Vert _{H^s(\mathbb {T}^d)} \end{aligned}$$

for all \(\lambda \in \mathbb {R}\) and all \(f\in H^s(\mathbb {T}^d)\). Then, for any \(b>\frac{1}{2}\),

$$\begin{aligned} \Vert u\Vert _Y \lesssim \Vert u\Vert _{X^{s,b}} \end{aligned}$$

for all \(u\in X^{s,b}\).

Given a time interval \(I\subseteq \mathbb {R}\), one defines the time restricted space \(X^{s,b}(I)\) via the norm

$$\begin{aligned} \Vert u\Vert _{X^{s,b}({I})}:=\inf \left\{ \Vert {{\tilde{u}}}\Vert _{X^{s,b}}: {\tilde{u}}|_{I} = u\right\} . \end{aligned}$$
(2.5)

We note that for \(s\ge 0\) and \(0\le b<\frac{1}{2}\), we have

$$\begin{aligned} \Vert u\Vert _{X^{s,b}(I)}\sim \Vert \mathbb {1}_{I}(t)u(t)\Vert _{X^{s,b}}, \end{aligned}$$
(2.6)

see for example [17, Lemma 2.1] for a proof (for \(X^{s,b}\) spaces adapted to the KdV equation).

Lemma 2.2

(Linear estimates, [35, Proposition 2.12]) Let \(s\in \mathbb {R}\) and suppose \(\eta \) is smooth and compactly supported. Then, we have

$$\begin{aligned}&\Vert \eta (t) S(t)f\Vert _{X^{s,b}} \lesssim \Vert f\Vert _{H^s(\mathbb {T}^d)}, \qquad \text {for }b\in \mathbb {R}; \end{aligned}$$
(2.7)
$$\begin{aligned}&\left\| \eta (t) \int _0^t S(t-t') F(t')dt' \right\| _{X^{s,b}} \lesssim \Vert F\Vert _{X^{s,b-1}}, \qquad \text {for }b>\frac{1}{2}. \end{aligned}$$
(2.8)

By localizing in time, we can gain a smallness factor, as per the lemma below.

Lemma 2.3

(Time localization property, [35, Lemma 2.11]) Let \(s\in \mathbb {R}\) and \(-\frac{1}{2}<b'<b<\frac{1}{2}\). For any \(T\in (0,1)\), we have

$$\begin{aligned} \Vert f\Vert _{X^{s,b'}({[0,T]})}\lesssim _{b,b'} T^{b-b'}\Vert f\Vert _{X^{s,b}({[0,T]})}. \end{aligned}$$

We now give the proofs of the multilinear estimates necessary to control the nonlinearity \(|u|^{2k}u\). Recall the \(L^4\)-Strichartz estimate due to Bourgain [6] (see also [35, Proposition 2.13]):

$$\begin{aligned} \Vert u\Vert _{L^4_{t,x}(\mathbb {R}\times \mathbb {T})} \lesssim \Vert u\Vert _{X^{0,\frac{3}{8}}}. \end{aligned}$$
(2.9)

Lemma 2.4

Let \(d=1\), \(s\ge 0\), \(b\ge \frac{3}{8}\), and \(b'\le \frac{5}{8}\). Then, for any time interval \(I\subset \mathbb {R}\), we have

$$\begin{aligned} \left\| u_1 \overline{u_2} u_{3} \right\| _{X^{s,b'-1}(I)} \lesssim \prod _{j=1}^{3} \Vert u_j\Vert _{X^{s,b}(I)}. \end{aligned}$$
(2.10)

Proof

By the triangle inequality it suffices to prove (2.10) for \(s=0\). We claim that

$$\begin{aligned} \left| \int _{\mathbb {R}\times \mathbb {T}^d} u_1 \overline{u_2} u_3 \overline{v}\,dxdt \right| \lesssim \prod _{j=1}^{3} \Vert u_j\Vert _{X^{0,b}} \Vert v\Vert _{X^{0,1-b'}} \end{aligned}$$

for any factors \(u_1, u_2, u_3, v\). Indeed, this follows immediately from Hölder inequality and (2.9) for each of the four factors (hence the restrictions \(b,1-b'~\ge ~\frac{3}{8}\)). Thus, the global-in-time version of (2.10), i.e. \(I=\mathbb {R}\), follows by the duality relation (2.4). For an arbitrary time interval I, if \(\tilde{u}_j\) is an extension of \(u_j\), \(j=1,2,3\), then \(\tilde{u}_1 \overline{\tilde{u}_2} \tilde{u}_3\) is an extension of \(u_1 \overline{u_2} u_3\). We use the previous step to get

$$\begin{aligned} \left\| u_1 \overline{u_2} u_{3} \right\| _{X^{0,b'-1}(I)} \le \left\| \tilde{u}_1 \overline{\tilde{u}_2} \tilde{u}_{3} \right\| _{X^{0,b'-1}} \lesssim \prod _{j=1}^{3} \Vert \tilde{u}_j\Vert _{X^{0,b}} \end{aligned}$$

and then we take the infimum over all extensions \(\tilde{u}_j\), and (2.10) follows. \(\square \)

Due to the scaling and Galilean symmetries of the linear Schrödinger equation, the periodic Strichartz estimate (1.6) of Bourgain and Demeter [5] is equivalent to

$$\begin{aligned} \Vert S(t) P_Q f\Vert _{L^p_{t,x}(I\times \mathbb {T}^d)} \lesssim _{|I|} |Q|^{\frac{1}{2} - \frac{d+2}{pd}+} \Vert f\Vert _{L^2_x(\mathbb {T}^d)}, \end{aligned}$$
(2.11)

for any \(d\ge 1\), \(p\ge \frac{2(d+2)}{d}\), \(I\subset \mathbb {R}\) finite time interval, and \(Q\subset \mathbb {R}^d\) dyadic cube. Here, \(P_Q\) denotes the frequency projection onto Q, i.e. \(\widehat{P_Qf}(n) = \mathbf 1_Q(n) \widehat{f}(n)\). By the transference principle (Lemma 2.1), we get

$$\begin{aligned} \Vert P_{Q}u\Vert _{L^{p}_{t,x}(I \times \mathbb {T}^d)} \lesssim _{|I|} |Q|^{\frac{1}{2} - \frac{d+2}{pd} +} \Vert u\Vert _{X^{0, b}(I)}, \end{aligned}$$
(2.12)

for any \(b>\frac{1}{2}\). By interpolating (2.12) with

$$\begin{aligned} \Vert P_{Q}u\Vert _{L^{p}_{t,x}(I \times \mathbb {T}^d)} \lesssim |Q|^{\frac{1}{2}-\frac{1}{p}} \Vert u\Vert _{X^{0, \frac{1}{2}-\frac{1}{p}}(I)}, \end{aligned}$$
(2.13)

(which follows immediately from Sobolev inequalities, (2.1), and the \(H^s(\mathbb {T}^d)\)-isometry of \(S(-t)\)), we can lower the time regularity from \(b=\frac{1}{2}+\delta \) to \(\tilde{b}=\frac{1}{2}-\delta \), for sufficiently small \(\delta >0\). Thus, we also have

$$\begin{aligned} \Vert P_{Q}u\Vert _{L^{p}_{t,x}(I\times \mathbb {T}^d)} \lesssim _{|I|,\delta } |Q|^{\frac{1}{2} - \frac{d+2}{pd} + o(\delta )} \Vert u\Vert _{X^{0,\frac{1}{2}-\delta }(I)}. \end{aligned}$$
(2.14)
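
To see how the interpolation works, here is a sketch of the exponent bookkeeping (the precise value of \(o(\delta )\) is not important for our purposes). Interpolating (2.12), with temporal exponent \(\frac{1}{2}+\delta _0\) and spatial exponent \(\alpha _1=\frac{1}{2}-\frac{d+2}{pd}+\), against (2.13), with temporal exponent \(\frac{1}{2}-\frac{1}{p}\) and spatial exponent \(\alpha _2=\frac{1}{2}-\frac{1}{p}\), with weights \(\theta \) and \(1-\theta \) yields

$$\begin{aligned} \Vert P_{Q}u\Vert _{L^{p}_{t,x}(I \times \mathbb {T}^d)} \lesssim |Q|^{\theta \alpha _1+(1-\theta )\alpha _2} \Vert u\Vert _{X^{0,\,\theta (\frac{1}{2}+\delta _0)+(1-\theta )(\frac{1}{2}-\frac{1}{p})}(I)}. \end{aligned}$$

Requiring the temporal exponent to equal \(\frac{1}{2}-\delta \) forces \(1-\theta = p(\delta +\theta \delta _0)=O(\delta )\) once \(\delta _0\lesssim \delta \), and then the spatial exponent is \(\alpha _1+(1-\theta )(\alpha _2-\alpha _1)=\frac{1}{2}-\frac{d+2}{pd}+O(\delta )\), which gives (2.14).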

Lemma 2.4 only treats the cubic nonlinearity when \(d=1\). We now prove the following general multilinear estimates to treat other cases. The proof borrows techniques from [20].

Lemma 2.5

Let \(d,k\ge 1\) be such that \(dk\ge 2\) and let \(I\subset \mathbb {R}\) be a finite time interval. Then for any \(s>s_{crit }\), there exist \(b=\frac{1}{2}-\) and \(b'=\frac{1}{2}+\) such that

$$\begin{aligned} \left\| u_1 \overline{u_2} \cdots \overline{u_{2k}} u_{2k+1} \right\| _{X^{s,b'-1}(I)} \lesssim _{|I|} \prod _{j=1}^{2k+1} \Vert u_j\Vert _{X^{s,b}(I)}. \end{aligned}$$
(2.15)

Proof

In view of (2.6), we can assume that \(u_j(t)=\mathbb {1}_I(t) u_j(t)\) and thus by the duality relation (2.4), it suffices to show

$$\begin{aligned} \left| \int _{\mathbb {R}\times \mathbb {T}^{d}} \big (\langle \nabla \rangle ^s (u_1 \overline{u_2} \cdots u_{2k+1})\big ) \overline{v} \,dxdt \right| \lesssim \Vert v\Vert _{X^{0,1-b'}} \prod _{j=1}^{2k+1}\Vert u_j\Vert _{X^{s,b}}. \end{aligned}$$
(2.16)

We use a Littlewood-Paley decomposition: we estimate the left-hand side of (2.16) when \(v=P_Nv\), \(u_j=P_{N_j}u_j\) for some dyadic numbers \(N, N_j\in 2^{\mathbb {Z}}\), \(1\le j\le 2k+1\). Then the claim follows by the triangle inequality and performing the summation

$$\begin{aligned} \sum _{N_1} \sum _{\begin{array}{c} N\\ N\lesssim N_1 \end{array}}\sum _{\begin{array}{c} N_2\\ N_2\le N_1 \end{array}} \cdots \sum _{\begin{array}{c} N_{2k+1}\\ N_{2k+1}\le N_{2k} \end{array}} . \end{aligned}$$
(2.17)

Notice that without loss of generality, we may assume that \(N_1\ge N_2\ge \cdots \ge N_{2k+1}\), in which case we also have \(N\lesssim N_1\), and that the factors v and \(u_j\) are real-valued and non-negative.

Let \(\varepsilon :=s-s_{crit }\); we distinguish two cases.

Case 1: \(N_1\sim N_2\). By Hölder inequality,

$$\begin{aligned} N^s \int _{\mathbb {R}\times \mathbb {T}^{d}} u_1 {u_2} \cdots u_{2k+1} {v} \,dxdt \lesssim N_1^{\frac{s}{2}} \Vert u_1\Vert _{L^q_{t,x}} N_2^{\frac{s}{2}} \Vert u_2\Vert _{L^q_{t,x}} \prod _{j=3}^{2k+1} \Vert u_j\Vert _{L^p_{t,x}} \Vert v\Vert _{L^r_{t,x}}, \end{aligned}$$
(2.18)

with \(p,q,r\) chosen such that \(\frac{2k-1}{p}+\frac{2}{q} +\frac{1}{r}=1\). We take \(p,q\) such that \(\frac{d}{2}-\frac{d+2}{p}=s_{c }\) and \(\frac{d}{2}-\frac{d+2}{q}=\frac{1}{2} s_{c }\), or equivalently \(p=k(d+2)\) and \(q=\frac{4k(d+2)}{dk+2}\). These choices give the Hölder exponent \(r= \frac{2(d+2)}{d}\). By (2.14) and (2.12), we get

$$\begin{aligned} N_j^{\frac{s}{2}} \Vert u_j\Vert _{L^q_{t,x}}&\lesssim N_j^{-\frac{\varepsilon }{2}+} \Vert u_j\Vert _{X^{s,b}}, \quad j=1,2 \end{aligned}$$
(2.19)
$$\begin{aligned} \Vert u_j\Vert _{L^p_{t,x}}&\lesssim N_j^{-\varepsilon +} \Vert u_j\Vert _{X^{s,b}},\quad 3\le j\le 2k+1, \end{aligned}$$
(2.20)
$$\begin{aligned} \Vert v\Vert _{L^r_{t,x}}&\lesssim N^{0+} \Vert v\Vert _{X^{0,1-b'}} . \end{aligned}$$
(2.21)

By choosing \(\delta , \delta ' \ll \varepsilon \) in \(b:=\frac{1}{2}-\delta \) and in \(1-b'=\frac{1}{2}-\delta '\), respectively, we get

$$\begin{aligned} \text {RHS of } (2.18) \lesssim N^{-\frac{\varepsilon }{4}}\Vert v\Vert _{X^{0,1-b'}} \prod _{j=1}^{2k+1} N_j^{-\frac{\varepsilon }{4}} \Vert u_j\Vert _{X^{s,b}} . \end{aligned}$$
(2.22)

The factors \(N^{-\frac{\varepsilon }{4}}\), \(N_j^{-\frac{\varepsilon }{4}}\) guarantee that we can perform (2.17).
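The exponent bookkeeping in Case 1 can be checked mechanically. The following script (an illustrative sanity check, not part of the proof) verifies, over a range of d and k with \(dk\ge 2\), that \(p=k(d+2)\) and \(q=\frac{4k(d+2)}{dk+2}\) satisfy the two scaling conditions with the critical index \(s_{c }=\frac{d}{2}-\frac{1}{k}\) (equivalently, \(\frac{d}{2}-\frac{d+2}{p}=s_{c }\) for \(p=k(d+2)\)), and that the Hölder relation then forces \(r=\frac{2(d+2)}{d}\).

```python
from fractions import Fraction as F

def check(d, k):
    """Verify the Case 1 exponent identities from the proof of Lemma 2.5."""
    p = F(k * (d + 2))                 # chosen so that d/2 - (d+2)/p = s_c
    q = F(4 * k * (d + 2), d * k + 2)  # chosen so that d/2 - (d+2)/q = s_c/2
    s_c = F(d, 2) - F(1, k)            # critical regularity d/2 - 1/k
    assert F(d, 2) - F(d + 2, 1) / p == s_c
    assert F(d, 2) - F(d + 2, 1) / q == s_c / 2
    # Hoelder relation (2k-1)/p + 2/q + 1/r = 1 then forces r = 2(d+2)/d
    r_inv = 1 - F(2 * k - 1, 1) / p - 2 / q
    assert 1 / r_inv == F(2 * (d + 2), d)

for d in range(1, 8):
    for k in range(1, 8):
        if d * k >= 2:  # the regime of Lemma 2.5
            check(d, k)
print("all exponent identities verified")
```

Exact rational arithmetic avoids any floating-point ambiguity in the exponent identities.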

Case 2: \(N_1\gg N_2\). Then we necessarily have \(N_1\sim N\), or else the left-hand side of (2.16) vanishes. By Hölder's inequality,

$$\begin{aligned} N^s \int _{\mathbb {R}\times \mathbb {T}^{d}} u_1 {u_2} \cdots u_{2k+1} {v} \,dxdt \lesssim N_1^s \Vert u_1\Vert _{L^q_{t,x}} \prod _{j=2}^{2k+1} \Vert u_j\Vert _{L^p_{t,x}} \Vert v\Vert _{L^r_{t,x}} , \end{aligned}$$
(2.23)

with \(\frac{2k}{p}+\frac{1}{q}+\frac{1}{r}=1\). As in Case 1, we would like to have p such that \(\frac{d}{2}-\frac{d+2}{p}= s_{c }\), or equivalently \(p=k(d+2)\). However, the best we can do with the Strichartz estimate for the remaining factors is to choose \(q=r=\frac{2(d+2)}{d}\), so that we have

$$\begin{aligned} N_1^s \Vert u_1\Vert _{L^q_{t,x}}&\lesssim N_1^{0+} \Vert u_1\Vert _{X^{s,b}} , \end{aligned}$$
(2.24)
$$\begin{aligned} \Vert u_j\Vert _{L^p_{t,x}}&\lesssim N_j^{-\varepsilon +} \Vert u_j\Vert _{X^{s,b}},\quad 2\le j\le 2k+1, \end{aligned}$$
(2.25)
$$\begin{aligned} \Vert v\Vert _{L^r_{t,x}}&\lesssim N_1^{0+} \Vert v\Vert _{X^{0,1-b'}} . \end{aligned}$$
(2.26)
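Indeed, the exponents \(p=k(d+2)\) and \(q=r=\frac{2(d+2)}{d}\) form an exact Hölder triple:

$$\begin{aligned} \frac{2k}{p}+\frac{1}{q}+\frac{1}{r} = \frac{2}{d+2}+\frac{d}{2(d+2)}+\frac{d}{2(d+2)} = \frac{2+d}{d+2}=1. \end{aligned}$$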

Notice that (2.24) and (2.26) carry factors \(N_1^{0+}\): we can overcome the loss of derivative \(N_1^s\) only up to a logarithmic loss in \(N_1\), which is not summable in (2.17). We need a slightly refined analysis.

We cover the dyadic frequency annuli of \(u_1\) and of v with dyadic cubes of side-length \(N_2\), i.e.

$$\begin{aligned} \{\xi _1 : |\xi _1|\sim N_1\}\subset \bigcup _{\ell } Q_{\ell } \quad , \quad \{\xi : |\xi |\sim N\}\subset \bigcup _{j} R_j . \end{aligned}$$

Approximately \(\left( \frac{N_1}{N_2}\right) ^d\) cubes are needed, and so

$$\begin{aligned} u_1=\sum _{\ell } P_{Q_{\ell }} u_1 =: \sum _{\ell } u_{1,\ell }\quad , \quad v= \sum _j P_{R_j}v =: \sum _j v_j \end{aligned}$$

are decompositions into finitely many terms. Since \(|\xi _1-\xi | \lesssim N_2\) for \(\xi _1\in {{\mathrm{supp}}}(\widehat{u_1}), \xi \in {{\mathrm{supp}}}(\widehat{v})\) on the convolution hyperplane, there exists a constant K such that if \(\mathrm {dist}(Q_{\ell }, R_{j})>KN_2\), then the integral in (2.16) vanishes. Hence the summation (2.17) is replaced by

$$\begin{aligned} \sum _{N_1} \sum _{\begin{array}{c} N_2\\ N_2\ll N_1 \end{array}} \cdots \sum _{\begin{array}{c} N_{2k+1}\\ N_{2k+1}\le N_{2k} \end{array}} \sum _{\begin{array}{c} \ell ,j\\ j\approx \ell \end{array}} . \end{aligned}$$
(2.27)

Also, in place of (2.24)–(2.26), we now have

$$\begin{aligned} N_1^s \Vert u_{1,\ell }\Vert _{L^q_{t,x}}&\lesssim N_2^{0+} \Vert u_{1,\ell }\Vert _{X^{s,b}} , \end{aligned}$$
(2.28)
$$\begin{aligned} \Vert u_i\Vert _{L^p_{t,x}}&\lesssim N_i^{-\varepsilon +} \Vert u_i\Vert _{X^{s,b}},\quad 2\le i\le 2k+1, \end{aligned}$$
(2.29)
$$\begin{aligned} \Vert v_j\Vert _{L^q_{t,x}}&\lesssim N_2^{0+} \Vert v_j\Vert _{X^{0,1-b'}} . \end{aligned}$$
(2.30)

Therefore, by the Cauchy-Schwarz inequality and the Plancherel identity,

$$\begin{aligned} \text {LHS of } (2.16)&\lesssim \sum _{N_2} \sum _{\begin{array}{c} N_1\\ N_1\gg N_2 \end{array}} \sum _{\begin{array}{c} \ell ,j\\ \ell \approx j \end{array}} N_2^{-\varepsilon +} \Vert u_{1,\ell }\Vert _{X^{s,b}} \Vert v_j\Vert _{X^{0,1-b'}} \prod _{i=2}^{2k+1} \Vert u_i\Vert _{X^{s,b}}\\&\lesssim \sum _{N_2} N_2^{-\varepsilon +} \left( \sum _{\begin{array}{c} N_1\\ N_1\gg N_2 \end{array}} \sum _{\ell } \Vert u_{1,\ell }\Vert _{X^{s,b}}^2\right) ^{\frac{1}{2}} \left( \sum _{\begin{array}{c} N\\ N\gg N_2 \end{array}} \sum _{j} \Vert v_j\Vert _{X^{0,1-b'}}^2 \right) ^{\frac{1}{2}} \prod _{i=2}^{2k+1} \Vert u_i\Vert _{X^{s,b}}\\&\lesssim \sum _{N_2} N_2^{-\varepsilon +} \Vert u_1\Vert _{X^{s,b}} \Vert v\Vert _{X^{0,1-b'}} \prod _{i=2}^{2k+1} \Vert u_i\Vert _{X^{s,b}}\\&\lesssim \prod _{i=1}^{2k+1} \Vert u_i\Vert _{X^{s,b}} \Vert v\Vert _{X^{0,1-b'}} \end{aligned}$$

and the proof is complete. \(\square \)

3 The stochastic convolution

In this section, we prove some \(X^{s,b}\)-estimates on the stochastic convolution \(\Psi (t)\) given either by (1.11) or (1.12). We first record the following Burkholder-Davis-Gundy inequality, which is a consequence of [28, Theorem 1.1].

Lemma 3.1

(Burkholder–Davis–Gundy inequality) Let \(H, K\) be separable Hilbert spaces, let \(T>0\), and let W be an H-valued Wiener process on [0, T]. Suppose that \(\{\psi (t)\}_{t\in [0,T]}\) is an adapted process taking values in \(\mathcal {L}^2(H;K)\). Then for \(p\ge 1\),

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}\left\| \int _0^t\psi (t')\,dW(t')\right\| _K^p\right] \lesssim _p \mathbb {E}\left[ \left( \int _0^T \left\| \psi (t')\right\| _{\mathcal {L}^2(H;K)}^2\,dt'\right) ^\frac{p}{2}\right] . \end{aligned}$$

In addition, we prove that \(\Psi (t)\) is pathwise continuous in both cases. To this end, we employ the factorization method of Da Prato [13, Lemma 2.7], i.e. we make use of the following lemma and (3.3) below.

Lemma 3.2

Let H be a Hilbert space, \(T>0\), \(\alpha \in (0,1)\), and \(\sigma \in \big (\frac{1}{\alpha },\infty \big )\). Suppose that \({f\in L^{\sigma }([0,T];H)}\). Then the function

$$\begin{aligned} F(t):=\int _{0}^{t}\,{S(t-t')(t-t')^{\alpha -1}f(t')}\, d{t'},\quad t\in [0,T] \end{aligned}$$
(3.1)

belongs to C([0, T]; H). Moreover,

$$\begin{aligned} \sup _{t\in [0,T]}\left\| F(t)\right\| _{H}\lesssim _{\sigma ,T}\left\| f\right\| _{L^{\sigma }([0,T];H)}. \end{aligned}$$
(3.2)

We make use of the above lemma in conjunction with the following fact:

$$\begin{aligned} \int _{\mu }^{t}\,{(t-t')^{\alpha -1}(t'-\mu )^{-\alpha }}\, d{t'}=\frac{\pi }{\sin (\pi \alpha )}, \end{aligned}$$
(3.3)

for all \(0<\alpha <1\) and all \(0\le \mu <t\). This can be seen via the Euler Beta function; see [13].
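For completeness, here is the computation behind (3.3): the substitution \(t'=\mu +(t-\mu )\theta \), \(\theta \in (0,1)\), removes the dependence on \(\mu \) and t, and Euler's reflection formula then applies:

$$\begin{aligned} \int _{\mu }^{t}\,{(t-t')^{\alpha -1}(t'-\mu )^{-\alpha }}\, d{t'} = \int _{0}^{1} (1-\theta )^{\alpha -1}\theta ^{-\alpha }\, d\theta = B(1-\alpha ,\alpha ) = \Gamma (1-\alpha )\Gamma (\alpha ) = \frac{\pi }{\sin (\pi \alpha )}. \end{aligned}$$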

We now treat the additive and multiplicative cases separately below in Sects. 3.1 and 3.2 respectively. The arguments for the two cases are similar, albeit with some extra technicalities in the multiplicative case.

3.1 The additive stochastic convolution

By Fourier expansion, the stochastic convolution (1.11) for the additive noise problem can be written as

$$\begin{aligned} \Psi (t) = \sum _{n\in \mathbb {Z}^d} e_n \sum _{j\in \mathbb {Z}^d} \widehat{(\phi e_j )}(n) \int _0^t e^{i(t-t')|n|^2}d\beta _j(t')\, . \end{aligned}$$
(3.4)

We first prove the following \(X^{s,b}\)-estimate on \(\Psi \):

Lemma 3.3

Let \(s\ge 0\), \(0\le b<\frac{1}{2}\), \(T>0\), and \(\sigma \in [2,\infty )\). Assume that \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\). Then for \(\Psi \) given by (3.4) we have

$$\begin{aligned} \mathbb {E}\left[ \Vert \Psi \Vert ^{\sigma }_{X^{s,b}{([0,T])}} \right]&\lesssim T^\frac{\sigma }{2}(1+T^2)^\frac{\sigma }{2} \Vert \phi \Vert _{\mathcal {L}^2(L^2(\mathbb {T}^d);H^s(\mathbb {T}^d))}^{\sigma }. \end{aligned}$$
(3.5)

Proof

Since \(\mathbb {1}_{[0,T]}(t)\mathbb {1}_{[0,T]}(t')=\mathbb {1}_{[0,T]}(t)=1\) whenever \(0\le t'\le t\le T\), we have

$$\begin{aligned} \mathbb {1}_{[0,T]}(t)\Psi (t)(x)&= \sum _{n\in \mathbb {Z}^d} e_n \sum _{j\in \mathbb {Z}^d} \widehat{\phi e_j}(n) \mathbb {1}_{[0,T]}(t) e^{it|n|^2} \int _0^t \mathbb {1}_{[0,T]}(t') e^{-it'|n|^2}{d}\beta _j(t') \end{aligned}$$

By (2.6), we have

$$\begin{aligned} \left||\Psi (t)\right||_{X^{s,b}({[0,T]})}&\sim \left||\mathbb {1}_{[0,T]}(t)\Psi (t)\right||_{X^{s,b}}\nonumber \\&= \Vert \langle n \rangle ^s\langle \tau \rangle ^b\mathcal {F}_{t,x} \left( S(-t)\mathbb {1}_{[0,T]}(t) \Psi (t)\right) (\tau ,n) \Vert _{L^2_\tau \ell ^2_n}\nonumber \\&=\Big \Vert \langle n \rangle ^s\langle \tau \rangle ^b {\mathcal {F}}_t\big [g_n(t)\big ](\tau ) \Big \Vert _{L^2_{\tau }\ell ^2_n}, \end{aligned}$$
(3.6)

where

$$\begin{aligned} g_n(t):=\sum _{j\in \mathbb {Z}^d} \mathbb {1}_{[0,T]}(t) \int _0^t \mathbb {1}_{[0,T]}(t') e^{-it'|n|^2}\widehat{\phi e_j}(n){d}\beta _j(t'). \end{aligned}$$

By the stochastic Fubini theorem (see [14, Theorem 4.33]), we have

$$\begin{aligned} {\mathcal {F}}_t[g_n(t)](\tau )&=\int _{\mathbb {R}}{e^{-it\tau }g_n(t)} dt\\&=\sum _{j\in \mathbb {Z}^d}\int _{-\infty }^\infty \mathbb {1}_{[0,T]}(t') e^{-it'|n|^2} \widehat{\phi e_j}(n) \int _{t'}^{\infty } \mathbb {1}_{[0,T]}(t)e^{-it\tau }\,{d}t\, {d}\beta _j(t'). \end{aligned}$$

Since

$$\begin{aligned} \left| \int _{t'}^{\infty } \mathbb {1}_{[0,T]}(t)e^{-it\tau }\,{d}t\right| \lesssim \min \{ T,|\tau |^{-1}\}, \end{aligned}$$
(3.7)

by the Burkholder-Davis-Gundy inequality (Lemma 3.1), we get

$$\begin{aligned} \begin{aligned} \mathbb {E}\Big [|{\mathcal {F}}_t[g_n(t)](\tau )|^{\sigma }\Big ]&\lesssim \left[ \int _0^T \sum _{j\in \mathbb {Z}^d} \left| \widehat{\phi e_j}(n) \int _{t'}^{\infty }\mathbb {1}_{[0,T]}(t)e^{-it\tau }\,dt \right| ^2 dt'\right] ^{\frac{\sigma }{2}}\\&\lesssim \left[ T \sum _{j\in \mathbb {Z}^d} |\widehat{\phi e_j}(n)|^2 \min \{ T^2,|\tau |^{-2}\}\right] ^{\frac{\sigma }{2}}. \end{aligned} \end{aligned}$$
(3.8)

By (3.6), (3.8), and Minkowski inequality, we get

$$\begin{aligned} \left\| \Psi \right\| _{L^{\sigma }(\Omega ;X^{s,b}([0,T]))}&\le \left( \sum _{n\in \mathbb {Z}^d}\int _{-\infty }^\infty {\langle n \rangle ^{2s}\langle \tau \rangle ^{2b} \left( \mathbb {E}\left[ \left| {\mathcal {F}}[g_n](\tau )\right| ^{\sigma }\right] \right) ^\frac{2}{\sigma }}{\,d\tau } \right) ^{\frac{1}{2}}\\&\lesssim T^{\frac{1}{2}} \left( \sum _{n,j\in \mathbb {Z}^d}\langle n \rangle ^{2s}|\widehat{\phi e_j}(n)|^2\int _{-\infty } ^\infty {\langle \tau \rangle ^{2b}\min \{ T^2,|\tau |^{-2}\}}{\,d\tau }\right) ^{\frac{1}{2}}\\&\lesssim T^{\frac{1}{2}}\left\| \phi \right\| _{\mathcal {L}^2(L^2;H^s)} \left( T^2\int _{|\tau |<1}{}\, d{\tau }+ \int _{|\tau |\ge 1}{\langle \tau \rangle ^{2b-2}}\, d{\tau }\right) ^{\frac{1}{2}} . \end{aligned}$$

Since \(2b-2<-1\), the last integral converges and the whole factor is bounded by \(C(1+T^2)^{\frac{1}{2}}\), which yields (3.5). This completes the proof of Lemma 3.3. \(\square \)

We now prove that \(\Psi \) has a continuous version taking values in \(H^s(\mathbb {T}^d)\). This is the content of the next lemma.

Lemma 3.4

(Continuity of the additive noise) Let \(s\ge 0\), \(T>0\), and \(2\le \sigma <\infty \). Assume that \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d); H^s(\mathbb {T}^d))\). Then \(\Psi (\cdot )\) belongs to \(C([0,T];H^s(\mathbb {T}^d))\) almost surely and

$$\begin{aligned} \mathbb {E}\Bigg [\sup _{t\in [0,T]}\left\| \Psi (t)\right\| _{H^s(\mathbb {T}^d)}^{\sigma }\Bigg ] \lesssim _{T} \,\left\| \phi \right\| ^{\sigma }_{\mathcal {L}^2(L^2(\mathbb {T}^d);H^s(\mathbb {T}^d))}. \end{aligned}$$
(3.9)

Proof

We fix \(\alpha \in \left( 0, \frac{1}{2}\right) \) and we write the stochastic convolution as follows:

$$\begin{aligned} \begin{aligned} \Psi (t)&=\frac{\sin (\pi \alpha )}{\pi }\int _{0}^{t}\,{\left[ \int _{\mu }^{t}\,{(t-t') ^{\alpha -1}(t'-\mu )^{-\alpha }}\, d{t'}\right] S(t-\mu )\phi }\, d{W(\mu )}\\&=\frac{\sin (\pi \alpha )}{\pi }\int _{0}^{t}\,{S(t-t')(t-t')^{\alpha -1} \int _{0}^{t'}\,{S(t'-\mu )(t'-\mu )^{-\alpha }\phi }\, d{W(\mu )} }\, d{t'}, \end{aligned} \end{aligned}$$
(3.10)

where we used the stochastic Fubini theorem [14, Theorem 4.33] and the group property of \(S(\cdot )\). By Lemma 3.2 and (3.10) it suffices to show that the process

$$\begin{aligned} f(t'):=\int _{0}^{t'}\,{S(t'-\mu )(t'-\mu )^{-\alpha }\phi }\, d{W(\mu )} \end{aligned}$$

satisfies

$$\begin{aligned} \mathbb {E}\bigg [\int _{0}^{T}\,{\left\| f(t')\right\| _{H^s_x}^{\sigma }}\, d{t'}\bigg ] \le C\big (T,\sigma ,\left\| \phi \right\| _{\mathcal {L}^2(L^2;H^s)}\big ) <\infty , \end{aligned}$$
(3.11)

for some \(\sigma >\frac{1}{\alpha }\).

By Burkholder-Davis-Gundy inequality (Lemma 3.1), for any \(\sigma \ge 2\) and any \(t'\in [0,T]\), we get

$$\begin{aligned} \mathbb {E}\left[ \left\| f(t')\right\| _{H^s_x}^{\sigma }\right]&\lesssim \left( \int _0^{t'} \Vert S(t'-\mu )(t'-\mu )^{-\alpha } \phi \Vert ^2_{\mathcal {L}^2(L^2;H^s)} d\mu \right) ^{\frac{\sigma }{2}}\\&= \left( \int _0^{t'} (t'-\mu )^{-2\alpha } \sum _{j\in \mathbb {Z}^d} \Vert S(t'-\mu )\phi e_j\Vert ^2_{H^s} d\mu \right) ^{\frac{\sigma }{2}}\\&\le \Vert \phi \Vert _{\mathcal {L}^2(L^2;H^s)}^{\sigma } \left( \frac{T^{1-2\alpha }}{1-2\alpha }\right) ^{\frac{\sigma }{2}}, \end{aligned}$$

where in the last step we used \(2\alpha \in (0,1)\) and the \(H^s(\mathbb {T}^d)\)-isometry property of \(S(t'-\mu )\). Hence

$$\begin{aligned} \text {LHS of } (3.11) = \int _{0}^{T}\,{\mathbb {E}\left[ \left\| f(t')\right\| _{H^s_x}^{\sigma }\right] }\, d{t'}\lesssim \left\| \phi \right\| _{{\mathcal {L}^2(L^2;H^s)}}^{\sigma }{T^{\frac{\sigma }{2}(1-2\alpha )+1}}<\infty . \end{aligned}$$

The estimate (3.9) follows from (3.2). \(\square \)

3.2 The multiplicative stochastic convolution

The multiplicative stochastic convolution \(\Psi =\Psi [u]\) from (1.12) can be written as

$$\begin{aligned} \Psi [u](t) = \sum _{n\in \mathbb {Z}^d} e_n \sum _{j\in \mathbb {Z}^d} \int _0^t e^{i(t-t')|n|^2} \widehat{(u(t') \phi e_j )}(n) d\beta _j(t') . \end{aligned}$$
(3.12)

Recall that if \(s>\frac{d}{2}\), then we have access to the algebra property of \(H^s(\mathbb {T}^d)\):

$$\begin{aligned} \left\| fg\right\| _{H^s(\mathbb {T}^d)}\lesssim \left\| f\right\| _{H^s(\mathbb {T}^d)}\left\| g\right\| _{H^s(\mathbb {T}^d)} \end{aligned}$$
(3.13)

which is an easy consequence of the Cauchy-Schwarz inequality. This simple fact is useful for our analysis in the multiplicative case. On the other hand, (3.13) is not available to us for regularities \(s\le \frac{d}{2}\), where we use the following inequalities instead.
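For completeness, we record the two-line proof of (3.13): writing \(\widehat{fg}=\widehat{f}*\widehat{g}\) and using \(\langle n \rangle ^s \lesssim \langle n_1 \rangle ^s + \langle n-n_1 \rangle ^s\) for \(s\ge 0\), Young's inequality gives

$$\begin{aligned} \left\| fg\right\| _{H^s} \lesssim \Big \Vert \big ( \widehat{\langle \nabla \rangle ^s f} * \widehat{g} \big )(n)\Big \Vert _{\ell _n^2} + \Big \Vert \big ( \widehat{f} * \widehat{\langle \nabla \rangle ^s g} \big )(n)\Big \Vert _{\ell _n^2} \lesssim \left\| f\right\| _{H^s} \Vert \widehat{g}\Vert _{\ell ^1} + \Vert \widehat{f}\Vert _{\ell ^1} \left\| g\right\| _{H^s}, \end{aligned}$$

and by the Cauchy-Schwarz inequality, \(\Vert \widehat{f}\Vert _{\ell ^1}\le \Vert \langle n \rangle ^{-s}\Vert _{\ell ^2}\Vert f\Vert _{H^s}\lesssim \Vert f\Vert _{H^s}\) since \(2s>d\), and similarly for g.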

Lemma 3.5

Let \(0<s\le \frac{d}{2}\) and \(1\le r < \frac{d}{d-s}\). Then

$$\begin{aligned} \Vert f u \Vert _{H^s(\mathbb {T}^d)} \lesssim \Vert f\Vert _{\mathcal {F}L^{s,r}(\mathbb {T}^d)} \Vert u\Vert _{H^s(\mathbb {T}^d)} . \end{aligned}$$
(3.14)

Also, for \(s=0\), we have

$$\begin{aligned} \Vert f u \Vert _{L^2(\mathbb {T}^d)} \lesssim \Vert f\Vert _{\mathcal {F}L^{0,1}(\mathbb {T}^d)} \Vert u\Vert _{L^2(\mathbb {T}^d)} . \end{aligned}$$
(3.15)

Proof

Assume that \(0<s\le \frac{d}{2}\) and let \(n_1\) and \(n_2\) denote the spatial frequencies of f and u respectively. By separating the regions \(\{|n_1| \gtrsim |n_2|\}\) and \(\{|n_1|\ll |n_2|\}\), and then applying Young’s inequality, we have

$$\begin{aligned} \Vert f u \Vert _{H^s(\mathbb {T}^d)}&\lesssim \Big \Vert \big ( \widehat{\langle \nabla \rangle ^s f} * \widehat{u} \big )(n)\Big \Vert _{\ell _n^2} + \Big \Vert \big ( \widehat{f} * \widehat{\langle \nabla \rangle ^s u} \big )(n)\Big \Vert _{\ell _n^2} \\&\lesssim \Vert f\Vert _{\mathcal {F}L^{s,r}} \Vert \widehat{u}\Vert _{\ell ^p} + \Vert \widehat{f}\Vert _{\ell ^1} \Vert u\Vert _{H^s}, \end{aligned}$$

where p is chosen such that \(\frac{1}{r} + \frac{1}{p} =\frac{3}{2}\). By Hölder's inequality, for \(r'\) and q such that \(\frac{1}{r}+\frac{1}{r'}=1\) and \(\frac{1}{q}+\frac{1}{2} =\frac{1}{p}\),

$$\begin{aligned} \Vert \widehat{f}\Vert _{\ell ^1}&\lesssim \Vert \langle n \rangle ^{-s}\Vert _{\ell ^{r'}} \Vert f\Vert _{\mathcal {F}L^{s,r}}, \\ \Vert \widehat{u}\Vert _{\ell ^p}&\lesssim \Vert \langle n \rangle ^{-s}\Vert _{\ell ^{q}} \Vert u\Vert _{H^s} . \end{aligned}$$

Note that \(\frac{1}{q}=\frac{1}{p}-\frac{1}{2}=1-\frac{1}{r}=\frac{1}{r'}\), i.e. \(q=r'\). Moreover, \(r<\frac{d}{d-s}\) is equivalent to \(sr'>d\), so that \(\Vert \langle n \rangle ^{-s}\Vert _{\ell ^{r'}}=\Vert \langle n \rangle ^{-s}\Vert _{\ell ^{q}}<\infty \) and the conclusion (3.14) follows.

If \(s=0\), (3.15) follows easily from Young’s inequality:

$$\begin{aligned} \Vert f u \Vert _{L^2(\mathbb {T}^d)} = \Vert \widehat{f} * \widehat{u}\Vert _{\ell ^2} \lesssim \Vert \widehat{f} \Vert _{\ell ^1} \Vert \widehat{u}\Vert _{\ell ^2} = \Vert f\Vert _{\mathcal {F}L^{0,1}} \Vert u\Vert _{L^2} . \end{aligned}$$

\(\square \)

Given \(\phi \) as in Theorem 1.6, let us denote

$$\begin{aligned} C(\phi ):= \left\| \phi \right\| _{\mathcal {L}^2(L^2(\mathbb {T}^d);\mathcal {F}L^{s,r}(\mathbb {T}^d))} <\infty , \end{aligned}$$
(3.16)

for \(r=2\) when \(s>\frac{d}{2}\), for some \(r\in \big [1,\frac{d}{d-s}\big )\) when \(0<s\le \frac{d}{2}\), and for \(r=1\) when \(s=0\). Recall that if \(\phi \) is translation invariant, then it is sufficient to assume that \(C(\phi )<\infty \) with \(r=2\), for all \(s\ge 0\). We now proceed to prove the following \(X^{s,b}\)-estimate of \(\Psi [u]\).

Lemma 3.6

Let \(s\ge 0\), \(0\le b<\frac{1}{2}\), \(T>0\), and \(2\le \sigma <\infty \). Suppose that \(\phi \) satisfies the assumptions of Theorem 1.6. Then, for \(\Psi [u]\) given by (1.12) we have the estimate

$$\begin{aligned} \mathbb {E}\left[ \left||\Psi [u]\right||_{X^{s,b}({[0,T]})}^{\sigma }\right]&\lesssim (T^2+1)^\frac{\sigma }{2} C(\phi )^{\sigma }\, \mathbb {E}\left[ \Vert u\Vert _{L^2([0,T]; H^s(\mathbb {T}^d))}^{\sigma }\right] . \end{aligned}$$
(3.17)

Proof

Let \(g(t):=\mathbb {1}_{[0,T]}(t)S(-t)\Psi (t)\). By the stochastic Fubini theorem [14, Theorem 4.33],

$$\begin{aligned} {\mathcal {F}}_{t,x}(g)(\tau ,n)&=\int _{\mathbb {R}}{ e^{-it\tau }\mathbb {1}_{[0,T]}(t)\sum _{j\in \mathbb {Z}^d}\int _{0}^{t}\,{ e^{-it'|n|^2} (\widehat{u(t')\phi e_j})(n) }\, d{\beta _j(t')}}\, d{t}\\&=\sum _{j\in \mathbb {Z}^d}\int _{0}^{T}\,{ \int _{t'}^{\infty }\,{\mathbb {1}_{[0,T]}(t) e^{-it\tau }e^{-it'|n|^2}(\widehat{u(t')\phi e_j})(n) }\, d{t} }\, d{\beta _j(t')}. \end{aligned}$$

Then by (2.6) and the assumption \(0\le b<\frac{1}{2}\), the Burkholder-Davis-Gundy inequality (Lemma 3.1), and (3.7), we have

$$\begin{aligned}&\text {LHS of } (3.17) \\&\quad \sim \mathbb {E}\left[ \left\| \langle n \rangle ^s\langle \tau \rangle ^b {\mathcal {F}}[g](n,\tau )\right\| _{L^2_{\tau }\ell ^2_n}^{\sigma }\right] \\&\quad \lesssim \mathbb {E}\left[ \left( \sum _{j,n\in \mathbb {Z}^d} \int _{\mathbb {R}}{ \int _{0}^{T}\,{ \langle n \rangle ^{2s}\langle \tau \rangle ^{2b} \left| \int _{t'}^{\infty }\,{\mathbb {1}_{[0,T]}(t)e^{-it\tau }}\, d{t}\right| ^2 \left| (\widehat{u(t')\phi e_j})(n)\right| ^2 }\, d{t'} }\, d{\tau } \right) ^{\frac{\sigma }{2}}\right] \\&\quad \lesssim (T^2+1)^\frac{\sigma }{2}\,\mathbb {E}\left[ \left( \int _{0}^{T}\,{ \sum _{j,n\in \mathbb {Z}^d} \langle n \rangle ^{2s}\left| (\widehat{u(t')\phi e_j})(n)\right| ^2 }\, d{t'} \right) ^{\frac{\sigma }{2}}\right] . \end{aligned}$$

If \(s>\frac{d}{2}\), we apply the algebra property of \(H^s(\mathbb {T}^d)\) to get

$$\begin{aligned} \Vert u(t') \phi e_j\Vert _{\ell ^2_j H^s} \lesssim {\Vert \phi \Vert _{\mathcal {L}^2(L^2;H^s)}} \Vert u(t')\Vert _{H^s}. \end{aligned}$$

If \(0\le s\le \frac{d}{2}\), Lemma 3.5 (with r as in (3.16)) gives

$$\begin{aligned} \Vert u(t') \phi e_j\Vert _{\ell ^2_j H^s} \lesssim C(\phi ) \Vert u(t')\Vert _{H^s}, \end{aligned}$$
(3.18)

and thus (3.17) follows. \(\square \)

Next, we prove the continuity of \(\Psi [u](t)\) in the same way as in Lemma 3.4, i.e. by using Lemma 3.2.

Lemma 3.7

(Continuity of the multiplicative noise) Let \(T>0\), \(s\ge 0\), \(0\le b<\frac{1}{2}\), and \(2\le \sigma <\infty \). Suppose that \(u\in L^{\sigma }\big (\Omega ;X^{s,b}([0,T])\big )\) and that \(\phi \) satisfies the assumptions of Theorem 1.6. Then \(\Psi [u](\cdot )\) given by (3.12) belongs to \(C([0,T];H^s(\mathbb {T}^d))\) almost surely. Moreover,

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}\left\| \Psi [u](t)\right\| _{H^s(\mathbb {T}^d)}^{\sigma } \right] \lesssim C(\phi )^{\sigma } \, \mathbb {E}\left[ \Vert u\Vert _{X^{s,b}([0,T])}^{\sigma }\right] . \end{aligned}$$
(3.19)

Proof

Applying the same factorization procedure as in the proof of Lemma 3.4 reduces the problem to proving that the process

$$\begin{aligned} f(t'):=\int _{0}^{t'}\,{(t'-\mu )^{-\alpha }S(t'-\mu )\big [u(\mu )\phi \big ]}\, d{W(\mu )} \end{aligned}$$

satisfies

$$\begin{aligned} \mathbb {E}\left[ \int _{0}^{T}\,{\left\| f(t')\right\| _{H^s_x}^{\sigma }}\, d{t'}\right] \le C'\left( T,\sigma , C(\phi )\right) <\infty \, \end{aligned}$$
(3.20)

for some \(0<\alpha <1\) satisfying \(\alpha >\frac{1}{\sigma }\). By the Burkholder-Davis-Gundy inequality (Lemma 3.1) and Lemma 3.5, we have

$$\begin{aligned} \mathbb {E}\left[ \left\| f(t')\right\| _{H^s_x}^{\sigma }\right]&\lesssim \mathbb {E}\left[ \left( \int _0^{t'} \Vert (t'-\mu )^{-\alpha } S(t'-\mu ) [u(\mu )\phi ]\Vert ^2_{\mathcal {L}^2(L^2;H^s)} d\mu \right) ^{\frac{\sigma }{2}}\right] \\&= \mathbb {E}\left[ \left( \int _0^{t'} (t'-\mu )^{-2\alpha } \sum _{j\in \mathbb {Z}^d} \Vert S(t'-\mu )u(\mu )\phi e_j\Vert ^2_{H^s} d\mu \right) ^{\frac{\sigma }{2}}\right] \\&\lesssim \mathbb {E}\left[ \left( \sum _{j\in \mathbb {Z}^d} \Vert \phi e_j\Vert ^2_{\mathcal {F}L^{s,r}} \int _0^{T} (t'-\mu )^{-2\alpha } \Vert u(\mu )\Vert ^2_{H^s} d\mu \right) ^{\frac{\sigma }{2}}\right] . \end{aligned}$$

Then, by the Fubini theorem and the Minkowski inequality, we obtain

$$\begin{aligned} \mathbb {E}\left[ \int _0^T \left\| f(t')\right\| _{H^s_x}^{\sigma } dt' \right]&= \Big \Vert \, \Vert f\Vert _{H^s_x}\Big \Vert ^{\sigma }_{L^\sigma (\Omega ; L^\sigma _{t'}[0,T])}\\&\lesssim C(\phi )^{\sigma }\, \bigg \Vert \, \Big \Vert (t'-\mu )^{-\alpha } \Vert u(\mu )\Vert _{H^s_x}\Big \Vert _{L^2_{\mu }([0,T])} \bigg \Vert ^{\sigma }_{L^\sigma (\Omega ; L^\sigma _{t'}[0,T])}\\&\le C(\phi )^{\sigma }\, \mathbb {E}\Bigg [ \bigg \Vert \, \Big \Vert (t'-\mu )^{-\alpha } \Vert u(\mu )\Vert _{H^s_x}\Big \Vert _{L^{\sigma }_{t'}([0,T])} \bigg \Vert ^{\sigma }_{L_{\mu }^{2}([0,T])}\Bigg ]\\&\lesssim C(\phi )^{\sigma }\, \mathbb {E}\Bigg [ \Bigg (\int _0^T (T-\mu )^{2(\frac{1}{\sigma }-\alpha )} \Vert u(\mu )\Vert _{H^s_x}^2d\mu \Bigg )^{\frac{\sigma }{2}} \Bigg ]. \end{aligned}$$

By the Hölder and Sobolev inequalities and (2.6), we have

$$\begin{aligned} \Bigg ( \int _0^T (T-\mu )^{2(\frac{1}{\sigma }-\alpha )} \Vert u(\mu )\Vert ^2_{H_x^s} d\mu \Bigg )^{\frac{1}{2}}&\le \Big \Vert (T-\mu )^{\frac{1}{\sigma } - \alpha } \Big \Vert _{L_{\mu }^{\frac{4}{1+2b}}([0,T])} \Big \Vert \Vert u(\mu )\Vert _{H^s_x}\Big \Vert _{L_{\mu }^{\frac{4}{1-2b}}([0,T])}\\&\lesssim T^{1+\frac{4}{1+2b}(\frac{1}{\sigma }-\alpha )} \Big \Vert \mathbb {1}_{[0,T]}(\mu ) \Vert S(-\mu ) u(\mu )\Vert _{H^s_x} \Big \Vert _{L_{\mu }^{\frac{4}{1-2b}}}. \end{aligned}$$

Choosing \(\alpha =\alpha (\sigma ):=\frac{1}{\sigma }+\frac{1}{4}\), which satisfies \(\frac{1}{\sigma }<\alpha <1\), we obtain

$$\begin{aligned} \mathbb {E}\left[ \int _0^T \left\| f(t')\right\| _{H^s_x}^{\sigma } dt' \right]&\lesssim \mathbb {E}\Big [ T^{\frac{2b\sigma }{1+2b}} \Vert u\Vert ^{\sigma }_{X^{s,b}([0,T])} \Big ] <\infty . \end{aligned}$$

\(\square \)

4 Local well-posedness

4.1 SNLS with additive noise

In this subsection, we prove Theorem 1.1. Let \(b=b(k)=\frac{1}{2}-\) be given by Lemma 2.4 (in the case \(d=k=1\)) or by Lemma 2.5 (in the case \(dk\ge 2\)). By Lemma 3.3, for any \(T>0\), there is an event \(\Omega '\) of full probability such that the stochastic convolution \(\Psi \) has finite \(X^{s,b}([0,T])\)-norm on \(\Omega '\).

Now fix \(\omega \in \Omega '\) and \(u_0\in H^s(\mathbb {T}^d)\). Consider the ball

$$\begin{aligned} B_R:=\big \{u\in X^{s,b}([0,T]):\Vert u\Vert _{X^{s,b}([0,T])}\le R\big \} \end{aligned}$$

where \(0<T<1\) and \(R>0\) are to be determined later. We aim to show that the operator \(\Lambda \) given by

$$\begin{aligned} \Lambda u(t) = S(t) u_0 \pm i\int _0^t S(t-t') \big (|u|^{2k}u\big )(t')dt' - i \Psi (t)\ ,\ t\ge 0,\, \end{aligned}$$

where \(\Psi \) is the additive stochastic convolution given by (3.4), is a contraction on \(B_R\). To this end, it remains to estimate the \(X^{s,b}([0,T])\)-norm of

$$\begin{aligned} D(u):=\int _{0}^{t}\,{S(t-t')\big (|u|^{2k}u\big )(t')}\, d{t'} . \end{aligned}$$

For any \(\delta >0\) sufficiently small (such that \(b+\delta <\frac{1}{2}\)), by Lemma 2.3 and (2.6):

$$\begin{aligned} \left\| D(u)\right\| _{X^{s,b}([0,T])} \lesssim T^{\delta } \left\| D(u)\right\| _{X^{s,b+\delta }([0,T])} \lesssim T^\delta \left\| \mathbb {1}_{[0,T]}(t) D(u)(t) \right\| _{X^{s,\frac{1}{2}+\delta }}. \end{aligned}$$

Let \(\eta \) be a smooth cut-off function, supported on \([-1, T+1]\), with \(\eta (t)=1\) for all \(t\in [0,T]\). For any \(w\in X^{s,-\frac{1}{2}+\delta }\) that agrees with \(|u|^{2k}u\) on [0, T], by Lemma 2.2, we obtain

$$\begin{aligned} \left\| \mathbb {1}_{[0,T]}(t) D(u)(t)\right\| _{X^{s,\frac{1}{2}+\delta }}&\lesssim \left\| \eta (t) \int _0^t S(t-t') w(t') dt' \right\| _{X^{s,\frac{1}{2}+\delta }} \lesssim \Vert w\Vert _{X^{s,-\frac{1}{2}+\delta }} \end{aligned}$$
(4.1)

Then after taking the infimum over all such w, we use Lemma 2.4 or 2.5 and we get

$$\begin{aligned} \left\| D(u)\right\| _{X^{s,b}([0,T])}&\lesssim T^\delta \Vert (u \overline{u})^k u\Vert _{X^{s,-\frac{1}{2}+\delta }([0,T])} \lesssim T^\delta \left\| u\right\| ^{2k+1}_{X^{s,b}([0,T])}. \end{aligned}$$
(4.2)

It follows that

$$\begin{aligned} \left\| \Lambda u\right\| _{X^{s,b}([0,T])}\le c \left\| u_0\right\| _{H^s_x} +c T^{\delta } \left\| u\right\| ^{2k+1}_{X^{s,b}([0,T])} +\left||\Psi (t)\right||_{X^{s,b}({[0,T]})}, \end{aligned}$$
(4.3)

for some \(c>0\). Similarly, we obtain

$$\begin{aligned} \left\| \Lambda u-\Lambda v\right\| _{X^{s,b}([0,T])}\le c T^{\delta } \left( \left\| u\right\| ^{2k}_{X^{s,b}([0,T])} +\left\| v\right\| ^{2k}_{X^{s,b}([0,T])}\right) \left\| u-v\right\| _{X^{s,b}([0,T])}. \end{aligned}$$
(4.4)

Let \(R:= 2c \left\| u_0\right\| _{H_x^s}+2\left||\Psi (t)\right||_{X^{s,b}({[0,T]})}\). From (4.3) and (4.4), we see that \(\Lambda \) is a contraction from \(B_R\) to \(B_R\) provided

$$\begin{aligned} cT^\delta R^{2k+1}\le \frac{1}{2}R \ \text{ and } \ c T^{\delta } \left( 2R^{2k}\right) \le \frac{1}{2}. \end{aligned}$$
(4.5)

This is always possible if we choose \(T\ll 1\) sufficiently small. This shows the existence of a unique solution \(u\in X^{s,b}([0,T])\) to (1.4) on \(\Omega '\).

Finally, we check that \(u\in C([0,T]; H^s)\) on the set of full probability \(\Omega ''\cap \Omega '\), where \(\Omega ''\) is the event given by Lemma 3.4 on which \(\Psi \in C([0,T];H^s)\). By (2.6), (4.1) and Lemma 2.4 or 2.5, we also get

$$\begin{aligned} \left||D(u)\right||_{X^{s,\frac{1}{2}+\delta }({[0,T]})}\lesssim \left\| \mathbb {1}_{[0,T]}(t) D(u)(t)\right\| _{X^{s,\frac{1}{2}+\delta }} \lesssim \left||u\right||_{X^{s,b}({[0,T]})}^{2k+1} . \end{aligned}$$
(4.6)

By the embedding \(X^{s,\frac{1}{2}+\delta }([0,T]) \hookrightarrow C([0,T];H^s(\mathbb {T}^d))\), we have \(D(u)\in C([0,T];H^s(\mathbb {T}^d))\). Since the linear term \(S(t)u_0\) also belongs to \(C([0,T];H^s(\mathbb {T}^d))\), we conclude that

$$\begin{aligned} u=\Lambda u\in C\big ([0,T];H^s(\mathbb {T}^d)\big ) \text { on } \Omega ''\cap \Omega '. \end{aligned}$$

Remark 4.1

From (4.5), we obtain the time of existence

$$\begin{aligned} T_{\text {max}}:=\max \bigg \{\tilde{T}>0: \tilde{T}\le c\Big (\left\| u_0\right\| _{H^s}+\left\| \Psi \right\| _{X^{s,b}([0,\tilde{T}])}\Big )^{-\theta }\bigg \}, \end{aligned}$$
(4.7)

where \(\theta =\frac{2k}{\delta }\). Note that (4.7) will be useful in our global argument.
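To see where \(\theta \) comes from: with \(R= 2c \left\| u_0\right\| _{H_x^s}+2\left||\Psi \right||_{X^{s,b}({[0,T]})}\), both conditions in (4.5) hold as soon as \(4cT^{\delta }R^{2k}\le 1\), i.e.

$$\begin{aligned} T \le (4c)^{-\frac{1}{\delta }} R^{-\frac{2k}{\delta }} \lesssim \Big (\left\| u_0\right\| _{H^s}+\left\| \Psi \right\| _{X^{s,b}([0,T])}\Big )^{-\frac{2k}{\delta }}, \end{aligned}$$

which is the constraint recorded in (4.7) with \(\theta =\frac{2k}{\delta }\).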

4.2 SNLS with multiplicative noise

In this subsection, we prove Theorem 1.6. Following [17], we use a truncated version of (1.4). The main idea is to apply an appropriate cut-off function on the nonlinearity to obtain a family of truncated SNLS, and then prove global well-posedness of these truncated equations. Since solutions started with the same initial data coincide up to suitable stopping times, we obtain a solution to the original SNLS in the limit.

Let \(\eta :\mathbb {R}\rightarrow [0,1]\) be a smooth cut-off function such that \(\eta \equiv 1\) on [0, 1] and \(\eta \equiv 0\) outside \([-1,2]\). Set \(\eta _{R}:=\eta \left( \frac{\cdot }{R}\right) \) and consider the equation

$$\begin{aligned} i\partial _tu_{R} -\Delta u_{R} \pm \eta _R\big (\left||u_R\right||_{X^{s,b}({[0,t]})}\big )^{2k+1}|{u}_{R}|^{2k} u_{R} = u_R\cdot \phi \xi , \end{aligned}$$
(4.8)

with initial data \(u_R|_{t=0} = u_0\). Its mild formulation is \(u_R=\Lambda _R u_R\), where \(\Lambda _R\) is given by

$$\begin{aligned} \Lambda _{R} u_R&:= S(t)u_0 \pm i\int _{0}^{t}\,{S(t-t')\eta _R\left( \left||u_R\right||_{X^{s,b}({[0,t']})}\right) ^{2k+1}|u _{R}|^{2k}u_{R}(t')}\, d{t'}-i\Psi [u_{R}](t). \end{aligned}$$
(4.9)

The key ingredient for Theorem 1.6 is the following proposition.

Proposition 4.2

(Global well-posedness for (4.8)) Let \(s>s_{\text {crit}}\), \(s\ge 0\), and \(T, R>0\). Suppose that \(\phi \) is as in Theorem 1.6. Given \(u_0\in H^s(\mathbb {T}^d)\), there exist \(b=\frac{1}{2}-\) and a unique adapted process

$$\begin{aligned}u_{R}\in L^2\Big (\Omega ;C\big ([0,T];H^s(\mathbb {T}^d)\big )\cap X^{s,b}([0,T])\Big )\end{aligned}$$

solving (4.8) on [0, T].

Before proving this result, we state and prove the following lemma.

Lemma 4.3

(Boundedness of cut-off) Let \(s\ge 0\), \(b\in [0,\frac{1}{2})\), \(R>0\) and \(T>0\). There exist constants \(C_1, C_2(R)>0\) such that

$$\begin{aligned}&\left||\eta _{R} \left( \left||u\right||_{X^{s,b}({[0,t]})} \right) u(t) \right||_{X^{s,b}({[0,T]})}\le \min \left\{ C_1\left||u\right||_{X^{s,b}({[0,T]})}, C_2(R)\right\} ; \end{aligned}$$
(4.10)
$$\begin{aligned}&\left||\eta _{R}\left( \left||u\right||_{X^{s,b}({[0,t]})}\right) u(t) -\eta _{R}\left( \left||v\right||_{X^{s,b}({[0,t]})}\right) v(t) \right||_{X^{s,b}({[0,T]})}\le C_2(R)\left||u-v\right||_{X^{s,b}({[0,T]})}. \end{aligned}$$
(4.11)

Proof

We first prove (4.10). Let \(w(t,n)=\mathcal {F}_x[S(-t)u(t)](n)\), \(\kappa _{R}(t)=\eta _{R}\left( \left||u\right||_{X^{s,b}({[0,t]})}\right) \) and

$$\begin{aligned} \tau _{R}:=\inf \left\{ t\ge 0: \left||u\right||_{X^{s,b}({[0,t]})}\ge 2R\right\} . \end{aligned}$$
(4.12)

Then \(\kappa _{R}(t)=0\) when \(t>\tau _{R}\). By (2.6) and (2.1),

$$\begin{aligned} \left||\kappa _{R}(t) u(t) \right||_{X^{s,b}({[0,T]})}^2&\sim \left||\mathbb {1}_{[0,T\wedge \tau _{R}]}\kappa _{R}(t)u(t)\right||_{X^{s,b}}^2 \sim \left||\kappa _{R}(t)u(t) \right||_{X^{s,b}({[0,T\wedge \tau _{R}]})}^2 \nonumber \\&\sim \sum _{n\in \mathbb {Z}^d}\langle n \rangle ^{2s}\left\| \kappa _{R}(t) w(t,n)\right\| ^2_{H^b(0,T\wedge \tau _{R})}. \end{aligned}$$
(4.13)

We now estimate the \(H^b(0,T\wedge \tau _{R})\)-norm, for which we use the following characterization (see for example [34]):

$$\begin{aligned} \left\| f\right\| _{H^b(a_1,a_2)}^2\sim \left\| f\right\| _{L^2(a_1,a_2)}^2+ \int _{a_1}^{a_2}\,{ \int _{a_1}^{a_2}\,{\frac{|f(x)-f(y)|^2}{|x-y|^{1+2b}}}\, d{x} }\, d{y}\ , \quad 0<b<1. \end{aligned}$$
(4.14)

For the inhomogeneous contribution (i.e. coming from the \(L^2\)-norm above), we have

$$\begin{aligned} \sum _{n\in \mathbb {Z}^d}\langle n \rangle ^{2s}\left\| \kappa _{R}(t)w(t,n)\right\| ^2_{L^2_{t}(0,T \wedge \tau _{R})}&\le \min \left\{ \left||u\right||_{X^{s,b}({[0,\tau _{R}]})}^2, \left||u\right||_{X^{s,b}({[0,T]})}^2\right\} \\&\le \min \left\{ \left( 2R\right) ^2, \left||u\right||_{X^{s,b}({[0,T]})}^2\right\} . \end{aligned}$$

The remaining part of (4.13) needs a bit more work. Fix \(n\in \mathbb {Z}^d\), then

$$\begin{aligned}&\int _{0}^{T\wedge \tau _{R}}\,{\int _{0}^{T\wedge \tau _{R}}\,{\frac{|\kappa _{R}(t)w(t,n)-\kappa _{R}(t')w(t',n)|^2}{|t-t'|^{1+2b}}}\, d{t'}}\, d{t}\\&\quad \lesssim \int _{0}^{T\wedge \tau _{R}}\,{\int _{0}^{t}\,{\frac{|\kappa _{R}(t)(w(t,n) -w(t',n))|^2}{|t-t'|^{1+2b}}}\, d{t'}}\, d{t}\\&\qquad +\int _{0}^{T\wedge \tau _{R}}\,{ \int _{0}^{t}\,{\frac{|(\kappa _{R}(t)-\kappa _{R}(t'))w(t',n)|^2 }{|t-t'|^{1+2b}}}\, d{t'}}\, d{t}\\&\quad =: \mathrm {I}(n)+\mathrm {I\!I}(n). \end{aligned}$$

It is clear that

$$\begin{aligned} \mathrm {I}(n)\lesssim \min \left\{ \left\| w(n)\right\| _{H^b((0,\tau _{R}))}^2, \left\| w(n)\right\| _{H^b((0,T))}^2 \right\} , \end{aligned}$$

and hence

$$\begin{aligned} \sum _{n \in \mathbb {Z}^d}\mathrm {I}(n)\lesssim \min \left\{ \left( 2R\right) ^2, \left||u\right||_{X^{s,b}({[0,T]})}^2\right\} . \end{aligned}$$

For \(\mathrm {I\!I}(n)\), the mean value theorem yields

$$\begin{aligned} \left| \kappa _R(t)-\kappa _R(t')\right| ^2&\lesssim \frac{\left( \left||u\right||_{X^{s,b}({[0,t]})}-\left||u\right||_{X^{s,b}({[0,t']})}\right) ^2}{R^2}\left( \sup _{r\in \mathbb {R}}\eta '(r)\right) ^2\\&\lesssim \frac{\left||\mathbb {1}_{[t',t]}u\right||_{X^{s,b}}^2}{R^2}\\&\lesssim \frac{1}{R^2}\sum _{n'\in \mathbb {Z}^d}\langle n' \rangle ^{2s}\Vert {w(\cdot ,n')}\Vert ^2_{H^b(t',t)} . \end{aligned}$$

Again, we split \(\Vert {w(\cdot ,n')}\Vert ^2_{H^b(t',t)}\) using (4.14) into the inhomogeneous contribution (the \(L^2\)-norm squared part) and the homogeneous contribution (the second term of (4.14)). We control here only the homogeneous contributions for \(\mathrm {I\!I}(n)\) as the inhomogeneous contributions are easier. The homogeneous part of \(\mathrm {I\!I}(n)\) is controlled by

$$\begin{aligned}&\frac{1}{R^2}\sum _{n'\in \mathbb {Z}^d}\langle n' \rangle ^{2s} \int _{0}^{T\wedge \tau _{R}}\,{ \int _{0}^{t}\,{ \int _{t'}^{t}\,{ \int _{t'}^{\lambda }\,{ \frac{|w(t',n)|^2}{|t-t'|^{1+2b}}\cdot \frac{|w(\lambda ,n')-w(\lambda ',n')|^2}{|\lambda -\lambda '|^{1+2b}}}\, d{\lambda '} }\, d{\lambda } }\, d{t'}}\, d{t} \end{aligned}$$
(4.15)
$$\begin{aligned}&= \frac{1}{R^2}\sum _{n'\in \mathbb {Z}^d}\langle n' \rangle ^{2s} \int _0^{T\wedge \tau _{R}}\,\int ^\lambda _0\,\int ^{\lambda '}_0\, \left( \int _{\lambda }^{T\wedge \tau _{R}}\,{\frac{1}{|t-t'|^{1+2b}}}\, d{t}\right) |w(t',n)|^2\nonumber \\&\quad \times \frac{|w(\lambda ,n')-w(\lambda ',n')|^2}{|\lambda -\lambda '|^{1+2b}} \,dt'\,d\lambda '\,d\lambda , \end{aligned}$$
(4.16)

where we used \(0\le t'\le \lambda '\le \lambda \le t\le T\wedge \tau _{R}\) to switch the integrals. Now, the integral with respect to t equals \(\frac{1}{2b}\left( |\lambda -t'|^{-2b}-|T\wedge \tau _{R}-t'|^{-2b}\right) \), which is bounded by

$$\begin{aligned} \frac{1}{2b}|\lambda -t'|^{-2b}\le \frac{1}{2b}|\lambda '-t'|^{-2b}. \end{aligned}$$

Thus (4.16) is controlled by

$$\begin{aligned}&\frac{1}{R^2}\sum _{n'\in \mathbb {Z}^d}\langle n' \rangle ^{2s} \int _{0}^{T\wedge \tau _{R}}\, \int _{0}^{\lambda }\, \left( \int _{0}^{\lambda '}\,{ |\lambda '-t'|^{-2b} |w(t',n)|^2 }\, d{t'}\right) \nonumber \\&\quad \times \frac{|w(\lambda ,n')-w(\lambda ',n')|^2}{|\lambda -\lambda '|^{1+2b}} {\,d\lambda '} {\,d\lambda }. \end{aligned}$$
(4.17)

Since \(b\in \left[ 0,\frac{1}{2}\right) \), by Hardy’s inequality (see for example [35, Lemma A.2]) the \(t'\)-integral is \(\lesssim \left\| w(\cdot ,n)\right\| _{H^b(0,\lambda ')}^2\le \left\| w(\cdot ,n)\right\| _{H^b(0,T\wedge \tau _{R})}^2\). After multiplying by \(\langle n \rangle ^{2s}\) and summing over \(n\in \mathbb {Z}^d\), we see that (4.17) is controlled by

$$\begin{aligned}&\frac{1}{R^2}\sum _{n,n'\in \mathbb {Z}^d}\langle n \rangle ^{2s}\langle n' \rangle ^{2s}\left\| w(\cdot ,n)\right\| _ {H^b(0,T\wedge \tau _{R})}^2\left\| w(\cdot ,n')\right\| _{H^b_\lambda (0,T\wedge \tau _{R})}^2\\&\quad \lesssim \frac{1}{R^2}\left||u\right||_{X^{s,b}({[0,T\wedge \tau _{R}]})}^2\left||u\right||_{X^{s,b}({[0,T\wedge \tau _{R}]})}^2\\&\quad \le \min \left\{ 4\left||u\right||_{X^{s,b}({[0,T]})}^2,16R^2\right\} . \end{aligned}$$

We now prove (4.11). Let \(\tau ^u_R\) and \(\tau ^v_R\) be defined as in (4.12). Assume without loss of generality that \(\tau ^u_R\le \tau ^v_R\). We decompose

$$\begin{aligned} \text {LHS of }(4.11)&\lesssim \left||\left( \eta _{R}\left( \left||u\right||_{X^{s,b}({[0,t]})}\right) -\eta _{R}\left( \left||v\right||_{X^{s,b}({[0,t]})}\right) \right) v(t)\right||_{X^{s,b}({[0,T]})}\\&\quad +\left||\eta _{R}\left( \left||u\right||_{X^{s,b}({[0,t]})}\right) \left( u(t)-v(t)\right) \right||_{X^{s,b}({[0,T]})}\\&=:A+B. \end{aligned}$$

By the mean value theorem,

$$\begin{aligned} A&= \left||\left( \eta _{R}\left( \left||u\right||_{X^{s,b}({[0,t]})}\right) -\eta _{R}\left( \left||v\right||_{X^{s,b}({[0,t]})}\right) \right) v(t)\right||_{X^{s,b}({[0,T\wedge \tau _R^v]})}\\&\lesssim \frac{1}{R}\left||v\right||_{X^{s,b}({[0,T\wedge \tau ^v_R]})}\left||u-v\right||_{X^{s,b}({[0,T]})}\\&\lesssim \left||u-v\right||_{X^{s,b}({[0,T]})}. \end{aligned}$$

For B, one runs through the same argument as for (4.10) but with \(w(t,n)\) replaced by \(\mathcal {F}_x\big [S(-t)\big (u(t)-v(t)\big )\big ](n)\), which yields

$$\begin{aligned} B\lesssim C(R)\left||u-v\right||_{X^{s,b}({[0,T]})}. \end{aligned}$$

\(\square \)

We now conclude the proof of Proposition 4.2.

Proof of Proposition 4.2

Let \(T,R>0\) and let \(E_{T}:=L_{ad }^2\left( \Omega ;X^{s,b}([0,T])\right) \) be the space of adapted processes in \(L^2\left( \Omega ;X^{s,b}([0,T])\right) \). We solve the fixed point problem (4.9) in \(E_T\). Arguing as in the additive case, and using Lemmata 4.3 and 3.6, we have

$$\begin{aligned} \left\| \Lambda _{R} u\right\| _{E_T}&\le C_1 \left\| u_0\right\| _{H^s}+C_2(R)T^{\delta } +C_3T^{b}\left\| u\right\| _{E_T};\\ \left\| \Lambda _{R} u-\Lambda _{R} v\right\| _{E_T}&\le C_4(R) T^\delta \left\| u-v\right\| _{E_T}+C_5T^b\left\| u-v\right\| _{E_T}. \end{aligned}$$

Therefore, \(\Lambda _{R}\) is a contraction from \(E_T\) to \(E_T\) provided we choose \(T=T(R)\) sufficiently small. Thus there exists a unique solution \(u_R\in E_T\). Note that T does not depend on \(\left\| u_0\right\| _{H^s}\), hence we may iterate this argument to extend \(u_R(t)\) to all \(t\in [0,\infty )\).

Finally, to see that \(u_{R}\in F_T:=L^2\big (\Omega ; C([0,T];H^s(\mathbb {T}^d))\big )\), we first note that since \(u_R\in E_T\), Lemma 3.7 implies that \(\Psi [u_R]\in F_T\). Then, by a similar argument to that at the end of Sect. 4.1, we have that \(D(u_R)\in L^2(\Omega ;X^{s,\frac{1}{2}+}\big ([0,T]\big ))\), where

$$\begin{aligned} D(u_R)(t):=\int _0^t S(t-t')\big (|u_R|^{2k}u_R\big )\,dt'. \end{aligned}$$

Since \(L^2\big (\Omega ; X^{s,\frac{1}{2}+}([0,T])\big )\hookrightarrow F_T\), we have \(D(u_R)\in F_T\). Also, it is clear that \(S(t)u_0\in F_T\). Hence \(u_R\in F_T\). \(\square \)

Proof of Theorem 1.6

Let

$$\begin{aligned} \tau _{R}:=\inf \big \{t>0:\left||u_{R}\right||_{X^{s,b}({[0,t]})}\ge R\big \}. \end{aligned}$$
(4.18)

Then, \(\eta _{R}(\left||u_{R}\right||_{X^{s,b}({[0,t]})})=1\) if and only if \(t\le \tau _{R}\). Hence \(u_{R}\) is a solution of (1.4) on \([0,\tau _{R}]\). For any \(\delta >0\), we have \(u_{R}(t)=u_{R+\delta }(t)\) whenever \(t\in [0,\tau _{R}]\). Consequently, \(\tau _{R}\) is increasing in R. Indeed, if \(\tau _{R}>\tau _{R+\delta }\) for some \(R>0\) and some \(\delta >0\), then for \(t\in [\tau _{R+\delta },\tau _{R}]\), we have \(\eta _{R+\delta }\big (\left||u_{R+\delta }\right||_{X^{s,b}({[0,t]})}\big )<1\) which implies that \(u_{R}(t)\ne u_{R+\delta }(t)\), a contradiction. Therefore,

$$\begin{aligned} \tau ^*:=\lim _{R\rightarrow \infty }\tau _{R} \end{aligned}$$
(4.19)

is a well-defined stopping time, which is almost surely positive (possibly infinite). By defining \(u(t):=u_{R}(t)\) for each \(t\in [0,\tau _{R}]\), we see that u is a solution of (1.4) on \([0,\tau ^*)\) almost surely. \(\square \)

5 Global well-posedness

In this section, we prove Theorems 1.5 and 1.8. Recall that the mass and energy of a solution u(t) of the defocusing SNLS (1.1) are given respectively by

$$\begin{aligned} M(u(t))&=\int _{\mathbb {T}^d}{\frac{1}{2} |u(t,x)|^2}\, d{x} , \end{aligned}$$
(5.1)
$$\begin{aligned} E(u(t))&=\int _{\mathbb {T}^d} \left( \frac{1}{2} |\nabla u(t,x)|^2 +\frac{1}{2(k+1)} |u(t,x)|^{2(k+1)}\right) \, d{x} . \end{aligned}$$
(5.2)

It is well-known that these are conserved quantities for (smooth enough) solutions of the deterministic NLS equation.

For SNLS, we prove probabilistic a priori control as per Propositions 5.1 and 5.3 below. To this purpose, the idea is to compute the stochastic differentials of (5.1) and (5.2) and use the stochastic equation for u. We work with the following frequency truncated version of (1.1):

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _t u^N - \Delta u^N \pm P_{\le N} |u^N|^{2k}u^N = F(u^N,\phi ^N dW^N),\\ u^N|_{t=0} = P_{\le N} u_0=: u_0^N \end{array}\right. } \end{aligned}$$
(5.3)

where \(P_{\le N}\) is the Littlewood-Paley projection onto the frequency set \(\{n\in \mathbb {Z}^d:|n|\le N\}\),

$$\begin{aligned} \phi ^N:= P_{\le N}\circ \phi \ \text { and }\ W^N(t):= \sum _{|n|\le N} \beta _n(t) e_n. \end{aligned}$$

By repeating the arguments in Sect. 4, one obtains local well-posedness for (5.3) with initial data \(P_{\le N}u_0\) at least with the same time of existence as for the untruncated SNLS.

5.1 SNLS with additive noise

We treat the additive SNLS in this subsection. We first prove probabilistic a priori bounds on (5.1) and (5.2) of a solution \(u^N\) of the truncated equation.

Proposition 5.1

Let \(m\in \mathbb {N}\), \(T_0>0\), \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d);L^2(\mathbb {T}^d))\), and \(F(u,\phi \xi )=\phi \xi \). Suppose that \(u^N(t)\) is a solution to (5.3) for \(t\in [0,T]\), for some stopping time \(T\in [0,T_0]\). Then there exists a constant \(C_1=C_1(m,M(u_0), T_0, \Vert \phi \Vert _{\mathcal {L}^2(L^2;L^2)})>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T} M(u^N(t))^m \right] \le C_1. \end{aligned}$$
(5.4)

Furthermore, if (5.3) is defocusing and \(\phi \in \mathcal {L}^2(L^2(\mathbb {T}^d);H^1(\mathbb {T}^d))\), there exists \(C_2=C_2(m,E(u_0), T_0, \Vert \phi \Vert _{\mathcal {L}^2(L^2;H^1)})>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T} E(u^N(t))^m \right]&\le C_2. \end{aligned}$$
(5.5)

The constants \(C_1\) and \(C_2\) are independent of N.

Proof

By applying Itô’s Lemma, we have

$$\begin{aligned} M(u^N(t))^m&=M(u_0^N)^m\nonumber \\&\quad +m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{M(u^N(t'))^{m-1}\int _{\mathbb {T}^d}\overline{u^N(t')}\phi ^Ne_j\,dx} {\,d\beta _j(t')}\right) \end{aligned}$$
(5.6)
$$\begin{aligned}&\quad +m(m-1)\sum _{|j|\le N}\int _{0}^{t}{M(u^N(t'))^{m-2}\left| \int _{\mathbb {T}^d}u^N(t') \phi ^Ne_j\,dx\right| ^2}{\,dt'} \end{aligned}$$
(5.7)
$$\begin{aligned}&\quad +m\left\| \phi ^N\right\| ^2_{\mathcal {L}^2(L^2;L^2)}\int _0^t{M(u^N(t'))^{m-1}} {\,dt'}, \end{aligned}$$
(5.8)

the last term being the Itô correction term. We first control (5.6). By the Burkholder-Davis-Gundy inequality (Lemma 3.1) and the Hölder and Young inequalities, we get

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.6)\right]&\lesssim _m \mathbb {E}\left[ \left\{ \sum _{|j|\le N}\int _0^T M(u^N(t'))^{2(m-1)} \Vert u^N(t')\Vert _{L^2}^2 \Vert \phi ^Ne_j\Vert _{L^2}^2 dt' \right\} ^\frac{1}{2} \right] \\&\lesssim {\Vert \phi ^N\Vert }_{\mathcal {L}^2(L^2;L^2)} \, \mathbb {E}\left[ \left\{ \int _0^T M(u^N(t))^{2m-1}dt \right\} ^\frac{1}{2} \right] \\&\lesssim {\Vert \phi \Vert }_{\mathcal {L}^2(L^2;L^2)} T^\frac{1}{2}\, \mathbb {E}\left[ \left\{ \sup _{t\in [0,T]} M(u^N(t))^{m-1} \right\} ^\frac{1}{2} \left\{ \sup _{t\in [0,T]} M(u^N(t))^{m} \right\} ^\frac{1}{2} \right] \\&\lesssim {\Vert \phi \Vert }_{\mathcal {L}^2(L^2;L^2)} T_0^\frac{1}{2} \left\{ \mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m-1} \right] \right\} ^\frac{1}{2} \left\{ \mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m} \right] \right\} ^\frac{1}{2} \end{aligned}$$

Hence by Young’s inequality, we infer that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.6)\right]&\le C_m{\Vert \phi \Vert }^2_{\mathcal {L}^2(L^2;L^2)} T_0\, \mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m-1} \right] \\&\quad + \frac{1}{2}\,\mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m} \right] . \end{aligned}$$

In a straightforward way, we also have

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.7)\right]&\le m(m-1) {\Vert \phi \Vert }^2_{\mathcal {L}^2(L^2;L^2)} T_0\, \mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m-1} \right] ,\\ \mathbb {E}\left[ \sup _{t\in [0,T]}(5.8)\right]&\le 2m {\Vert \phi \Vert }^2_{\mathcal {L}^2(L^2;L^2)} T_0 \,\mathbb {E}\left[ \sup _{t\in [0,T]} M(u^N(t))^{m-1} \right] . \end{aligned}$$

Therefore, there is some \(C_m>0\) such that

$$\begin{aligned} \begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right]&\le M(u_0)^m + C_mT_0 \,\mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^{m-1}\right] \\&\quad +\frac{1}{2}\,\mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right] . \end{aligned} \end{aligned}$$
(5.9)

We now wish to move the last term of (5.9) to the left-hand side. However, we do not know a priori that the moments of \(\sup _{t\in [0,T]}M(u^N(t))\) are finite. To justify this, we note that (5.9) holds with T replaced by \(T_R\), where

$$\begin{aligned} T_R:= \sup \left\{ t\in [0,T] : M(u^N(t))\le R\right\} ,\quad R>0. \end{aligned}$$

Now all the terms appearing in (5.9) are finite, and hence the formal manipulation is justified. Note that \(T_R\rightarrow T\) almost surely as \(R\rightarrow \infty \) because u (and hence \(u^N\)) belongs to \(C([0,T];H^s(\mathbb {T}^d))\) almost surely. Hence, by letting \(R\rightarrow \infty \) and invoking the monotone convergence theorem, one finds

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right] \le 2M(u_0)^m + 2C_mT_0 \,\mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^{m-1}\right] . \end{aligned}$$
(5.10)

Hence, by induction on m, we obtain

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right] \lesssim 1, \end{aligned}$$
(5.11)

where we note that the implicit constant is independent of N.

We now turn to estimating the energy. Applying Itô’s Lemma again, we find that \(E(u^N(t))^m\) equals

$$\begin{aligned}&E(u^N_0)^m \end{aligned}$$
(5.12)
$$\begin{aligned}&\quad + m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{ E(u^N(t'))^{m-1} \int _{\mathbb {T}^d} {|u^N|^{2k}u^N\phi ^N e_j}{\,dx} }{\,d\beta _j(t')}\right) \end{aligned}$$
(5.13)
$$\begin{aligned}&\quad -m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{E(u^N(t'))^{m-1} \int _{\mathbb {T}^d}\Delta \overline{u^N}\phi ^Ne_j \,dx} \,d\beta _j(t') \right) \end{aligned}$$
(5.14)
$$\begin{aligned}&\quad +(k+1)m\sum _{|j|\le N}{\int _0^t E(u^N(t'))^{m-1} \int _{\mathbb {T}^d} {|u^N|^{2k}|\phi ^N e_j|^2 \,dx\, dt' }} \end{aligned}$$
(5.15)
$$\begin{aligned}&\quad +m\left\| \nabla \phi ^N\right\| ^2_{\mathcal {L}^2(L^2;L^2)}\int _{0}^{t}\,{E(u^N(t'))^{m-1}}\, d{t'} \end{aligned}$$
(5.16)
$$\begin{aligned}&\quad + \frac{m(m-1)}{2}\sum _{|j|\le N}\int _{0}^{t}{E(u^N(t'))^{m-2} \left| \int _{\mathbb {T}^d}{\left( -\Delta \overline{u^N}+ |u^N|^{2k}\overline{u^N}\right) \phi ^N e_j\,dx}\right| ^2 }{\,dt'}. \end{aligned}$$
(5.17)

We shall control here only the difficult term (5.13), as the other terms are bounded by similar arguments. Firstly, by the Burkholder-Davis-Gundy inequality (Lemma 3.1), we deduce

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.13)\right]&\le C_m \mathbb {E}\left[ \left\{ \sum _{|j|\le N}\int _{0}^{T} E(u^N(t'))^{2(m-1)}\left| \int _{\mathbb {T}^d}{|u^N|^{2k}u^N\phi ^N e_j}{\,dx}\right| ^2 {\,dt'}\right\} ^\frac{1}{2}\right] . \end{aligned}$$

Then, by duality and the (dual of the) Sobolev embedding \( H^1(\mathbb {T}^d) \hookrightarrow L^{2k+2}(\mathbb {T}^d)\), we have

$$\begin{aligned} \left| \int _{\mathbb {T}^d}{|u^N|^{2k}u^N\phi ^N e_j}{\,dx}\right|&\le \left\| |u^N|^{2k}u^N\right\| _{H^{-1}(\mathbb {T}^d)} \Vert \phi ^Ne_j\Vert _{H^1(\mathbb {T}^d)}\\&\lesssim \left\| |u^N|^{2k}u^N\right\| _{L^{\frac{2k+2}{2k+1}}(\mathbb {T}^d)} \Vert \phi e_j\Vert _{H^1(\mathbb {T}^d)}\\&\lesssim E(u^N)^{\frac{2k+1}{2k+2}} \Vert \phi e_j\Vert _{H^1(\mathbb {T}^d)}, \end{aligned}$$

provided that \(1+\frac{1}{k} \ge \frac{d}{2}\). Therefore, by the Hölder and Young inequalities, and arguing similarly to the control of (5.6), we have

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.13)\right]&\le C_m{\Vert \phi \Vert }^2_{\mathcal {L}^2(L^2;H^1)} T_0 \mathbb {E}\left[ \sup _{t\in [0,T]} E(u^N(t))^{m-1} \right] \\&\quad + \frac{1}{8}\mathbb {E}\left[ \sup _{t\in [0,T]} E(u^N(t))^{m-\frac{1}{2k+2}} \right] \\&\le \tilde{C}_m{\Vert \phi \Vert }^2_{\mathcal {L}^2(L^2;H^1)} T_0 \mathbb {E}\left[ \sup _{t\in [0,T]} E(u^N(t))^{m-1} \right] \\&\quad + \frac{1}{8}\mathbb {E}\left[ \sup _{t\in [0,T]} E(u^N(t))^{m} \right] , \end{aligned}$$

where in the last step we used interpolation.
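
The interpolation used in the last step is elementary: since \(m-1\le m-\frac{1}{2k+2}\le m\), one has, for every \(x\ge 0\),

$$\begin{aligned} x^{m-\frac{1}{2k+2}}\le x^{m-1}+x^{m}, \end{aligned}$$

as can be seen by considering the cases \(x\le 1\) and \(x>1\) separately.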

We also have

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.14)\right]&\le C_m\left\| \phi \right\| _{\mathcal {L}^2(L^2;H^1)}^2\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^{m-1}\right] +\frac{1}{8}\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] ,\\ \mathbb {E}\left[ \sup _{t\in [0,T]}(5.15)\right]&\le C_m\left\| \phi \right\| _{\mathcal {L}^2(L^2;H^1)}^2+\frac{1}{8}\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] ,\\ \mathbb {E}\left[ \sup _{t\in [0,T]}(5.16)\right]&\le C_m {\Vert \phi \Vert }_{\mathcal {L}^2(L^2;H^1)}^2 \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^{m-1}\right] ,\\ \mathbb {E}\left[ \sup _{t\in [0,T]}(5.17)\right]&\le C_m\left\| \phi \right\| ^2_{\mathcal {L}^2(L^2;H^1)} +\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^{m-1}\right] +\frac{1}{8}\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] . \end{aligned}$$

Gathering all the estimates, there exists \(C_m>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right]&\le E(u_0)^m + C_{m} T_0\,\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^{m-1}\right] +\frac{1}{2}\,\mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] . \end{aligned}$$

Similarly to passing from (5.9) to (5.10) and by induction on m, we deduce that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] \lesssim 1 , \end{aligned}$$
(5.18)

with constant independent of N. \(\square \)

We now argue that the probabilistic a priori bounds in fact hold for solutions of the original SNLS.

Corollary 5.2

Let u be a solution to (1.1) with additive noise (1.2). Then, under the same assumptions as in Proposition 5.1, the estimates (5.4) and (5.5) hold with u in place of \(u^N\).

Proof

Let \(\Lambda ^N\) be the mild formulation of (5.3), more precisely,

$$\begin{aligned} \Lambda ^N(v):=S(t)u_0^N\pm i\int _0^tS(t-t')P_{\le N}\left( |v|^{2k}v\right) (t')\,dt'-i\int _0^tS(t-t')\phi ^N\,dW^N(t'). \end{aligned}$$
(5.19)

Then \(\Lambda ^N\) is a contraction on a ball in \(X^{1,\frac{1}{2}-}([0,T])\) and has a unique fixed point \(u^N\) that satisfies the bounds in Proposition 5.1. Hence it suffices to show that \(u^N\) in fact converges to u in \(F_T:=L^2(\Omega ;C([0,T]; H^s_x))\) for \(s=0,1\). We only treat \(s=1\), since the proof for \(s=0\) is the same. To this end, we consider the mild formulations of \(u^N\) and u and show that each piece of \(u^N\) converges to the corresponding piece of u. Clearly, \(S(t)u_0^N\rightarrow S(t)u_0\) in \(F_T\). For the noise, let \(\Psi ^N(t)\) denote the stochastic convolution in (5.19). Then

$$\begin{aligned} \Psi (t)-\Psi ^N(t)&=\left( \sum _{|n|>N}\sum _{j\in \mathbb {Z}^d}+ \sum _{|n|\le N}\sum _{|j|>N} \right) e_n\int _0^t e^{i(t-t')|n|^2} \widehat{\phi e_j} (n) d\beta _j(t')\\&= \int _0^t S(t-t')P_{>N}\phi \,dW(t')+ \int _0^t S(t-t')\pi _{N} P_{\le N}\phi \,dW(t'), \end{aligned}$$

where \(\pi _N\) denotes the projection onto the linear span of the orthonormal vectors \(\{e_j: |j|>N\}\). By Lemma 3.4, the above is controlled by

$$\begin{aligned} \left\| P_{>N}\circ \phi \right\| _{\mathcal {L}^2(L^2;H^1)}^{2}+\left\| \pi _N P_{\le N} \phi \right\| _{\mathcal {L}^2(L^2;H^1)}^{2}, \end{aligned}$$

which tends to 0 as \(N\rightarrow \infty \) because both norms are tails of convergent series.

Finally we treat the nonlinear terms

$$\begin{aligned} Du(t):=\int _{0}^{t}{S(t-t')|u|^{2k}u(t')}{\,dt'}\quad \text{ and } \quad D^{\le N}u(t):=\int _{0}^{t}{S(t-t')P_{\le N}\left( |u|^{2k}u\right) (t')}{\,dt'}. \end{aligned}$$

We first fix a path for which local well-posedness holds, and prove that \(Du-D^{\le N}u\rightarrow 0\) in \(X^{1,\frac{1}{2}+}\). Firstly,

$$\begin{aligned} \left\| Du-D^{\le N}u\right\| _{X^{1,\frac{1}{2}+}([0,T])}&\le \left\| \int _{0}^{t}{S(t-t')P_{\le N}(|u|^{2k}u-|u^N|^{2k}u^N)(t')}{\,dt'}\right\| _{X^{1,\frac{1}{2}+}([0,T])}\\&\quad + \left\| P_{>N}Du\right\| _{X^{1,\frac{1}{2}+}([0,T])}\\&=:\mathrm {I}+\mathrm {I\!I}. \end{aligned}$$

By Lemmas 2.2, 2.4 and 2.5, we have

$$\begin{aligned} \mathrm {I}&\lesssim \left( \left\| u\right\| _{X^{1,\frac{1}{2}-}([0,T])}^{2k} +\left\| u^N\right\| _{X^{1,\frac{1}{2}-}([0,T])}^{2k}\right) \left\| u-u^N\right\| _{X^{1, \frac{1}{2}-}([0,T])} \end{aligned}$$
(5.20)
$$\begin{aligned} \mathrm {I\!I}&\lesssim \left\| u\right\| _{X^{1,\frac{1}{2}-}([0,T])}^{2k+1} \end{aligned}$$
(5.21)

In particular, (5.21) implies \(Du\in X^{1,\frac{1}{2}+}([0,T])\), and hence \(\mathrm {I\!I}\rightarrow 0\) as \(N\rightarrow \infty \). We claim that \(\mathrm {I}\rightarrow 0\) as \(N\rightarrow \infty \) as well. Indeed, \(\Lambda ^N\) and \(\Lambda \) are contractions with fixed points \(u^N\) and u respectively, hence

$$\begin{aligned} \left\| u-u^N\right\| _{X^{1,\frac{1}{2}-}([0,T])}&\le \left\| \Lambda (u)-\Lambda ^N(u)\right\| _{X^{1,\frac{1}{2}-}([0,T])}+ \left\| \Lambda ^N(u)-\Lambda ^N(u^N)\right\| _{X^{1,\frac{1}{2}-}([0,T])}\\&\le \left\| \Lambda (u)-\Lambda ^N(u)\right\| _{X^{1,\frac{1}{2}-}([0,T])}+ \frac{1}{2} \left\| u-u^N\right\| _{X^{1,\frac{1}{2}-}([0,T])}. \end{aligned}$$

By rearranging, it suffices to show that the first term on the right-hand side above tends to 0 as \(N\rightarrow \infty \). Now

$$\begin{aligned} \left\| \Lambda (u)-\Lambda ^N(u)\right\| _{X^{1,\frac{1}{2}-}([0,T])}&\le \left\| P_{>N}S(t)u_0\right\| _{X^{1,\frac{1}{2}-}([0,T])}\\&\quad + \left\| P_{>N}\int _0^tS(t-t')|u|^{2k}u(t')\,dt'\right\| _{X^{1,\frac{1}{2}-}([0,T])}\\&\quad +\left\| \Psi ^{>N}\right\| _{X^{1,\frac{1}{2}-}([0,T])}. \end{aligned}$$

By similar arguments as above, all the terms on the right go to 0 as \(N\rightarrow \infty \). This proves our claim. By the embedding \(X^{1,\frac{1}{2}+}([0,T])\subset C([0,T];H^1(\mathbb {T}^d))\), we have that

$$\begin{aligned} \left\| Du-D^{\le N}u\right\| _{C([0,T];H^1)}\rightarrow 0 \end{aligned}$$
(5.22)

almost surely as \(N\rightarrow \infty \). By the dominated convergence theorem, we have \(Du-D^{\le N}u\rightarrow 0\) in \(F_T\). This concludes our proof. \(\square \)

Finally, we conclude the proof of global well-posedness for the additive case.

Proof of Theorem 1.5

Let \(s\in \{0,1\}\) be the regularity of \(u_0\) from Theorem 1.5, and let \(\varepsilon >0\) and \(T>0\) be given. We claim that there exists an event \(\Omega _\varepsilon \) with \(\mathbb {P}(\Omega \setminus \Omega _\varepsilon )<\varepsilon \) on which a solution \(u\in X^{s,b}([0,T])\cap C([0, T]; H^s(\mathbb {T}^d))\) exists on [0, T]. If this claim holds, then by setting

$$\begin{aligned} \Omega ^*=\bigcup _{n=1}^\infty \Omega _\frac{1}{n}, \end{aligned}$$

we have that \(\mathbb {P}(\Omega ^*)=1\) and u exists on [0, T] almost surely, proving the theorem. Let \(\delta \in (0,1)\) be a small quantity to be chosen later. We subdivide [0, T] into \(M=\left\lceil \frac{T}{\delta }\right\rceil \) subintervals \(I_k=[(k-1)\delta ,k\delta ]\). Let

$$\begin{aligned} \Omega _0=\bigcap _{k=1}^M\left\{ \omega \in \Omega :\left||\int _{(k-1)\delta }^{t}\,{S(t-t')\phi }\, d{W(t')}\right||_{X^{s,b}({I_k})}\le L\right\} , \end{aligned}$$

where \(L>0\) is some large quantity determined later. Now by Chebyshev’s inequality and Lemma 3.3,

$$\begin{aligned} \mathbb {P}(\Omega \setminus \Omega _0)&\le \sum _{k=1}^M\mathbb {P}\left( \left||\int _{(k-1)\delta }^{t}\,{S(t-t')\phi }\, d{W(t')}\right||_{X^{s,b}({I_k})}>L\right) \\&\le \sum _{k=1}^M\frac{1}{L^2}\mathbb {E}\left[ \left||\int _{(k-1)\delta }^{t}\,{S(t-t')\phi }\, d{W(t')}\right||_{X^{s,b}({I_k})}^2\right] \\&\lesssim \sum _{k=1}^M\frac{\delta (\delta ^2+1)}{L^2}\left\| \phi \right\| _{\mathcal {L}^2(L^2;L^2)}^2\\&\le \frac{2M\delta }{L^2}\left\| \phi \right\| _{\mathcal {L}^2(L^2;L^2)}^2\\&\lesssim \frac{T}{L^2}\left\| \phi \right\| _{\mathcal {L}^2(L^2;L^2)}^2. \end{aligned}$$

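For instance, denoting by C the implicit constant in the last line above, the explicit choice

$$\begin{aligned} L:=\left( \frac{2CT}{\varepsilon }\right) ^{\frac{1}{2}}\left\| \phi \right\| _{\mathcal {L}^2(L^2;L^2)} \end{aligned}$$

yields \(\mathbb {P}(\Omega \setminus \Omega _0)\le \frac{\varepsilon }{2}\).
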
By choosing \(L=L(\varepsilon ,T,\phi )\) sufficiently large, we may therefore bound \(\mathbb {P}(\Omega ^c_0)\) above by \(\frac{\varepsilon }{2}\). Now let

$$\begin{aligned} R=\max \left\{ \left\| u_0\right\| _{H^s}, L\right\} . \end{aligned}$$

By the local theory, there exists a unique solution u(t) to (1.1) with time of existence \(T_\text {max}\) given in (4.7). In particular, we note that for \(\omega \in \Omega _0\),

$$\begin{aligned} c\Big ({\left\| u_0\right\| _{H^s}+\left\| \Psi \right\| _{X^{s,b}_{[0,\delta ]}}}\Big )^{-\theta } \ge c\big ({R+L}\big )^{-\theta } \ge c\big (2R\big )^{-\theta }, \end{aligned}$$
(5.23)

where c is as in (4.7). By choosing \(\delta =\delta (R):=c ({2R})^{-\theta }\), we see that u(t) exists for \(t\in [0, \delta ]\) for all \(\omega \in \Omega _0\). Now define

$$\begin{aligned} \Omega _1=\left\{ \omega \in \Omega _0:\left\| u(\delta )\right\| _{H^s}\le R\right\} . \end{aligned}$$

By the same argument, u(t) exists for \(t\in [0,2\delta ]\) for all \(\omega \in \Omega _1\). Iterating this argument, we have a chain of events \(\Omega _0\supseteq \Omega _1 \supseteq \cdots \supseteq \Omega _{M-1}\) where

$$\begin{aligned} \Omega _k=\left\{ \omega \in \Omega _{k-1}:\left\| u(k\delta )\right\| _{H^s}\le R\right\} \end{aligned}$$

and u(t) exists for all \(t\in [0,(k+1)\delta ]\) on \(\Omega _k\). Setting \(\Omega _\varepsilon :=\Omega _{M-1}\), u(t) exists on the full interval [0, T] on \(\Omega _\varepsilon \). It remains to check that \(\Omega \setminus \Omega _\varepsilon \) has small probability. By Corollary 5.2, we have

$$\begin{aligned} \mathbb {P}(\Omega \setminus \Omega _\varepsilon )&\le \mathbb {P}(\Omega \setminus \Omega _0)+\sum _{k=0}^{M-1}\mathbb {P}(\Omega _{k+1}^c\cap \Omega _k)\\&\le \frac{\varepsilon }{2}+\sum _{k=0}^{M-1}\mathbb {P}\left( \left\{ \left\| u((k+1)\delta )\right\| _{H^s}>R\right\} \cap \Omega _k\right) \\&\le \frac{\varepsilon }{2}+\sum _{k=0}^{M-1}\frac{1}{R^p}\mathbb {E}\left[ \mathbb {1}_{\Omega _k} \left\| u((k+1)\delta )\right\| _{H^s}^p\right] \\&\le \frac{\varepsilon }{2}+\frac{MC_1}{R^p}\\&\le \frac{\varepsilon }{2}+\frac{2TC_1(2R)^{\theta }}{cR^p}, \end{aligned}$$

for any \(p\in \mathbb {N}\). We further enlarge R if necessary by setting

$$\begin{aligned} R=\max \left\{ \frac{2TC_1}{c}+1, L, \left\| u_0\right\| _{H^s}\right\} , \end{aligned}$$

and so we have that

$$\begin{aligned} \mathbb {P}(\Omega \setminus \Omega _\varepsilon )\le \frac{\varepsilon }{2}+2^{\theta } R^{\theta -p+1}. \end{aligned}$$

This is smaller than \(\varepsilon \) provided we choose \(p=p(\varepsilon ,\theta )>0\) sufficiently large. Thus \(\Omega _\varepsilon \) satisfies our claim. \(\square \)

5.2 SNLS with multiplicative noise

In order to globalize solutions of SNLS in the multiplicative noise case, we need to prove probabilistic control of the \(X^{s,b}\)-norm of solutions of the truncated SNLS, uniformly in the truncation parameter (Lemma 5.4). This requires a priori bounds on the mass and energy of solutions.

From Sect. 4.2, we obtained a local solution of (1.1) with multiplicative noise, with time of existence

$$\begin{aligned} \tau ^*=\lim _{R\rightarrow \infty }\tau _{R}. \end{aligned}$$

Under the hypotheses of Theorem 1.8, we shall prove global well-posedness by showing that \(\tau ^*=\infty \) almost surely.

Proposition 5.3

Let \(T_0>0\) and \(\phi \) be as in Theorem 1.8. Suppose that u(t) is a solution of (1.1) with \(F(u,\phi \xi )=u\cdot \phi \xi \) for \(t\in [0,T]\), for some stopping time \(T\in [0,T_0\wedge \tau ^*)\). Let \(C(\phi )\) be as in (3.16). Then for any \(m\in \mathbb {N}\), there exists \(C_1=C_1(m,M(u_0), T_0, C(\phi ))>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T} M(u(t))^m \right] \le C_1. \end{aligned}$$
(5.24)

Furthermore, if (1.1) is defocusing, there exists \(C_2=C_2(m,E(u_0), T_0,C(\phi ))>0\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T} E(u(t))^m \right]&\le C_2. \end{aligned}$$
(5.25)

Proof

We consider the frequency truncated equation (5.3) and apply Itô’s Lemma to obtain

$$\begin{aligned} M(u^N(t))^m&=M(u_0^N)^m\nonumber \\&\quad +m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{M(u^N(t'))^{m-1}\int _{\mathbb {T}^d}|u^N(t')|^2\phi ^Ne_j\,dx} {\,d\beta _j(t')}\right) \end{aligned}$$
(5.26)
$$\begin{aligned}&\quad +m(m-1)\sum _{|j|\le N}\int _{0}^{t}{M(u^N(t'))^{m-2}\left| \int _{\mathbb {T}^d}|u^N(t')|^2\phi ^Ne_j\, dx\right| ^2}{\,dt'} \end{aligned}$$
(5.27)
$$\begin{aligned}&\quad +m(m-1)\sum _{|j|\le N}\int _0^t{M(u^N(t'))^{m-1} \int _{\mathbb {T}^d}|u^N(t')\phi ^N e_j|^2\,dx}{\,dt'}. \end{aligned}$$
(5.28)

To bound (5.26), we use the Burkholder-Davis-Gundy inequality (Lemma 3.1) and a similar argument to the proof of Lemma 3.6 to get

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.26)\right]&\lesssim \mathbb {E}\left[ \sum _{|j|\le N} \left( \int _{0}^{T}M(u^N(t'))^{2(m-1)}\left| \int _{\mathbb {T}^d}|u^N(t')\phi e_j|^2\,dx\right| ^2\,dt' \right) ^\frac{1}{2}\right] \\&\le C(\phi )^2\mathbb {E}\left[ \left( \int _0^TM(u^N(t'))^{2m}\,dt'\right) ^\frac{1}{2}\right] \\&\le C(\phi )^2 \left( \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right] \right) ^\frac{1}{2}\left( \mathbb {E}\left[ \int _{0}^{T}{{M(u^N(t'))^{m}}} {\,dt'}\right] \right) ^\frac{1}{2}. \end{aligned}$$

Similarly, one obtains

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}\left\{ (5.27)+(5.28)\right\} \right]&\lesssim C(\phi )\mathbb {E}\left[ \int _0^TM(u^N(t'))^m\,dt'\right] . \end{aligned}$$

Hence there is a constant \(C_1=C_1(m, M(u_0), T, C(\phi ))\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right]&\le C_1 +C_1\,\mathbb {E}\left[ \int _0^TM(u^N(t'))^m\,dt'\right] \\&\quad +C(\phi )^2 \left( \mathbb {E}\left[ \sup _{t\in [0,T]}M(u^N(t))^m\right] \right) ^\frac{1}{2}\left( \mathbb {E}\left[ \int _{0}^{T}{{M(u^N(t'))^{m}}} {\,dt'}\right] \right) ^\frac{1}{2} \end{aligned}$$

The left-hand side is bounded above by \(3\mathcal {M}\), where \(\mathcal {M}\) is the maximum of the three terms on the right-hand side. In each of the three cases, we may conclude the proof via a simple rearrangement argument and Gronwall's inequality.
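
To indicate the rearrangement in the hardest case, write \(X:=\mathbb {E}\big [\sup _{t\in [0,T]}M(u^N(t))^m\big ]\) (a shorthand used only here). By Young's inequality, the third term on the right-hand side is at most \(\frac{1}{2}X+\frac{1}{2}C(\phi )^4\,\mathbb {E}\big [\int _0^T M(u^N(t'))^m\,dt'\big ]\), which can be absorbed to give

$$\begin{aligned} X\le 2C_1+\left( 2C_1+C(\phi )^4\right) \int _0^{T_0}\mathbb {E}\left[ \sup _{s\in [0,t'\wedge T]}M(u^N(s))^m\right] \,dt', \end{aligned}$$

and Gronwall's inequality then bounds X in terms of \(C_1\), \(C(\phi )\) and \(T_0\) only.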

Turning to the energy, we use Itô’s Lemma and the defocusing equation to obtain that \(E(u^N(t))^m\) equals

$$\begin{aligned}&E(u^N_0)^m \end{aligned}$$
(5.29)
$$\begin{aligned}&\quad + m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{ E(u^N(t'))^{m-1}\int _{\mathbb {T}^d}{|u^N|^{2(k+1)}\phi ^N e_j}{\,dx} }{\,d\beta _j(t')}\right) \end{aligned}$$
(5.30)
$$\begin{aligned}&\quad -m\,{{\mathrm{Im}}}\left( \sum _{|j|\le N}\int _{0}^{t}{E(u^N(t'))^{m-1} \int _{\mathbb {T}^d}{(\Delta \overline{u^N}){u^N\phi ^N e_j}}\,dx }{\,d\beta _j(t')}\right) \end{aligned}$$
(5.31)
$$\begin{aligned}&\quad + m(k+1)\sum _{|j|\le N}{\int _{0}^{t}{E(u^N(t'))^{m-1} \int _{\mathbb {T}^d}{|u^N|^{2(k+1)}|\phi ^N e_j|^2}{\,dx} }{\,dt'}} \end{aligned}$$
(5.32)
$$\begin{aligned}&\quad +m\sum _{|j|\le N}\int _{0}^{t}{E(u^N(t'))^{m-1}\int _{\mathbb {T}^d}|\nabla {(u^N\phi ^N e_j)}|^2\,dx}{\,dt'} \end{aligned}$$
(5.33)
$$\begin{aligned}&\quad +\frac{m(m-1)}{2}\left( \sum _{|j|\le N}\int _{0}^{t}{E(u^N(t'))^{m-2}\left| \int _{\mathbb {T}^d}{\left( -u^N\Delta \overline{u^N} +|u^N|^{2(k+1)}\right) \phi ^N e_j}{\,dx}\right| ^2 }{\,dt'} \right) . \end{aligned}$$
(5.34)

For (5.30), we use the Burkholder-Davis-Gundy inequality (Lemma 3.1) to get

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.30)\right]&\lesssim \mathbb {E}\left[ \left( \sum _{|j|\le N}\int _0^TE(u^N(t'))^{2(m-1)} \left| \int _{\mathbb {T}^d}|u^N|^{2k+2}\phi ^Ne_j\,dx \right| ^2\,dt' \right) ^\frac{1}{2}\right] . \end{aligned}$$

Now, with r as in Theorem 1.6,

$$\begin{aligned} \left| \int _{\mathbb {T}^d}|u^N|^{2k+2}\phi ^Ne_j\,dx\right| ^2&\le \left\| u^N\right\| _{L^{2k+2}_x}^{2(2k+2)}\left\| \phi ^Ne_j\right\| _{L^\infty _x}^2 \le E(u^N)^{2}\left\| \widehat{\phi ^Ne_j}\right\| _{\ell ^1}^2\\&\lesssim E(u^N)^{2} \Vert \phi ^N e_j\Vert _{\mathcal {F}L^{s,r}}^2, \end{aligned}$$

where the last step follows from Hölder’s inequality as in the proof of Lemma 3.5. Therefore, by Hölder’s inequality and (3.16),

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}(5.30)\right]&\lesssim C(\phi )\, \mathbb {E}\left[ \left( \int _0^TE(u^N(t'))^{2m}\,dt'\right) ^\frac{1}{2}\right] \\&\le C(\phi ) \left( \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] \right) ^\frac{1}{2}\left( \mathbb {E}\left[ \int _{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right] \right) ^\frac{1}{2}. \end{aligned}$$
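In more detail, the second inequality above combines a pointwise-in-\(\omega \) estimate with the Cauchy-Schwarz inequality on the probability space; schematically (a sketch of the step, with \(X=\sup _{t\in [0,T]}E(u^N(t))^m\) and \(Y=\int _0^T E(u^N(t'))^m\,dt'\)):

```latex
\int_0^T E(u^N(t'))^{2m}\,dt'
  \le \Big(\sup_{t\in[0,T]} E(u^N(t))^{m}\Big)\int_0^T E(u^N(t'))^{m}\,dt'
  = XY,
\qquad
\mathbb{E}\big[X^{\frac12}Y^{\frac12}\big]
  \le \mathbb{E}[X]^{\frac12}\,\mathbb{E}[Y]^{\frac12}.
```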

Similarly, we bound the other terms as follows:

$$\begin{aligned}&\mathbb {E}\left[ \sup _{t\in [0,T]}(5.31)\right] \lesssim C(\phi ) \left( \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] \right) ^\frac{1}{2}\left( \mathbb {E}\left[ \int _{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right] \right) ^\frac{1}{2},\\&\mathbb {E}\left[ \sup _{t\in [0,T]}\{(5.32)+(5.33)+(5.34)\}\right] \lesssim C(\phi )^2 \mathbb {E}\left[ \int _0^TE(u^N(t'))^m\,dt'\right] . \end{aligned}$$

It follows that there is a constant \(C_2=C_2(m, E(u_0), T, C(\phi ))\) such that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right]&\le C_2 +C_2\,\mathbb {E}\left[ \int _0^TE(u^N(t'))^m\,dt'\right] \\&\quad +C_2 \left( \mathbb {E}\left[ \sup _{t\in [0,T]}E(u^N(t))^m\right] \right) ^\frac{1}{2}\left( \mathbb {E}\left[ \int _{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right] \right) ^\frac{1}{2}. \end{aligned}$$

Arguing in the same way as for the mass of \(u^N\), via rearrangement and Gronwall’s inequality, yields the estimate for the energy of \(u^N\). This proves the proposition with \(u^N\) in place of u. The proposition then follows by letting \(N\rightarrow \infty \). \(\square \)
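To spell out the rearrangement step, write \(X_T:=\mathbb {E}[\sup _{t\in [0,T]}E(u^N(t))^m]\) and \(Y_T:=\mathbb {E}[\int _0^T E(u^N(t'))^m\,dt']\), and note \(Y_T\le \int _0^T X_{t'}\,dt'\). The last estimate and Young’s inequality \(ab\le \frac{1}{2}a^2+\frac{1}{2}b^2\) give (a sketch):

```latex
X_T \le C_2 + C_2 Y_T + C_2\, X_T^{1/2} Y_T^{1/2}
    \le C_2 + C_2 Y_T + \tfrac{1}{2} X_T + \tfrac{1}{2} C_2^{2} Y_T ,
\quad\text{hence}\quad
X_T \le 2C_2 + \big(2C_2 + C_2^{2}\big) \int_0^T X_{t'}\,dt' .
```

Gronwall’s inequality then yields \(X_T\le 2C_2\,e^{(2C_2+C_2^2)T}\), which is the claimed bound for the energy.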

We now prove the following probabilistic a priori bound on the \(X^{s,b}\)-norm of a solution.

Lemma 5.4

Let \(T, R>0\). Let \(u_{R}\) be the unique solution of (4.8) on [0, T]. There exists \(C_1=C_1(\left\| u_0\right\| _{L^2}, T, C(\phi ))\) such that

$$\begin{aligned} \mathbb {E}\left[ \left||u_{R}\right||_{X^{0,b}({[0,T\wedge \tau _R]})}\right] \le C_1. \end{aligned}$$

Moreover, if (4.8) is defocusing, there also exists \(C_2=C_2(\left\| u_0\right\| _{H^1}, T,C(\phi ))\) such that

$$\begin{aligned} \mathbb {E}\left[ \left||u_{R}\right||_{X^{1,b}({[0,T\wedge \tau _R]})}\right] \le C_2. \end{aligned}$$

The constants \(C_1\) and \(C_2\) are independent of R.

Proof

Let \(\tau \) be a stopping time with \(0< \tau \le T\wedge \tau _R\). Arguing as in the local theory, we have

$$\begin{aligned} \begin{aligned} \left||u_{R}\right||_{X^{s,b}({[0,\tau ]})}&\le C_1\left\| u_{R}(0)\right\| _{H^s}+C_2\tau ^\delta \left||u_{R}\right||_{X^{s,b}({[0,\tau ]})}^{2k+1}+\left||\Psi \right||_{X^{s,b}({[0,\tau ]})}\\&\le C_1\left\| u_{R}\right\| _{C([0,T\wedge \tau _R];H^s)}+C_2\tau ^\delta \left||u_{R}\right||_{X^{s,b}({[0,\tau ]})}^{2k+1}+\left||\Psi \right||_{X^{s,b}({[0,T\wedge \tau _R]})}. \end{aligned} \end{aligned}$$
(5.35)

Let \(K=C_1\left\| u_{R}\right\| _{C([0,T\wedge \tau _R];H^s)}+\left||\Psi \right||_{X^{s,b}({[0,T\wedge \tau _R]})}\). We claim that if \(\tau \sim K^{-\frac{2k}{\delta }}\), then

$$\begin{aligned} \left||u_{R}\right||_{X^{s,b}({[0,\tau ]})}\lesssim K. \end{aligned}$$
(5.36)

To see this, we note that the polynomial

$$\begin{aligned} p_{\tau }(x)=C_2\tau ^\delta x^{2k+1}-x+K \end{aligned}$$
(5.37)

has exactly one positive turning point at

$$\begin{aligned} x'_+=\left( {(2k+1)C_2\tau ^\delta }\right) ^{-\frac{1}{2k}}\, \end{aligned}$$

and that \(p_\tau (x'_+)<0\) if we choose \(\tau =cK^{-\frac{2k}{\delta }}\) with \(c>0\) sufficiently small. For this choice, we have \(p_\tau (0)=K>0\), and hence \(p_\tau (x)>0\) for \(0\le x< x_+\), where \(x_+\) is the unique root of \(p_\tau \) in \((0,x'_+)\). Now (5.35) is equivalent to \(p_\tau \big (\left\| u_R\right\| _{X^{s,b}([0,\tau ])}\big )\ge 0\). But since \(g(\,\cdot \,):=\left\| u_R\right\| _{X^{s,b}([0,\,\cdot \,])}\) is continuous and \(g(0)=0\), we must have

$$\begin{aligned} g(\tau )<x'_+\sim \tau ^{-\frac{\delta }{2k}}\sim K, \end{aligned}$$

which proves (5.36). Iterating this argument, we find that

$$\begin{aligned} \left\| u_{R}\right\| _{X^{s,b}([(j-1)\tau ,j\tau ])}\lesssim \left\| u_{R}\right\| _{C([0,T\wedge \tau _R];H^s)}+\left||\Psi \right||_{X^{s,b}({[0,T\wedge \tau _R]})} \end{aligned}$$
(5.38)

for all integers \(1\le j\le \lceil \frac{T\wedge \tau _R}{\tau }\rceil =:M\). Putting everything together, we have

$$\begin{aligned} \left\| u_{R}\right\| _{X^{s,b}([0,T\wedge \tau _R])}&\le \sum _{j=1}^{M}\left\| u_{R}\right\| _{X^{s,b}([(j-1)\tau ,j\tau ])}\\&\quad \lesssim \frac{T\wedge \tau _R}{\tau }\left( \left\| u_{R}\right\| _{C([0,T\wedge \tau _R];H^s)} +\left||\Psi \right||_{X^{s,b}({[0,T\wedge \tau _R]})} \right) \\&\quad \lesssim T\left( \left\| u_{R}\right\| _{C([0,T\wedge \tau _R];H^s)}+\left||\Psi \right||_{X^{s,b}({[0,T\wedge \tau _R]})} \right) ^{\frac{2k}{\delta }+1}. \end{aligned}$$

By Proposition 5.3 and Lemma 3.6, all moments of the two norms appearing in the last line above are finite. This proves Lemma 5.4. \(\square \)
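For completeness, here is the computation behind the sign condition \(p_\tau (x'_+)<0\) used in the proof (a direct check, using \(C_2\tau ^\delta (x'_+)^{2k}=\frac{1}{2k+1}\)):

```latex
p_\tau'(x) = (2k+1)C_2\tau^{\delta}x^{2k} - 1 = 0
  \;\Longleftrightarrow\;
  x = x'_+ = \big((2k+1)C_2\tau^{\delta}\big)^{-\frac{1}{2k}},
\qquad
p_\tau(x'_+) = \Big(\tfrac{1}{2k+1} - 1\Big)x'_+ + K
             = -\tfrac{2k}{2k+1}\,x'_+ + K .
```

Thus \(p_\tau (x'_+)<0\) exactly when \(x'_+>\frac{2k+1}{2k}K\), i.e. when \(\tau ^\delta \) is a sufficiently small multiple of \(K^{-2k}\); this is precisely the choice \(\tau =cK^{-\frac{2k}{\delta }}\) with \(c\) small.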

We can now conclude the proof of Theorem 1.8.

Proof of Theorem 1.8

Fix \(T>0\). Since \(\tau _{R}\) is increasing in R,

$$\begin{aligned} \mathbb {P}(\tau ^*<T)&=\lim _{R\rightarrow \infty }\mathbb {P}(\tau _R<T)= \lim _{R\rightarrow \infty }\mathbb {P}\left( \left\| u_R\right\| _{X^{s,b}([0,T\wedge \tau _R])}\ge R\right) \\&\le \lim _{R\rightarrow \infty }\frac{1}{R}\mathbb {E}\left[ \left\| u_R\right\| _{X^{s,b}([0,T\wedge \tau _R])}\right] . \end{aligned}$$

By Lemma 5.4, the expectation on the right-hand side is bounded uniformly in R, so the limit equals 0. Hence \(\mathbb {P}(\tau ^*<T)=0\) for every \(T>0\), and it follows that \(\tau ^*=\infty \) almost surely. \(\square \)