1 Introduction

The nonlinear Schrödinger equation occurs as a basic model in many areas of physics: hydrodynamics, plasma physics, optics, molecular biology, chemical reactions, etc. It describes the propagation of waves in media with both nonlinear and dispersive responses.

In this article, we investigate the long-time behaviour of the following stochastic nonlinear Schrödinger equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{d} u(t)+\left[ {i}\Delta u(t)+{i}\alpha |u(t)|^{2\sigma } u(t) +\lambda u(t) \right] \,{\textrm{d}}t = \Phi \,{\textrm{d}} W(t) \\ u(0)=u_0, \end{array}\right. } \end{aligned}$$
(1.1)

The unknown is \(u:{\mathbb {R}}^d\rightarrow {\mathbb {C}}\). We consider \(\sigma >0\), \(\lambda >0\) and \(\alpha \in \{-1,1\}\); for \(\alpha =1\) the equation is called focusing and for \(\alpha =-1\) defocusing. On the right-hand side there is a stochastic forcing term, which is white in time and coloured in space.

Under suitable conditions on the power of the nonlinearity, i.e. on \(\sigma \), many results are known about existence and uniqueness of solutions, on different spatial domains and with different noises; see [1, 2, 3, 8, 11, 13, 14]. These results are mostly obtained without damping, i.e. with \(\lambda =0\), but they extend easily to the case \(\lambda >0\).

When there is no damping and no forcing term (i.e. \(\lambda =0\) and \(\Phi =0\)), the Schrödinger equation is conservative. However, with a noise and a damping term, we expect that the energy injected by the noise is dissipated by the damping term; because of this balance it is meaningful to look for stationary solutions or invariant measures. Ekren et al. [16] and Kim [22] provide the existence of invariant measures for Eq. (1.1) for any damping coefficient \(\lambda >0\); see also the more general setting of [7] for the two-dimensional case in a different spatial domain and with multiplicative noise, and the book [20] for the numerical analysis approach. Notice that the damping \(\lambda u\) is weaker than the dissipation given by a Laplacian \(-\lambda \Delta u\); for this reason we say that \(\lambda u\) is a zero-order dissipation. As a consequence, results on existence or uniqueness of invariant measures are harder to obtain for the damped Schrödinger equation than for stochastic parabolic equations (see, for example, [12]). A similar issue appears in the stochastic damped 2D Euler equations, for which the existence of invariant measures has been recently proven in [5]; there again the difficulty comes from the absence of the strong dissipation that the Laplacian provides in the Navier–Stokes equations.

Let us point out that the existence of invariant measures depends on the damping term as well as on the forcing term. On the other hand, without the damping term it is well known that the stochastic Schrödinger equation has a different long-time behaviour; in [19] it is proved that stochastic solutions may scatter at large time in the subcritical or defocusing case.

The question of the uniqueness of invariant measures is quite challenging for SPDEs with a zero-order dissipation. Debussche and Odasso [15] proved the uniqueness of the invariant measure for the cubic focusing Schrödinger equation (1.1), i.e. \(\sigma =\alpha =1\), when the spatial domain is a bounded interval; however, no uniqueness results are known in larger dimension. For the one-dimensional stochastic damped periodic KdV equation, there is a recent result by Glatt-Holtz et al. [17]. By contrast, for nonlinear SPDEs of parabolic type, i.e. with a stronger dissipation term, the uniqueness issue has been solved in many cases; see, for example, the book [12] by Da Prato and Zabczyk, and the many examples in the paper [18] by Glatt-Holtz, Mattingly and Richards, dealing with the coupling technique. Let us point out that the coupling technique allows for uniqueness without restriction on the damping parameter \(\lambda \), but all the examples solved so far are set in a bounded spatial domain and not in \({\mathbb {R}}^d\).

The aim of our paper is to investigate the uniqueness of the invariant measures for Eq. (1.1) in \({\mathbb {R}}^d\) in dimension \(d\le 3\), with some restrictions on the nonlinearity when \(d=3\); our technique fails in higher dimensions. Notice that also in the deterministic setting the results on the attractor are known only for \(d\le 3\) (see [23]). Our main result is Theorem 5.1; it provides a sufficient condition for the uniqueness of the invariant measure. This condition (5.1) involves \(\lambda \) and the intensity of the noise; to optimize it, in Sects. 2.2 and 3 we perform a detailed analysis of how the solution depends on the damping parameter \(\lambda \).

As far as the contents of this paper are concerned, in Sect. 2 we introduce the mathematical setting and refine known moment estimates on the solution; in Sect. 3, by means of the Strichartz estimates, we prove a regularity result on the solutions for \(d=2\) and \(d=3\); this will allow us to prove in Sect. 4 that the support of any invariant measure is contained in \(V\cap L^\infty ({\mathbb {R}}^d)\), and some estimates of the moments are given. Finally, Sect. 5 presents the uniqueness result. The four appendices contain auxiliary results.

2 Assumptions and basic results

For \(p\ge 1\), \(L^p({\mathbb {R}}^d)\) is the classical Lebesgue space of complex-valued functions, and the inner product in the real Hilbert space \(L^2({\mathbb {R}}^d)\) is denoted by

$$\begin{aligned} \langle u,v\rangle =\int _{{\mathbb {R}}^d} u(y){\overline{v}}(y) \textrm{d}y. \end{aligned}$$

We consider the Laplace operator \(\Delta \) as a linear operator in \(L^2({\mathbb {R}}^d)\); so

$$\begin{aligned} A_0=-\Delta ,\qquad A_1=1-\Delta \end{aligned}$$

are nonnegative linear operators and \(\{e^{{i}tA_0 }\}_{t \in {\mathbb {R}}}\) is a unitary group in \(L^2({\mathbb {R}}^d)\). Moreover, for \(s\ge 0\) we consider the power operator \(A_1^{s/2}\) in \(L^2({\mathbb {R}}^d)\) with domain \(H^{s}=\{u \in L^2({\mathbb {R}}^d): \Vert A_1^{s/2}u\Vert _{L^2({\mathbb {R}}^d)}<\infty \}\). Our two main spaces are \(H:=L^2({\mathbb {R}}^d)\) and \(V:=H^1({\mathbb {R}}^d)\). We set \(H^{-s}({\mathbb {R}}^d)\) for the dual space of \(H^s({\mathbb {R}}^d)\) and denote again by \(\langle \cdot ,\cdot \rangle \) the duality bracket.
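As a purely illustrative aside (not part of the analysis of this paper), the norm \(\Vert A_1^{s/2}u\Vert _{L^2({\mathbb {R}}^d)}\) can be approximated on a periodic grid through the Fourier multiplier \((1+|\xi |^2)^{s/2}\). In the following NumPy sketch, the grid, the Gaussian test function and the helper name `hs_norm` are our own choices:

```python
import numpy as np

def hs_norm(u, s, dx):
    """Approximate ||A_1^{s/2} u||_{L^2} = ||(1 - Delta)^{s/2} u||_{L^2}
    on a uniform periodic grid via the Fourier multiplier (1 + |xi|^2)^{s/2}."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # discrete frequencies
    u_hat = np.fft.fft(u)
    # discrete Parseval identity: ||u||_{L^2}^2 ~ (dx / n) * sum |u_hat|^2
    return np.sqrt(dx / n * np.sum((1 + xi**2) ** s * np.abs(u_hat) ** 2))

x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-x**2)                              # smooth, rapidly decaying test function

l2 = np.sqrt(dx * np.sum(np.abs(u) ** 2))      # plain L^2 norm
# s = 0 recovers the L^2 norm; larger s gives a larger (stronger) norm
print(hs_norm(u, 0.0, dx), l2, hs_norm(u, 1.0, dx))
```

For \(s=0\) the multiplier is identically 1, so the computation reduces to the discrete Parseval identity, consistently with \(H^0=L^2({\mathbb {R}}^d)\).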

We define the generalized Sobolev spaces \(H^{s,p}({\mathbb {R}}^d)\) with norm given by \(\Vert u\Vert _{H^{s,p}({\mathbb {R}}^d)}=\Vert A_1^{s/2}u\Vert _{L^p({\mathbb {R}}^d)}\). We recall the Sobolev embedding theorem, see, for example, [4, Theorem 6.5.1]: if \(1<q<p<\infty \) with

$$\begin{aligned} \frac{1}{p}=\frac{1}{q}-\frac{r-s}{d}, \end{aligned}$$

then the following inclusion holds

$$\begin{aligned} H^{r,q}({\mathbb {R}}^d)\subset H^{s,p}({\mathbb {R}}^d) \end{aligned}$$

and there exists a constant C such that \(\Vert u\Vert _{H^{s,p}(\mathbb R^d)}\le C \Vert u\Vert _{H^{r,q}({\mathbb {R}}^d)}\) for all \(u \in H^{r,q}({\mathbb {R}}^d)\).
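As a quick sanity check of the embedding exponents (our own illustration, not from the paper), one can solve the relation \(\frac{1}{p}=\frac{1}{q}-\frac{r-s}{d}\) for p; the helper name `embedding_target_p` below is hypothetical:

```python
from fractions import Fraction

def embedding_target_p(q, r, s, d):
    """Solve 1/p = 1/q - (r - s)/d for the Sobolev embedding
    H^{r,q}(R^d) into H^{s,p}(R^d); return p, or None if no finite
    exponent with q < p < infinity exists."""
    inv_p = Fraction(1, q) - Fraction(r - s, d)
    if inv_p <= 0:
        return None
    p = 1 / inv_p
    return p if p > q else None

print(embedding_target_p(q=2, r=1, s=0, d=3))   # prints 6: H^1(R^3) in L^6(R^3)
print(embedding_target_p(q=2, r=1, s=0, d=2))   # prints None: no finite critical p
```

For \(q=2\), \(r=1\), \(s=0\), \(d=3\) this recovers the classical embedding \(H^1({\mathbb {R}}^3)\subset L^6({\mathbb {R}}^3)\), while for \(d=2\) the formula gives no finite critical exponent, consistently with \(H^1({\mathbb {R}}^2)\subset L^p({\mathbb {R}}^2)\) holding for every finite p only.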

Remark 2.1

For \(d=1\), the space V is a subset of \(L^\infty ({\mathbb {R}})\) and is a multiplicative algebra. This simplifies the analysis of the Schrödinger equation (1.1). However, for \(d\ge 2\) the analysis is more involved.

We write the nonlinearity as

$$\begin{aligned} F_\alpha (u):=\alpha |u|^{2\sigma }u. \end{aligned}$$
(2.1)

Lemma C.1 provides a priori estimates on it.

As far as the stochastic term is concerned, we consider a real Hilbert space U with an orthonormal basis \(\{e_j\}_{j\in {\mathbb {N}}}\) and a complete probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\). Let W be a canonical cylindrical Wiener process on U, adapted to a filtration \({\mathbb {F}}\) satisfying the usual conditions. We can write it as a series

$$\begin{aligned} W(t)= \sum _{j=1}^\infty W_j(t) e_j, \end{aligned}$$

with \(\{W_j\}_j\) a sequence of i.i.d. real Wiener processes (see, for example, [12]). Hence,

$$\begin{aligned} \Phi W(t)= \sum _{j=1}^\infty W_j(t) \Phi e_j \end{aligned}$$
(2.2)

for a given linear operator \(\Phi :U\rightarrow V\).

Now, we rewrite the Schrödinger equation (1.1) in the abstract form as

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{d} u(t)+\left[ -{i}A_0 u(t)+{i}F_\alpha ( u(t)) +\lambda u(t) \right] \,{\textrm{d}}t = \Phi \,{\textrm{d}} W(t) \\ u(0)=u_0 \end{array}\right. } \end{aligned}$$
(2.3)

We work under the following assumptions on the noise and the nonlinearity. The initial datum \(u_0\) is assumed to belong to V.

Assumption 2.2

(on the noise) We assume that \(\Phi :U\rightarrow V\) is a Hilbert–Schmidt operator, i.e.

$$\begin{aligned} \Vert \Phi \Vert _{L_\textrm{HS}(U,V)} := \left( \sum _{j=1}^\infty \Vert \Phi e_j\Vert ^2_V\right) ^{1/2} < \infty . \end{aligned}$$
(2.4)

This means that

$$\begin{aligned} \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^2=\sum _{j=1}^\infty \Vert A_1^{1/2}\Phi e_j\Vert ^2_H =\sum _{j=1}^\infty \Vert \Phi e_j\Vert ^2_H +\sum _{j=1}^\infty \Vert \nabla \Phi e_j\Vert ^2_H <\infty \end{aligned}$$

and it implies that the series (2.2) converges in V.

To compare our setting with the more general two-dimensional one of our previous paper [7], we point out that \(\Phi \) is also a Hilbert–Schmidt operator from U to H (and we denote \(\Vert \Phi \Vert _{L_\textrm{HS}(U;H)} := \left( \sum \nolimits _{j \in {\mathbb {N}}}\Vert \Phi e_j\Vert ^2_H\right) ^{1/2}\)) and, for \(d=2\), a \(\gamma \)-radonifying operator from U to \(L^{p}({\mathbb {R}}^2)\) for any finite p.
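To illustrate the identity \({\mathbb {E}}\Vert \Phi W(t)\Vert ^2 = t\Vert \Phi \Vert ^2_{L_\textrm{HS}}\) behind the series (2.2), here is a small Monte Carlo sketch with a finite-dimensional, truncated noise; the matrix playing the role of \(\Phi \), the truncation level and the seed are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

J, m, t = 4, 8, 1.0                      # number of modes, ambient dimension, time
Phi = rng.standard_normal((m, J)) / 4.0  # column j plays the role of Phi e_j
hs_sq = np.sum(Phi**2)                   # ||Phi||_{HS}^2 = sum_j ||Phi e_j||^2

# Phi W(t) = sum_j W_j(t) Phi e_j with W_j(t) ~ N(0, t) i.i.d.
n_paths = 200_000
W = np.sqrt(t) * rng.standard_normal((n_paths, J))
PhiW = W @ Phi.T                         # each row is one sample of Phi W(t)

# independence of the W_j gives E ||Phi W(t)||^2 = t ||Phi||_{HS}^2
mc = np.mean(np.sum(PhiW**2, axis=1))
print(mc, t * hs_sq)                     # the two values should be close
```

The cross terms \({\mathbb {E}}[W_i(t)W_j(t)]\), \(i\ne j\), vanish by independence, which is why only the Hilbert–Schmidt norm survives in the mean.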

Assumption 2.3

(on the nonlinearity (2.1))

  • If \(\alpha =1\) (focusing), let \(0\le \sigma <\frac{2}{d}\).

  • If \(\alpha =-1\) (defocusing), let \({\left\{ \begin{array}{ll}0\le \sigma < \frac{2}{d-2},&{} \text { for }d\ge 3\\ \sigma \ge 0, &{} \text { for }d\le 2\end{array}\right. }\)

We recall the continuous embeddings

$$\begin{aligned} \begin{aligned}&H^1({\mathbb {R}}^2) \subset L^{p}({\mathbb {R}}^2) \qquad \forall \ p\in [2, \infty )\\&H^1({\mathbb {R}}^d) \subset L^{p}({\mathbb {R}}^d) \qquad \forall \ p \in [2, \tfrac{2d}{d-2}] \text { for } d\ge 3 \end{aligned} \end{aligned}$$

Hence, for \(\sigma \) chosen as in Assumption 2.3 there is the continuous embedding

$$\begin{aligned} H^1({\mathbb {R}}^d) \subset L^{2+2\sigma }({\mathbb {R}}^d). \end{aligned}$$
(2.5)

Moreover, if \(\sigma d<2(\sigma +1)\), the following Gagliardo–Nirenberg inequality holds

$$\begin{aligned} \Vert u\Vert _{L^{2+2\sigma }({\mathbb {R}}^d)} \le C \Vert u\Vert _{L^2(\mathbb R^d)}^{1-\frac{\sigma d}{2(1+\sigma )}} \Vert \nabla u\Vert _{L^2(\mathbb R^d)}^{\frac{\sigma d}{2(1+\sigma )}}. \end{aligned}$$
(2.6)

In particular, this holds for the values of \(\sigma \) specified in Assumption 2.3. In the focusing case, thanks to the Young inequality for any \(\epsilon >0\) there exists \(C_\epsilon >0\) such that

$$\begin{aligned} \Vert u \Vert _{L^{2+2\sigma }({\mathbb {R}}^d)}^{2+2\sigma } \le \epsilon \Vert \nabla u \Vert _{L ^2({\mathbb {R}}^d)}^2 +C_\epsilon \Vert u \Vert _{L^2({\mathbb {R}}^d)}^{2+\frac{4\sigma }{2-\sigma d}}. \end{aligned}$$
(2.7)
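For the reader's convenience, we sketch how (2.7) follows from (2.6) (this short derivation is ours): raising (2.6) to the power \(2+2\sigma \) gives

$$\begin{aligned} \Vert u\Vert _{L^{2+2\sigma }({\mathbb {R}}^d)}^{2+2\sigma } \le C \Vert u\Vert _{L^2({\mathbb {R}}^d)}^{2+2\sigma -\sigma d} \Vert \nabla u\Vert _{L^2({\mathbb {R}}^d)}^{\sigma d}, \end{aligned}$$

and the Young inequality with exponents \(p=\frac{2}{\sigma d}\) and \(p'=\frac{2}{2-\sigma d}\) (admissible since \(\sigma d<2\)), applied to the product \(\Vert \nabla u\Vert _{L^2}^{\sigma d}\cdot C\Vert u\Vert _{L^2}^{2+2\sigma -\sigma d}\), yields (2.7), because \(\frac{2(2+2\sigma -\sigma d)}{2-\sigma d}=2+\frac{4\sigma }{2-\sigma d}\).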

Above and in what follows, C denotes a generic positive constant which may change from one line to the next; the only exception is G, which denotes the particular constant in the inequality (2.13) below, coming from the Gagliardo–Nirenberg inequality. Moreover, we shall use the following notation: if \(a,b\ge 0\) satisfy the inequality \(a \le C_A b\) with a constant \(C_A>0\) depending on the expression A, we write \(a \lesssim _A b\); for a generic constant we put no subscript. If we have \(a \lesssim _A b\) and \(b \lesssim _A a\), we write \(a \simeq _A b\).

We recall the classical invariant quantities for the deterministic unforced Schrödinger equation (\(\lambda =0\), \(\Phi =0\)), the mass and the energy (see [9]):

$$\begin{aligned}&{\mathcal {M}}(u)=\Vert u\Vert ^2_H, \end{aligned}$$
(2.8)
$$\begin{aligned}&{\mathcal {H}}(u)=\frac{1}{2} \Vert \nabla u\Vert ^2_H -\frac{\alpha }{2(1+\sigma )}\Vert u\Vert _{L^{2+2\sigma }(\mathbb R^d)}^{2+2\sigma }. \end{aligned}$$
(2.9)

They are both well defined on V, thanks to (2.5).

Remark 2.4

In the defocusing case \(\alpha =-1\), we have

$$\begin{aligned} {\mathcal {H}}(u)\ge \frac{1}{2} \Vert \nabla u\Vert ^2_H\ge 0 \qquad \forall u\in V \end{aligned}$$
(2.10)

and

$$\begin{aligned} {\mathcal {H}}(u)\le \frac{1}{2} \Vert u\Vert ^2_V+ C_{\sigma ,d} \Vert u\Vert _V^{2+2\sigma } \qquad \forall u\in V. \end{aligned}$$
(2.11)

In the focusing case \(\alpha =1\), the energy has no positive sign, but we can modify it by adding a term and recover the sign property. We introduce the modified energy

$$\begin{aligned} \tilde{{\mathcal {H}}}(u)= \frac{1}{2} \Vert \nabla u\Vert _H^2-\frac{1}{2(1+\sigma )} \Vert u\Vert _{L^{2(1+\sigma )}(\mathbb R^d)}^{2+2\sigma } +G\Vert u\Vert _H^{2+\frac{4\sigma }{2-\sigma d}} \end{aligned}$$
(2.12)

where G is the constant appearing in the following particular form of (2.7)

$$\begin{aligned} \frac{1}{2(1+\sigma )} \Vert u\Vert _{L^{2(1+\sigma )}({\mathbb {R}}^d)}^{2+2\sigma } \le \frac{1}{4+\sigma } \Vert \nabla u\Vert _H^2+G \Vert u\Vert _H^{2+\frac{4\sigma }{2-\sigma d}}. \end{aligned}$$
(2.13)

Even if G depends on \(\sigma \) and d, for short we shall write simply G. Moreover, by Assumption 2.3 we have \(2-\sigma d>0\) in the focusing case, and hence also \(2-\sigma >0\).

Therefore,

$$\begin{aligned} \tilde{{\mathcal {H}}}(u)\ge \frac{2+\sigma }{8+2\sigma } \Vert \nabla u\Vert _H^2\ge 0 \qquad \forall u\in V. \end{aligned}$$
(2.14)
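Indeed, (2.14) is obtained by combining the definition (2.12) with (2.13):

$$\begin{aligned} \tilde{{\mathcal {H}}}(u)\ge \frac{1}{2} \Vert \nabla u\Vert _H^2 -\frac{1}{4+\sigma } \Vert \nabla u\Vert _H^2 -G\Vert u\Vert _H^{2+\frac{4\sigma }{2-\sigma d}} +G\Vert u\Vert _H^{2+\frac{4\sigma }{2-\sigma d}} = \frac{2+\sigma }{8+2\sigma } \Vert \nabla u\Vert _H^2. \end{aligned}$$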

Moreover, from the definition (2.12) and the continuous embedding (2.5) we get

$$\begin{aligned} \tilde{{\mathcal {H}}}(u)\le \frac{1}{2} \Vert u\Vert ^2_V+ C_{\sigma ,d} \Vert u\Vert _V^{2+2\sigma } +G\Vert u\Vert _V^{2+\frac{4\sigma }{2-\sigma d}} \qquad \forall u\in V. \end{aligned}$$
(2.15)

Next in Sect. 2.1 we recall the known results on solutions and invariant measures; then in Sect. 2.2 we provide the improved estimates for the mass and the energy.

2.1 Basic results

We recall from [14] the basic results on the solutions; for any \(u_0\in V\) there exists a unique global solution \(u=\{u(t;u_0)\}_{t\ge 0}\), which is a continuous V-valued process. Here, uniqueness is meant as pathwise uniqueness. Actually, their result is given without damping but one can easily pass from \(\lambda =0\) to any \(\lambda >0\). Let us state the result from De Bouard and Debussche [14].

Theorem 2.5

Under Assumptions 2.2 and 2.3, for every \(u_0\in V\) there exists a unique V-valued and continuous solution of (2.3). This is a Markov process in V. Moreover, for any finite \(T>0\) and integer \(m\ge 1\) there exist positive constants \(C_1\) and \(C_2\) (depending on T, m and \(\Vert u_0\Vert _V\)) such that

$$\begin{aligned} {\mathbb {E}} \sup _{0\le t\le T}\left[ {\mathcal {M}}(u(t))^m \right] \le C_1 \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}} \left[ \sup _{0\le t\le T}{\mathcal {H}}(u(t))\right] \le C_2 . \end{aligned}$$

We notice that the last estimate can be generalized to consider any power \(m>1\) of the energy in the defocusing case as well as for the modified energy in the focusing case, namely

$$\begin{aligned} {\mathbb {E}} \left[ \sup _{0\le t\le T}[{\mathcal {H}}(u(t))]^m\right] <\infty \end{aligned}$$
(2.16)

and

$$\begin{aligned} {\mathbb {E}} \left[ \sup _{0\le t\le T}[\tilde{\mathcal H}(u(t))]^m\right] <\infty . \end{aligned}$$
(2.17)

This provides

$$\begin{aligned} {\mathbb {E}} \left[ \sup _{0\le t\le T}\Vert u(t)\Vert _V^m \right] <\infty . \end{aligned}$$
(2.18)

These estimates are in [16].
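Let us briefly explain why the energy bounds imply (2.18) (a remark for the reader's convenience): the V-norm is controlled by the mass together with the (modified) energy, since by (2.10) and (2.14)

$$\begin{aligned} \Vert u\Vert _V^2={\mathcal {M}}(u)+\Vert \nabla u\Vert _H^2 \le {\mathcal {M}}(u)+2{\mathcal {H}}(u) \quad (\alpha =-1), \qquad \Vert u\Vert _V^2\le {\mathcal {M}}(u)+\frac{8+2\sigma }{2+\sigma }\tilde{{\mathcal {H}}}(u) \quad (\alpha =1), \end{aligned}$$

so (2.18) follows from Theorem 2.5 together with (2.16)–(2.17).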

As soon as a unique solution in V is defined, we can introduce the Markov semigroup. Let us denote by \(u(t;x)\) the solution evaluated at time \(t>0\), with initial value \(x\in V\). We define

$$\begin{aligned} P_t f(x)={\mathbb {E}}[f(u(t;x))] \end{aligned}$$
(2.19)

for any bounded Borel function \(f:V\rightarrow {\mathbb {R}}\).

A probability measure \(\mu \) on the Borel subsets of V is said to be an invariant measure for (2.3) when

$$\begin{aligned} \int _V P_t f\ \textrm{d}\mu =\int _V f\ \textrm{d}\mu \qquad \forall t\ge 0, f\in B_b(V). \end{aligned}$$
(2.20)

We recall Theorem 3.4 from [16] on existence of invariant measures.

Theorem 2.6

Under Assumptions 2.2 and 2.3, there exists an invariant measure supported in V.

2.2 Mean estimates

In this section, we revise some bounds, valid for \(t\in [0,\infty )\), on the moments of the mass, the energy and the modified energy, in order to see how these quantities depend on the damping coefficient \(\lambda \). This improves the results of [16, Lemma 5.1]. Actually, their Lemma 5.1 has to be modified in the focusing case (see Proposition 2.8).

This is the result for the mass \({\mathcal {M}}(u)=\Vert u\Vert ^2_H\).

Proposition 2.7

Let \(u_0\in V\). Then under Assumptions 2.2 and 2.3, for every \(m\ge 1\) there exists a positive constant C (depending on m) such that

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {M}}(u(t))^m\right] \le e^{-\lambda m t}{\mathcal {M}}(u_0)^m + C \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} \lambda ^{-m} \end{aligned}$$
(2.21)

for any \(t\ge 0\).

Proof

Let us start by proving the estimate (2.21) for \(m=1\). We apply the Itô formula to \({\mathcal {M}}(u(t))\) (see [16, Theorem 3.2])

$$\begin{aligned} \textrm{d}{\mathcal {M}}(u(t))+ 2\lambda {\mathcal {M}}(u(t))\textrm{d}t =\Vert \Phi \Vert ^2_{L_\textrm{HS}(U;H)}\textrm{d}t+2{\text {Re}}\langle u(t), \Phi \textrm{d}W(t) \rangle . \end{aligned}$$

Taking the expected value and using the fact that the stochastic integral is a martingale by Theorem 2.5, we obtain, for any \(t \ge 0\),

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}}\left[ {\mathcal {M}}(u(t))\right] =-2\lambda {\mathbb {E}}\left[ {\mathcal {M}}(u(t))\right] + \Vert \Phi \Vert ^2_{L_\textrm{HS}(U,H)}. \end{aligned}$$

Solving this ODE, we obtain

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {M}}(u(t))\right]&=e^{-2\lambda t}{\mathcal {M}}(u_0)+\Vert \Phi \Vert ^2_{L_\textrm{HS}(U,H)}\int _{0}^{t} e^{-2\lambda (t-s)}\, {{\textrm{d}}}s\\&\le e^{-2\lambda t}{\mathcal {M}}(u_0)+\frac{1}{2\lambda }\Vert \Phi \Vert ^2_{L_\textrm{HS}(U,H)}, \end{aligned}$$

which proves (2.21) for \(m=1\).
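As a numerical sanity check of this first step (our own illustration; the parameter values are arbitrary), one can compare the closed-form solution of the linear ODE \(\frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}}{\mathcal {M}}=-2\lambda {\mathbb {E}}{\mathcal {M}}+\Vert \Phi \Vert ^2\) with an explicit Euler integration:

```python
import numpy as np

lam, phi_sq, M0, T = 0.7, 0.9, 2.0, 5.0   # damping, ||Phi||_{HS}^2, M(u_0), horizon

def em_exact(t):
    """Closed-form solution of d/dt E M = -2*lam*E M + phi_sq, E M(0) = M0."""
    return np.exp(-2 * lam * t) * M0 + phi_sq * (1 - np.exp(-2 * lam * t)) / (2 * lam)

# explicit Euler integration of the same ODE
dt, em = 1e-4, M0
for _ in range(int(T / dt)):
    em += dt * (-2 * lam * em + phi_sq)

print(em, em_exact(T))   # the two values agree up to the Euler error
# the bound (2.21) with m = 1: drop the factor (1 - e^{-2*lam*t}) <= 1
print(em_exact(T) <= np.exp(-2 * lam * T) * M0 + phi_sq / (2 * lam))
```

The last line checks that estimating \(1-e^{-2\lambda t}\le 1\) in the Duhamel integral is exactly what produces the \(\lambda ^{-1}\) term in (2.21).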

For \(m\ge 2\), we apply the Itô formula to \({\mathcal {M}}(u(t))^{m}\)

$$\begin{aligned} {\mathcal {M}}(u(t))^{m} =&\, {\mathcal {M}}(u_0)^{m}-2\lambda m \int _{0}^{t} {\mathcal {M}}(u(s))^{m}\, {\textrm{d}}s\nonumber \\&+2m\int _0^t {\mathcal {M}}(u(s))^{m-1} \text {Re}\langle u(s),\Phi {\textrm{d}}W(s)\rangle \nonumber \\&+m \Vert \Phi \Vert ^{2}_{L_\textrm{HS}(U,H)}\int _{0}^{t} {\mathcal {M}}(u(s))^{m-1}\, {\textrm{d}}s\nonumber \\&+ 2(m-1)m\int _{0}^{t} {\mathcal {M}}(u(s))^{m-2}\sum _{j=1}^{\infty }[\text {Re}\langle u(s),\Phi e_j \rangle ]^2\,{\textrm{d}}s. \end{aligned}$$
(2.22)

With the Young inequality, we get

$$\begin{aligned} \begin{aligned}&m \Vert \Phi \Vert ^2_{L_\textrm{HS}(U,H)} {\mathcal {M}}(u)^{m-1} + 2(m-1)m {\mathcal {M}}(u)^{m-2}\sum _{j=1}^{\infty }[\text {Re}\langle u,\Phi e_j \rangle ]^2\\&\quad \le m (2m-1) \Vert \Phi \Vert ^2_{L_\textrm{HS}(U,H)}{\mathcal {M}}(u)^{m-1}\\&\quad \le \epsilon \lambda m {\mathcal {M}}(u)^m +C_{\epsilon ,m} \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} \lambda ^{1-m} \end{aligned} \end{aligned}$$

for any \(\epsilon >0\). Hence,

$$\begin{aligned}&{\mathcal {M}}(u(t))^{m} \le {\mathcal {M}}(u_0)^{m}-(2-\epsilon )\lambda m \int _0^t {\mathcal {M}}(u(s))^{m}\, {\textrm{d}}s\nonumber \\&\quad +C_{\epsilon ,m} \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} \lambda ^{1-m} t +2m\int _0^t {\mathcal {M}}(u(s))^{m-1} \text {Re}\langle u(s),\Phi {\textrm{d}}W(s)\rangle . \end{aligned}$$
(2.23)

By Theorem 2.5, we know that the stochastic integral in (2.23) is a martingale, so taking the expected value on both sides of (2.23) we obtain

$$\begin{aligned}{} & {} {\mathbb {E}} [{\mathcal {M}}(u(t))^{m}] \le {\mathcal {M}}(u_0)^{m} \\{} & {} -(2-\epsilon )\lambda m \int _0^t {\mathbb {E}}[ {\mathcal {M}}(u(s))^{m}]\, {\textrm{d}}s +\Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)}C_{\epsilon ,m} \lambda ^{1-m} t. \end{aligned}$$

Choosing \(\epsilon =1\), by means of Gronwall inequality we get

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[ {\mathcal {M}}(u(t))^{m}] \le e^{-\lambda m t} {\mathcal {M}}(u_0)^{m} +\Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} C_m \lambda ^{1-m}\int _0^t e^{-\lambda m(t-s)}\textrm{d}s\\&\quad \le e^{-\lambda m t} {\mathcal {M}}(u_0)^{m} + \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} \frac{C_m}{m} \lambda ^{-m} \end{aligned} \end{aligned}$$

for any \(t\ge 0\).

If \(1<m<2\), then we use the Hölder inequality and the estimate for \(m=2\):

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}[ {\mathcal {M}}(u(t))^{m}] \le \left( {\mathbb {E}} {[}{\mathcal {M}}(u(t))^{2}]\right) ^{\frac{m}{2}}\\&\quad \le \left( e^{-2\lambda t} {\mathcal {M}}(u_0)^{2} +\frac{1}{4} \Vert \Phi \Vert ^{4}_{L_\textrm{HS}(U;H)} \lambda ^{-2}\right) ^{\frac{m}{2}}\\&\quad \le e^{-m\lambda t} {\mathcal {M}}(u_0)^{m} +C \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;H)} \lambda ^{-m} \end{aligned} \end{aligned}$$

for any \(t\ge 0\), where the last step uses the subadditivity \((a+b)^{m/2}\le a^{m/2}+b^{m/2}\), valid since \(m\le 2\). \(\square \)

Notice that the estimates on the mass do not depend on \(\alpha \); the dependence on \(\alpha \) appears instead in the next result, concerning the energy \({\mathcal {H}}(u)\) given in (2.9) and the modified energy \(\tilde{{\mathcal {H}}}(u)\) given in (2.12). We introduce the functions

$$\begin{aligned} \phi _1(\sigma ,\lambda ,\Phi ) = \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2} + \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2+2\sigma }\lambda ^{-\sigma } \end{aligned}$$
(2.24)

and

$$\begin{aligned} \phi _2(d, \sigma ,\lambda , \Phi ) = \phi _1+ \Vert \Phi \Vert ^{2+\frac{\sigma }{\sigma +1}(1+2\frac{2\sigma +1}{2-\sigma d})}_{L_\textrm{HS}(U;V)} \lambda ^{-\frac{1}{2}\frac{\sigma }{\sigma +1}(1+2\frac{2\sigma +1}{2-\sigma d})} +\Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{ -\frac{2\sigma }{2-\sigma d}}.\nonumber \\ \end{aligned}$$
(2.25)

Both mappings \(\lambda \mapsto \phi _1(\sigma ,\lambda ,\Phi )\) and \(\lambda \mapsto \phi _2(d,\sigma ,\lambda ,\Phi )\) are strictly decreasing. The estimates for the energy in the defocusing case will depend on \(\phi _1\), and the estimates for the modified energy in the focusing case will depend on \(\phi _2\).
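The monotonicity in \(\lambda \) of \(\phi _1\) can also be confirmed numerically; in this sketch (our own, purely illustrative) the values of \(\sigma \) and of the norm \(\Vert \Phi \Vert _{L_\textrm{HS}(U;V)}\) (called `phi_v` below) are arbitrary choices:

```python
import numpy as np

def phi1(sigma, lam, phi_v):
    """phi_1 from (2.24); phi_v stands for the norm ||Phi||_{HS(U;V)}."""
    return phi_v**2 + phi_v ** (2 + 2 * sigma) * lam ** (-sigma)

sigma, phi_v = 0.5, 1.3           # arbitrary sample values
lams = np.linspace(0.1, 10.0, 200)
vals = phi1(sigma, lams, phi_v)
print(np.all(np.diff(vals) < 0))  # prints True: strictly decreasing in lambda
```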

This is the result on the power moments of \({\mathcal {H}}(u)\) and \(\tilde{{\mathcal {H}}}(u)\).

Proposition 2.8

Let \(u_0\in V\). Under Assumptions 2.2 and 2.3, we have the following estimates:

(i):

When \(\alpha =-1\), for every \(m\ge 1\) there exists a positive constant \(C=C(d, \sigma ,m)\) such that

$$\begin{aligned} {\mathbb {E}} [{\mathcal {H}}(u(t))^m] \le e^{-\lambda m t}{\mathcal {H}}(u_0)^m+ C \phi _1^{m}\lambda ^{-m} \end{aligned}$$
(2.26)

for any \(t\ge 0\).

(ii):

When \(\alpha =1\), for every \(m\ge 1\) there exist positive constants \(a=a(d,\sigma )\), \(C_1=C(d, \sigma ,m)\) and \(C_2=C(d, \sigma ,m)\) such that

$$\begin{aligned}{} & {} {\mathbb {E}} [\tilde{{\mathcal {H}}}(u(t))^m] \le e^{-m \frac{2-\sigma }{2+\sigma } \lambda t} \tilde{{\mathcal {H}}}(u_0)^m\nonumber \\{} & {} \quad +C_1 e^{-m a \lambda t} [1+ \mathcal M(u_0)^{m(\frac{1}{2}+\frac{2\sigma }{2-\sigma d})}] \Vert \Phi \Vert ^{m}_{L_\textrm{HS}(U;V)} \lambda ^{-\frac{m}{2}} +C_2 \phi _2^m \lambda ^{-m} \end{aligned}$$
(2.27)

for any \(t\ge 0\).

Proof

The Itô formula for \({\mathcal {H}}(u(t))\) is (see Theorem 3.2 in [16])

$$\begin{aligned}{} & {} \textrm{d}{\mathcal {H}}(u(t))+2 \lambda {\mathcal {H}}(u(t)) \textrm{d}t=\alpha \lambda \frac{\sigma }{\sigma +1} \Vert u(t)\Vert _{L^{2\sigma +2}(\mathbb R^d)}^{2\sigma +2} \textrm{d}t\nonumber \\{} & {} \quad -\sum _{j=1}^\infty {\text {Re}}\langle \Delta u(t)+\alpha |u(t)|^{2\sigma }u(t),\Phi e_j\rangle \textrm{d}W_j(t) +\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)} \textrm{d}t\nonumber \\{} & {} \quad - \frac{\alpha }{2} \Vert |u(t)|^{\sigma } \Phi \Vert ^2_{L_\textrm{HS}(U;H)}\textrm{d}t -\alpha \sigma \sum _{j=1}^\infty \langle |u(t)|^{2\sigma -2}, [{\text {Re}}({\overline{u}}(t)\Phi e_j)]^2\rangle \textrm{d}t.\nonumber \\ \end{aligned}$$
(2.28)

We notice that the stochastic integral is a martingale, because its quadratic variation has finite mean thanks to the moment estimates (2.16)–(2.18). (Computations are similar to those in the next estimate (2.34).)

Below we repeatedly use the Hölder and Young inequalities. In particular,

$$\begin{aligned} A^{m-1}B\le \epsilon \lambda A^m+C_\epsilon \lambda ^{1-m} B^m, \qquad m>1 \end{aligned}$$
(2.29)

and

$$\begin{aligned} A^{m-2}B\le \epsilon \lambda A^m+C_\epsilon \lambda ^{1-\frac{m}{2}} B^{\frac{m}{2}}, \qquad m>2 \end{aligned}$$
(2.30)

for any positive \(A,B,\lambda , \epsilon \).
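Both (2.29) and (2.30) are instances of the weighted Young inequality; for instance, (2.29) follows, with exponents \(\frac{m}{m-1}\) and m, from

$$\begin{aligned} A^{m-1}B=\left( (\epsilon \lambda )^{\frac{m-1}{m}}A^{m-1}\right) \left( (\epsilon \lambda )^{-\frac{m-1}{m}}B\right) \le \frac{m-1}{m}\epsilon \lambda A^m+\frac{1}{m}(\epsilon \lambda )^{1-m}B^m, \end{aligned}$$

and (2.30) is obtained in the same way, pairing \(A^{m-2}\) with B through the exponents \(\frac{m}{m-2}\) and \(\frac{m}{2}\).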

\(\bullet \) In the defocusing case \(\alpha =-1\), we neglect the first term in the r.h.s. in (2.28), i.e.

$$\begin{aligned}{} & {} \textrm{d}{\mathcal {H}}(u(t))+2 \lambda {\mathcal {H}}(u(t)) \textrm{d}t\le -\sum _{j=1}^\infty {\text {Re}}\langle \Delta u(t)- |u(t)|^{2\sigma }u(t),\Phi e_j\rangle \textrm{d}W_j(t)\nonumber \\{} & {} \quad +\Big [\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)} + \frac{1}{2} \Vert |u(t)|^{\sigma } \Phi \Vert ^2_{L_\textrm{HS}(U;H)} + \sigma \sum _{j=1}^\infty \langle |u(t)|^{2\sigma -2}, [{\text {Re}}({\overline{u}}(t)\Phi e_j)]^2\rangle \Big ]\textrm{d}t.\nonumber \\ \end{aligned}$$
(2.31)

Moreover, thanks to Assumption 2.3 we use Hölder and Young inequalities to get

$$\begin{aligned} \frac{1}{2}&\Vert |u|^{\sigma } \Phi \Vert ^2_{L_\textrm{HS}(U;H)} + \sigma \sum _{j=1}^\infty \langle |u|^{2\sigma -2}, [{\text {Re}}({\overline{u}}\,\Phi e_j)]^2\rangle \nonumber \\&\quad \le \frac{1}{2} \Vert |u|^\sigma \Vert ^2_{L^{\frac{2\sigma +2}{\sigma }}(\mathbb R^d)}\sum _{j=1}^\infty \Vert \Phi e_j\Vert _{L^{2\sigma +2}(\mathbb R^d)}^2+\sigma \Vert |u|^{2\sigma }\Vert _{L^{\frac{2\sigma +2}{2\sigma }}(\mathbb R^d)}\sum _{j=1}^\infty \Vert |\Phi e_j|^2\Vert _{L^{\sigma +1}({\mathbb {R}}^d)}\nonumber \\&\quad \le \frac{1+2\sigma }{2} \Vert u \Vert _{L^{2\sigma +2}({\mathbb {R}}^d)}^{2\sigma } \sum _{j=1}^\infty \Vert \Phi e_j\Vert _{L^{2\sigma +2}({\mathbb {R}}^d)}^2\nonumber \\&\quad \le \frac{1+2\sigma }{2} \Vert u \Vert _{L^{2\sigma +2}(\mathbb R^d)}^{2\sigma } C \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^2 \quad \text {by } (2.5)\nonumber \\&\quad \le \frac{\lambda }{2+2\sigma } \Vert u\Vert _{L^{2+2\sigma }(\mathbb R^d)}^{2+2\sigma } +C \Vert \Phi \Vert ^{2+2\sigma }_{L_\textrm{HS}(U;V)} \lambda ^{-\sigma }\nonumber \\&\quad \le \lambda {\mathcal {H}}(u)+C\Vert \Phi \Vert ^{2+2\sigma }_{L_\textrm{HS}(U;V)} \lambda ^{-\sigma } \end{aligned}$$
(2.32)

Now, we insert this estimate in (2.31) and take the mathematical expectation to get rid of the stochastic integral

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}} {\mathcal {H}}(u(t))+2\lambda \mathbb E{\mathcal {H}}(u(t)) \le \frac{1}{2} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)}+\lambda {\mathbb {E}} {\mathcal {H}}(u(t)) +C \Vert \Phi \Vert ^{2+2\sigma }_{L_\textrm{HS}(U;V)} \lambda ^{-\sigma }, \end{aligned}$$

i.e.

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}} {\mathcal {H}}(u(t))+\lambda \mathbb E{\mathcal {H}}(u(t)) \le \frac{1}{2} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)}+ C \Vert \Phi \Vert ^{2+2\sigma }_{L_\textrm{HS}(U;V)} \lambda ^{-\sigma } . \end{aligned}$$

By Gronwall lemma, we get

$$\begin{aligned} {\mathbb {E}} {\mathcal {H}}(u(t))\le e^{-\lambda t}{\mathcal {H}}(u_0)+ \frac{1}{2} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)}\lambda ^{-1}+C \Vert \Phi \Vert ^{2+2\sigma }_{L_\textrm{HS}(U;V)} \lambda ^{-\sigma -1} \end{aligned}$$

for any \(t\ge 0\). This proves (2.26) for \(m=1\).

For higher powers \(m \ge 2\), by means of Itô formula we get

$$\begin{aligned}{} & {} \textrm{d} {\mathcal {H}}(u(t))^m=m {\mathcal {H}}(u(t))^{m-1} \textrm{d} {\mathcal {H}}(u(t))\nonumber \\{} & {} \quad +\frac{m(m-1)}{2} {\mathcal {H}}(u(t))^{m-2} \sum _{j=1}^\infty [{\text {Re}}\langle \Delta u(t)-|u(t)|^{2\sigma }u(t),\Phi e_j\rangle ]^2 \textrm{d}t. \end{aligned}$$
(2.33)

We estimate the latter term using the Hölder and the Young inequality:

$$\begin{aligned} \frac{1}{2} \sum _j&[{\text {Re}}\langle \Delta u-|u|^{2\sigma }u,\Phi e_j\rangle ]^2\nonumber \\&\quad \le \sum _j [{\text {Re}}\langle \Delta u,\Phi e_j\rangle ]^2 +\sum _j [{\text {Re}}\langle |u|^{2\sigma }u,\Phi e_j\rangle ]^2\nonumber \\&\quad \le \Vert \nabla u\Vert _{H}^2 \sum _j \Vert \nabla \Phi e_j\Vert _{H}^2 + \Vert |u|^{2\sigma }u\Vert ^2_{L^{\frac{2\sigma +2}{2\sigma +1}}({\mathbb {R}}^d)} \sum _j \Vert \Phi e_j\Vert ^2_{L^{2+2\sigma }({\mathbb {R}}^d)}\nonumber \\&\quad \le \Vert \nabla u\Vert _{H}^2 \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^2 + C \Vert u\Vert ^{2(2\sigma +1)}_{L^{2+2\sigma }({\mathbb {R}}^d)} \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^2\nonumber \\&\quad \le \epsilon \lambda {\mathcal {H}}(u)^2+ C_{\epsilon ,\sigma }\left( \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{4} +\Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{4(1+\sigma )}\lambda ^{-2\sigma }\right) \lambda ^{-1} \end{aligned}$$
(2.34)

for any \(\epsilon >0\). Inserting this into (2.33) and using the Young inequality (2.30), we get

$$\begin{aligned}{} & {} \textrm{d} {\mathcal {H}}(u(t))^m \le m {\mathcal {H}}(u(t))^{m-1} \textrm{d} {\mathcal {H}}(u(t))\nonumber \\{} & {} \quad +\frac{1}{2} m\lambda {\mathcal {H}}(u(t))^m \textrm{d}t +C \left( \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{4} +\Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{4(\sigma +1)}\lambda ^{-2\sigma }\right) ^{m/2}\lambda ^{-m+1} \textrm{d}t\nonumber \\ \end{aligned}$$
(2.35)

We estimate \({\mathcal {H}}(u(t))^{m-1} \textrm{d} {\mathcal {H}}(u(t))\) using (2.31), (2.32), and the Young inequality (2.29). Then, we take the mathematical expectation in (2.35) and obtain

$$\begin{aligned}{} & {} \frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}} [{\mathcal {H}}(u(t))^m]+m \lambda {\mathbb {E}} [{\mathcal {H}}(u(t))^m ]\nonumber \\{} & {} \quad \le C_{\sigma ,m}\left( \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2} + \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2(1+\sigma )}\lambda ^{-\sigma }\right) ^{m}\lambda ^{-m+1}. \end{aligned}$$
(2.36)

By Gronwall lemma, we get (2.26).

For \(1<m<2\), we proceed by means of the Hölder inequality as before, using the estimate for \(m=2\).

\(\bullet \) In the focusing case \(\alpha =1\), we neglect the last two terms in the r.h.s. in (2.28) and get

$$\begin{aligned}{} & {} \textrm{d}{\mathcal {H}}(u(t))+2 \lambda {\mathcal {H}}(u(t)) \textrm{d}t \le \lambda \frac{\sigma }{\sigma +1} \Vert u(t)\Vert _{L^{2+2\sigma }(\mathbb R^d)}^{2+2\sigma } \textrm{d}t\nonumber \\{} & {} \quad -\sum _{j=1}^\infty {\text {Re}}\langle \Delta u(t)+ |u(t)|^{2\sigma }u(t),\Phi e_j\rangle \textrm{d}W_j(t) +\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)} \textrm{d}t.\qquad \end{aligned}$$
(2.37)

We write the Itô formula for the modified energy \(\tilde{{\mathcal {H}}}(u)= {{\mathcal {H}}}(u) +G\mathcal M(u)^{1+\frac{2\sigma }{2-\sigma d}}\). Proceeding as in (2.23) for the power \(m=1+\frac{2\sigma }{2-\sigma d}\) of the mass, we have

$$\begin{aligned}{} & {} \textrm{d}\tilde{{\mathcal {H}}}(u(t))+2 \lambda \tilde{{\mathcal {H}}}(u(t)) \textrm{d}t\nonumber \\{} & {} \qquad \le \lambda \frac{\sigma }{\sigma +1} \Vert u(t)\Vert _{L^{2\sigma +2}(\mathbb R^d)}^{2\sigma +2} \textrm{d}t \nonumber \\{} & {} \quad + \lambda \left( \epsilon \left( 1+\frac{2\sigma }{2-\sigma d}\right) - \frac{4\sigma }{2-\sigma d} \right) G {\mathcal {M}}(u(t))^{1+\frac{2\sigma }{2-\sigma d}} \textrm{d}t\;\nonumber \\{} & {} \qquad +C_\epsilon \Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{-\frac{2\sigma }{2-\sigma d}}\textrm{d}t +\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)} \textrm{d}t\nonumber \\{} & {} \qquad -\sum _j {\text {Re}}\langle \Delta u(t)+ |u(t)|^{2\sigma }u(t),\Phi e_j\rangle \textrm{d}W_j(t)\nonumber \\{} & {} \qquad +2\left( 1+\frac{2\sigma }{2-\sigma d}\right) G {\mathcal {M}}(u(s))^{\frac{2\sigma }{2-\sigma d}} \text {Re}\langle u(t),\Phi {\textrm{d}}W(t)\rangle . \end{aligned}$$
(2.38)

Since \((1-\frac{2}{2-\sigma d})< 0\) by Assumption 2.3, for \(\epsilon \) small enough we get \(\epsilon (1+\frac{2\sigma }{2-\sigma d})+2\sigma (1-\frac{2}{2-\sigma d})<0 \); hence,

$$\begin{aligned}\begin{aligned}&\frac{\sigma }{\sigma +1} \Vert u\Vert _{L^{2\sigma +2}(\mathbb R^d)}^{2\sigma +2} + \left( \epsilon \left( 1+\frac{2\sigma }{2-\sigma d}\right) - \frac{4\sigma }{2-\sigma d} \right) G \mathcal M(u)^{1+\frac{2\sigma }{2-\sigma d}}\\ {}&\quad \underset{2.13}{\le }\ \frac{2\sigma }{4+\sigma } \Vert \nabla u\Vert _H^2+\left( \epsilon \left( 1+\frac{2\sigma }{2-\sigma d}\right) +2\sigma \left( 1-\frac{2}{2-\sigma d}\right) \right) G {\mathcal {M}}(u)^{1+\frac{2\sigma }{2-\sigma d}}\\ {}&\,\,\quad \le \frac{2\sigma }{4+\sigma } \Vert \nabla u\Vert _H^2 \underset{2.14}{\le }\ \frac{4\sigma }{2+\sigma } \tilde{{\mathcal {H}}}(u). \end{aligned} \end{aligned}$$
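The sign claim can be checked numerically: writing \(c(\epsilon )=\epsilon (1+\frac{2\sigma }{2-\sigma d})+2\sigma (1-\frac{2}{2-\sigma d})\) for the bracket above, one has \(c(\epsilon )<0\) exactly for \(\epsilon <\epsilon ^*:=\frac{2\sigma ^2 d}{2-\sigma d+2\sigma }\). A small sketch in Python (the threshold \(\epsilon ^*\) is our own elementary computation, not stated in the text; the sample values of \(\sigma \) and d are arbitrary within \(\sigma d<2\)):

```python
def coeff(sigma, d, eps):
    # eps*(1 + 2s/(2-sd)) + 2s*(1 - 2/(2-sd)), the bracket discussed above
    r = 2 * sigma / (2 - sigma * d)
    return eps * (1 + r) + 2 * sigma * (1 - 2 / (2 - sigma * d))

for d in (1, 2, 3):
    for sigma in (0.1, 0.3, 1.9 / d):
        # the epsilon-free part is strictly negative whenever 0 < 2 - sigma*d < 2
        assert 2 * sigma * (1 - 2 / (2 - sigma * d)) < 0
        eps_star = 2 * sigma**2 * d / (2 - sigma * d + 2 * sigma)
        assert coeff(sigma, d, 0.5 * eps_star) < 0   # epsilon small enough works
        assert coeff(sigma, d, 2.0 * eps_star) > 0   # but not for large epsilon
```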

Then,

$$\begin{aligned}{} & {} \textrm{d}\tilde{{\mathcal {H}}}(u(t))+2 \frac{2-\sigma }{2+\sigma }\lambda \tilde{{\mathcal {H}}}(u(t)) \textrm{d}t\nonumber \\{} & {} \quad \le \Big ( C \Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{-\frac{2\sigma }{2-\sigma d}} +\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)}\Big ) \textrm{d}t\nonumber \\{} & {} \qquad -\sum _j {\text {Re}}\langle \Delta u(t)+ |u(t)|^{2\sigma }u(t),\Phi e_j\rangle \textrm{d}W_j(t)\nonumber \\{} & {} \qquad +2\left( 1+\frac{2\sigma }{2-\sigma d}\right) G {\mathcal {M}}(u(t))^{\frac{2\sigma }{2-\sigma d}} {\text {Re}}\langle u(t),\Phi {\textrm{d}}W(t)\rangle . \end{aligned}$$
(2.39)

Notice that the condition \(\sigma <\frac{2}{d}\) implies \(\sigma <2\), so the coefficient \(2\frac{2-\sigma }{2+\sigma }\lambda \) is positive. Taking the mathematical expectation, we obtain

$$\begin{aligned} \begin{aligned}&\frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}} \tilde{{\mathcal {H}}}(u(t)) +2\frac{2-\sigma }{2+\sigma }\lambda \mathbb E\tilde{{\mathcal {H}}}(u(t))\\&\quad \le C \Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{-\frac{2\sigma }{2-\sigma d}} +\frac{1}{2} \Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)}. \end{aligned} \end{aligned}$$
(2.40)

By means of the Gronwall lemma, we get

$$\begin{aligned} {\mathbb {E}} \tilde{{\mathcal {H}}}(u(t)) \le e^{-2\frac{2-\sigma }{2+\sigma }\lambda t} \tilde{{\mathcal {H}}}(u_0) +C\left( \Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{-\frac{2\sigma }{2-\sigma d}}+\Vert \nabla \Phi \Vert ^2_{L_\textrm{HS}(U;H)}\right) \lambda ^{-1}. \end{aligned}$$

This proves (2.27) for \(m=1\).

For \(m\ge 2\), we have by Itô formula

$$\begin{aligned} \textrm{d} \tilde{{\mathcal {H}}}(u(t))^m\le m \tilde{{\mathcal {H}}}(u(t))^{m-1} \textrm{d} \tilde{{\mathcal {H}}}(u(t)) +\frac{m(m-1)}{2} \tilde{{\mathcal {H}}}(u(t))^{m-2} 2r(t)\ \textrm{d}t, \end{aligned}$$
(2.41)

where we have estimated the quadratic variation of the stochastic integrals in (2.38) so as to get

$$\begin{aligned}{} & {} r(t)=\sum _{j=1}^{\infty } [{\text {Re}}\langle \Delta u(t)+|u(t)|^{2\sigma }u(t),\Phi e_j\rangle ]^2\\{} & {} \quad + 4G^2\left( 1+\tfrac{2\sigma }{2-\sigma d}\right) ^2 {\mathcal {M}}(u(t))^{\frac{4\sigma }{2-\sigma d}} \sum _{j=1}^{\infty }[{\text {Re}}\langle u(t),\Phi e_j \rangle ]^2 . \end{aligned}$$

Keeping in mind the previous estimate (2.34), we get

$$\begin{aligned} \begin{aligned} r(t)\lesssim&\, \Vert \nabla u(t)\Vert _H^2 \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)} + \Vert u(t)\Vert ^{2(2\sigma +1)}_{L^{2\sigma +2}({\mathbb {R}}^d)} \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^2\\&+ 4G^2\left( 1+\tfrac{2\sigma }{2-\sigma d}\right) ^2 \mathcal M(u(t))^{1+\frac{4\sigma }{2-\sigma d}} \Vert \Phi \Vert _{L_\textrm{HS}(U;H)}^2. \end{aligned} \end{aligned}$$

Now to estimate the first term in the r.h.s., we use (2.14), i.e. \(\Vert \nabla u(t)\Vert _H^2 \le 4 \tilde{{\mathcal {H}}}(u)\), and for the second term by means of (2.7), we get

$$\begin{aligned} \begin{aligned} \Vert u\Vert ^{2(2\sigma +1)}_{L^{2\sigma +2}({\mathbb {R}}^d)}&\le \frac{\epsilon }{4} \Vert \nabla u\Vert _H^{2\frac{2\sigma +1}{\sigma +1}} +C_{\epsilon ,\sigma }{\mathcal {M}}(u)^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})} \\&\le \epsilon \tilde{{\mathcal {H}}}(u)^{\frac{2\sigma +1}{\sigma +1}}+C_{\epsilon ,\sigma } {\mathcal {M}}(u)^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})} \end{aligned} \end{aligned}$$

for any \(\epsilon >0\). Thus, we estimate the latter term in (2.41) as follows

$$\begin{aligned} \begin{aligned}&\tilde{{\mathcal {H}}}(u(t))^{m-2} r(t) \lesssim \tilde{{\mathcal {H}}}(u(t))^{m-1} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)} + \tilde{{\mathcal {H}}}(u(t))^{m-\frac{1}{\sigma +1}} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)}\\&\quad \;+ \tilde{{\mathcal {H}}}(u(t))^{m-2} \mathcal M(u(t))^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})} \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)}\\&\quad \;+ \tilde{{\mathcal {H}}}(u(t))^{m-2} \mathcal M(u(t))^{1+\frac{4\sigma }{2-\sigma d}} \Vert \Phi \Vert _{L_\textrm{HS}(U;H)}^2 \end{aligned} \end{aligned}$$

and by Young inequality

$$\begin{aligned} \begin{aligned}&\le \lambda \epsilon \tilde{{\mathcal {H}}}(u(t))^m + C \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;V)} \lambda ^{1-m} + C \Vert \Phi \Vert ^{2m(1+\sigma )}_{L_\textrm{HS}(U;V)} \lambda ^{1-m(1+\sigma )}\\&\quad \;+ C {\mathcal {M}}(u(t))^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})\frac{m}{2}} \Vert \Phi \Vert ^m_{L_\textrm{HS}(U;V)} \lambda ^{1-\frac{m}{2}}\\&\quad \;+C {\mathcal {M}}(u(t))^{(1+\frac{4\sigma }{2-\sigma d})\frac{m}{2}} \Vert \Phi \Vert ^m_{L_\textrm{HS}(U;H)} \lambda ^{1-\frac{m}{2}}. \end{aligned} \end{aligned}$$

In (2.41), we insert this estimate and the previous estimate (2.39) for \(\textrm{d} \tilde{{\mathcal {H}}}(u(t))\), integrate in time, and take the mathematical expectation to get rid of the stochastic integrals; hence, for \(\epsilon \) small enough we obtain

$$\begin{aligned}\begin{aligned}&\frac{\textrm{d}}{\textrm{d}t} {\mathbb {E}} [\tilde{{\mathcal {H}}}(u(t))^m] +m\frac{2-\sigma }{2+\sigma }\lambda {\mathbb {E}} [\tilde{\mathcal H}(u(t))^m]\\&\quad \le C {\mathbb {E}}[\mathcal M(u(t))^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})\frac{m}{2}}] \Vert \Phi \Vert ^m_{L_\textrm{HS}(U;V)} \lambda ^{1-\frac{m}{2}}\\&\qquad + C {\mathbb {E}}[{\mathcal {M}}(u(t))^{(1+\frac{4\sigma }{2-\sigma d})\frac{m}{2}}] \Vert \Phi \Vert ^m_{L_\textrm{HS}(U;H)} \lambda ^{1-\frac{m}{2}}\\&\qquad +C \Vert \Phi \Vert ^{2m}_{L_\textrm{HS}(U;V)} \lambda ^{1-m} + C \Vert \Phi \Vert ^{2m(1+\sigma )}_{L_\textrm{HS}(U;V)} \lambda ^{1-m(1+\sigma )}\\&\qquad +C \left( \Vert \Phi \Vert ^{2}_{L_\textrm{HS}(U;V)} +\Vert \Phi \Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L_\textrm{HS}(U;H)} \lambda ^{-\frac{2\sigma }{2-\sigma d}}\right) ^m\lambda ^{1-m} \end{aligned} \end{aligned}$$

We apply the Gronwall lemma and, bearing in mind the estimates (2.21) for the mass, obtain an inequality for \({\mathbb {E}} [\tilde{{\mathcal {H}}}(u(t))^m]\). Computing the time integrals appearing there, after some elementary calculations we get

$$\begin{aligned}\begin{aligned} {\mathbb {E}} [\tilde{{\mathcal {H}}}(u(t))^m] \le&\, e^{-m \frac{2-\sigma }{2+\sigma } \lambda t} \tilde{{\mathcal {H}}}(u_0)^m +C \phi _2^m \lambda ^{-m}\\&+C e^{-m a_1 \lambda t} \mathcal M(u_0)^{\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})\frac{m}{2}} \Vert \Phi \Vert ^m_{L_\textrm{HS}(U;V)} \lambda ^{-\frac{m}{2}}\\&+C e^{-m a_2 \lambda t} \mathcal M(u_0)^{m(\frac{1}{2}+\frac{2\sigma }{2-\sigma d})} \Vert \Phi \Vert ^{m}_{L_\textrm{HS}(U;H)}\lambda ^{-\frac{m}{2}} \end{aligned} \end{aligned}$$

for any \(t\ge 0\), where

$$\begin{aligned}{} & {} a_1(d,\sigma )=\min \left( \tfrac{2-\sigma }{2+\sigma }, \tfrac{1}{2} \tfrac{2\sigma +1}{\sigma +1}(1+\tfrac{2\sigma }{2-\sigma d})\right) ,\\{} & {} a_2(d,\sigma )=\min \left( \tfrac{2-\sigma }{2+\sigma }, \tfrac{1}{2} +\tfrac{2\sigma }{2-\sigma d} \right) . \end{aligned}$$

Since \(\frac{2\sigma +1}{\sigma +1}(1+\frac{2\sigma }{2-\sigma d})\frac{1}{2} <\frac{1}{2}+\frac{2\sigma }{2-\sigma d}\), we bound the sum of the two terms involving different powers of \({\mathcal {M}}(u_0)\) by retaining only the largest power. Therefore, we obtain (2.27).
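The comparison between the two exponents of \({\mathcal {M}}(u_0)\) can be verified numerically over the admissible range \(\sigma d<2\) (a sketch; the grid of sample values is an arbitrary choice):

```python
def exps(sigma, d):
    # the two exponents of M(u0) appearing in the bound above
    A = 1 + 2 * sigma / (2 - sigma * d)
    p1 = 0.5 * (2 * sigma + 1) / (sigma + 1) * A   # exponent in the first term
    p2 = 0.5 + 2 * sigma / (2 - sigma * d)          # exponent in the second term
    return p1, p2

for d in (1, 2, 3):
    for k in range(1, 200):
        sigma = 0.01 * k
        if sigma * d >= 2:
            break
        p1, p2 = exps(sigma, d)
        assert p1 < p2  # only the larger power of M(u0) needs to be kept
```

Indeed, \(p_1<p_2\) is equivalent to \(\frac{2}{2-\sigma d}>1\), which holds whenever \(0<\sigma d<2\).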

For \(1<m<2\), we proceed as in the previous case. \(\square \)

Merging the results for the mass and the energy, we obtain the result for the V-norm. Indeed, \(\Vert u\Vert _V^2=\Vert \nabla u\Vert _H^2+\Vert u\Vert _H^2\) and

$$\begin{aligned} \Vert \nabla u\Vert _H^2 = 2 {\mathcal {H}}(u)+\frac{\alpha }{\sigma +1} \Vert u\Vert _{L^{2\sigma +2}({\mathbb {R}}^d)}^{2\sigma +2} . \end{aligned}$$

For \(\alpha =-1\), we trivially get

$$\begin{aligned} \Vert u\Vert _V^2 \le 2 {\mathcal {H}}(u)+ {\mathcal {M}}(u). \end{aligned}$$

For \(\alpha =1\), we have from (2.14)

$$\begin{aligned} \Vert u\Vert _V^2 \le \frac{8+2\sigma }{2+\sigma }\tilde{ {\mathcal {H}}}(u)+ \mathcal M(u). \end{aligned}$$

Now, we bear in mind the functions \(\phi _1\) and \(\phi _2\) given in (2.24) and (2.25), respectively. The following is the result for the moments of the V-norm.

Corollary 2.9

Let \(u_0\in V\). Under Assumptions 2.2 and 2.3, for every \(m\ge 1\) we have the following estimates:

(i):

When \(\alpha =-1\)

$$\begin{aligned} {\mathbb {E}}[ \Vert u(t)\Vert _V^{2m}] \lesssim e^{-m \lambda t}[{{\mathcal {H}}}(u_0)^m +{{\mathcal {M}}}(u_0)^m]+ [\phi _1 + \Vert \Phi \Vert ^{2}_{L_\textrm{HS}(U;H)}]^m \lambda ^{-m}\nonumber \\ \end{aligned}$$
(2.42)

for any \(t\ge 0\);

(ii):

When \(\alpha =1\), there is a positive constant \(a=a(d,\sigma )\) such that

$$\begin{aligned}{} & {} {\mathbb {E}}[ \Vert u(t)\Vert _V^{2m}] \lesssim e^{-m \frac{2-\sigma }{2+\sigma } \lambda t} \tilde{{\mathcal {H}}}(u_0)^m +e^{-m \lambda t} {{\mathcal {M}}}(u_0)^m\nonumber \\{} & {} \quad +e^{-m a \lambda t} [1+ \mathcal M(u_0)^{m(\frac{1}{2}+\frac{2\sigma }{2-\sigma d})}] \Vert \Phi \Vert ^{m}_{L_\textrm{HS}(U;V)} \lambda ^{-\frac{m}{2}} +[\phi _2 + \Vert \Phi \Vert ^{2}_{L_\textrm{HS}(U;H)}]^m \lambda ^{-m}\nonumber \\ \end{aligned}$$
(2.43)

for any \(t\ge 0\).

The constants providing the above estimates (\(\lesssim \)) depend on \(m,\sigma \) and d but not on \(\lambda \).

3 Regularity results for the solution

For the solution of Eq. (2.3), we know that \(u\in C([0,+\infty );V)\) a.s. if \(u_0\in V\). Now, we look for the \(L^\infty ({\mathbb {R}}^d)\)-space regularity of the paths. When \(d=1\), this follows directly from the Sobolev embedding \(H^1(\mathbb R)\subset L^\infty ({\mathbb {R}})\). But such an embedding does not hold for \(d>1\). However, for \(d=2\) or \(d=3\) one can obtain the \(L^\infty ({\mathbb {R}}^d)\)-regularity by means of the deterministic and stochastic Strichartz estimates of Appendix A.

Let \(\phi _1\) and \(\phi _2\) be the functions appearing in Proposition 2.8.

Proposition 3.1

Let \(d=2\) or \(d=3\). In addition to Assumptions 2.2 and 2.3, we suppose that \(\sigma <\frac{1+\sqrt{17}}{4}\) when \(d=3\).

Given any finite \(T>0\) and \(u_0\in V\), the solution of Eq. (2.3) is in \( L^{2\sigma }(\Omega ;L^{2\sigma }(0,T;L^\infty ({\mathbb {R}}^d)))\). Moreover, there exists a positive constant \(C=C(\sigma ,d, T)\) such that

$$\begin{aligned}{} & {} {\mathbb {E}} \Vert u\Vert ^{2\sigma }_{L^{2\sigma }(0,T;L^\infty (\mathbb R^d))}\nonumber \\{} & {} \quad \le C \left( \Vert u_0\Vert _V^{2\sigma }+ \psi (u_0)^{\sigma (2\sigma +1)} +\phi _3^{\sigma (2\sigma +1)}\lambda ^{-{\sigma (2\sigma +1)}} + \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2\sigma } \right) .\nonumber \\ \end{aligned}$$
(3.1)

where

$$\begin{aligned} \psi (u_0) = {\left\{ \begin{array}{ll} {{\mathcal {H}}}(u_0)+{{\mathcal {M}}}(u_0), &{} \alpha =-1\\ \tilde{{\mathcal {H}}}(u_0)+{{\mathcal {M}}}(u_0) + {\mathcal {M}}(u_0)^{1+\frac{4\sigma }{2-\sigma d}} +1 ,&{} \alpha =1 \end{array}\right. } \end{aligned}$$
(3.2)

and

$$\begin{aligned} \phi _3(d, \sigma ,\lambda , \Phi )= {\left\{ \begin{array}{ll} \phi _1(\sigma ,\lambda , \Phi ) + \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;H)},&{}\alpha =-1\\ \phi _2(d,\sigma ,\lambda , \Phi )+ \Vert \Phi \Vert ^2_{L_\textrm{HS}(U;V)},&{}\alpha =1\end{array}\right. } \end{aligned}$$
(3.3)

so \(\lambda \mapsto \phi _3(d, \sigma ,\lambda , \Phi )\) is a strictly decreasing function.

Proof

First let us consider \(d=2\). We repeatedly use the embedding \(H^{1,q}({\mathbb {R}}^2)\subset L^\infty ({\mathbb {R}}^2)\), valid for any \(q>2\). Thus our aim is to estimate the \(L^{2\sigma }(\Omega ;L^{2\sigma }(0,T;H^{1,q}({\mathbb {R}}^2)))\)-norm of u for some \(q>2\).

We introduce the operator \(\Lambda :=-iA_0+\lambda \). It generates the semigroup \(e^{-\Lambda t}= e^{-\lambda t} e^{iA_0 t}\), \(t\ge 0\).

Let us fix \(T>0\). We write Eq. (2.3) in the mild form (see [14])

$$\begin{aligned} {i}u(t)&= {i}e^{- \Lambda t}u_0+ \int _0^{t} e^{-\Lambda (t-s)}F_\alpha (u(s))\, {\textrm{d}}s + {i}\int _0^{t} e^{-\Lambda (t-s)}\Phi \, {\textrm{d}}W(s) \nonumber \\&=:I_1(t)+I_2(t)+I_3(t) \end{aligned}$$
(3.4)

and estimate

$$\begin{aligned} {\mathbb {E}} \Vert I_i\Vert ^{2\sigma }_{L^{2\sigma }(0,T;H^{1,q}(\mathbb R^2))},\qquad i=1,2,3 \end{aligned}$$

for some \(q>2\).

For the estimate of \(I_1\), we set

$$\begin{aligned} q={\left\{ \begin{array}{ll} \frac{2\sigma }{\sigma -1}&{} \text { if } \sigma >1\\ \frac{6}{3-\sigma } &{} \text { if } 0<\sigma \le 1 \end{array}\right. } \end{aligned}$$
(3.5)

Notice that \(q>2\). Now, before using the homogeneous Strichartz inequality (A.1) we neglect the term \(e^{-\lambda t}\), since \(e^{-\lambda t} \le 1\). First, assuming \(\sigma >1\) we work with the admissible Strichartz pair \((2\sigma ,\frac{2\sigma }{\sigma -1})\) and get

$$\begin{aligned}&\Vert I_1\Vert _{L^{2\sigma }(0,T;H^{1,\frac{2\sigma }{\sigma -1}}(\mathbb R^2))} = \left\| e^{-\lambda \cdot }e^{iA_0 \cdot }A_1^{1/2} u_0\right\| _{L^{2\sigma }(0,T;L^{\frac{2\sigma }{\sigma -1}}(\mathbb R^2))}\\&\quad \le \left\| e^{iA_0 \cdot }A_1^{1/2} u_0\right\| _{L^{2\sigma }(0,T;L^{\frac{2\sigma }{\sigma -1}}(\mathbb R^2))}\\&\quad \lesssim \Vert A_1^{1/2} u_0\Vert _{L^2({\mathbb {R}}^2)} =\Vert u_0\Vert _V \end{aligned}$$

For smaller values, i.e. \(0<\sigma \le 1\), we choose \(\tilde{\sigma }=\frac{3}{\sigma }>2>\sigma \) so \(\frac{2\tilde{\sigma }}{\tilde{\sigma }-1}=\frac{6}{3-\sigma }\) and

$$\begin{aligned} \Vert I_1\Vert _{L^{2\sigma }(0,T;H^{1,\frac{2\tilde{\sigma }}{\tilde{\sigma }-1}}(\mathbb R^2))} \lesssim \Vert I_1\Vert _{L^{2\tilde{\sigma }}(0,T;H^{1,\frac{2\tilde{\sigma }}{\tilde{\sigma }-1}}(\mathbb R^2))} \lesssim \Vert u_0\Vert _V \end{aligned}$$

by the previous computations.
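In \(d=2\), a pair \((\gamma ,r)\) is Schrödinger-admissible when \(\frac{1}{\gamma }+\frac{1}{r}=\frac{1}{2}\) with \(r\) finite. The choices above can be checked with exact rational arithmetic (a sketch; the sample values of \(\sigma \) are arbitrary):

```python
from fractions import Fraction as F

def admissible_2d(gamma, r):
    # Schrödinger admissibility in d=2: 1/gamma + 1/r = 1/2, r finite
    return F(1) / gamma + F(1) / r == F(1, 2)

# sigma > 1: the pair (2*sigma, 2*sigma/(sigma-1)) used for I_1
for sigma in (F(3, 2), F(2), F(5)):
    gamma, r = 2 * sigma, 2 * sigma / (sigma - 1)
    assert admissible_2d(gamma, r) and r > 2

# 0 < sigma <= 1: tilde_sigma = 3/sigma gives the pair (6/sigma, 6/(3 - sigma))
for sigma in (F(1, 4), F(1, 2), F(1)):
    ts = 3 / sigma
    gamma, r = 2 * ts, 2 * ts / (ts - 1)
    assert admissible_2d(gamma, r)
    assert r == 6 / (3 - sigma) and r > 2
    assert gamma >= 2 * sigma  # Hölder in time on the finite interval (0,T) applies
```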

For the estimate of \( I_2\), we use the Strichartz inequality (A.2) and then the estimate from Lemma C.1 on the nonlinearity. We use the notation \(\gamma ^\prime \) for the conjugate exponent of \(\gamma \in (1,\infty )\), i.e. \(\frac{1}{\gamma }+\frac{1}{\gamma ^\prime }=1\). First, consider \(\sigma >1\); the pair \((2\sigma ,\frac{2\sigma }{\sigma -1})\) is admissible. Then,

$$\begin{aligned} \begin{aligned} \Vert I_2\Vert _{L^{2\sigma }(0,T;H^{1,\frac{2\sigma }{\sigma -1}}(\mathbb R^2))}&= \Vert A_1^{1/2}I_2\Vert _{L^{2\sigma }(0,T;L^{\frac{2\sigma }{\sigma -1}}(\mathbb R^2))}\\&\lesssim \Vert A_1^{1/2}F_\alpha (u)\Vert _{L^{\frac{4}{3}}(0,T;L^{\frac{4}{3}}(\mathbb R^2))} \quad \text { by (A.2)}\\&=\Vert F_\alpha (u)\Vert _{L^{\frac{4}{3}}(0,T;H^{1,\frac{4}{3}}({\mathbb {R}}^2))}\\&\lesssim \Vert u\Vert ^{2\sigma +1}_{L^{\frac{4}{3}(2\sigma +1)}(0,T;V)} \quad \text { by (C.1) and (C.2).} \end{aligned} \end{aligned}$$

For \(0<\sigma \le 1\), we proceed in a similar way; considering the admissible Strichartz pair \((2+\sigma , 2+\frac{4}{\sigma })\), we have

$$\begin{aligned}\begin{aligned} \Vert I_2\Vert _{L^{2\sigma }(0,T;H^{1,2+\frac{4}{\sigma }}({\mathbb {R}}^2))}&\lesssim \Vert I_2\Vert _{L^{2+\sigma }(0,T;H^{1,2+\frac{4}{\sigma }}(\mathbb R^2))}\\&= \Vert A_1^{1/2}I_2\Vert _{L^{2+\sigma }(0,T;L^{2+\frac{4}{\sigma }}(\mathbb R^2))}\\&\lesssim \Vert A_1^{1/2}F_\alpha (u)\Vert _{L^{\gamma ^\prime }(0,T;L^{r^\prime }(\mathbb R^2))} \quad \text { by (A.2)} \\&= \Vert F_\alpha (u)\Vert _{L^{\gamma ^\prime }(0,T;H^{1,r^\prime }({\mathbb {R}}^2))} \end{aligned} \end{aligned}$$

where \((r,\gamma )\) is an admissible Strichartz pair. According to (C.1), we choose

$$\begin{aligned} (1,2)\ni r^\prime ={\left\{ \begin{array}{ll}\frac{2}{1+2\sigma }, &{}0<\sigma <\frac{1}{2}\\ \frac{4}{3}, &{} \frac{1}{2} \le \sigma \le 1 \end{array}\right. } \end{aligned}$$
(3.6)

Hence,

$$\begin{aligned} \gamma ^\prime =\tfrac{2r^\prime }{3r^\prime -2}={\left\{ \begin{array}{ll}\frac{1}{1-\sigma },&{} 0<\sigma <\frac{1}{2}\\ \frac{4}{3}, &{} \frac{1}{2} \le \sigma \le 1 \end{array}\right. } \end{aligned}$$
(3.7)
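The passage from (3.6) to (3.7) uses that the conjugates of a 2D-admissible pair satisfy \(\frac{1}{\gamma ^\prime }+\frac{1}{r^\prime }=\frac{3}{2}\), whence \(\gamma ^\prime =\frac{2r^\prime }{3r^\prime -2}\). This can be checked exactly (a sketch with arbitrary sample values of \(\sigma \)):

```python
from fractions import Fraction as F

def gamma_prime(r_prime):
    # from 1/gamma' + 1/r' = 3/2, the conjugate relation for 2D-admissible pairs
    return 2 * r_prime / (3 * r_prime - 2)

# 0 < sigma < 1/2: r' = 2/(1+2*sigma) lies in (1,2) and gamma' = 1/(1-sigma)
for sigma in (F(1, 8), F(1, 4), F(2, 5)):
    rp = F(2) / (1 + 2 * sigma)
    assert F(1) < rp < F(2)
    assert gamma_prime(rp) == F(1) / (1 - sigma)

# 1/2 <= sigma <= 1: r' = 4/3 gives gamma' = 4/3
assert gamma_prime(F(4, 3)) == F(4, 3)

# consistency: the original (non-conjugate) pair (gamma, r) is 2D-admissible
rp = F(4, 3)
r = rp / (rp - 1)                             # conjugate of r'
g = gamma_prime(rp) / (gamma_prime(rp) - 1)   # conjugate of gamma'
assert F(1) / g + F(1) / r == F(1, 2)
```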

In this way by means of the estimate (C.2) of the polynomial nonlinearity \(\Vert F_\alpha (u)\Vert _{H^{1,r^\prime }(\mathbb R^2)}\lesssim \Vert u\Vert ^{1+2\sigma }_V\), we obtain

$$\begin{aligned} \Vert I_2\Vert _{L^{2\sigma }(0,T;H^{1,2+\frac{4}{\sigma }}({\mathbb {R}}^2))} \lesssim \Vert u\Vert ^{2\sigma +1}_{L^{\gamma ^\prime (2\sigma +1)}(0,T;V)}. \end{aligned}$$

Summing up, we have shown that for any \(\sigma >0\) there exist \(q>2\) and \(\gamma ^\prime \) such that

$$\begin{aligned} {\mathbb {E}} \Vert I_2\Vert ^{2\sigma }_{L^{2\sigma }(0,T;H^{1,q}(\mathbb R^2))} \lesssim {\mathbb {E}}\left( \int _0^T \Vert u(t)\Vert ^{\gamma ^\prime (2\sigma +1)}_{V}\ {\textrm{d}}t\right) ^{\frac{2\sigma }{\gamma ^\prime }}. \end{aligned}$$
(3.8)

Bearing in mind Corollary 2.9, we get the second and third terms in the r.h.s. of (3.1). The details are given in Appendix 5.1.

It remains to estimate the term \( I_3\). We choose q as in (3.5). Using the stochastic Strichartz estimate (A.3), we get for \(\sigma >1\)

$$\begin{aligned}&{\mathbb {E}} \Vert I_3\Vert ^{2\sigma }_{L^{2\sigma }(0,T; H^{1,\frac{2\sigma }{\sigma -1}}({\mathbb {R}}^2))} = {\mathbb {E}} \Vert A_1^{1/2} I_3\Vert ^{2\sigma }_{L^{2\sigma }(0,T; L^{\frac{2\sigma }{\sigma -1}}({\mathbb {R}}^2))}\\&\quad \lesssim \Vert A_1^{1/2}\Phi \Vert _{L_\textrm{HS}(U;H)}^{2\sigma } = \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2\sigma }. \end{aligned}$$

For smaller values of \(\sigma \), we proceed as before for \(I_1\).

Now, consider \(d=3\). The additional assumption on \(\sigma \) appears because of the stronger conditions on the parameters given later on.

For \(q\ge 1\), we have \(H^{\theta ,q}({\mathbb {R}}^3)\subset L^\infty ({\mathbb {R}}^3)\) when \(\theta q>3\). So for each \(I_i\) in (3.4) we look for an estimate in the norm \(L^{2\sigma }(0,T; H^{\theta ,q}({\mathbb {R}}^3))\) for some parameters with \(\theta q>3\).

We estimate \( I_1\) for any \(0<\sigma <2\). When \(0<\sigma \le 1\), we consider the admissible Strichartz pair (2, 6). By means of the homogeneous Strichartz estimate (A.1), we proceed as before

$$\begin{aligned} \Vert I_1\Vert _{L^{2\sigma }(0,T; H^{1,6 } ({\mathbb {R}}^3))}&\lesssim \Vert I_1\Vert _{L^{2}(0,T; H^{1,6 } ({\mathbb {R}}^3))}\\&= \left\| e^{-\lambda \cdot }e^{iA_0 \cdot } A_1^{1/2}u_0\right\| _{L^{2}(0,T;L^{6}({\mathbb {R}}^3))}\\&\le \left\| e^{i A_0 \cdot } A_1^{1/2}u_0\right\| _{L^{2}(0,T;L^{6}({\mathbb {R}}^3))}\\&\lesssim \Vert A_1^{1/2} u_0\Vert _{L^2({\mathbb {R}}^3)}= \Vert u_0\Vert _V. \end{aligned}$$

When \(\sigma >1\), we work with the admissible Strichartz pair \((2\sigma , \frac{6\sigma }{3\sigma -2})\) and get

$$\begin{aligned} \Vert I_1\Vert _{L^{2\sigma }(0,T; H^{1,\frac{6\sigma }{3\sigma -2} } (\mathbb R^3))} \lesssim \Vert A_1^{1/2} u_0\Vert _{L^2({\mathbb {R}}^3)}= \Vert u_0\Vert _V; \end{aligned}$$

since \(\frac{6\sigma }{3\sigma -2}>3\) for \(1<\sigma <2\), we obtain the \(L^\infty ({\mathbb {R}}^3)\)-norm estimate.
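In \(d=3\), admissibility reads \(\frac{2}{\gamma }=3(\frac{1}{2}-\frac{1}{r})\); the pairs used in this proof, together with the constraint \(r>3\) needed for \(H^{1,r}({\mathbb {R}}^3)\subset L^\infty ({\mathbb {R}}^3)\), can be checked in exact arithmetic (a sketch; the sample values of \(\sigma \) are arbitrary):

```python
from fractions import Fraction as F

def admissible_3d(gamma, r):
    # Schrödinger admissibility in d=3: 2/gamma = 3*(1/2 - 1/r)
    return F(2) / gamma == 3 * (F(1, 2) - F(1) / r)

assert admissible_3d(F(2), F(6))                 # the pair (2,6), with 6 > 3

# sigma > 1: the pair (2*sigma, 6*sigma/(3*sigma - 2)); r > 3 exactly for sigma < 2
for sigma in (F(5, 4), F(3, 2), F(7, 4)):
    gamma, r = 2 * sigma, 6 * sigma / (3 * sigma - 2)
    assert admissible_3d(gamma, r) and r > 3     # r > 3 gives H^{1,r} ⊂ L^∞
assert 6 * F(2) / (3 * F(2) - 2) == 3            # at sigma = 2 the gain r > 3 is lost

# the pair used below for the stochastic convolution I_3
for sigma in (F(1, 2), F(1), F(3, 2)):
    gamma = 2 + sigma**2 / 2
    r = 6 * (4 + sigma**2) / (4 + 3 * sigma**2)
    assert admissible_3d(gamma, r)
```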

The estimate for \(I_2\) is more involved, and we postpone it to Appendix 5.2, where condition (D.1) leads to the upper bound \(\sigma <\frac{1+\sqrt{17}}{4}\).

It remains to estimate the term \( I_3\). For any \(\sigma >0\), we use the Hölder inequality and the stochastic Strichartz estimate (A.3) for the admissible pair \((2+\frac{\sigma ^2}{2},6\frac{4+\sigma ^2}{4+3\sigma ^2})\); therefore,

$$\begin{aligned}\begin{aligned} {\mathbb {E}} \Vert I_3\Vert ^{2\sigma } _{L^{2\sigma }(0,T;H^{1,6\frac{4+\sigma ^2}{4+3\sigma ^2}}(\mathbb R^3))}&\lesssim _T {\mathbb {E}} \Vert I_3\Vert ^{2\sigma } _{L^{2+\frac{\sigma ^2}{2}}(0,T;H^{1,6\frac{4+\sigma ^2}{4+3\sigma ^2}}(\mathbb R^3))}\\&\lesssim \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2\sigma }. \end{aligned}\end{aligned}$$

\(\square \)

Notice that the restriction \(\sigma <\frac{1+\sqrt{17}}{4}\) on the power of the nonlinearity affects only the defocusing case, since by Assumption 2.3 in the focusing case we already require the stronger bound \(\sigma <\frac{2}{3}\) when \(d=3\).

We conclude this section by remarking that there is no similar result for \(d\ge 4\).

Remark 3.2

For larger dimensions, there is no result similar to those in this section. Indeed, if one looks for \(u\in L^{2\sigma }(0,T; H^{1,q}({\mathbb {R}}^d))\subset L^{2\sigma }(0,T; L^\infty (\mathbb R^d))\), it is necessary that

$$\begin{aligned}q>d\end{aligned}$$

in order to have \( H^{1,q}({\mathbb {R}}^d)\subset L^\infty (\mathbb R^d)\). Already the estimate for \(I_1\) does not hold under this assumption. Indeed, the homogeneous Strichartz estimate (A.1) provides

$$\begin{aligned} I_1 \in C([0,T];H^1({\mathbb {R}}^d))\cap L^{2\sigma }(0,T;H^{1,q}({\mathbb {R}}^d)) \end{aligned}$$

if

$$\begin{aligned} \frac{1}{\sigma }=d\left( \frac{1}{2} -\frac{1}{q}\right) \qquad \text { and } \qquad 2\le q\le \frac{2d}{d-2} . \end{aligned}$$

Since \(\frac{2d}{d-2}\le 4\) for \(d\ge 4\), the latter condition \(q\le \frac{2d}{d-2}\) and the condition \(q>d\) are incompatible for \(d\ge 4\).
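The incompatibility can be made into a one-line check (a sketch; the sampled dimensions are arbitrary):

```python
from fractions import Fraction as F

# Need q > d (for H^{1,q} ⊂ L^∞) together with 2 <= q <= 2d/(d-2) (Strichartz).
def strichartz_cap(d):
    return F(2 * d, d - 2)

assert strichartz_cap(3) > 3            # d=3: some q in (3, 6] is available
for d in (4, 5, 6, 10, 100):
    assert strichartz_cap(d) <= d       # d>=4: q > d and q <= 2d/(d-2) clash
```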

Let us notice that also in the deterministic setting the results on the attractors are known for \(d\le 3\); see [23].

4 The support of the invariant measures

From Theorem 2.6 we know that there exist invariant measures supported on V. Now, we show some further properties of these invariant measures. In dimensions \(d=2\) and \(d=3\), thanks to the regularity results of Sect. 3, we provide an estimate for the moments in the V- and \(L^\infty ({\mathbb {R}}^d)\)-norms.

Set

$$\begin{aligned} \phi _4={\left\{ \begin{array}{ll}\phi _1+\Vert \Phi \Vert _{L_\textrm{HS}(U;H)}^2, &{}\text { for } \alpha =-1\\ \phi _2+\Vert \Phi \Vert _{L_\textrm{HS}(U;H)}^2, &{}\text { for } \alpha =1\end{array}\right. } \end{aligned}$$

The function \(\phi _4=\phi _4(d, \sigma ,\lambda ,\Phi )\) is strictly decreasing w.r.t. \(\lambda \).

Proposition 4.1

Let \(d\le 3\) and Assumptions 2.2 and 2.3 hold.

Let \(\mu \) be an invariant measure for Eq. (2.3), given by Theorem 2.6. Then, for any finite \(m\ge 1\) we have

$$\begin{aligned} \int \Vert x\Vert _{V}^{2m} \ \textrm{d}\mu (x) \le \phi _4^{m} \lambda ^{-m}. \end{aligned}$$
(4.1)

Moreover, supposing in addition that \(\sigma <\frac{1+\sqrt{17}}{4}\) when \(d=3\), we have

$$\begin{aligned} \int \Vert x\Vert _{L^\infty }^{2\sigma } \textrm{d}\mu (x)\le \phi _5(d, \sigma ,\lambda ,\Phi ), \end{aligned}$$
(4.2)

where \(\lambda \mapsto \phi _5(d, \sigma ,\lambda ,\Phi )\) is a smooth decreasing function.

Proof

As far as (4.1) is concerned, we define the bounded mapping \(\Psi _k\) on V as

$$\begin{aligned} \Psi _k(x)={\left\{ \begin{array}{ll} \Vert x\Vert _V^{2m}, &{} \text {if} \ \Vert x\Vert _V \le k \\ k^{2m}, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

for \(k \in {\mathbb {N}}\).

By the invariance of \(\mu \) and the boundedness of \(\Psi _k\), we have

$$\begin{aligned} \int _{V} \Psi _k \, {\textrm{d}}\mu = \int _{V} P_s \Psi _k\, \textrm{d}\mu \qquad \forall s>0. \end{aligned}$$
(4.3)

So

$$\begin{aligned} P_s \Psi _k(x)={\mathbb {E}} [\Psi _k(u(s;x))] \le {\mathbb {E}} \Vert u(s;x)\Vert _V^{2m}. \end{aligned}$$

Moreover, from Corollary 2.9 we get an estimate for \({\mathbb {E}} \Vert u(s;x)\Vert _V^{2m}\); letting \(s \rightarrow +\infty \), the exponential terms in the r.h.s. of (2.42) and (2.43) vanish, so we get

$$\begin{aligned} \limsup _{s\rightarrow +\infty } P_s \Psi _k(x)\le \phi _4^{m} \lambda ^{-m} \qquad \forall x\in V. \end{aligned}$$

By the Fatou lemma, the same bound holds for the integral, that is

$$\begin{aligned} \limsup _{s\rightarrow +\infty } \int _V P_s \Psi _k(x)\, {\textrm{d}}\mu (x) \le \phi _4^{m} \lambda ^{-m}. \end{aligned}$$

From (4.3), we get

$$\begin{aligned} \int _V \Psi _k\, {\textrm{d}}\mu \le \phi _4^{m} \lambda ^{-m} \end{aligned}$$

as well. Since \(\Psi _k\) converges pointwise and monotonically from below to \(\Vert \cdot \Vert _V^{2m}\), the monotone convergence theorem yields (4.1).

As far as (4.2) is concerned, for \(d=1\) this is a consequence of estimate (4.1), because of the Sobolev embedding \(H^1({\mathbb {R}})\subset L^\infty ({\mathbb {R}})\). For \(d>1\), instead, we consider the estimate (3.1) for \(T=1\) and set \(\tilde{\Psi }( u)= \Vert u\Vert _{L^\infty ({\mathbb {R}}^d)}^{2\sigma }\); this defines a mapping \(\tilde{\Psi }:V \rightarrow {\mathbb {R}}_+ \cup \{+\infty \}\). Its approximation \(\tilde{\Psi }_k:V\rightarrow {\mathbb {R}}_+ \), given by

$$\begin{aligned} \tilde{\Psi }_k(u)= {\left\{ \begin{array}{ll} \Vert u\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}, &{} \text {if} \ \Vert u\Vert _{L^\infty ({\mathbb {R}}^d)} \le k \\ k^{2\sigma }, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

is a bounded mapping for any \(k \in {\mathbb {N}}\).

Obviously,

$$\begin{aligned} \int _{V} \tilde{\Psi }_k\, {\textrm{d}}\mu = \int _0^1 \left( \int _{V} \tilde{\Psi }_k \, {\textrm{d}}\mu \right) \, {\textrm{d}}s. \end{aligned}$$

By the invariance of \(\mu \) and the boundedness of \(\tilde{\Psi }_k\), it also holds

$$\begin{aligned} \int _{V} \tilde{\Psi }_k \, {\textrm{d}}\mu = \int _{V} P_s \tilde{\Psi }_k\, {\textrm{d}}\mu \qquad \forall s>0. \end{aligned}$$

Thus, by the Fubini–Tonelli theorem, since \(\tilde{\Psi }_k(u)=\Vert u\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)} \wedge k^{2\sigma } \le \Vert u\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}\), we get

$$\begin{aligned}&\int _{V} \tilde{\Psi }_k \, {\textrm{d}}\mu = \int _0^1 \int _{V} P_s \tilde{\Psi }_k\, {\textrm{d}}\mu \, {\textrm{d}}s =\int _V\int _0^1 {\mathbb {E}}\left[ \tilde{\Psi }_k(u(s;x)) \right] \,{\textrm{d}}s\, \textrm{d}\mu (x)\\&\quad \le \int _V {\mathbb {E}} \int _0^1 \Vert u(s;x)\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)} {\textrm{d}}s\, {\textrm{d}} \mu (x)\\&\quad \le C \int _V \left( \Vert x\Vert _V^{2\sigma } + \psi (x)^{\sigma (2\sigma +1)} +\phi _3^{\sigma (2\sigma +1)}\lambda ^{-{\sigma (2\sigma +1)}} + \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2\sigma }\right) {\textrm{d}}\mu (x) \end{aligned}$$

where we used (3.1) from Proposition 3.1 for \(T=1\).

The integral \( \int _V \Vert x\Vert _V^{2\sigma } {\textrm{d}}\mu (x)\) can be estimated by means of (4.1). The same holds for the integral of the second term, by bearing in mind the expression (3.2) of \(\psi \) and the bounds (2.11), (2.15); let us denote by \(\phi _\psi =\phi _\psi (d, \sigma , \lambda ,\Phi )\) the new function estimating \(\int _V \psi (x)^{\sigma (2\sigma +1)} {\textrm{d}}\mu (x)\).

Therefore, we have proved that

$$\begin{aligned} \int _{V} \tilde{\Psi }_k\, {\textrm{d}}\mu \le \phi _5(d, \sigma , \lambda ,\Phi ) \end{aligned}$$

where \(\phi _5\) is proportional to

$$\begin{aligned} \phi _4^{\sigma } \lambda ^{ -\sigma }+\phi _\psi +\phi _3^{\sigma (2\sigma +1)}\lambda ^{-{\sigma (2\sigma +1)}} + \Vert \Phi \Vert _{L_\textrm{HS}(U;V)}^{2\sigma }. \end{aligned}$$

This holds for any k. Since \(\tilde{\Psi }_k\) converges pointwise and monotonically from below to \(\tilde{\Psi }\), the monotone convergence theorem yields the same bound for \( \int _{V} \Vert x\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}\, {\textrm{d}}\mu (x)\). This proves (4.2). \(\square \)

5 Uniqueness of the invariant measure for sufficiently large damping

We will prove that if the damping coefficient \(\lambda \) is sufficiently large, then the invariant measure is unique.

Theorem 5.1

Let \(d\le 3\). In addition to Assumptions 2.2 and 2.3, we suppose that \(\sigma <\frac{1+\sqrt{17}}{4}\) when \(d=3\).

If

$$\begin{aligned} \lambda >2 \phi _5(d, \sigma ,\lambda ,\Phi ) \end{aligned}$$
(5.1)

where \(\phi _5\) is the function appearing in Proposition 4.1, then for Eq. (2.3) there exists a unique invariant measure supported in V.

Proof

The existence of an invariant measure comes from Theorem 2.6. Now, we prove the uniqueness by contradiction. Suppose that there exists more than one invariant measure; since the ergodic measures are the extreme points of the convex set of invariant measures, there exist then two distinct ergodic invariant measures \(\mu _1\) and \(\mu _2\). For both of them, Proposition 4.1 holds. Fix \(i\in \{1,2\}\) and consider any \(f\in L^1(\mu _i)\). Then, by the Birkhoff ergodic theorem (see, for example, [10]), for \(\mu _i\)-a.e. \(x_i\in V\) we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }\frac{1}{t} \int _0^t f(u(s;x_i))\ \textrm{d}s=\int _V f\ \textrm{d}\mu _i \qquad {\mathbb {P}}-a.s. \end{aligned}$$
(5.2)

Here, \(u(t;x)\) is the solution at time t, with initial value \(u(0)=x\in V\).
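The Birkhoff average (5.2) can be illustrated on a toy one-dimensional analogue, a damped Ornstein–Uhlenbeck process \(\textrm{d}X=-\lambda X\,\textrm{d}t+\textrm{d}W\), whose unique invariant measure is \(N(0,\frac{1}{2\lambda })\): the time average of \(X^2\) converges to \(\frac{1}{2\lambda }\) for any starting point. A seeded Euler–Maruyama sketch (this scalar model and all numerical values are illustrative, not taken from the text):

```python
import math, random

random.seed(0)

# Toy analogue: dX = -lam*X dt + dW has invariant measure N(0, 1/(2*lam)),
# so Birkhoff's theorem gives (1/t) ∫_0^t X(s)^2 ds -> 1/(2*lam) a.s.
lam, dt, n = 1.0, 1e-3, 1_000_000
x, acc, sq = 5.0, 0.0, math.sqrt(1e-3)   # start far from equilibrium on purpose
for _ in range(n):
    x += -lam * x * dt + sq * random.gauss(0.0, 1.0)
    acc += x * x * dt
time_avg = acc / (n * dt)
assert abs(time_avg - 1.0 / (2.0 * lam)) < 0.1  # within sampling tolerance
```

The influence of the initial datum disappears from the time average, exactly as in (5.2).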

Now, fix two initial data \(x_1\) and \(x_2\) belonging, respectively, to the support of the measure \(\mu _1\) and \(\mu _2\). We have

$$\begin{aligned} \int _V f \ \textrm{d}\mu _1- \int _V f \ \textrm{d}\mu _2=\lim _{t\rightarrow +\infty }\frac{1}{t} \int _0^t [f(u(s;x_1))-f(u(s;x_2))]\ \textrm{d}s \end{aligned}$$

\({\mathbb {P}}\)-a.s. Taking an arbitrary f in the set \( \mathcal G_0\) defined in (B.2), we get

$$\begin{aligned} \left| \int _V f \ \textrm{d}\mu _1-\int _V f \ \textrm{d}\mu _2\right| \le L \lim _{t\rightarrow +\infty }\frac{1}{t} \int _0^t \Vert u(s;x_1)-u(s;x_2)\Vert _H \textrm{d}s . \end{aligned}$$

If we prove that

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert u(t;x_1)-u(t;x_2)\Vert _H =0 \qquad {\mathbb {P}}-a.s., \end{aligned}$$
(5.3)

then we conclude that

$$\begin{aligned} \int _V f \ \textrm{d}\mu _1-\int _V f \ \textrm{d}\mu _2=0 \end{aligned}$$

so that \(\mu _1=\mu _2\) thanks to Lemma B.1. Let us therefore focus on the limit (5.3).

For brevity, we write \(u_i(t)=u(t;x_i)\) and consider the difference \(w=u_1-u_2\), which fulfils

$$\begin{aligned}{\left\{ \begin{array}{ll} \frac{\textrm{d}}{\textrm{d}t} w(t)-{i}A_0 w(t)+ {i}F_\alpha (u_1(t))- {i}F_\alpha (u_2(t)) +\lambda w (t) = 0\\ w(0)=x_1-x_2 \end{array}\right. } \end{aligned}$$

so

$$\begin{aligned} \frac{1}{2} \frac{\textrm{d}}{\textrm{d}t} \Vert w(t)\Vert _H^2 +\lambda \Vert w(t)\Vert _H^2 \le \int _{{\mathbb {R}}^d} \Big | [ |u_1(t)|^{2\sigma } u_1(t)- |u_2(t)|^{2\sigma } u_2(t)]w(t) \Big | \textrm{d}y. \end{aligned}$$

Using the elementary estimate

$$\begin{aligned} | |u_1|^{2\sigma } u_1- |u_2|^{2\sigma } u_2| \le C_\sigma [|u_1|^{2\sigma }+|u_2|^{2\sigma }]|u_1-u_2|, \end{aligned}$$

we bound the nonlinear term in the r.h.s. as

$$\begin{aligned} \int _{{\mathbb {R}}^d} |[ |u_1|^{2\sigma } u_1- |u_2|^{2\sigma } u_2]w | \ \textrm{d}y \le [\Vert u_1\Vert ^{2\sigma }_{L^\infty (\mathbb R^d)}+\Vert u_2\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}] \Vert w\Vert _{L^2({\mathbb {R}}^d)}^2. \end{aligned}$$
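The elementary estimate above holds, for instance, with \(C_\sigma =2\sigma +1\), by the mean value theorem applied along the segment from \(u_2\) to \(u_1\) (our own choice of an admissible constant, not claimed optimal). A seeded randomized sanity check:

```python
import random

random.seed(1)

def f(u, sigma):
    # the nonlinearity |u|^{2 sigma} u
    return abs(u) ** (2 * sigma) * u

def rand_c(scale=3.0):
    return complex(random.uniform(-scale, scale), random.uniform(-scale, scale))

# C = 2*sigma + 1 suffices: along the segment joining u2 to u1 the derivative
# of f has norm <= (2s+1)|w|^{2s} <= (2s+1)(|u1|^{2s} + |u2|^{2s}).
for sigma in (0.5, 1.0, 1.5):
    C = 2 * sigma + 1
    for _ in range(10_000):
        u1, u2 = rand_c(), rand_c()
        lhs = abs(f(u1, sigma) - f(u2, sigma))
        rhs = C * (abs(u1) ** (2 * sigma) + abs(u2) ** (2 * sigma)) * abs(u1 - u2)
        assert lhs <= rhs * (1 + 1e-12)  # tiny slack for floating-point error
```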

Therefore,

$$\begin{aligned} \frac{{\textrm{d}}}{{\textrm{d}}t} \Vert w(t)\Vert ^2_H+ 2\lambda \Vert w(t)\Vert ^2_H \le 2\left( \Vert u_1(t)\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)} +\Vert u_2(t)\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}\right) \Vert w(t)\Vert _{H}^2 . \end{aligned}$$

Gronwall inequality gives

$$\begin{aligned} \Vert w(t)\Vert ^2_H \le \Vert w(0)\Vert ^2_H e^{-2\lambda t+2\int _0^t \left( \Vert u_1(s)\Vert ^{2\sigma }_{ L^\infty ({\mathbb {R}}^d)}+ \Vert u_2(s)\Vert ^{2\sigma }_{ L^\infty ({\mathbb {R}}^d)}\right) \textrm{d}s} \end{aligned}$$

that is

$$\begin{aligned} \Vert w(t)\Vert ^2_H \le \Vert x_1-x_2\Vert ^2_H e^{-2t \left[ \lambda -\frac{1}{t} \int _0^t \left( \Vert u_1(s)\Vert ^{2\sigma }_{ L^\infty ({\mathbb {R}}^d)}+ \Vert u_2(s)\Vert ^{2\sigma }_{ L^\infty ({\mathbb {R}}^d)}\right) \textrm{d}s\right] }. \end{aligned}$$
(5.4)

This is a pathwise estimate.

We know from Proposition 4.1 that \(f(x)=\Vert x\Vert _{L^\infty ({\mathbb {R}}^d)}^{2\sigma } \in L^1(\mu _i)\); therefore, (5.2) becomes

$$\begin{aligned} \lim _{t\rightarrow +\infty }\frac{1}{t} \int _0^t \Vert u(s;x_i)\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)} \textrm{d}s = \int _V \Vert x\Vert ^{2\sigma }_{L^\infty ({\mathbb {R}}^d)}\ \textrm{d}\mu _i (x) \le \phi _5(\lambda ) \end{aligned}$$

\({\mathbb {P}}\)-a.s., for each \(i=1,2\). Therefore, if

$$\begin{aligned} \lambda >2 \phi _5(\lambda ), \end{aligned}$$

the exponential term in the r.h.s. of (5.4) vanishes as \(t\rightarrow +\infty \). This proves (5.3) and concludes the proof. \(\square \)