1 Introduction and main result

This article is concerned with the stochastic nonlinear Schrödinger equation with multiplicative noise

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( \mathrm {i}\Delta _g u(t)-\mathrm {i}\lambda \vert u(t)\vert ^{\alpha -1} u(t)\right) \mathrm {d}t-\mathrm {i}\sum _{m=1}^{\infty }e_m u(t) \circ \mathrm {d}\beta _m(t),\qquad t\in (0,T),\\ u(0)&=u_0\in {H^1(M)}, \end{aligned}\right. \end{aligned}$$
(1.1)

on a compact 3D Riemannian manifold M, where \(\Delta _g\) is the Laplace-Beltrami operator, \(\alpha \in (1,3]\), \(\lambda \in \{-1,1\}\), \(\left( e_m\right) _{m\in \mathbb {N}}\) are real valued functions and \(\left( \beta _m\right) _{m\in \mathbb {N}}\) are independent Brownian motions. If \(\lambda =1\), the NLS is called defocusing, and if \(\lambda =-1\), it is called focusing. In the main result of this article, we show the pathwise uniqueness of martingale solutions to Problem (1.1) in the energy space \({H^1(M)}\).

Theorem 1.1

Let M be a compact 3D Riemannian manifold. Let \(\lambda \in \{-1,1\}\), \(\alpha \in (1,3]\) and let \(e_m\in {L^\infty (M)}\) be real valued with \(\nabla e_m\in L^3(M)\) for \(m\in \mathbb {N}\) and

$$\begin{aligned} \sum _{m=1}^{\infty }\left( \Vert \nabla e_m \Vert _{L^3}+\Vert e_m \Vert _{L^\infty }\right) ^2<\infty . \end{aligned}$$
(1.2)

Then, the solutions to Problem (1.1) in the sense of Definition 2.2 are pathwise unique.

Theorem 1.1 generalizes the result by Burq, Gérard and Tzvetkov from [7], Theorem 3, for the cubic NLS to the stochastic setting. Note that this uniqueness result holds in both the focusing and the defocusing case. Combining Theorem 1.1 with the existence of martingale solutions proved by the authors in [9] and the Yamada-Watanabe theory developed in [36], Theorem 5.3 and Corollary 5.4, see also [41, Theorems 2 and 12.1], we obtain the existence of a unique strong solution of (1.1).

Corollary 1.2

Let M be a compact 3D Riemannian manifold. Let \(\lambda =1\) and \(\alpha \in (1,3]\) or \(\lambda =-1\) and \(\alpha \in (1,\frac{7}{3})\). If \(\left( e_m\right) _{m\in \mathbb {N}}\) satisfies the conditions from Theorem 1.1, there is a unique global strong solution to Problem (1.1), cf. Definition 2.2 below, and the martingale solutions are unique in law.

The question of the existence and the uniqueness of global solutions of the stochastic nonlinear Schrödinger equation has attracted some attention in recent years. We refer to de Bouard and Debussche [21, 22], Barbu, Röckner and Zhang [14, 15, 50] and the second author [30] as well as to the references in these articles. These authors considered the stochastic NLS in the Euclidean space \({\mathbb {R}^d}\) and employed a fixed point argument based on the Strichartz estimates to prove the existence and the uniqueness simultaneously. As in the deterministic setting, the range of exponents \(\alpha \) depends on the space dimension d and the considered regularity. Brzeźniak and Millet followed a similar approach for the stochastic NLS on a compact 2-dimensional manifold M. In higher dimensions, their argument only yields the existence of local solutions since the estimates for the nonlinearity rely on Sobolev embeddings \(H^{s,p}\hookrightarrow L^\infty \), which are too restrictive to work in the energy space \({H^1(M)}\). Another result about the stochastic NLS is due to Keller and Lisei, see [34], who considered the equation on the one-dimensional spatial interval (0, 1) with Neumann boundary conditions. They proved existence by employing the Galerkin method and uniqueness via the Sobolev embedding \(H^1(0,1)\hookrightarrow L^\infty (0,1)\). Hence, their argument cannot be transferred to higher dimensions. After this work was finished, we learned about a recent paper [20] by Cheung and Mosincat. Using the additional structure in the special case of the d-dimensional torus \(M=\mathbb {T}^d\) and algebraic nonlinearities, i.e. \(\alpha =2k+1\) for some \(k\in \mathbb {N}\), the authors employed a fixed point argument based on the multilinear Strichartz estimates and an estimate of the stochastic convolution in Bourgain spaces \(X^{s,b}\) combined with the truncation method from [21, 22, 30]. As a result, they solved the stochastic NLS with multiplicative noise in the space \(L^2(\Omega ,C([0,\tau ],H^s(\mathbb {T}^d))\cap X^{s,b}([0,\tau ]))\) for some stopping time \(\tau >0\), for every \(s>s_{crit}:=\frac{d}{2}-\frac{2}{\alpha -1}\) and some \(b<\frac{1}{2}\). As a byproduct, their argument also implies the pathwise uniqueness of the martingale solutions in \(L^2(\Omega ,C([0,T],H^s(\mathbb {T}^3))\cap X^{s,b}([0,T]))\) for \(\alpha =3\) and \(s>\frac{1}{2}\), which reflects an improvement on the torus compared to the general case considered in our Theorem 1.1.

Let us now explain our approach to the existence and uniqueness questions and the main contributions of the present paper. Instead of using a fixed point argument, we separate the proofs of existence and uniqueness. The construction of a martingale solution has been treated by the authors in [9] and by the second author in [31], based on the Hamiltonian structure of the NLS and without using the Strichartz estimates. Since these ingredients are independent of the underlying geometry, the existence proof works in a more general framework including non-compact manifolds and domains with Neumann or Dirichlet boundary conditions in arbitrary dimension. The flexibility of this approach is underlined by the fact that it could also be used to construct a martingale solution of the NLS with pure jump noise, see [8]. In a second step, we consider the pathwise uniqueness of the martingale solutions under additional assumptions on the noise and the geometry in order to profit from the dispersive properties of the Schrödinger equation. In this way, we have proved an analogue of Theorem 1.1 in two dimensions in [9].

In the present article, we turn our attention to the case of three dimensions. To the best knowledge of the authors, our paper is the first one in which the uniqueness of solutions to the stochastic NLS on a 3-dimensional compact Riemannian manifold driven by linear multiplicative noise has been established. The main technical ingredient in the proof of Theorem 1.1 is the estimate

$$\begin{aligned} \Vert u \Vert _{L^2(J,L^p)}\lesssim 1+\left( \vert J\vert p\right) ^\frac{1}{2}\qquad \text {a.s.} \end{aligned}$$
(1.3)

of any martingale solution to Problem (1.1) in terms of the length of the time interval J and the exponent \(p\in [6,\infty )\). We refer to Proposition 3.1 for an exact formulation of this estimate. It is obtained by employing the Littlewood-Paley decomposition and a partition of unity of the time interval followed by the spectrally localized Strichartz estimates on short time intervals, cf. for instance inequality (1.5) below. Here, the main novelty of our approach is the spectrally localized Strichartz estimate for the stochastic convolution with the Schrödinger group established in Lemma 2.12 and its use to obtain a pathwise estimate of the stochastic convolution which controls the stochastic term in Eq. (1.1). Note that the estimate (1.3) is sharp in the sense that it cannot be proved for \(\alpha >3\). In the cubic case \(\alpha = 3\), the endpoint Strichartz inequality of Keel and Tao [35] is required to estimate the most critical terms.
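
Heuristically, the growth \(p^\frac{1}{2}\) in (1.3) can be traced back to the summation of the Littlewood-Paley blocks \(\varphi \left( 2^{-k}\Delta _g\right) u\) introduced in Lemma 2.6 below: up to lower order terms, each spectrally localized piece satisfies \(\Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,L^p)}\lesssim 2^{-\frac{3k}{2p}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,H^1)}\) by the localized Strichartz and Bernstein estimates, cf. (3.15) below, and summing the blocks by the Cauchy-Schwarz inequality gives, schematically,

$$\begin{aligned} \sum _{k\ge 1} 2^{-\frac{3k}{2p}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,H^1)}\le \left( \sum _{k\ge 1} 2^{-\frac{3k}{p}}\right) ^\frac{1}{2}\left( \sum _{k\ge 1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,H^1)}^2\right) ^\frac{1}{2}\lesssim \left( p \vert J\vert \right) ^\frac{1}{2} \Vert u \Vert _{L^\infty (0,T;H^1)}, \end{aligned}$$

since \(\sum _{k\ge 1} 2^{-\frac{3k}{p}}\lesssim p\) by comparison with a geometric series and the blocks are almost orthogonal in \(H^1\), cf. (2.9). The precise version of this computation is carried out in Step 3 of the proof of Proposition 3.1.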

We now sketch the proof of Theorem 1.1 and thereby explain the role of the estimate (1.3) in overcoming the additional difficulties posed by the 3-dimensional setting compared to the 2-dimensional one. We take two solutions \(u_1,u_2\in L^\infty (0,T;{H^1(M)})\) almost surely. Our starting point is the representation

$$\begin{aligned} \Vert u_1(t)-u_2(t) \Vert _{L^2}^2= & {} 2 \int _0^t {\text {Re}}\big (u_1(s)-u_2(s),-\mathrm {i}\lambda \vert u_1(s)\vert ^{\alpha -1} u_1(s)\nonumber \\&+\mathrm {i}\vert u_2(s)\vert ^{\alpha -1} u_2(s)\big )_{L^2} \mathrm {d}s \end{aligned}$$
(1.4)

which holds almost surely for all \(t\in [0,T]\). To get this identity, it is crucial to consider Eq. (1.1) with the noise in the Stratonovich form with real valued coefficients, since this leads to cancellations of the stochastic integral and the correction term in the Itô formula. We remark that the formula (1.4) is closely related to the mass conservation of solutions to (1.1) which leads to the notion of conservative noise. To use equality (1.4) for the uniqueness proof, one can employ the following local Strichartz estimate

$$\begin{aligned} \Vert t\mapsto e^{\mathrm {i}t\Delta _g}\varphi (h^2\Delta _g)x \Vert _{L^q(0,T;L^p)}\lesssim \Vert x \Vert _{L^2},\qquad x\in L^2(M), \end{aligned}$$
(1.5)

for small times \( T\lesssim h\) and the global Strichartz estimate

$$\begin{aligned} \Vert t\mapsto e^{\mathrm {i}t\Delta _g}x \Vert _{L^q(0,T;L^p(M))}\lesssim \Vert x \Vert _{H^{\frac{1}{q}}(M)},\qquad x\in H^{\frac{1}{q}}(M), \end{aligned}$$
(1.6)

from [7] for every Strichartz-admissible exponent pair \((p,q)\in [2,\infty ]^2\). Here, \(h\in (0,1]\) and \(\varphi \in C_c^\infty (\mathbb {R})\) can be chosen arbitrarily.
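
Let us briefly and informally indicate how (1.5) leads to (1.6), following the strategy of [7]: covering (0, T) by roughly \(T h^{-1}\) intervals of length \(\lesssim h\) and applying (1.5) on each of them gives

$$\begin{aligned} \Vert t\mapsto e^{\mathrm {i}t\Delta _g}\varphi (h^2\Delta _g)x \Vert _{L^q(0,T;L^p)}\lesssim \left( \frac{T}{h}\right) ^\frac{1}{q} \Vert x \Vert _{L^2} \eqsim T^\frac{1}{q} \Vert x \Vert _{H^{\frac{1}{q}}} \end{aligned}$$

for x spectrally localized at frequencies of size \(h^{-1}\), since for such x one has \(\Vert x \Vert _{H^{\frac{1}{q}}}\eqsim h^{-\frac{1}{q}} \Vert x \Vert _{L^2}\). Summing over a Littlewood-Paley decomposition, cf. Lemma 2.6 below, then yields (1.6); the derivative loss \(\frac{1}{q}\) in (1.6) thus reflects the restriction of (1.5) to time intervals of length \(\lesssim h\).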

In the 2-dimensional case \(d=2\), inequality (1.6) allows one to deduce the improved regularity \(u_1,u_2\in L^q(0,T; H^{s-\frac{1}{q},p})\) almost surely for \(s\in (1-\frac{1}{q},1)\). Hence, one can use a Gronwall-type argument based on the Sobolev embedding \(H^{s-\frac{1}{q},p}(M)\hookrightarrow {L^\infty (M)}\) to prove the pathwise uniqueness. For the details, we refer to [9]. In 3D, the challenge is to gain \(\frac{1}{2}+\varepsilon \) derivatives with respect to the embedding \(H^{\frac{3}{2}+\varepsilon }(M)\hookrightarrow {L^\infty (M)}\) in order to control the nonlinearity in (1.4) by the \(H^1\)-estimates of the solutions. Unfortunately, this is not possible, but it turns out that (1.3) is a good substitute for the missing \(L^\infty \)-estimate.
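
This obstruction can be quantified by a short computation, which we include for the reader's convenience. In dimension three, the Sobolev embedding \(H^{s-\frac{1}{q},p}(M)\hookrightarrow {L^\infty (M)}\) requires \(s-\frac{1}{q}>\frac{3}{p}\), and the admissibility relation \(\frac{3}{p}=\frac{3}{2}-\frac{2}{q}\) turns this into

$$\begin{aligned} s>\frac{1}{q}+\frac{3}{p}=\frac{3}{2}-\frac{1}{q}\ge 1\qquad \text {for } q\ge 2, \end{aligned}$$

so that an \(L^\infty \)-bound along these lines would require strictly more regularity than the \(H^1\)-bound provided by the energy estimates. In two dimensions, the same computation only requires \(s>1-\frac{1}{q}\), which is compatible with \(s<1\) and explains why the argument of [9] closes there.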

To deduce the pathwise uniqueness from (1.3), we generalize a strategy developed by Yudovich [49] for the Euler equation to the stochastic setting. In the context of the deterministic NLS, this strategy is also well known and has already been used by Vladimirov in [48] and by Ogawa and Ozawa in [40] and [42]. They considered 2-dimensional domains and used Trudinger-type inequalities as an analogue of inequality (1.3) to control the growth of the \(L^p\)-norms for \(p\rightarrow \infty \). Burq, Gérard and Tzvetkov could use the Yudovich strategy for three-dimensional manifolds without boundary due to the regularizing effect of the Strichartz estimates. In [17], Blair, Smith and Sogge proved the uniqueness of weak solutions of the deterministic NLS on compact 3D manifolds with boundary as an application of their Strichartz estimates on this type of geometry.
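
To give a rough impression of how this strategy interacts with the estimate (1.3), let us sketch the argument in the cubic case \(\alpha =3\); the constants and exponents below are only indicative, and the rigorous implementation is the subject of the remainder of Sect. 3. Writing \(w:=u_1-u_2\) and using the pointwise bound \(\big \vert \vert u_1\vert ^{2}u_1-\vert u_2\vert ^{2}u_2\big \vert \lesssim \left( \vert u_1\vert ^2+\vert u_2\vert ^2\right) \vert w\vert \), the Hölder inequality and interpolation between \(L^2\) and \(L^6\) turn (1.4) into

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2\lesssim \int _0^t \left( \Vert u_1(s) \Vert _{L^{2p}}^2+\Vert u_2(s) \Vert _{L^{2p}}^2\right) \Vert w(s) \Vert _{L^2}^{2-\frac{3}{p}} \mathrm {d}s \end{aligned}$$

almost surely, where we used \(\Vert w(s) \Vert _{L^6}\lesssim \Vert w(s) \Vert _{H^1}\lesssim 1\). A Gronwall-type argument for \(y'\lesssim a(t)\, y^{1-\frac{3}{2p}}\) with \(y(0)=0\), combined with the bound \(\int _0^t \left( \Vert u_1 \Vert _{L^{2p}}^2+\Vert u_2 \Vert _{L^{2p}}^2\right) \mathrm {d}s\lesssim 1+tp\) from (1.3), then gives

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2\lesssim \left( \frac{3}{2p}\int _0^t \left( \Vert u_1(s) \Vert _{L^{2p}}^2+\Vert u_2(s) \Vert _{L^{2p}}^2\right) \mathrm {d}s\right) ^{\frac{2p}{3}}\lesssim \left( \frac{C\left( 1+tp\right) }{p}\right) ^{\frac{2p}{3}}, \end{aligned}$$

which tends to 0 as \(p\rightarrow \infty \) as soon as t is small enough; iterating over small time intervals yields \(w\equiv 0\). It is precisely the moderate growth in p on the right hand side of (1.3) which makes this limit work.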

The paper is organized as follows. In Sect. 2, we fix the notations, formulate our assumptions and collect auxiliary results. Section 3 is devoted to the proof of the estimate (1.3) and to the pathwise uniqueness.

2 Definitions and auxiliary results

This section is devoted to the notations, definitions and auxiliary results that will be used in the next section to show the pathwise uniqueness.

If \(a,b\ge 0\) satisfy the inequality \(a\le C b\) with a constant \(C>0\), we write \(a \lesssim b\). Given \(a\lesssim b\) and \(b\lesssim a\), we write \(a\eqsim b\). For two Banach spaces E, F, we denote by \(\mathcal {L}(E,F)\) the space of linear bounded operators \(B: E\rightarrow F\) and abbreviate \(\mathcal {L}(E):=\mathcal {L}(E,E)\). We use the notation \({\text {HS}}(H_1,H_2)\) for the space of Hilbert-Schmidt operators between Hilbert spaces \(H_1\) and \(H_2\). Furthermore, we write \(E\hookrightarrow F\) if E is continuously embedded in F, i.e. \(E\subset F\) with natural embedding \(j\in \mathcal {L}(E,F)\).

Let M be a three-dimensional compact Riemannian \(C^\infty \) manifold without boundary. The distance induced by the metric g is denoted by \(\rho \) and the canonical measure on M by \(\mu \). By \(L^p(M)\) for \(p\in [1,\infty ]\), we denote the space of equivalence classes of \(\mathbb {C}\)-valued p-integrable functions with respect to \(\mu \). The Laplace-Beltrami operator on M, i.e. the generator of the heat semigroup on M, is denoted by \(\Delta _g\). Moreover, we use the fractional Sobolev spaces

$$\begin{aligned} H^{s,p}(M):=\left\{ u\in L^p(M): \exists v\in L^p(M): u=(I-\Delta _g)^{-\frac{s}{2}}v \right\} \end{aligned}$$

for \(p\in [1,\infty )\) and \(s\ge 0\) with the norm \(\Vert u \Vert _{H^{s,p}}:=\Vert v \Vert _{L^p}\). For \(s< 0\), the space \(H^{s,p}(M)\) is defined as the completion of \(L^p(M)\) with respect to

$$\begin{aligned} \Vert u \Vert _{H^{s,p}}:=\Vert (I-\Delta _g)^{\frac{s}{2}}u \Vert _{L^p},\qquad u\in L^p(M). \end{aligned}$$

For all \(s\in \mathbb {R}\), we abbreviate \(H^s(M):=H^{s,2}(M)\). For properties of the Laplace-Beltrami operator, characterizations of the fractional Sobolev spaces and embedding theorems, we refer to [46] and [44]. For \(s=1\), one can show that the definition from above coincides with the classical Sobolev space and that \(\left( \Vert u \Vert _{L^2}^2+\Vert \nabla u \Vert _{L^2}^2\right) ^\frac{1}{2}\) defines an equivalent norm on \({H^1(M)}\). We refer to [38] for an explanation of the gradient as an element of the tangent bundle of M.
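
For later use, we record the special cases of the Sobolev embedding theorem on the compact three-dimensional manifold M which appear repeatedly below, see for instance [46] and [44]:

$$\begin{aligned} {H^1(M)}\hookrightarrow L^{q}(M)\quad \text {for } q\in [1,6],\qquad H^{s,p}(M)\hookrightarrow {L^\infty (M)}\quad \text {for } s>\frac{3}{p}, \end{aligned}$$

and in particular \(H^{s}(M)\hookrightarrow {L^\infty (M)}\) for \(s>\frac{3}{2}\).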

Next, we summarize the assumptions on the coefficients of the noise in Problem (1.1).

Assumption 2.1

Let Y be a separable Hilbert space and \(B: {H^1(M)}\rightarrow {\text {HS}}(Y,{H^1(M)})\) a linear operator. For an ONB \(\left( f_m\right) _{m\in \mathbb {N}}\) of Y and \(m\in \mathbb {N}\), we set \(B_m :=\mathrm {B}(\cdot )f_m\). Additionally, we assume that \(B_m\), \(m\in \mathbb {N}\), are bounded operators on \({H^1(M)}\) with

$$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{\mathcal {L}(H^1)}^2<\infty \end{aligned}$$
(2.1)

and that \(B_m\) is symmetric as an operator in \(L^2(M)\), i.e.

$$\begin{aligned} \big (B_m u,v\big )_{L^2}=\big (u,B_m v\big )_{L^2},\qquad u,v\in {H^1(M)}. \end{aligned}$$
(2.2)

In particular, we have \(B\in \mathcal {L}\left( {H^1(M)},{\text {HS}}(Y,{H^1(M)})\right) \) and \(\mu \in \mathcal {L}({H^1(M)})\) if we abbreviate

$$\begin{aligned} \mu (u) := -\frac{1}{2} \sum _{m=1}^{\infty }B_m^2 u,\qquad u\in {H^1(M)}. \end{aligned}$$
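
For the reader's convenience, we note that the asserted boundedness of \(\mu \) on \({H^1(M)}\) is an immediate consequence of (2.1), since

$$\begin{aligned} \Vert \mu (u) \Vert _{H^1}\le \frac{1}{2} \sum _{m=1}^{\infty }\Vert B_m^2 u \Vert _{H^1}\le \frac{1}{2} \sum _{m=1}^{\infty }\Vert B_m \Vert _{\mathcal {L}(H^1)}^2 \Vert u \Vert _{H^1},\qquad u\in {H^1(M)}. \end{aligned}$$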

We look at the following slight generalization of (1.1) in the Itô form

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( \mathrm {i}\Delta _g u(t)-\mathrm {i}\lambda \vert u(t)\vert ^{\alpha -1} u(t)+ \mu \left( u(t)\right) \right) \mathrm {d}t-\mathrm {i}B u(t) \mathrm {d}W(t), t\in (0,T),\\ u(0)&=u_0. \end{aligned}\right. \end{aligned}$$
(2.3)

In the introduction, we used that the process

$$\begin{aligned} W=\sum _{m=1}^{\infty }f_m \beta _m \end{aligned}$$

with a sequence \(\left( \beta _m\right) _{m\in \mathbb {N}}\) of independent Brownian motions is a cylindrical Wiener process in Y, see [25], Proposition 4.7, and the identity

$$\begin{aligned} -\mathrm {i}B u(t) \circ \mathrm {d}W(t)=-\mathrm {i}B u(t) \mathrm {d}W(t)+\mu \left( u(t)\right) \mathrm {d}t, \end{aligned}$$
(2.4)

which relates Itô and Stratonovich noise. For the sake of simplicity, we restricted ourselves to the special case of multiplication operators

$$\begin{aligned} B_m u=e_m u, \qquad u\in {H^1(M)}, \end{aligned}$$

with real valued functions \(e_m\) satisfying

$$\begin{aligned} \sum _{m=1}^{\infty }\left( \Vert \nabla e_m \Vert _{L^3}+\Vert e_m \Vert _{L^\infty }\right) ^2<\infty . \end{aligned}$$
(2.5)

We want to justify that they fit in Assumption 2.1. The Sobolev embedding \({H^1(M)}\hookrightarrow L^6(M)\) and the Hölder inequality yield

$$\begin{aligned} \Vert \nabla \left( e_m u\right) \Vert _{L^2}&\le \Vert u \nabla e_m \Vert _{L^2}+ \Vert e_m \nabla u \Vert _{L^2} \le \Vert \nabla e_m \Vert _{L^3} \Vert u \Vert _{L^6}+\Vert e_m \Vert _{L^\infty } \Vert \nabla u \Vert _{L^2}\\&\lesssim \left( \Vert \nabla e_m \Vert _{L^3}+\Vert e_m \Vert _{L^\infty }\right) \Vert u \Vert _{H^1},\, u\in {H^1(M)}. \end{aligned}$$

Thus,

$$\begin{aligned}&\Vert B_m u \Vert _{H^1}\eqsim \Vert e_m u \Vert _{L^2}+\Vert \nabla \left( e_m u\right) \Vert _{L^2}\\&\lesssim \left( \Vert \nabla e_m \Vert _{L^3}+\Vert e_m \Vert _{L^\infty }\right) \Vert u \Vert _{H^1},\, u\in {H^1(M)}. \end{aligned}$$

Note that the existence theorem from [9] additionally requires the assumptions \(B_m\in \mathcal {L}(L^2(M))\cap \mathcal {L}(L^{\alpha +1}(M))\) with

$$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{\mathcal {L}(L^2)}^2<\infty ,\qquad \sum _{m=1}^{\infty }\Vert B_m \Vert _{\mathcal {L}(L^{\alpha +1})}^2<\infty . \end{aligned}$$
(2.6)

In our example of multiplication operators, however, this assumption is implied by (2.5), since multiplication by \(e_m\) is bounded on \(L^2(M)\) and on \(L^{\alpha +1}(M)\) with norm at most \(\Vert e_m \Vert _{L^\infty }\). Before we turn to the solution concepts for Problem (1.1), let us once more relate the Itô and the Stratonovich formulation.
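
On a purely formal level, (2.4) can be derived as follows: the martingale part of u is \(-\mathrm {i}\sum _{m=1}^{\infty }B_m u\, \mathrm {d}\beta _m\), so that \(\mathrm {d}\left[ u,\beta _m\right] =-\mathrm {i}B_m u\, \mathrm {d}t\), and the Stratonovich differential satisfies \(B_m u\circ \mathrm {d}\beta _m=B_m u\, \mathrm {d}\beta _m+\frac{1}{2} \mathrm {d}\left[ B_m u,\beta _m\right] \). Hence,

$$\begin{aligned} -\mathrm {i}B u \circ \mathrm {d}W=-\mathrm {i}\sum _{m=1}^{\infty }\left( B_m u\, \mathrm {d}\beta _m+\frac{1}{2}B_m\left( -\mathrm {i}B_m u\right) \mathrm {d}t\right) =-\mathrm {i}B u\, \mathrm {d}W-\frac{1}{2}\sum _{m=1}^{\infty }B_m^2 u\, \mathrm {d}t, \end{aligned}$$

which is exactly (2.4) with the correction term \(\mu (u)\, \mathrm {d}t\). In the following definition, we introduce two solution concepts for Problem (1.1).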

Definition 2.2

Let \(T>0\) and \(u_0\in {H^1(M)}\).

  1. (a)

    A martingale solution to Problem (1.1) is a system \(\left( \Omega ,\mathcal {F},\mathbb {P},W,\mathbb {F},u\right) \) with

    • A probability space \(\left( \Omega ,\mathcal {F},\mathbb {P}\right) \)

    • A Y-valued cylindrical Wiener process W on \(\Omega ;\)

    • A filtration \(\mathbb {F}=\left( \mathcal {F}_t\right) _{t\in [0,T]}\) with the usual conditions;

    • A continuous, \(\mathbb {F}\)-adapted process u with values in \({H^{-1}(M)}\) such that almost all paths are in \({C_w([0,T],{H^1(M)})}\) and \(u\in L^2(\Omega \times [0,T],{H^1(M)});\)

    such that the equation

    $$\begin{aligned}&u(t)= u_0+ \int _0^t \left[ \mathrm {i}\Delta _g u(s)-\mathrm {i}\lambda \vert u(s)\vert ^{\alpha -1} u(s)+\mu (u(s))\right] \mathrm {d}s\nonumber \\&- \mathrm {i}\int _0^t B u(s) \mathrm {d}W(s) \end{aligned}$$
    (2.7)

    holds \(\mathbb {P}\)-almost surely in \({H^{-1}(M)}\) for all \(t\in [0,T]\).

  2. (b)

    Given a probability space \(\left( \Omega ,\mathcal {F},\mathbb {P}\right) \), a Y-valued cylindrical Wiener process W on \(\Omega \) and a filtration \(\mathbb {F}=\left( \mathcal {F}_t\right) _{t\in [0,T]}\) with the usual conditions, a strong solution to Problem (1.1) is a continuous, \(\mathbb {F}\)-adapted process u with values in \({H^{-1}(M)}\) such that almost all paths are in \({C_w([0,T],{H^1(M)})}\), \(u\in L^2(\Omega \times [0,T],{H^1(M)})\) and (2.7) holds almost surely in \({H^{-1}(M)}\) for all \(t\in [0,T]\).

Remark 2.3

For \(\alpha \in (1,3]\), the solution is almost surely continuous in \(L^2(M)\). Indeed, this follows from the following mild form of the Itô Eq. (2.7),

$$\begin{aligned}&u(t)= e^{\mathrm {i}t\Delta _g}u_0+ \int _0^t e^{\mathrm {i}(t-s)\Delta _g}\left[ -\mathrm {i}\lambda \vert u(s)\vert ^{\alpha -1} u(s)+\mu (u(s))\right] \mathrm {d}s\nonumber \\&- \mathrm {i}\int _0^t e^{\mathrm {i}(t-s)\Delta _g}B u(s) \mathrm {d}W(s) \end{aligned}$$
(2.8)

almost surely for all \(t\in [0,T]\) (see, for example, the proof of Proposition 3.1 for a similar argument), since the nonlinearity with \(\alpha \in (1,3]\) maps \({H^1(M)}\) to \(L^2(M)\) by the Sobolev embedding \({H^1(M)}\hookrightarrow L^{2\alpha }(M)\).
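
Indeed, the mapping property used here can be made explicit: by the Sobolev embedding \({H^1(M)}\hookrightarrow L^{2\alpha }(M)\), which holds since \(2\alpha \le 6\), we have

$$\begin{aligned} \Vert \vert u\vert ^{\alpha -1}u \Vert _{L^2}=\Vert u \Vert _{L^{2\alpha }}^{\alpha }\lesssim \Vert u \Vert _{H^1}^{\alpha },\qquad u\in {H^1(M)}. \end{aligned}$$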

In the following definition, we fix different notions of uniqueness. As we have seen in the previous remark, it makes sense to define uniqueness by comparing solutions in \(C([0,T],L^2(M))\).

Definition 2.4

  1. (a)

    We say that the solutions to Problem (1.1) are pathwise unique in \(L^2(\Omega ;L^\infty (0,T;{H^1(M)}))\) if and only if, given two martingale solutions \(\left( \Omega ,\mathcal {F},\mathbb {P},W,\mathbb {F},u_j\right) \), \(j=1,2\), to Problem (1.1) with \(u_j\in L^2(\Omega ;L^\infty (0,T;{H^1(M)}))\), we have \(u_1(t)=u_2(t)\) almost surely in \({L^2(M)}\) for all \(t\in [0,T]\).

  2. (b)

    We say that the solutions to Problem (1.1) are unique in law in \(L^2(\Omega ;L^\infty (0,T;{H^1(M)}))\) if, given two martingale solutions \(\left( \Omega _j,\mathcal {F}_j,\mathbb {P}_j,W_j,\mathbb {F}_j,u_j\right) \), \(j=1,2\), to Problem (1.1) with \(u_j(0)=u_0\) and \(u_j\in L^2(\Omega _j;L^\infty (0,T;{H^1(M)}))\), we have \(\mathbb {P}_1^{u_1}=\mathbb {P}_2^{u_2}\) on \(C([0,T],L^2(M))\).

We continue with some auxiliary results which are either well-known or due to Burq, Gérard and Tzvetkov, [7]. The first Lemma gives us an estimate for the nonlinear term in Problem (1.1).

Lemma 2.5

Let \(q\in [2,6]\) and \(r\in (1,\infty )\) with \(\frac{1}{r'}=\frac{1}{2}+\frac{\alpha -1}{q}\). Then, we have

$$\begin{aligned} \Vert \vert u\vert ^{\alpha -1}u \Vert _{H^{1,r'}}\lesssim \Vert u \Vert _{H^1}^\alpha ,\, u\in {H^1(M)}. \end{aligned}$$

Proof

See [11], Lemma III.1.4. \(\square \)

The following Lemma deals with a Littlewood-Paley type decomposition of \(L^p(M)\) for \(p\in [2,\infty )\).

Lemma 2.6

Let \(\psi \in C_c^\infty (\mathbb {R})\), \(\varphi \in C_c^\infty (\mathbb {R}\setminus \{0\})\) with

$$\begin{aligned} 1=\psi (\lambda )+\sum _{k=1}^\infty \varphi \left( 2^{-k}\lambda \right) ,\, \lambda \in \mathbb {R}. \end{aligned}$$

Then, we have

$$\begin{aligned} \Vert f \Vert _{L^2}\eqsim \left( \Vert \psi (\Delta _g)f \Vert _{L^2}^2+\sum _{k=1}^\infty \Vert \varphi \left( 2^{-k}\Delta _g\right) f \Vert _{L^2}^2\right) ^\frac{1}{2},\, f\in L^2(M), \end{aligned}$$
(2.9)

and

$$\begin{aligned} \Vert f \Vert _{L^p}\lesssim _p \Vert \psi (\Delta _g)f \Vert _{L^p}+\left( \sum _{k=1}^\infty \Vert \varphi \left( 2^{-k}\Delta _g\right) f \Vert _{L^p}^2\right) ^\frac{1}{2},\, f\in L^p(M), \end{aligned}$$
(2.10)

for \(p\in [2,\infty )\).

Proof

Let \(p\in (1,\infty )\). By [12], page 2, or [37] Theorem 4.1 and estimate (2.9) in a more general setting, we have

$$\begin{aligned} \Vert f \Vert _{L^p}\eqsim \bigg \Vert {\left( \vert \psi (\Delta _g)f\vert ^{2}+\sum _{k=1}^\infty \vert \varphi \left( 2^{-k}\Delta _g\right) f \vert ^{2}\right) ^\frac{1}{2}}\bigg \Vert _{L^p},\qquad f\in L^p(M). \end{aligned}$$

Hence, we get (2.9) by Fubini and (2.10) by Minkowski’s inequality. \(\square \)
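
Let us spell out the Minkowski step used in the proof above: for \(p\ge 2\) and a sequence \(\left( f_k\right) _{k\in \mathbb {N}_0}\subset L^p(M)\), the triangle inequality in \(L^{\frac{p}{2}}(M)\) yields

$$\begin{aligned} \bigg \Vert {\left( \sum _{k}\vert f_k\vert ^2\right) ^\frac{1}{2}}\bigg \Vert _{L^p}=\bigg \Vert {\sum _{k}\vert f_k\vert ^2}\bigg \Vert _{L^{\frac{p}{2}}}^\frac{1}{2}\le \left( \sum _{k}\Vert \vert f_k\vert ^2 \Vert _{L^{\frac{p}{2}}}\right) ^\frac{1}{2}=\left( \sum _{k}\Vert f_k \Vert _{L^p}^2\right) ^\frac{1}{2}, \end{aligned}$$

which, applied to the blocks \(\psi (\Delta _g)f\) and \(\varphi \left( 2^{-k}\Delta _g\right) f\), gives (2.10).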

The previous Lemma indicates the importance of estimating operators of the form \(\varphi (h^2\Delta _g)\) for \(h\in (0,1]\). In the next Lemma, we state how they act in \(L^p\)-spaces and Sobolev spaces. Note that these kinds of estimates are usually called Bernstein inequalities.

Lemma 2.7

  1. (a)

    There is \(C>0\) such that for all \(1\le q\le r\le \infty \) and \(\varphi \in C_c^\infty (\mathbb {R})\), the following inequality holds:

    $$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{ L^r(M)}\le C h^{d\left( \frac{1}{r}-\frac{1}{q}\right) }\Vert \varphi (h^2\Delta _g)u \Vert _{ L^{q}}, \qquad u\in L^{q}(M), \;\;\; h\in (0,1]. \end{aligned}$$
  2. (b)

    Let us assume that \(p\in (1,\infty )\) and \(s\ge 0\). Then, for every \(\varphi \in C_c^\infty (\mathbb {R}\setminus \{0\})\), there is \(C>0\) such that

    $$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{ L^p}\le C h^s \Vert \varphi (h^2\Delta _g)u \Vert _{ H^{s,p}},\qquad u\in H^{s,p}(M),\;\; h\in (0,1]. \end{aligned}$$

Proof

  1. (a)

    See [7], Corollary 2.2. Let us emphasize that the fact that the constant C is independent of q and r follows from the proof given in [7].

  2. (b)

    Throughout this proof, we assume without loss of generality that \(s>0\), the case \(s=0\) being trivial.

Moreover, we take \(\tilde{\varphi }\in C_c^\infty (\mathbb {R}\setminus \{0\})\) with \(\tilde{\varphi }=1\) on \({\text {supp}}(\varphi )\) and define

$$\begin{aligned} f_h: [0,\infty ) \rightarrow \mathbb {R},\qquad f_h(t):=t^{-\frac{s}{2}}\tilde{\varphi }(-h^2 t) \end{aligned}$$

for \(h\in (0,1]\). Then, we have \(\varphi (-h^2t)=f_h(t)t^{\frac{s}{2}}\varphi (-h^2t)\) for all \(t\in [0,\infty )\) and \(h\in (0,1]\). Furthermore, \(f_h\) satisfies the Mihlin condition (indeed, on the support of \(\tilde{\varphi }(-h^2\cdot )\) we have \(t\eqsim h^{-2}\) and thus \(t^{-\frac{s}{2}}\eqsim h^s\); the derivatives are treated similarly)

$$\begin{aligned} \sup _{t\ge 0}\vert t^k f_h^{(k)}(t)\vert \lesssim h^s,\qquad k\in \mathbb {N}_0,\quad h\in (0,1]. \end{aligned}$$

Fact 2.20 in [47] and the Spectral Multiplier Theorem 7.6 in [24] hence imply

$$\begin{aligned} \Vert f_h(-\Delta _g) \Vert _{\mathcal {L}(L^1,L^{1,\infty })}\lesssim h^s,\qquad h\in (0,1]. \end{aligned}$$

Since we also have

$$\begin{aligned} \Vert f_h(-\Delta _g) \Vert _{\mathcal {L}(L^2)}\le \sup _{t\ge 0} \vert f_h(t)\vert \lesssim h^s, \qquad h\in (0,1], \end{aligned}$$

by the Borel functional calculus for selfadjoint operators, the Marcinkiewicz Interpolation Theorem, see [27], Theorem 1.3.2, yields

$$\begin{aligned} \Vert f_h(-\Delta _g) \Vert _{\mathcal {L}(L^p)}\lesssim h^s, \qquad h\in (0,1], \end{aligned}$$

for \(p\in (1,2]\). Since \(f_h(-\Delta _g)\) is selfadjoint on \(L^2(M)\), we obtain for \(p\in (2,\infty )\)

$$\begin{aligned} \Vert f_h(-\Delta _g) \Vert _{\mathcal {L}(L^p)}&=\sup _{u\in L^p\cap L^2: \Vert u \Vert _{L^p}\le 1} \sup _{v\in L^{p'}\cap L^2: \Vert v \Vert _{L^{p'}}\le 1}\left| \big (f_h(-\Delta _g)u,v\big )_{L^2}\right| \\&=\sup _{v\in L^{p'}\cap L^2: \Vert v \Vert _{L^{p'}}\le 1} \sup _{u\in L^p\cap L^2: \Vert u \Vert _{L^p}\le 1} \left| \big (u,f_h(-\Delta _g)v\big )_{L^2}\right| \\&=\Vert f_h(-\Delta _g) \Vert _{\mathcal {L}(L^{p'})} \lesssim h^s, \qquad h\in (0,1]. \end{aligned}$$

For every \(p\in (1,\infty )\), we therefore get

$$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{ L^p}&=\Vert f_h(-\Delta _g) \left( -\Delta _g\right) ^{\frac{s}{2}}\varphi (h^2\Delta _g)u \Vert _{ L^p} \lesssim h^s \Vert \left( -\Delta _g\right) ^{\frac{s}{2}}\varphi (h^2\Delta _g)u \Vert _{ L^p}\\&\lesssim h^s \Vert \varphi (h^2\Delta _g)u \Vert _{ H^{s,p}},\, u\in H^{s,p}(M). \end{aligned}$$

This completes the proof of Lemma 2.7. \(\square \)

Definition 2.8

A pair \((p,q)\in [2,\infty ]^2\) is called Strichartz-admissible for the dimension \(d\in \mathbb {N}\) if

$$\begin{aligned} \frac{2}{q}+\frac{d}{p}=\frac{d}{2}\, \text{ and } \, (q,p,d)\ne (2,\infty ,2). \end{aligned}$$

In the remainder of the article, we will use the exponent pairs (6, 2) and \((2,\infty )\), which are Strichartz-admissible for \(d=3\). We emphasize that the Strichartz estimates which involve the pair (6, 2) are often called the endpoint Strichartz estimates. We refer to the article [35] in which Keel and Tao treated the endpoint case for the first time.
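
With the first component of the pair denoting p and the second denoting q, the admissibility of both pairs for \(d=3\) can be checked directly from Definition 2.8:

$$\begin{aligned} (p,q)=(6,2):\quad \frac{2}{2}+\frac{3}{6}=\frac{3}{2},\qquad \qquad (p,q)=(2,\infty ):\quad \frac{2}{\infty }+\frac{3}{2}=\frac{3}{2}. \end{aligned}$$

Next, we collect the spectrally localized Strichartz estimates from [7] which will be crucial in the proof of Proposition 3.1.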

Lemma 2.9

Let M be a compact Riemannian manifold of dimension \(d\in \mathbb {N}\) and a pair \((p,q)\in [2,\infty ]^2\) be Strichartz-admissible. Then, for any \(\varphi \in C_c^\infty (\mathbb {R})\), there exist positive numbers \(\beta >0\) and \(C>0\) such that for \(h\in (0,1]\) and any interval J of length \(\vert J\vert \le \beta h\)

$$\begin{aligned} \Vert t\mapsto e^{\mathrm {i}t\Delta _g}\varphi (h^2\Delta _g)x \Vert _{L^q(J,L^p)}\le C \Vert x \Vert _{L^2},\qquad x\in L^2(M). \end{aligned}$$
(2.11)

Proof

See [7], Proposition 2.9. The result follows from the dispersive estimate for the Schrödinger group from [7], Lemma 2.5, and an application of Keel-Tao's Theorem [35] with \(U(t)=e^{\mathrm {i}t\Delta _g}\tilde{\varphi }(h^2\Delta _g){\mathbf {1}}_{J}(t)\) for some \(\tilde{\varphi }\in C_c^\infty (\mathbb {R})\) with \(\tilde{\varphi }=1\) on \({\text {supp}}(\varphi )\). \(\square \)

A similar result also holds for convolutions with the Schrödinger group.

Lemma 2.10

Let M be a compact Riemannian manifold of dimension \(d\in \mathbb {N}\) and \((p_1,q_1), (p_2,q_2)\in [2,\infty ]^2\) be Strichartz-admissible. Then, for any \(\varphi \in C_c^\infty (\mathbb {R})\), there are \(\beta >0\) and \(C>0\) such that for \(h\in (0,1]\) and any interval J of length \(\vert J\vert \le \frac{\beta h}{2}\)

$$\begin{aligned} \bigg \Vert {t\mapsto \int _{-\infty }^{t} e^{\mathrm {i}(t-s)\Delta _g}\varphi (h^2\Delta _g)f(s)\mathrm {d}s}\bigg \Vert _{L^{q_1}(J,L^{p_1})} \le C \Vert \varphi (h^2\Delta _g)f \Vert _{L^{q_2'}(J,L^{p_2'})} \end{aligned}$$

Proof

See [7], Lemma 3.4. \(\square \)

To prepare the next Lemma, we recall the following notation.

Notation 2.11

Let E be a separable Banach space, \(p\in [1,\infty )\), \(J\subset [0,\infty )\) an interval and \(\left( \Omega , \mathcal {F}, \mathbb {P},\mathbb {F}\right) \) a filtered probability space. By \(\mathcal {M}^p(J,E)\), we denote the space of \(\mathbb {F}\)-progressively measurable E-valued processes \(\xi : J\times \Omega \rightarrow E\) with \(\Vert \xi \Vert _{L^p(J\times \Omega ,E)}<\infty \).

Adapting the proof of [10, Theorem 3.10] to the present situation, we obtain a new spectrally localized stochastic Strichartz estimate for the stochastic convolution processes with the Schrödinger group.

Lemma 2.12

Let M be a compact Riemannian manifold of dimension 3. Let \(\varphi , \tilde{\varphi }\in C_c^\infty (\mathbb {R}\setminus \{0\})\) with \(\tilde{\varphi }=1\) on \({\text {supp}}(\varphi )\). Choose \(\beta >0\) as in Lemma 2.9. Let \(h\in (0,1]\) and \(J\subset [0,T]\) be an interval of length \(\vert J\vert \le \beta h\) and \(\chi _h\in C_c^\infty (\mathbb {R})\) with \({\text {supp}}(\chi _h)\subset J\). For \(B\in \mathcal {M}^2(J,{\text {HS}}(Y,L^2))\), we set

$$\begin{aligned} G(t):=\int _{0}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _h(s) \varphi (h^2\Delta _g)\mathrm {B}(s)\mathrm {d}W(s),\qquad t\in J. \end{aligned}$$

Then,

$$\begin{aligned} \Vert G \Vert _{L^2(\Omega ,{L^2(J,L^6)})}\lesssim \Vert \tilde{\varphi }(h^2\Delta _g)B \Vert _{L^2(\Omega ,L^2(J,{\text {HS}}(Y,L^2)))}. \end{aligned}$$

Proof

We abbreviate

$$\begin{aligned} F(t,s):={\mathbf {1}}_{\{s\le t\}} e^{\mathrm {i}(t-s)\Delta _g}\chi _h(s) \varphi (h^2\Delta _g)\mathrm {B}(s),\qquad t,s\in J, \end{aligned}$$

and use the Burkholder-Davis-Gundy inequality in the martingale type 2 Banach space \({L^2(J,L^6)}\), see for example [13], to estimate

$$\begin{aligned} \Vert G \Vert _{L^2(\Omega ,{L^2(J,L^6)})}^2=\mathbb {E}\bigg \Vert {\int _J F(\cdot ,s) \mathrm {d}W(s)}\bigg \Vert _{{L^2(J,L^6)}}^2 \lesssim \mathbb {E}\int _J \Vert F(\cdot ,s) \Vert _{\gamma (Y,L^2(J,L^6))}^2 \mathrm {d}s\nonumber \\ \end{aligned}$$
(2.12)

Writing out the definition of \(\gamma (Y,L^2(J,L^6))\) and using \(\varphi (h^2\Delta _g)= \varphi (h^2\Delta _g)\tilde{\varphi }(h^2\Delta _g)\), we get

$$\begin{aligned} \Vert F(\cdot ,s) \Vert _{\gamma (Y,L^2(J,L^6))}^2&=\tilde{\mathbb {E}}\bigg \Vert {t\mapsto \sum _{m=1}^{\infty }\gamma _m {\mathbf {1}}_{\{s\le t\}} e^{\mathrm {i}(t-s)\Delta _g}\chi _h(s) \varphi (h^2\Delta _g)\mathrm {B}(s)f_m}\bigg \Vert _{L^2(J,L^6)}^2\\&=\tilde{\mathbb {E}}\bigg \Vert {t\mapsto \sum _{m=1}^{\infty }e^{\mathrm {i}t\Delta _g}\varphi (h^2\Delta _g)\left[ \gamma _m e^{-\mathrm {i}s\Delta _g}\chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s)f_m\right] }\bigg \Vert _{L^2(J_{\ge s},L^6)}^2, \end{aligned}$$

where \(\left( \gamma _m\right) _{m\in \mathbb {N}}\) is a sequence of i.i.d. \(\mathcal {N}(0,1)\)-Gaussians on some probability space \(\tilde{\Omega }\). By Lemma 2.9, the operator \(e^{\mathrm {i}\cdot \Delta _g}\varphi (h^2\Delta _g)\) is bounded from \(L^2(M)\) to \({L^2(J,L^6)}\). Hence, we can take it out of the sum and obtain

$$\begin{aligned} \Vert F(\cdot ,s) \Vert _{\gamma (Y,L^2(J,L^6))}^2&\lesssim \tilde{\mathbb {E}}\bigg \Vert { \sum _{m=1}^{\infty }\gamma _m \ e^{-\mathrm {i}s\Delta _g}\chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s)f_m}\bigg \Vert _{L^2}^2\\&=\Vert e^{-\mathrm {i}s\Delta _g}\chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s) \Vert _{\gamma (Y,L^2)}^2\\&\eqsim \Vert e^{-\mathrm {i}s\Delta _g}\chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s) \Vert _{{\text {HS}}(Y,L^2)}^2\\&\lesssim \Vert \chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s) \Vert _{{\text {HS}}(Y,L^2)}^2. \end{aligned}$$

Finally, inserting the last estimate in (2.12) yields

$$\begin{aligned} \Vert G \Vert _{L^2(\Omega ,{L^2(J,L^6)})}^2&\lesssim \mathbb {E}\int _J \Vert \chi _h(s) \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(s) \Vert _{{\text {HS}}(Y,L^2)}^2 \mathrm {d}s\\&\lesssim \Vert \tilde{\varphi }(h^2\Delta _g)B \Vert _{L^2(\Omega ,L^2(J,{\text {HS}}(Y,L^2)))}^2. \end{aligned}$$

The proof of Lemma 2.12 is thus completed. \(\square \)

Lemma 2.13

Let \(a\in (0,\infty )\), \(b\in (0,1)\), let \(X_k:\Omega \rightarrow [0,\infty ]\), \(k\in \mathbb {N}_0\), be random variables which satisfy \(\mathbb {E}X_k \le a\, b^k\) for all \(k\in \mathbb {N}_0\). Then for every \(\varepsilon \in (0,1)\) there exists a random variable \(C:\Omega \rightarrow [0,\infty ]\) with \(\mathbb {P}(C<\infty )=1\) which satisfies \(X_k\le C b^{(1-\varepsilon )k}\) for all \(k\in \mathbb {N}_0\).

Proof

Let \(C:=\sum _{j=0}^\infty \xi _j\), where \(\xi _j\), \(j\in \mathbb {N}_0\), are nonnegative random variables given by \(\xi _j:= X_j \,b^{-(1-\varepsilon )j}\) for \(j\in \mathbb {N}_0\). Then, C is a non-negative random variable and from monotone convergence and the assumptions, we infer

$$\begin{aligned} \mathbb {E}\big [ C\big ]&= \sum _{j=0}^\infty \mathbb {E}\xi _j = \sum _{j=0}^\infty b^{-(1-\varepsilon )j}\, \mathbb {E}X_j \le \sum _{j=0}^\infty b^{-(1-\varepsilon )j} a\, b^j\\&= a \sum _{j=0}^\infty b^{\varepsilon j} = \frac{a}{1-b^\varepsilon }<\infty . \end{aligned}$$

As a consequence, we obtain \(\mathbb {P}(C<\infty )=1\) and for all \(k\in \mathbb {N}_0\), we have

$$\begin{aligned} X_k = \xi _k\, b^{(1-\varepsilon )k}\le \sum _{j=0}^\infty \xi _j\, b^{(1-\varepsilon )k} = C \, b^{(1-\varepsilon )k}. \end{aligned}$$

\(\square \)
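
Let us already indicate how Lemma 2.13 will be used: in Step 1 of the proof of Proposition 3.1 below, it is applied to the random variables \(X_k:=\sum _{j} \Vert G_{I_j,2^{-k/2}} \Vert _{L^2(I_j',L^6)}^2\) built from the stochastic convolutions defined there, which by the stochastic Strichartz estimate of Lemma 2.12 satisfy \(\mathbb {E}X_k\lesssim 2^{-k}\), i.e. the hypothesis of Lemma 2.13 with \(b=\frac{1}{2}\). The resulting almost sure bound is exactly (3.7).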

3 Uniqueness

In this section, we prove the pathwise uniqueness of solutions of (1.1). A key ingredient for this result is an \(L^2_tL^p_x\)-estimate for solutions, valid for arbitrarily large p, with moderate growth of the bound in p.

Proposition 3.1

Let \(d=3\) and \(\alpha \in (1,3]\). Let \(T>0\) and let \(\left( \Omega ,\mathcal {F},\mathbb {P},W,\mathbb {F},u\right) \), where \(\mathbb {F}=\left( \mathcal {F}_t\right) _{t\in [0,T]}\), be a martingale solution of Problem (1.1). Then, there are a measurable set \(\Omega _\infty \subset \Omega \) with \(\mathbb {P}(\Omega _\infty )=1\) and a random variable \({\mathfrak {C}}:\Omega \rightarrow [0,\infty ]\) with \({\mathfrak {C}}<\infty \) on \(\Omega _\infty \) such that for all \(\omega \in \Omega _\infty \), \(p\in [6,\infty )\) and intervals \(J\subset [0,T]\), we have

$$\begin{aligned} \Vert u(\cdot ,\omega ) \Vert _{L^2(J,L^p)}\le {\mathfrak {C}}(\omega ) \left( 1+\left( \vert J\vert p\right) ^\frac{1}{2}\right) . \end{aligned}$$

Let us emphasize that the random constant \({\mathfrak {C}}\) in the estimate is independent of p. We further remark that this estimate of \(L^p\)-norms is a substitute for the \(L^\infty \)-bound for solutions in the 2D-setting, see [9], and complements the inequality, for \(p\in [1,6]\),

$$\begin{aligned} \Vert u \Vert _{L^2(J,L^p)}\lesssim \vert J\vert ^\frac{1}{2}\Vert u \Vert _{L^\infty (J,H^1)}<\infty \qquad \text {a.s.}, \end{aligned}$$

which we get from the Sobolev embedding and the energy estimate for martingale solutions. Before we start with the proof, we introduce an equidistant partition of the time interval.

Notation 3.2

Let \(I=[a,b]\) with \(0\le a<b<\infty \). For \(\rho >0\) and \(N:=\lfloor \frac{b-a}{\rho }\rfloor \), i.e. \(N=\max \{n\in \mathbb {N}:n\le \frac{b-a}{\rho }\}\), the family \(\left( I_j\right) _{j=0}^N\) defined by

$$\begin{aligned} I_j:&= \left[ a+j\rho ,a+(j+1)\rho \right] ,\quad j\in \{0,\dots ,N-1\},\\ I_N:&= \left[ a+N\rho ,b\right] \end{aligned}$$

is called the \(\rho \)-partition of I. Observe that

$$\begin{aligned} \vert I_j\vert \le \rho ,\quad j=0,\dots ,N,\qquad I=\bigcup _{j=0}^N I_j,\qquad I_j^\circ \cap I_k^\circ =\emptyset ,\quad j\ne k. \end{aligned}$$
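
For instance, for \(I=[0,1]\) and \(\rho =0.3\) we have \(N=3\), and the \(\rho \)-partition consists of

$$\begin{aligned} I_0=[0,0.3],\qquad I_1=[0.3,0.6],\qquad I_2=[0.6,0.9],\qquad I_3=[0.9,1]. \end{aligned}$$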

Proof of Proposition 3.1

Let us choose and fix \(T>0\) and a martingale solution \(\left( \Omega ,\mathcal {F},\mathbb {P},W,\mathbb {F},u\right) \) to Problem (1.1). We choose \(\beta >0\) as in Lemma 2.9 and in Lemma 2.10.

  • Step 1 Let us choose \(h\in (0,1]\) and take the \(\frac{\beta h}{4}\)-partition \(\left( I_j\right) _{j=0}^{N_{T,h}}\) of the interval [0, T] in the sense of Notation 3.2. Furthermore, we define a cover \(\left( I_j'\right) _{j=0}^{N_{T,h}}\) of \(\left( I_j\right) _{j=0}^{N_{T,h}}\) and a sequence \(\left( m_j\right) _{j=0}^{N_{T,h}}\) by

    $$\begin{aligned} I_j':=\left( I_j+\left[ -\frac{\beta h}{8},\frac{\beta h}{8}\right] \right) \cap [0,T], \, m_j:=\frac{j \beta h}{4}+\frac{\beta h}{8},\qquad j=0,\dots ,N_{T,h},\nonumber \\ \end{aligned}$$
    (3.1)

    and a sequence \(\left( \chi _{I_j}\right) _{j=0}^{N_{T,h}}\subset C_c^\infty ([0,\infty ))\) by \(\chi _{I_j}:=\chi \left( (\beta h)^{-1}(\cdot -m_j)\right) \) for some fixed \(\chi \in C_c^\infty (\mathbb {R})\) with \({\mathbf {1}}_{[-\frac{1}{8},\frac{1}{8}]}\le \chi \le {\mathbf {1}}_{[-\frac{1}{4},\frac{1}{4}]}\). Then, we have

    $$\begin{aligned} \chi _{I_j}=1 \quad \text {on } I_j,\qquad {\text {supp}}(\chi _{I_j}) \subset I_j', \quad \Vert \chi _{I_j}' \Vert _{L^\infty (\mathbb {R})}\le (\beta h)^{-1} \Vert \chi ' \Vert _{L^\infty (\mathbb {R})}.\nonumber \\ \end{aligned}$$
    (3.2)

    We fix \(\varphi , \tilde{\varphi }\in C_c^\infty (\mathbb {R}\setminus \{0\})\) with \(0\le \varphi ,\tilde{\varphi }\le 1\) and \(\tilde{\varphi }=1\) on \({\text {supp}}(\varphi )\). In order to localize the solution u spectrally and in time, we set

    $$\begin{aligned} v_{I_j}(t)=\chi _{I_j}(t) \varphi (h^2\Delta _g)u(t),\;\; t\in [0,T], \qquad j=0,\dots ,N_{T,h}, \end{aligned}$$

    and apply the Itô formula to \(\varPhi _j \in C^{1,2}(I_j'\times {H^{-1}(M)},{H^{-3}(M)})\) defined by

    $$\begin{aligned} \varPhi _j(s,x)=e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s)\varphi (h^2\Delta _g)x,\qquad s\in I_j',\quad x\in {H^{-3}(M)}, \end{aligned}$$

    to get the following mild form representation of \(v_{I_j}\), for \(j=1,\dots , N_{T,h}\),

    $$\begin{aligned} v_{I_j}(t)= & {} \varPhi _j(t, u(t)) \nonumber \\= & {} \int _{\min I_j'}^t \left[ -\mathrm {i}\Delta _g e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s)\varphi (h^2\Delta _g)u(s)+e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}'(s)\varphi (h^2\Delta _g)u(s)\right] \mathrm {d}s\nonumber \\&+\int _{\min I_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s) \varphi (h^2\Delta _g)\left[ \mathrm {i}\Delta _g u(s)-\mathrm {i}\lambda \vert u(s)\vert ^{\alpha -1} u(s)+\mu (u(s))\right] \mathrm {d}s\nonumber \\&-\mathrm {i}\int _{\min I_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s) \varphi (h^2\Delta _g)B u(s)\mathrm {d}W(s)\nonumber \\= & {} \int _{\min I_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}'(s)\varphi (h^2\Delta _g)u(s)\mathrm {d}s\nonumber \\&+\int _{\min I_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s) \varphi (h^2\Delta _g)\left[ -\mathrm {i}\lambda \vert u(s)\vert ^{\alpha -1} u(s)+\mu (u(s))\right] \mathrm {d}s\nonumber \\&-\mathrm {i}\int _{\min I_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s) \varphi (h^2\Delta _g)B u(s)\mathrm {d}W(s) \end{aligned}$$
    (3.3)

    in \({H^{-3}(M)}\) almost surely for all \(t\in I_j'\). Because of the regularity of each term (recall that \(\alpha \le 3\)), the identity (3.3) holds in \(L^2(M)\) almost surely for all \(t\in I_j'\). Analogously, we get

    $$\begin{aligned} v_{I_0}(t)= & {} e^{\mathrm {i}t\Delta _g}v_{I_0}(0)+\int _{0}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_0}'(s)\varphi (h^2\Delta _g)u(s)\mathrm {d}s\nonumber \\&+\int _{0}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_0}(s) \varphi (h^2\Delta _g)\left[ -\mathrm {i}\lambda \vert u(s)\vert ^{\alpha -1} u(s)+\mu (u(s))\right] \mathrm {d}s\nonumber \\&-\mathrm {i}\int _{0}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_0}(s) \varphi (h^2\Delta _g)B u(s)\mathrm {d}W(s) \end{aligned}$$
    (3.4)

    in \(L^2(M)\) almost surely for all \(t\in I_0'\). We abbreviate

    $$\begin{aligned} G_{I_j,h}(t):=\int _{\min I_j^\prime }^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{I_j}(s) \varphi (h^2\Delta _g)B u(s)\mathrm {d}W(s),\quad t\in [0,T].\nonumber \\ \end{aligned}$$
    (3.5)

    We use the stochastic Strichartz estimate from Lemma 2.12, the properties of the partitions \(\left( I_j\right) _{j=0}^{N_{T,h}}\) and \(\left( I_j^\prime \right) _{j=0}^{N_{T,h}}\) and Lemma 2.7 b) to estimate

    $$\begin{aligned} \mathbb {E}\sum _{j=0}^{{N_{T,h}}} \Vert G_{I_j,h} \Vert _{L^2(I_j',L^6)}^2&\lesssim \, \mathbb {E}\sum _{j=0}^{{N_{T,h}}} \int _{I_j^\prime }\Vert \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(u(s)) \Vert _{{\text {HS}}(Y,L^2)}^2 \mathrm {d}s\\&\le 2 \mathbb {E}\sum _{j=0}^{{N_{T,h}}} \int _{I_j}\Vert \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(u(s)) \Vert _{{\text {HS}}(Y,L^2)}^2 \mathrm {d}s\\&= 2\mathbb {E}\int _0^T\Vert \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(u(s)) \Vert _{{\text {HS}}(Y,L^2)}^2 \mathrm {d}s\\&\lesssim \, h^2 \mathbb {E}\int _0^T \Vert \tilde{\varphi }(h^2\Delta _g)\mathrm {B}(u(s)) \Vert _{{\text {HS}}(Y,H^1)}^2 \mathrm {d}s. \end{aligned}$$

    Since \(\tilde{\varphi }(h^2\Delta _g)\) is a contractive operator from \({H^1(M)}\) to \({H^1(M)}\) (recall \(\Vert \tilde{\varphi } \Vert _\infty \le 1\)) and B is bounded from \({H^1(M)}\) to \({\text {HS}}(Y,{H^1(M)})\) by Assumption 2.1, we conclude

    $$\begin{aligned} \mathbb {E}\sum _{j=0}^{{N_{T,h}}} \Vert G_{I_j,h} \Vert _{L^2(I_j',L^6)}^2&\lesssim \, h^2\, \mathbb {E}\int _0^T \Vert u(s) \Vert _{H^1}^2 \mathrm {d}s, \qquad h\in (0,1). \end{aligned}$$
    (3.6)

    We emphasize that in view of Lemma 2.12 the implicit constant in (3.6) is independent of \(h\in (0,1)\). Consequently, Lemma 2.13 shows that for any \(\varepsilon \in (0,1)\) there is a random variable \(C:\Omega \rightarrow [0,\infty ]\) with \(C<\infty \) almost surely such that

    $$\begin{aligned} \sum _{j=0}^{{N_{T,2^{-k/2}}}} \Vert G_{I_j,2^{-k/2}} \Vert _{L^2(I_j',L^6)}^2\le 2^{-k(1-\varepsilon )} C \hbox { on}\ \Omega \qquad \text {for all } k\in \mathbb {N}_0. \end{aligned}$$
    (3.7)
  • Step 2 Now, let us choose and fix an interval \(J\subset [0,T]\), let \(\varepsilon \in (0,1/2)\), \(k\in \mathbb {N}_0\) and \(h=2^{-k/2}\). Let us define \(\Omega _k\) as the intersection of the full probability sets from (3.3), (3.4) and (3.7) for \(h=2^{-k/2}\) and of the full probability set on which \(u\in L^\infty (0,T;H^1(M))\). Moreover, we set \(\Omega _\infty :=\bigcap _{j=0}^\infty \Omega _{j}\) and notice that \(\mathbb {P}(\Omega _\infty )=1\). Let us next choose and fix a path \(\omega \in \Omega _\infty \). In the rest of the argument, we skip the dependence on \(\omega \) to keep the notation simple. Let us pick those intervals \(J_0,\dots ,J_N\) from the partition \((I_j)_{j=0}^{N_{T,h}}\) which cover the given interval J. The associated intervals in \((I_j^\prime )_{j=0}^{N_{T,h}}\) will be denoted by \(J_0^\prime ,\dots , J_N^\prime \). From (3.3) and (3.5) we deduce for \(j=1,\dots ,N\) that

    $$\begin{aligned} v_{J_j}(t)= & {} \int _{\min J_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\varphi (h^2\Delta _g)\left[ \chi _{J_j}'(s) u(s) + \chi _{J_j}(s) \mu (u(s)) \right] \mathrm {d}s\nonumber \\&-\mathrm {i}\lambda \int _{\min J_j'}^t e^{\mathrm {i}(t-s)\Delta _g}\chi _{J_j}(s) \varphi (h^2\Delta _g)\vert u(s)\vert ^{\alpha -1} u(s) \mathrm {d}s-\mathrm {i}\, G_{J_j,h}(t)\nonumber \\ \end{aligned}$$
    (3.8)

    holds in \(L^2(M)\) almost surely for all \(t\in J_j'\). Note that (3.1) implies \(\vert J_j'\vert \le \frac{\beta h}{2}\). Due to the choice of \(\beta >0\), we can apply Lemma 2.10 with the Strichartz-admissible pairs (6, 2) and \((2,\infty )\). Hence, we obtain for \(j=1,\dots ,N\)

    $$\begin{aligned}&\bigg \Vert {t\mapsto \int _{\min J_j'}^{t} e^{\mathrm {i}(t-s)\Delta _g}\varphi (h^2\Delta _g)\left[ \chi _{J_j}'(s) u(s) + \chi _{J_j}(s) \mu (u(s)) \right] \mathrm {d}s}\bigg \Vert _{L^{2}(J_j',L^{6})}\\&\qquad \lesssim \, \bigg \Vert {\varphi (h^2\Delta _g)\left[ \chi _{J_j}' u + \chi _{J_j} \mu (u) \right] }\bigg \Vert _{L^{1}(J_j',L^{2})} \end{aligned}$$

    and

    $$\begin{aligned}&\bigg \Vert {t\mapsto \int _{\min J_j'}^{t} e^{\mathrm {i}(t-s)\Delta _g}\varphi (h^2\Delta _g)\chi _{J_j}(s) \vert u(s)\vert ^{\alpha -1} u(s) \mathrm {d}s}\bigg \Vert _{L^{2}(J_j',L^{6})}\\&\qquad \lesssim \, \Vert \varphi (h^2\Delta _g)\chi _{J_j} \vert u\vert ^{\alpha -1} u \Vert _{L^{2}(J_j',L^{\frac{6}{5}})}. \end{aligned}$$

    Let us emphasize that the implicit constants in these estimates are independent of h and \(\omega \). Combining the latter estimates with (3.8) yields

    $$\begin{aligned}&\Vert v_{J_j} \Vert _{L^2(J_j,L^6)}\le \Vert v_{J_j} \Vert _{L^2(J_j^\prime ,L^6)} \lesssim \, \Vert \chi _{J_j}^\prime \varphi (h^2\Delta _g)u \Vert _{L^1(J_j^\prime ,L^2)}\nonumber \\&\qquad +\Vert \chi _{J_j}\varphi (h^2\Delta _g)\vert u \vert ^{\alpha -1} u \Vert _{L^2(J_j^\prime ,L^{\frac{6}{5}})}\nonumber \\&\qquad +\Vert \chi _{J_j}\varphi (h^2\Delta _g)\mu (u) \Vert _{L^1(J_j^\prime ,L^2)} +\Vert G_{J_j,h} \Vert _{L^2(J_j^\prime ,L^6)} \end{aligned}$$
    (3.9)

    for \(j=1,\dots ,N\). Using similar arguments and taking into account Lemma 2.9 for the evolution of the initial value, we deduce

    $$\begin{aligned} \Vert v_{J_0} \Vert _{L^2(J_0,L^6)}\le \Vert v_{J_0} \Vert _{L^2(J_0^\prime ,L^6)}\lesssim & {} \, \Vert v_{J_0}(\min J_0^\prime ) \Vert _{L^2}+ \Vert \chi _{J_0}^\prime \varphi (h^2\Delta _g)u \Vert _{L^1(J_0^\prime ,L^2)} \nonumber \\&+\Vert \chi _{J_0}\varphi (h^2\Delta _g)\vert u \vert ^{\alpha -1} u \Vert _{L^2(J_0^\prime ,L^{\frac{6}{5}})}\nonumber \\&+\Vert \chi _{J_0}\varphi (h^2\Delta _g)\mu (u) \Vert _{L^1(J_0^\prime ,L^2)} \nonumber \\&+\Vert G_{J_0,h} \Vert _{L^2(J_0^\prime ,L^6)}. \end{aligned}$$
    (3.10)

    Note that \(v_{J_0}(\min J_0^\prime )=0\) if \(I_0\ne J_0\). Next, we estimate the terms on the right hand side of (3.9) and (3.10). By properties (3.2), Lemma 2.7 b) and the Hölder inequality with \(\vert J_j^\prime \vert \lesssim h\), we get

    $$\begin{aligned} \Vert \chi _{J_j}^\prime \varphi (h^2\Delta _g)u \Vert _{L^1(J_j^\prime ,L^2)}&\lesssim \, h^{-1} \Vert \varphi (h^2\Delta _g)u \Vert _{L^1(J_j^\prime ,L^2)} \lesssim \, \Vert \varphi (h^2\Delta _g)u \Vert _{L^1(J_j^\prime ,H^1)}\nonumber \\&\lesssim \, h^\frac{1}{2} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j^\prime ,H^1)}. \end{aligned}$$

    The Hölder inequality with \(\vert J_j^\prime \vert \lesssim h\), Lemma 2.7 b) and the boundedness of the operators \(\varphi (h^2\Delta _g)\) and \(\mu \) in \({H^1(M)}\) yield

    $$\begin{aligned} \Vert \chi _{J_j}\varphi (h^2\Delta _g)\mu (u) \Vert _{L^1(J_j^\prime ,L^2)}&\lesssim \, h \Vert \chi _{J_j}\varphi (h^2\Delta _g)\mu (u) \Vert _{L^\infty (J_j^\prime ,L^2)}\\&\le h \Vert \varphi (h^2\Delta _g)\mu (u) \Vert _{L^\infty (0,T;L^2)}\\&\lesssim \, h^2 \Vert \varphi (h^2\Delta _g)\mu (u) \Vert _{L^\infty (0,T;H^1)}\\&\lesssim \, h^2 \Vert u \Vert _{L^\infty (0,T;H^1)}. \end{aligned}$$

    Next we apply Lemma 2.5 with \(r^\prime =\frac{6}{\alpha +2}\) and \(q=6\). Since \(\alpha \le 3\) we have \(r^\prime \ge \frac{6}{5}\) and hence we obtain the following estimate

    $$\begin{aligned} \Vert \vert v \vert ^{\alpha -1} v \Vert _{H^{1,\frac{6}{5}}} \lesssim \, \Vert \vert v \vert ^{\alpha -1} v \Vert _{H^{1,\frac{6}{\alpha +2}}} \lesssim \, \Vert v \Vert _{H^1}^{\alpha },\qquad v\in {H^1(M)}. \end{aligned}$$
    (3.11)

    Moreover, note that \(\sup _{h\in (0,1)}\Vert \varphi (h^2\Delta _g) \Vert _{\mathcal {L}(H^{1,\frac{6}{5}})}<\infty \). Together with (3.11), the Hölder inequality, and Lemma 2.7 b), this implies

    $$\begin{aligned} \Vert \chi _{J_j}\varphi (h^2\Delta _g)\vert u \vert ^{\alpha -1} u \Vert _{L^2\left( J_j^\prime ,L^{\frac{6}{5}}\right) }&\lesssim \, h^\frac{1}{2}\Vert \varphi (h^2\Delta _g)\vert u \vert ^{\alpha -1} u \Vert _{L^\infty \left( 0,T;L^{\frac{6}{5}}\right) }\\&\lesssim \, h^\frac{3}{2}\Vert \varphi (h^2\Delta _g)\vert u \vert ^{\alpha -1} u \Vert _{L^\infty \left( 0,T;H^{1,\frac{6}{5}}\right) }\\&\lesssim \, h^\frac{3}{2}\Vert \vert u \vert ^{\alpha -1} u \Vert _{L^\infty \left( 0,T;H^{1,\frac{6}{5}}\right) }\lesssim \, h^\frac{3}{2}\Vert u \Vert _{L^\infty \left( 0,T;H^{1}\right) }^\alpha , \end{aligned}$$

    where we again emphasize that the implicit constants in the inequalities above and below are independent of \(h\in (0,1)\). Inserting the last three estimates in (3.9) and (3.10) yields for \(j=1,\dots ,N\)

    $$\begin{aligned} \Vert v_{J_j} \Vert _{L^2(J_j,L^6)}\lesssim & {} \, h^\frac{1}{2} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2\left( J_j^\prime ,H^1\right) } + h^{\frac{3}{2}} \Vert u \Vert _{L^\infty \left( 0,T;H^1\right) }^\alpha \nonumber \\&+h^2 \Vert u \Vert _{L^\infty (0,T;H^1)} +\Vert G_{J_j,h} \Vert _{L^2(J_j^\prime ,L^6)}, \end{aligned}$$
    (3.12)
    $$\begin{aligned} \Vert v_{J_0} \Vert _{L^2(J_0,L^6)}\lesssim & {} \, h\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1}+ h^\frac{1}{2} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_0^\prime ,H^1)}\nonumber \\&+ h^{\frac{3}{2}} \Vert u \Vert _{L^\infty (0,T;H^1)}^\alpha \nonumber \\&+h^2 \Vert u \Vert _{L^\infty (0,T;H^1)} +\Vert G_{J_0,h} \Vert _{L^2(J_0^\prime ,L^6)}. \end{aligned}$$
    (3.13)

    We square the estimates (3.12) and (3.13) and sum them up over \(j\in \{0,...,N\}\). Since \(\chi _{J_j}=1\) on \(J_j\), by using (3.7) and \(N\le N_{T,h}=\left\lfloor \frac{4 T}{\beta h}\right\rfloor \), we conclude that

    $$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{L^{2}(J,L^6)}^{2}\le & {} \sum _{j=0}^N \Vert \chi _{J_j}\varphi (h^2\Delta _g)u \Vert _{L^{2}(J_j,L^6)}^{2}=\sum _{j=0}^N \Vert v_{J_j} \Vert _{L^{2}(J_j,L^6)}^{2}\nonumber \\\lesssim & {} \,h^{2}\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1}^{2}\nonumber \\&+\sum _{j=0}^N \left[ h \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j^\prime ,H^1)}^{2} + h^{3} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2\alpha }\right] \nonumber \\&+\sum _{j=0}^N \left[ h^{4} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2}\right] +\sum _{j=0}^{{N_{T,h}}} \Vert G_{I_j,h} \Vert _{L^2(I_j',L^6)}^2 \nonumber \\\lesssim & {} \,h^{2}\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1}^{2}\nonumber \\&+\sum _{j=0}^N \left[ h \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j^\prime ,H^1)}^{2} + h^{3} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2\alpha }\right] \nonumber \\&+\sum _{j=0}^N \left[ h^{4} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2}\right] +h^{2(1-\varepsilon )} C \nonumber \\\lesssim & {} \, h^{2}\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1}^{2} +h \sum _{j=0}^N \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j^\prime ,H^1)}^{2} \nonumber \\&+ h^{2} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2\alpha } +h^{3} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2} +h^{2(1-\varepsilon )} C. \end{aligned}$$
    (3.14)

    Below, we will use the following additional notation

    $$\begin{aligned} J_{N+1}:=\left( \bigcup _{j=0}^N J_j^\prime \right) \setminus \left( \bigcup _{j=0}^N J_j\right) ,\qquad J^h:=\bigcup _{j=0}^{N+1} J_j. \end{aligned}$$

    Since

    $$\begin{aligned}&\sum _{j=0}^N \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j^\prime ,H^1)}^{2} \le 2\sum _{j=0}^{N+1} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J_j,H^1)}^2\\&=2 \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J^h,H^1)}^{2} \end{aligned}$$

    we infer that

    $$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{L^{2}(J,L^6)}^{2}&\lesssim \, h^{2}\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1}^{2} +h \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J^h,H^1)}^{2} \nonumber \\&\quad + h^{2} \Vert u \Vert _{L^\infty (0,T;H^1)}^{2 \alpha } +h^{3} \Vert u \Vert _{L^\infty (0,T;H^1)}^2 +h^{2(1-\varepsilon )} C. \end{aligned}$$

    Let us now choose and fix \(p\ge 6\). Since \(u\in L^\infty (0,T;{H^1(M)})\), by Lemma 2.7 a) we deduce for \(h=2^{-k/2}\), \(k\in \mathbb {N}_0\), that

    $$\begin{aligned} \Vert \varphi (h^2\Delta _g)u \Vert _{L^{2}(J,L^p)}\lesssim & {} \, h^{3\left( \frac{1}{p}-\frac{1}{6}\right) }\Vert \varphi (h^2\Delta _g)u \Vert _{L^{2}(J,L^6)}\nonumber \\\lesssim & {} \, h^{\frac{3}{p}+\frac{1}{2}}\Vert \varphi (h^2\Delta _g)u(\min J_0^\prime ) \Vert _{H^1} +h^{\frac{3}{p}} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J^h,H^1)}\nonumber \\&+ h^{\frac{3}{p}+\frac{1}{2}} \Vert u \Vert _{L^\infty (0,T;H^1)}^\alpha +h^{\frac{3}{p}+1} \Vert u \Vert _{L^\infty (0,T;H^1)} +h^{\frac{3}{p}+\frac{1}{2}-\varepsilon } C \nonumber \\\lesssim & {} \, h^{\frac{3}{p}+\frac{1}{2}}+h^{\frac{3}{p}} \Vert \varphi (h^2\Delta _g)u \Vert _{L^2(J^h,H^1)} + h^{\frac{3}{p}+\frac{1}{2}-\varepsilon } +h^{\frac{3}{p}+1} \end{aligned}$$
    (3.15)

    with the implicit constant being independent of k and p because the constant C from Lemma 2.7 a) does not depend on p.

  • Step 3 In the last step, we use inequality (3.15) and the Littlewood-Paley theory to derive the estimate stated in the Proposition. To this end, we set \(h_k:=2^{-\frac{k}{2}}\) and \(k_0:=\min \left\{ k: \vert J\vert > \frac{\beta h_{k}}{4}\right\} \). As in the previous step, we fix a path \(\omega \in \Omega _\infty :=\bigcap _{k=0}^\infty \Omega _{k}\) and, in the rest of the argument, we skip the dependence on \(\omega \) to keep the notation simple. Moreover, we choose \(\psi \in C_c^\infty (\mathbb {R})\), \(\varphi \in C_c^\infty (\mathbb {R}\setminus \{0\})\) such that

$$\begin{aligned} 1=\psi (\lambda )+\sum _{k=1}^\infty \varphi \left( 2^{-k}\lambda \right) ,\qquad \lambda \in \mathbb {R}. \end{aligned}$$

Then, Lemma 2.6, the embedding \(\ell ^1(\mathbb {N})\hookrightarrow \ell ^2(\mathbb {N})\), (3.15), and the hypothesis \(\varepsilon \in (0,1/2)\) from Step 2 imply that

$$\begin{aligned} \Vert u \Vert _{L^{2}(J,L^p)}\lesssim & {} \, \bigg \Vert {\left( \Vert \psi (\Delta _g)u \Vert _{L^p}^2+\sum _{k=1}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^p}^2\right) ^\frac{1}{2}}\bigg \Vert _{L^{2}(J)}\nonumber \\= & {} \, \left( \Vert \psi (\Delta _g)u \Vert _{L^2(J,L^p)}^2+\sum _{k=1}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,L^p)}^2\right) ^\frac{1}{2}\nonumber \\\le & {} \, \Vert \psi (\Delta _g)u \Vert _{L^2(J,L^p)}+\sum _{k=1}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,L^p)}\nonumber \\\lesssim & {} \, \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^p)}+\sum _{k=1}^{k_0-1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}\nonumber \\&+\sum _{k=k_0}^\infty 2^{-\frac{3k}{2 p}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)} \nonumber \\&+\sum _{k=k_0}^\infty \left[ 2^{-\frac{k}{2}\left( \frac{3}{p}+\frac{1}{2}\right) } +2^{-\frac{k}{2}\left( \frac{3}{p}+1\right) } +2^{-\frac{k}{2} \left( \frac{3}{p}+\frac{1}{2}-\varepsilon \right) } \right] \nonumber \\\le & {} \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^p)}+\sum _{k=1}^{k_0-1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}\nonumber \\&+\sum _{k=k_0}^\infty 2^{-\frac{3k}{2 p}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)}+\sum _{k=k_0}^\infty \left[ 2^{-\frac{k}{4}} +2^{-\frac{k}{2}} +2^{-\frac{k}{4}(1-2\varepsilon ) } \right] \nonumber \\\lesssim & {} \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^p)}+\sum _{k=1}^{k_0-1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}\nonumber \\&+ \left( \sum _{k=k_0}^\infty 2^{-\frac{3k}{p}} \right) ^\frac{1}{2} \left( \sum _{k=k_0}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)}^2\right) ^\frac{1}{2}+1. \end{aligned}$$
(3.16)

From Lemma 2.7 a) with \(h=1\), we conclude

$$\begin{aligned} \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^p)} \lesssim \, \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^2)} \lesssim \, \Vert u \Vert _{L^{2}(J,L^2)} \lesssim \, 1. \end{aligned}$$
(3.17)

From Lemma 2.7 a) and the Sobolev embedding, we infer

$$\begin{aligned} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}&\lesssim 2^{-k(\frac{3}{2 p}-\frac{1}{4})} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^6)}\\&\lesssim 2^{\frac{k}{4}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,H^1)} \end{aligned}$$

for \(k\in \left\{ 1,\dots ,k_0-1\right\} \). From the definition of \(k_0\), we have \(\vert J\vert \eqsim 2^{-\frac{k_0}{2}}\). Thus, we get

$$\begin{aligned} \sum _{k=1}^{k_0-1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}\lesssim & {} \left( \sum _{k=1}^{k_0-1} 2^{\frac{k}{2}}\right) ^\frac{1}{2} \left( \sum _{k=1}^{k_0-1}\Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,H^1)}^2\right) ^\frac{1}{2}\nonumber \\\lesssim & {} 2^{\frac{k_0}{4}} \Vert u \Vert _{L^{2}(J,H^1)} \lesssim \vert J\vert ^{-\frac{1}{2}} \vert J\vert ^{\frac{1}{2}}\lesssim 1. \end{aligned}$$
(3.18)

We proceed with the estimate of the sums over \(k\ge k_0\). The fact that we have \(J^{h_{k+1}}\subset J^{h_{k}}\) for all \(k\in \mathbb {N}\) leads to

$$\begin{aligned} \sum _{k=k_0}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)}^2&=\sum _{k: \vert J\vert> \frac{\beta h_k}{4}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)}^2\nonumber \\&\le \sum _{k: \vert J\vert > \frac{\beta h_k}{4}} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_{k_0}},H^1)}^2\nonumber \\&\lesssim \Vert u \Vert _{L^2(J^{h_{k_0}},H^1)}^2\le \vert J^{h_{k_0}}\vert \,\Vert u \Vert _{L^\infty (0,T;H^1)}^2. \end{aligned}$$

Using \(\vert J^{h_{k_0}}\vert \le 3 \frac{\beta h_{k_0}}{4}+\vert J\vert \le 4\vert J\vert \) and \(u\in L^\infty (0,T;{H^1(M)})\) almost surely, we obtain

$$\begin{aligned} \sum _{k=k_0}^\infty \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J^{h_k},H^1)}^2\lesssim \vert J\vert . \end{aligned}$$
(3.19)

Finally, the calculation

$$\begin{aligned} \lim _{p\rightarrow \infty }\frac{1}{p}\sum _{k=1}^\infty 2^{-\frac{3k}{p}}=\lim _{p\rightarrow \infty }\frac{1}{p}\left( \frac{1}{1-2^{-\frac{3}{p}}}-1\right) =\lim _{p\rightarrow \infty }\frac{1}{p\left( 2^{\frac{3}{p}}-1\right) }=\frac{1}{3\log (2)} \end{aligned}$$

shows that the function \([6,\infty )\ni p\mapsto \frac{1}{p}\sum _{k=1}^\infty 2^{-\frac{3k}{p}}\) is bounded and hence,

$$\begin{aligned} \sum _{k=1}^\infty 2^{-\frac{3k}{p}}\lesssim \, p. \end{aligned}$$
(3.20)
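The last equality in the computation above follows from the elementary expansion \(2^{\frac{3}{p}}=e^{\frac{3\log (2)}{p}}=1+\frac{3\log (2)}{p}+O(p^{-2})\), which gives

$$\begin{aligned} p\left( 2^{\frac{3}{p}}-1\right) =3\log (2)+O\left( \frac{1}{p}\right) \xrightarrow {p\rightarrow \infty }3\log (2). \end{aligned}$$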

Using the estimates (3.17), (3.18), (3.19), and (3.20) in (3.16), we get

$$\begin{aligned} \Vert u \Vert _{L^{2}(J,L^p)}\lesssim \, 1+\left( \vert J\vert p\right) ^\frac{1}{2},\qquad p\in [6,\infty ), \end{aligned}$$

which implies the assertion. The proof of Proposition 3.1 is thus completed. \(\square \)

We would like to continue with some remarks on seemingly natural extensions of the previous result to higher dimensions, nonlinear noise and non-compact manifolds.

Remark 3.3

We would like to comment on the case of higher dimensions \(d\ge 4\). The Strichartz endpoint is \((2,\frac{2d}{d-2})\) and the use of Lemma 2.5 leads to the restriction \(\alpha \le 1+\frac{2}{d-2}\). The corresponding estimate in (3.16) has to be replaced by

$$\begin{aligned} \Vert u \Vert _{L^{2}(J,L^p)}&\lesssim \Vert \psi (\Delta _g)u \Vert _{L^{2}(J,L^p)}+\sum _{k=1}^{k_0-1} \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^{2}(J,L^p)}\nonumber \\&\quad +\sum _{k=k_0}^\infty 2^{-\frac{k}{2}\left( \frac{d}{p}-\nu (d)\right) } \Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,H^1)}\nonumber \\&\quad +\sum _{k=k_0}^\infty \left[ 2^{-\frac{k}{2}\left( \frac{d}{p}-\nu (d)+\frac{1}{2}\right) } +2^{-\frac{k}{2}\left( \frac{d}{p}-\nu (d)+1\right) } +2^{-\frac{k}{2} \left( \frac{d}{p}-\nu (d)+\frac{1}{2}\right) } \right] \end{aligned}$$

for \(p\ge \frac{2 d}{d-2}\), where we set \(\nu (d):=\frac{d-3}{2}\). Hence, the convergence of the sums requires an upper bound on p; indeed, estimating the sum involving \(\Vert \varphi (2^{-k}\Delta _g)u \Vert _{L^2(J,H^1)}\) via the Cauchy-Schwarz inequality as in (3.16) requires \(\frac{d}{p}>\nu (d)\), i.e. \(p<\frac{2d}{d-3}\). This breaks the uniqueness proof below, so that the case \(d\ge 4\) remains an open problem. In fact, this obstruction occurs because the scaling condition for the Strichartz exponents, the Sobolev embeddings and the Bernstein inequalities are more restrictive in higher dimensions; the restriction to \(d=3\) is therefore of a deterministic nature.

Remark 3.4

In the proof of Proposition 3.1, we did not need the optimal estimates for the correction term \(\mu \) and the stochastic integral. In fact, it is possible to generalize the argument and show the estimate

$$\begin{aligned} \Vert u \Vert _{L^2(J,L^p)}\lesssim \, 1+\left( \vert J\vert p\right) ^\frac{1}{2}\qquad \text {a.s.},\quad p\in [6,\infty ), \end{aligned}$$

for martingale solutions of the equation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( \mathrm {i}\Delta _g u(t)-\mathrm {i}\lambda \vert u(t)\vert ^{\alpha -1} u(t)+ \mu \left( \vert u(t)\vert ^{2(\gamma -1)}u(t)\right) \right) \mathrm {d}t\\&-\mathrm {i}B\left( \vert u(t)\vert ^{\gamma -1}u(t)\right) \mathrm {d}W(t),\\ u(0)&=u_0, \end{aligned}\right. \end{aligned}$$
(3.21)

with nonlinear noise of power \(\gamma \in [1,2)\). However, we do not know whether this equation has a solution, since the existence theory developed in [9] only applies to \(\gamma =1\). Moreover, it is unclear how to apply these estimates in order to prove pathwise uniqueness, since the arguments below rely on the linearity of the noise. Hence, the case of Eq. (3.21) remains another open problem.

Remark 3.5

Let us comment on the case of possibly non-compact manifolds with bounded geometry. In the two dimensional setting, the Strichartz estimates from [16] with an additional loss of \(\varepsilon \) regularity were sufficient to prove uniqueness, see [9], Section 7. In fact, these estimates correspond to the localized Strichartz estimates of the form

$$\begin{aligned} \Vert t\mapsto e^{\mathrm {i}t \Delta _g}{\psi _{m,\frac{1}{2}}(-h^2\Delta _g)} x \Vert _{L^q(J,L^p)}\le C_\varepsilon \Vert x \Vert _{L^2},\qquad \vert J\vert \le \beta _\varepsilon h^{1+\varepsilon }, \end{aligned}$$
(3.22)

for all \(\varepsilon >0\) and some \(C_\varepsilon >0\) and \(\beta _\varepsilon >0\), where we denote \(\psi _{m,a}(\lambda ):= \lambda ^m e^{-a \lambda }\) for \(m\in \mathbb {N}\) and \(a>0\). A continuous version of the Littlewood-Paley inequality which can substitute for (2.10) is given by

$$\begin{aligned} \Vert f \Vert _{L^p}\eqsim \Vert \varphi _{m,a}(-\Delta _g)f \Vert _{L^p}+\bigg \Vert {\left( \int _0^1 \vert {\psi _{m,a}(-h^2\Delta _g)} f\vert ^2 \frac{\mathrm {d}h}{h}\right) ^\frac{1}{2}}\bigg \Vert _{L^p},\quad f\in L^p(M),\nonumber \\ \end{aligned}$$
(3.23)

for \(\varphi _{m,a}(\lambda ):=\int _{\lambda }^\infty \psi _{m,a}(t)\frac{\mathrm {d}t}{t}\), see [16], Theorem 2.8. Based on (3.22) and (3.23), we can argue as in the proof of Proposition 3.1 and end up with the estimate

$$\begin{aligned} \Vert u \Vert _{L^2(J,L^p)}\lesssim \, 1+\vert J\vert ^\frac{1}{2}\left( \frac{p}{6-\varepsilon p}\right) ^\frac{1}{2}\qquad \text {a.s.} \end{aligned}$$

for each \(\varepsilon >0\) and \(p\in [6,6 \varepsilon ^{-1})\) with an implicit constant which blows up as \(\varepsilon \rightarrow 0\). The upper bound on p is due to the fact that the additional \(\varepsilon \) in (3.22) weakens the estimate of the critical term containing the derivative \(\chi _j^\prime \) of the temporal cut-off and enlarges the number of summands in (3.14). As in the case of dimensions \(d\ge 4\), the uniqueness argument breaks down since the limit \(p\rightarrow \infty \) is no longer available.

So far, we have only used the topological properties of the noise, i.e.

$$\begin{aligned} B\in \mathcal {L}\left( {H^1(M)},{\text {HS}}(Y,{H^1(M)})\right) ,\qquad \mu \in \mathcal {L}({H^1(M)}). \end{aligned}$$

Now, the Stratonovich structure and the symmetry of the operators \(B_m\) for \(m\in \mathbb {N}\) come into play to prove the following representation formula for the \(L^2\)-distance of two solutions.

Lemma 3.6

Let \(d=3\) and \(\alpha \in (1,3]\). Let \(\left( \Omega ,\mathcal {F},\mathbb {P},W,\mathbb {F},u_j\right) \), \(j=1,2\), be solutions of Problem (1.1). Then, we have

$$\begin{aligned}&\Vert u_1(t)-u_2(t) \Vert _{L^2}^2\nonumber \\&=2 \int _0^t {\text {Re}}\big (u_1(s)-u_2(s),-\mathrm {i}\lambda \vert u_1(s)\vert ^{\alpha -1} u_1(s)+\mathrm {i}\lambda \vert u_2(s)\vert ^{\alpha -1} u_2(s)\big )_{L^2} \mathrm {d}s\nonumber \\ \end{aligned}$$
(3.24)

almost surely for all \(t\in [0,T]\).

Note that the RHS of (3.24) only contains the terms induced by the nonlinearity. In particular, the stochastic integral vanishes, which will enable us to use the pathwise estimate from Proposition 3.1 to prove uniqueness.

Proof

We restrict ourselves to a formal argument. As in [9], Proposition 6.5, our reasoning can be rigorously justified by a regularization procedure based on the Yosida approximations \(R_\lambda :=\lambda \left( \lambda -\Delta _g\right) ^{-1}\) for \(\lambda >0\). The function \(\mathcal {M}: {L^2(M)} \rightarrow \mathbb {R}\) defined by \(\mathcal {M}(v):=\Vert v \Vert _{L^2}^2\) is twice continuously Fréchet-differentiable with

$$\begin{aligned} \mathcal {M}^\prime [v]h_1&= 2 {\text {Re}}\big (v, h_1\big )_{L^2}, \, \mathcal {M}^{\prime \prime }[v] \left[ h_1,h_2\right] = 2 {\text {Re}}\big ( h_1,h_2\big )_{L^2} \end{aligned}$$

for \(v, h_1, h_2\in {L^2(M)}\). We set \(w:=u_1-u_2\). Then, a formal application of the Itô formula yields

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2= & {} 2 \int _0^t {\text {Re}}\big (w(s),\mathrm {i}\Delta _g w(s)-\mathrm {i}\lambda \vert u_1(s)\vert ^{\alpha -1} u_1(s)+\mathrm {i}\lambda \vert u_2(s)\vert ^{\alpha -1} u_2(s)\big )_{L^2} \mathrm {d}s\nonumber \\&+2 \int _0^t {\text {Re}}\big (w(s),\mu (w(s))\big )_{L^2} \mathrm {d}s- 2 \int _0^t {\text {Re}}\big (w(s),\mathrm {i}B w(s) \mathrm {d}W(s)\big )_{L^2}\nonumber \\&+\sum _{m=1}^{\infty }\int _0^t \Vert B_m w(s)\Vert _{L^2}^2\mathrm {d}s \end{aligned}$$
(3.25)

almost surely for all \(t\in [0,T]\). Since \(\Delta _g\) is selfadjoint, we get \({\text {Re}}\big (w,\mathrm {i}\Delta _g w\big )_{L^2}=0\). From the symmetry of \(B_m\), \(m\in \mathbb {N}\), we infer \({\text {Re}}\big (w,\mathrm {i}B_m w\big )_{L^2}=0\) and thus, we obtain

$$\begin{aligned} \int _0^t {\text {Re}}\big (w(s),\mathrm {i}B w(s) \mathrm {d}W(s)\big )_{L^2}=0. \end{aligned}$$
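Let us spell out why \({\text {Re}}\big (w,\mathrm {i}B_m w\big )_{L^2}=0\), assuming the inner product convention \(\big (u,v\big )_{L^2}=\int _M u\overline{v}\,\mathrm {d}x\) and recalling from (1.1) that, formally, \(B_m w=e_m w\) with a real-valued function \(e_m\):

$$\begin{aligned} {\text {Re}}\big (w,\mathrm {i}B_m w\big )_{L^2}={\text {Re}}\int _M w\, \overline{\mathrm {i}e_m w}\,\mathrm {d}x={\text {Re}}\left( -\mathrm {i}\int _M e_m \vert w\vert ^2\,\mathrm {d}x\right) =0, \end{aligned}$$

since the integral \(\int _M e_m \vert w\vert ^2\,\mathrm {d}x\) is real.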

Moreover, recalling that the Stratonovich correction term is given by \(\mu =-\frac{1}{2}\sum _{m=1}^{\infty }B_m^2\) and using the symmetry of \(B_m\) again, we simplify

$$\begin{aligned} 2 {\text {Re}}\big (w(s),\mu (w(s))\big )_{L^2}=-\sum _{m=1}^{\infty }{\text {Re}}\big (w(s),B_m^2 w(s)\big )_{L^2} =-\sum _{m=1}^{\infty }\Vert B_m w(s) \Vert _{L^2}^2. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2&=2 \int _0^t {\text {Re}}\big ( w(s),-\mathrm {i}\lambda \vert u_1(s)\vert ^{\alpha -1} u_1(s)+\mathrm {i}\lambda \vert u_2(s)\vert ^{\alpha -1} u_2(s)\big )_{L^2} \mathrm {d}s \end{aligned}$$

almost surely for all \(t\in [0,T]\). \(\square \)

We close with the proof of our main Theorem 1.1. We prove the uniqueness by applying a strategy developed by Yudovich [49] for the Euler equation. In the context of the NLS, it was first used by Vladimirov in [48] and by Ogawa and Ozawa in [40] and [42]. They considered 2D domains and used Trudinger-type inequalities to control the growth of the \(L^p\)-norms as \(p\rightarrow \infty \). A generalization of this argument to the stochastic case in 2D is straightforward and can be found in [29], Subsection 5.2. Following Burq, Gérard and Tzvetkov in the case without boundary, the Yudovich strategy in combination with the Strichartz estimates, as an improvement of the Trudinger inequality, was also applied to the deterministic NLS on compact 3D manifolds with boundary by Blair, Smith and Sogge in [17].

Proof of Theorem 1.1

Step 1 Let us take two solutions \(u_1,u_2\in L^2(\Omega ,L^\infty (0,T;H^1(M)))\). Using Proposition 3.1, we choose a null set \(N_1\in \mathcal {F}\) with

$$\begin{aligned} \Vert u_j(\cdot ,\omega ) \Vert _{L^2(J,L^{p})}\lesssim _\omega \,1+\left( \vert J\vert p\right) ^\frac{1}{2},\qquad \omega \in \Omega \setminus N_1, \end{aligned}$$
(3.26)

for each interval \(J\subset [0,T]\) and \(p\ge 6\). By Lemma 3.6, we choose a null set \(N_2\in \mathcal {F}\) such that

$$\begin{aligned}&\Vert u_1(t)-u_2(t) \Vert _{L^2}^2\nonumber \\&=2 \int _0^t {\text {Re}}\big (u_1(s)-u_2(s),-\mathrm {i}\lambda \vert u_1(s)\vert ^{\alpha -1} u_1(s)+\mathrm {i}\lambda \vert u_2(s)\vert ^{\alpha -1} u_2(s)\big )_{L^2} \mathrm {d}s\nonumber \\ \end{aligned}$$
(3.27)

holds on \(\Omega \setminus N_2\) for all \(t\in [0,T]\). In particular, this leads to the weak differentiability of the map \(G(t):=\Vert u_1(t)-u_2(t) \Vert _{L^2}^2\) on \(\Omega \setminus N_2\) and to the estimate

$$\begin{aligned} \vert G^\prime (t)\vert= & {} \left| 2 {\text {Re}}\big (u_1(t)-u_2(t),-\mathrm {i}\lambda \vert u_1(t)\vert ^{\alpha -1} u_1(t)+\mathrm {i}\lambda \vert u_2(t)\vert ^{\alpha -1} u_2(t)\big )_{L^2}\right| \nonumber \\\lesssim & {} \int _M \vert u_1(t,x)-u_2(t,x)\vert ^{2} \left( \vert u_1(t,x)\vert ^{\alpha -1}+\vert u_2(t,x)\vert ^{\alpha -1}\right) \mathrm {d}x. \end{aligned}$$
(3.28)

The Sobolev embedding \(H^1(M)\hookrightarrow L^6(M)\) yields \(u_j\in L^\infty (0,T;L^6(M))\), \(j=1,2\), almost surely. Moreover, we have the mild representation

$$\begin{aligned} \mathrm {i}u_j(t)= & {} \mathrm {i}e^{\mathrm {i}t\Delta _g}u_0+\int _0^t e^{\mathrm {i}(t-\tau )\Delta _g}\lambda \vert u_j(\tau )\vert ^{\alpha -1} u_j(\tau )\mathrm {d}\tau +\mathrm {i}\int _0^t e^{\mathrm {i}(t-\tau )\Delta _g}\mu (u_j(\tau ))\mathrm {d}\tau \\&+\int _0^t e^{\mathrm {i}(t-\tau )\Delta _g}\mathrm {B}(u_j(\tau ))\mathrm {d}W(\tau ) \end{aligned}$$

almost surely for all \(t\in [0,T]\) in \(H^{-1}(M)\) for \(j=1,2\). As a consequence of \(\alpha \in (1,3]\) and \(u_j\in L^\infty (0,T;L^6(M))\), each of the terms on the RHS is in \(L^2(M)\). In particular, we obtain \(u_j\in C([0,T],L^2(M))\), \(j=1,2\), almost surely and thus, we can take another null set \(N_3\in F\) such that

$$\begin{aligned} u_j\in L^\infty (0,T;L^6(M))\cap C([0,T],L^2(M))\quad \text {on} \quad \Omega \setminus N_3. \end{aligned}$$
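Concerning the nonlinear term in the mild representation, we note that, for \(\alpha \in (1,3]\) and almost every \(t\),

$$\begin{aligned} \bigg \Vert \vert u_j(t)\vert ^{\alpha -1}u_j(t) \bigg \Vert _{L^2}=\Vert u_j(t) \Vert _{L^{2\alpha }}^{\alpha }\lesssim \Vert u_j(t) \Vert _{L^{6}}^{\alpha } \end{aligned}$$

by the embedding \(L^6(M)\hookrightarrow L^{2\alpha }(M)\) on the compact manifold M, so that the deterministic convolution indeed takes values in \(L^2(M)\).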

Now, we define \(\Omega _1:=\Omega \setminus \left( N_1\cup N_2\cup N_3\right) \) and fix \(\omega \in \Omega _1\). We take a sequence \(\left( p_n\right) _{n\in \mathbb {N}}\in [6,\infty )^\mathbb {N}\) with \(p_n\rightarrow \infty \) as \(n\rightarrow \infty \). We fix \(n\in \mathbb {N}\) and define \(q_n:=\frac{p_n}{\alpha -1}\). By the estimate (3.28) and the Hölder inequality with exponents \(\frac{1}{q_n^\prime }+\frac{1}{q_n}=1\), we get

$$\begin{aligned} \vert G^\prime (t)\vert&\lesssim \Vert u_1(t)-u_2(t) \Vert _{L^{2{q^\prime _n}}}^2 \bigg \Vert {\vert u_1(t)\vert ^{\alpha -1}+\vert u_2(t)\vert ^{\alpha -1}}\bigg \Vert _{L^{{q_n}}} , \qquad t\in [0,T]. \end{aligned}$$

The choice of \({q_n}\) yields \(2{q^\prime _n}\in [2,6]\) and for \(\theta :=\frac{3}{2{q_n}}\in (0,1)\), we have \(\frac{1}{2 {q^\prime _n}}=\frac{1-\theta }{2}+ \frac{\theta }{6}\). Hence, we obtain

$$\begin{aligned}&\Vert u_1(t)-u_2(t) \Vert _{L^{2{q^\prime _n}}}^2 \le \Vert u_1(t)-u_2(t) \Vert _{L^2}^{2-\frac{3}{{q_n}}} \Vert u_1(t)-u_2(t) \Vert _{L^6}^\frac{3}{{q_n}}\\&\le \Vert u_1(t)-u_2(t) \Vert _{L^2}^{2-\frac{3}{{q_n}}} \Vert u_1-u_2 \Vert _{L^\infty (0,T;L^6)}^\frac{3}{{q_n}} \end{aligned}$$

by interpolation. We choose a constant \(C_1>0\) such that

$$\begin{aligned} \Vert u_1 \Vert _{L^\infty (0,T;L^6)}+\Vert u_2 \Vert _{L^\infty (0,T;L^6)}\le C_1, \end{aligned}$$

which leads to

$$\begin{aligned} \vert G^\prime (t)\vert\lesssim & {} \, C_1^\frac{3}{{q_n}} G(t)^{1-\frac{3}{{2q_n}}} \left[ \Vert u_1(t) \Vert _{L^{{p_n}}}^{\alpha -1}+\Vert u_2(t) \Vert _{L^{{p_n}}}^{\alpha -1}\right] . \end{aligned}$$
(3.29)
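For the reader's convenience, the interpolation exponent \(\theta =\frac{3}{2q_n}\) used above can be verified directly from the definition of the conjugate exponent \(q_n^\prime \):

$$\begin{aligned} \frac{1}{2 q_n^\prime }=\frac{1}{2}\left( 1-\frac{1}{q_n}\right) =\frac{1}{2}-\frac{\theta }{3}=\frac{1-\theta }{2}+\frac{\theta }{6}. \end{aligned}$$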

Step 2 We argue by contradiction and assume that there is \(t_2\in [0,T]\) with \(G(t_2)>0\). By the continuity of G, we get

$$\begin{aligned} \exists t_1\in [0,t_2): G(t_1)=0 \quad \text {and}\quad \forall t\in (t_1,t_2): G(t)>0. \end{aligned}$$
(3.30)

We set \(J_\varepsilon := (t_1,t_1+\varepsilon )\) with \(\varepsilon \in (0,t_2-t_1)\) to be chosen later. By the weak chain rule, see [28, Theorem 7.8], together with \(G(t_1)=0\) and (3.29), we get

$$\begin{aligned} G(t)^\frac{3}{2 {q_n}}&=\frac{3}{2 {q_n}} \int _{t_1}^t G^\prime (s) G(s)^{\frac{3}{2 {q_n}}-1} \mathrm {d}s \\&\lesssim \frac{3}{2 {q_n}} C_1^{\frac{3}{{q_n}}}\int _{t_1}^t \left[ \Vert u_1(s) \Vert _{L^{{p_n}}}^{\alpha -1}+\Vert u_2(s) \Vert _{L^{{p_n}}}^{\alpha -1}\right] \mathrm {d}s,\qquad t\in J_\varepsilon . \end{aligned}$$

By another application of the Hölder inequality with exponents \(\frac{2}{\alpha -1}\) and \(\frac{2}{3-\alpha }\), now in the time variable, we infer that

$$\begin{aligned} G(t)^\frac{3}{2 {q_n}}&\lesssim \frac{3}{2 {q_n}} C_1^{\frac{3}{{q_n}}} \Big [\Vert u_1 \Vert _{L^2(t_1,t;L^{p_n})}^{\alpha -1}+\Vert u_2 \Vert _{L^2(t_1,t;L^{p_n})}^{\alpha -1}\Big ]\varepsilon ^{\frac{3-\alpha }{2}},\qquad t\in J_\varepsilon . \end{aligned}$$
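Here we used, for \(j=1,2\) and \(t\in J_\varepsilon \), the elementary consequence of the Hölder inequality in time

$$\begin{aligned} \int _{t_1}^t \Vert u_j(s) \Vert _{L^{p_n}}^{\alpha -1}\mathrm {d}s\le \left( \int _{t_1}^t \Vert u_j(s) \Vert _{L^{p_n}}^{2}\mathrm {d}s\right) ^{\frac{\alpha -1}{2}}(t-t_1)^{\frac{3-\alpha }{2}}\le \Vert u_j \Vert _{L^2(t_1,t;L^{p_n})}^{\alpha -1}\, \varepsilon ^{\frac{3-\alpha }{2}}. \end{aligned}$$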

Now, we are in a position to apply (3.26) and we obtain

$$\begin{aligned} G(t)^\frac{3}{2 {q_n}}&\lesssim \frac{3}{2 {q_n}} C_1^{\frac{3}{{q_n}}}\left( 1+(\varepsilon p_n)^\frac{\alpha -1}{2}\right) \varepsilon ^{\frac{3-\alpha }{2}},\qquad t\in J_\varepsilon . \end{aligned}$$

In particular, there is a constant \(C>0\) such that for all \(t\in J_\varepsilon \) it holds that

$$\begin{aligned} G(t)\le & {} C_1^2 \left( \frac{3 C}{2 {q_n}}\left( 1+(\varepsilon (\alpha -1)q_n)^\frac{\alpha -1}{2}\right) \varepsilon ^{\frac{3-\alpha }{2}}\right) ^\frac{2 {q_n}}{3}\nonumber \\\le & {} C_1^2 \left( \frac{3C }{2 {q_n}}\left( 1+\varepsilon ^\frac{\alpha -1}{2} (\alpha -1)q_n\right) \varepsilon ^{\frac{3-\alpha }{2}}\right) ^\frac{2 {q_n}}{3} =:b_n, \end{aligned}$$
(3.31)

where we used \(p_n={q_n}(\alpha -1)\) and \(\frac{\alpha -1}{2}\in (0,1]\).

Step 3 We aim to show that the sequence \(\left( b_n\right) _{n\in \mathbb {N}}\) on the RHS of (3.31) converges to 0 if \(\varepsilon \) is chosen sufficiently small. Once this is done, (3.31) yields \(G(t)=0\) for all \(t\in J_\varepsilon \), which contradicts (3.30). Hence, G vanishes identically on \(\Omega _1\), i.e. we have \(u_1(t)=u_2(t)\) almost surely for all \(t\in [0,T]\).

To this end, we choose \(\varepsilon \in (0,\min \{t_2-t_1,\frac{2}{3C (\alpha -1)}\}) \). Then,

$$\begin{aligned} b_n&=C_1^2 \left( \frac{3C }{2 {q_n}}\left( 1+\varepsilon ^\frac{\alpha -1}{2} (\alpha -1)q_n\right) \varepsilon ^{\frac{3-\alpha }{2}}\right) ^\frac{2 {q_n}}{3}\\&=C_1^2 \left( \frac{3C\varepsilon (\alpha -1) }{2}\right) ^\frac{2 {q_n}}{3}\left( \frac{1}{\varepsilon ^\frac{\alpha -1}{2} (\alpha -1)q_n}+1\right) ^\frac{2 {q_n}}{3} \xrightarrow {n\rightarrow \infty }0. \end{aligned}$$

Indeed, the choice of \(\varepsilon \) guarantees \(\frac{3C\varepsilon (\alpha -1)}{2}<1\), so the first factor tends to 0 geometrically as \(q_n\rightarrow \infty \), while the second factor converges to \(\exp \big (\frac{2}{3}\varepsilon ^{-\frac{\alpha -1}{2}}(\alpha -1)^{-1}\big )\) and is, in particular, bounded.

The proof of Theorem 1.1 is thus completed. \(\square \)

4 Concluding remarks and open questions

To the best of our understanding, there exist in the literature at least three different methods of studying the question of the existence of solutions to the stochastic nonlinear Schrödinger equation (NLS), namely:

  (i) The compactness method,

  (ii) The Banach Fixed Point Theorem or the Picard iteration scheme,

  (iii) The Doss-Sussmann transformation.

In our recent joint paper [9] we proved the existence of solutions to the stochastic NLS by employing the compactness method. The method of proving the existence of solutions based on the Banach Fixed Point Theorem or the Picard iteration scheme is the original one, see the celebrated papers [21] and [22] by de Bouard and Debussche, as well as the more recent papers [10] and [30], and the references therein. One important unifying feature of these two approaches is the use of the deterministic [7] and the stochastic Strichartz estimates. The third approach goes back to the papers [23] by Doss and [45] by Sussmann for stochastic ordinary differential equations. This approach, usually called the Doss-Sussmann method, has been used with the so-called bilinear noise in the case of stochastic parabolic equations by Acquistapace and Terreni [1] and by the first named author et al. in [2]. A generalisation of that approach based on Kunita's stochastic characteristics has also been used by a number of authors, e.g. the first named author and Flandoli [3], Lisei [39], Röger and Weber [43], Chugreeva and Melcher [19], and the references therein. This generalisation has also been discussed as an alternative proof by the first named author et al. in [5] for the stochastic Euler equations. The third method has also been employed for the stochastic NLS by Barbu, Röckner and Zhang in [14], see also [50], where it was called a “rescaling” method, to prove in particular the existence and the uniqueness of solutions. As in the earlier cited papers, these papers treat the so-called bilinear noise; they have been generalised, in particular to allow nonlinear noise coefficients, using the second method by the second named author in [30]. Let us point out that in our previous paper [9] we also studied the stochastic NLS driven by bilinear noise, but some generalisations to nonlinear noise coefficients can be found in the PhD thesis [29] by the second named author. One fundamental difference between the first and the second methods on the one hand and the Doss-Sussmann method on the other is that the latter only works for bilinear noise, while the former work for more general noise coefficients. Nevertheless, a natural question emerges: can the results from our previous paper [9] be fully proven by using the Doss-Sussmann method? Since the present paper essentially works only for bilinear noise, a second natural question is whether it is possible to prove the uniqueness result from the present article by employing the Doss-Sussmann method. The difficulty one would encounter in undertaking such a challenge is that, after applying the Doss-Sussmann transformation, the NLS in the Itô-Stratonovich form becomes a family of deterministic equations, but with the Laplace-Beltrami operator replaced by a time-dependent second-order operator containing first-order terms, and we simply do not know how to treat such equations. Even classical Strichartz estimates are problematic. We find these questions interesting and hope that other researchers will find them interesting as well.

We finish these comments by posing another interesting open problem: is it possible to generalize the uniqueness result to other geometries, in particular to bounded or unbounded domains and non-compact manifolds? The existence in the latter cases is proved in the recent paper [31] by the second named author. In Remark 3.5 we explained why we could not apply the approach from the present paper to prove uniqueness for these geometries.

Finally, we would like to discuss our Corollary 1.2 about the existence of strong solutions. Our approach to proving this result consists of three steps. The first step is to prove the existence of weak/martingale solutions. This step has been successfully concluded in our previous paper [9]. The second step is to prove the pathwise uniqueness of weak/martingale solutions. This step has been successfully concluded in the current paper. The third step is to apply the infinite dimensional Yamada-Watanabe Theorem. All three steps have been implemented for stochastic reaction diffusion equations in a rather classical paper by the first named author and Gatarek [6], where the classical version of the Yamada-Watanabe Theorem from [32] has been used. A proper formulation and a fully detailed proof of the infinite dimensional Yamada-Watanabe Theorem have first been presented by Ondreját in [41] for mild solutions and by Kunze in [36] for weak solutions. Here we use the latter as it fits our framework better. However, since the existence of solutions is obtained by a compactness argument, it is probably possible to use the Gyongy-Krylov Lemma, see [26, Lemma 1], to prove that in fact the approximations converge in probability. As far as we understand, this approach has recently been used by Crisan, Flandoli and Holm [18], and it still required the use of the Skorokhod embedding theorem. It would be of interest to see whether this approach works for the class of stochastic NLS studied in the present paper. One possible source of difficulties could be that the original result by Gyongy and Krylov works in the framework of Polish spaces, while the existence proof from our first paper [9] relies on a certain class of non-Polish spaces via the Jakubowski-Skorokhod Theorem by Jakubowski from [33]. However, the Gyongy-Krylov Lemma has recently been generalised to the same framework as in [33] in the monograph by Breit, Hofmanova and Feireisl [4].