1 Introduction

We consider the following one-dimensional BBM-type equation in Stratonovich form

$$\displaystyle \begin{aligned} d u = - \partial_x K \left( u + K u^2 \right) d t + \sum_j \gamma_j \partial_x \left( u + K u^2 \right) \circ d W_j \end{aligned} $$
(1)

introduced in [4] as a model describing surface waves of a fluid layer. It is supplemented with the initial condition u(0) = u_0. Equation (1) has a Hamiltonian structure with the energy

$$\displaystyle \begin{aligned} \mathcal H (u) = \int_{\mathbb R} \left( \frac 12 \big( K^{-1/2}u \big)^2 + \frac 13 u^3 \right) dx . \end{aligned} $$
(2)

The Fourier multiplier operator K, defined in the space of tempered distributions \(\mathcal S'(\mathbb R)\), has an even symbol of the form

$$\displaystyle \begin{aligned} K(\xi) \simeq ( 1 + \xi^2 )^{ - \sigma_0 }\end{aligned} $$
(3)

with σ_0 > 1∕2. Expression (3) means that the symbol K(ξ) is bounded from below and above by the right-hand side of (3) multiplied by positive constants. In other words, the operator K essentially behaves as the Bessel potential of order 2σ_0, see [6]. The space variable is \(x \in \mathbb R\) and the time variable is \(t \geqslant 0\). The unknown u is a real-valued function of these variables and of the probability variable ω ∈ Ω, representing the free surface elevation of the fluid layer. The scalar sequence {γ_j} satisfies the restriction \( \sum _j \gamma _j^2 < \infty , \) and {W_j} is a sequence of independent scalar Brownian motions on a filtered probability space \( \left ( \Omega , \mathcal F, \{ \mathcal F_t \}, \mathbb P \right ) . \)
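As a concrete illustration (ours, not taken from [4]), the operator K and the energy (2) are easy to approximate numerically by treating K as a Fourier multiplier on a large periodic grid standing in for \(\mathbb R\). The grid parameters, the representative symbol K(ξ) = (1 + ξ²)^{−σ_0} and the choice σ_0 = 1 below are assumptions of this sketch.

```python
# Minimal sketch: K as a Fourier multiplier and the energy (2) via FFT.
# Periodic grid as a stand-in for the real line; all parameters are demo choices.
import numpy as np

N, L = 1024, 40 * np.pi
x = np.linspace(0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular frequencies
sigma0 = 1.0                                   # any sigma0 > 1/2 is admissible
K_symbol = (1.0 + xi**2) ** (-sigma0)          # even, positive, bounded symbol

def apply_K(u, power=1.0):
    """Apply K^power as a Fourier multiplier; power = -0.5 gives K^{-1/2}."""
    return np.real(np.fft.ifft(K_symbol**power * np.fft.fft(u)))

def energy(u):
    """Discrete analogue of (2): int ( (K^{-1/2}u)^2 / 2 + u^3 / 3 ) dx."""
    v = apply_K(u, power=-0.5)
    return np.sum(0.5 * v**2 + u**3 / 3.0) * (L / N)

u0 = 0.1 * np.exp(-((x - L / 2) ** 2))         # smooth, rapidly decaying profile
print(energy(u0))
```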

Model (1) was introduced in [4], where an attempt was made to extend the elegant Hamiltonian formulation of [1] to the stochastic setting. We briefly comment on the methodology of [4]. The white noise is first introduced via the stochastic transport theory of [8], which is based on splitting the fluid particle motion into smooth and random movements. The noise is then restricted to a particular Stratonovich form in order to respect energy conservation; in particular, this yields a model with multiplicative noise of Hamiltonian structure. Finally, a long-wave approximation leads to simplified models such as (1).

One may notice that, after discarding the nonlinear terms in Eq. (1) (the details can be found in [4]), the corresponding linearised initial-value problem can be solved exactly with the help of the fundamental multiplier operator

$$\displaystyle \begin{aligned} \mathcal S(t, t_0) = \exp \left[ - \partial_x K ( t - t_0 ) + \sum_j \gamma_j \partial_x ( W_j(t) - W_j(t_0) ) \right] ,\end{aligned} $$
(4)

where \(t_0, t \in \mathbb R\). Note that it can be factorised as \( \mathcal S(t, t_0) = S(t - t_0) S_W(t, t_0) , \) where \(S(t) = \exp ( - \partial _x Kt )\) is a unitary group and S_W, containing all the randomness coming from the Wiener process, is unitary as well. They obviously commute as bounded Fourier multiplier operators. We recall that S(t) is defined via the Fourier transform \( \mathfrak F \left ( S(t) \psi \right ) = \exp ( - i\xi K(\xi ) t ) \widehat \psi (\xi ) \) for any \(\psi \in \mathcal S'(\mathbb R)\) and \( \widehat \psi = \mathfrak F \psi . \) Similarly, S_W(t, t_0) is defined by

$$\displaystyle \begin{aligned} S_W(t, t_0) \psi = \mathfrak F^{-1} \left( \xi \mapsto \exp \left( i\xi \sum_j \gamma_j ( W_j(t) - W_j(t_0) ) \right) \widehat \psi(\xi) \right) .\end{aligned} $$
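Both factors are pure phase multipliers in Fourier space, which is why they commute and preserve every H^σ-norm; in fact S_W acts as a random spatial shift. Here is a minimal numerical sketch (ours, with an assumed periodic grid and the representative symbol from above):

```python
# Sketch of the factorised propagator S(t, t0) = S(t - t0) S_W(t, t0).
import numpy as np

rng = np.random.default_rng(0)
N, L, sigma0 = 1024, 40 * np.pi, 1.0
x = np.linspace(0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
K_symbol = (1.0 + xi**2) ** (-sigma0)

def S(t, u):
    """Deterministic factor: multiplier exp(-i xi K(xi) t)."""
    return np.real(np.fft.ifft(np.exp(-1j * xi * K_symbol * t) * np.fft.fft(u)))

def S_W(b, u):
    """Random factor for b = sum_j gamma_j (W_j(t) - W_j(t0)): a shift by b."""
    return np.real(np.fft.ifft(np.exp(1j * xi * b) * np.fft.fft(u)))

u0 = np.exp(-((x - L / 2) ** 2))
t = 1.0
b = np.sqrt(0.1 * t) * rng.standard_normal()   # one sample; variance rate 0.1 assumed
u_lin = S_W(b, S(t, u0))                       # solution of the linearised problem
print(np.isclose(np.linalg.norm(u_lin), np.linalg.norm(u0)))  # unitarity: True
```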

The propagator (4) allows us to represent (1) in the Duhamel form

$$\displaystyle \begin{aligned} u(t) = \mathcal S(t, 0) \left( u_0 + \int_0^t \mathcal S(0, s) f(u(s)) ds + \sum_j \gamma_j \int_0^t \mathcal S(0, s) g(u(s)) dW_j(s) \right) , \end{aligned} $$
(5)

where

$$\displaystyle \begin{aligned} f(u) = - \partial_x K^2 u^2 + \sum_j \gamma_j^2 \partial_x K( u \partial_x K u^2) \end{aligned}$$

and

$$\displaystyle \begin{aligned} g(u) = \partial_x K u^2 . \end{aligned}$$
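Purely as an illustration of the Duhamel form (5) (a sketch under our own assumptions, not a scheme from the paper, and with no claim about convergence), one can discretise it by a first-order exponential Euler method, lumping the noise into B = ∑_j γ_j W_j with variance rate q = ∑_j γ_j²:

```python
# Hedged exponential Euler sketch for the mild equation (5).
import numpy as np

rng = np.random.default_rng(1)
N, L, sigma0, q = 512, 40 * np.pi, 1.0, 0.1    # q = sum_j gamma_j^2 (assumed)
x = np.linspace(0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
K = (1.0 + xi**2) ** (-sigma0)

def mult(symbol, u):
    """Apply a Fourier multiplier to a real field."""
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

def f(u):
    """Drift of (5): -d_x K^2 u^2 + q d_x K ( u d_x K u^2 )."""
    return -mult(1j * xi * K**2, u**2) + q * mult(1j * xi * K, u * mult(1j * xi * K, u**2))

def g(u):
    """Noise coefficient of (5): d_x K u^2."""
    return mult(1j * xi * K, u**2)

def step(u, dt):
    """Propagate (u + dt f(u) + g(u) dB) by the increment of S(t) S_W."""
    dB = np.sqrt(q * dt) * rng.standard_normal()
    return mult(np.exp(-1j * xi * K * dt + 1j * xi * dB), u + dt * f(u) + dB * g(u))

u = 0.1 * np.exp(-((x - L / 2) ** 2))          # small data, as in Theorem 1 below
for _ in range(1000):
    u = step(u, dt=1e-3)
```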

Existence and uniqueness of solutions to Eq. (5) is under consideration. It is worth pointing out that both S_W and the stochastic integral in (5) are well defined. Indeed, appealing to Doob's inequalities for the submartingale \( \left | \sum _{j = n}^{n + m} \gamma _j W_j \right | \) and the Itô–Nisio theorem, one can show that ∑_j γ_j W_j converges uniformly in time almost surely, in probability and in the L² sense. If the integrand of the stochastic integral in (5) lies in some Sobolev space \(H^{\sigma }( \mathbb R )\) for each s and a.e. ω, then we can understand this sum of integrals as integration with respect to a Q-Wiener process associated with a Hilbert space H and a non-negative symmetric trace-class operator Q having eigenvalues \(\gamma _j^2\) and eigenfunctions e_j forming an orthonormal basis in H. The corresponding integrand is then the unbounded linear operator from H to \(H^{\sigma }( \mathbb R )\) that maps every e_j to the same element of \(H^{\sigma }( \mathbb R )\), namely to \( \mathcal S(0, s) g(u(s)) . \) In particular, this explains why we need the summability condition \( \sum _j \gamma _j^2 < \infty . \)
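A quick Monte Carlo sanity check of this discussion (ours; the square-summable choice γ_j = 1∕j is an assumption of the demo): the lumped noise B(t) = ∑_j γ_j W_j(t) is Gaussian with Var B(t) = t ∑_j γ_j².

```python
# Check that Var( sum_j gamma_j W_j(T) ) = T * sum_j gamma_j^2 for gamma_j = 1/j.
import numpy as np

rng = np.random.default_rng(2)
n_paths, J, T = 20000, 500, 1.0
gamma = 1.0 / np.arange(1, J + 1)              # square-summable sequence

W_T = np.sqrt(T) * rng.standard_normal((n_paths, J))  # independent W_j(T) ~ N(0, T)
B_T = W_T @ gamma                                      # samples of B(T)

print(B_T.var(), T * np.sum(gamma**2))         # both close to pi^2/6 ~ 1.645
```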

Before we formulate the main result, it remains to introduce the following notation. By \( C(0, T; H^{\sigma }(\mathbb R)) \) we denote the space of continuous functions on [0, T] with values in \(H^{\sigma }(\mathbb R)\), equipped with the usual supremum norm.

Theorem 1

Let σ_0 > 1∕2 and \(\sigma \geqslant \max \{ \sigma _0, 1 \}\). Then for any \(\mathcal F_0\) -measurable \( u_0 \in L^2(\Omega ; H^{\sigma }( \mathbb R )) \cap L^{\infty }(\Omega ; H^{\sigma _0}( \mathbb R )) \) with sufficiently small \(L^{\infty }H^{\sigma _0}\) -norm and any T_0 > 0, Eq. (5) has a unique adapted solution \( u \in L^2( \Omega ; C(0, T_0; H^{\sigma }(\mathbb R)) ) \cap L^{\infty }( \Omega ; C(0, T_0; H^{\sigma _0}(\mathbb R)) ) . \) Moreover, \( \mathcal H(u(t)) = \mathcal H(u_0) \) for each t ∈ [0, T_0] almost surely on Ω.

The conservation of the energy (2) plays a crucial role in the proof, so it will be a bit more convenient to work with the energy norm defined by

$$\displaystyle \begin{aligned} \left\Vert u \right\Vert {}_{ \mathcal H }^2 = \frac 12 \int_{\mathbb R} \big( K^{-1/2}u \big)^2 dx \end{aligned}$$

instead of the spatial \(H^{\sigma _0}\)-norm. They are obviously equivalent, since by (3) the symbol of \(K^{-1/2}\) is comparable to \(( 1 + \xi^2 )^{\sigma_0/2}\).

The proof is essentially based on the contraction mapping principle. We do not exploit the smoothing properties of the group \(\mathcal S(t, t_0)\) much, as is done, for example, in [2] for the analysis of a stochastic nonlinear Schrödinger equation. It is enough to know that the absolute value of its symbol equals one, and that S(t) is a unitary group. However, in order to appeal to the fixed point theorem, we have to truncate both the deterministic nonlinearity f and the random nonlinearity g. There are a couple of technical difficulties related to implementing the energy conservation in our case. Firstly, for the truncated equation we can claim \(\mathcal H\)-conservation only up to a particular stopping time. Secondly, one can control \(\left \Vert u \right \Vert { }_{ \mathcal H }\) by \(\mathcal H(u)\) only provided \(\left \Vert u \right \Vert { }_{ \mathcal H }\) is small. These additional difficulties force us to repeat the arguments of the last section iteratively in order to construct a solution on the whole time interval [0, T_0].

As a final remark we point out that the noise in Eq. (1) can be gathered into the one-dimensional term \( \partial _x \left ( u + K u^2 \right ) \circ d B \) with the scalar Brownian motion B = ∑_j γ_j W_j. However, this does not affect the proof below in any way, so we stick to the original formulation (1). In future works we plan to extend the setting to γ_j being either Fourier multipliers or space-dependent coefficients.

2 Truncation

The Sobolev space \(H^{\sigma }(\mathbb R)\) consists of tempered distributions u having the finite square norm \( \left \Vert u \right \Vert { }_{H^{\sigma }}^2 = \int \left | \widehat u (\xi ) \right |{ }^2 \left ( 1 + \xi ^2 \right ) ^{\sigma } d\xi < \infty . \) Let \( \theta \in C_0^{\infty }(\mathbb R) \) with \( \operatorname {\mathrm {supp}} \theta \subset [-2, 2] \) be such that θ(x) = 1 for x ∈ [−1, 1] and \( 0 \leqslant \theta (x) \leqslant 1 \) for \( x \in \mathbb R . \) For R > 0 we introduce the cut-off θ_R(x) = θ(x∕R) and

$$\displaystyle \begin{aligned} f_R(u) = \theta_R( \left\Vert u \right\Vert {}_{H^{\sigma}} ) f(u) , \quad g_R(u) = \theta_R( \left\Vert u \right\Vert {}_{H^{\sigma}} ) g(u) \end{aligned}$$

that we substitute into (5) in place of f(u) and g(u), respectively. The new R-regularisation of (5) reads

$$\displaystyle \begin{aligned} u(t) = \mathcal S(t, t_0) \left( u(t_0) + \int_{t_0}^t \mathcal S(t_0, s) f_R(u(s)) ds + \sum_j \gamma_j \int_{t_0}^t \mathcal S(t_0, s) g_R(u(s)) dW_j(s) \right). \end{aligned} $$
(6)

In this section, without loss of generality, we set t_0 = 0 and u(t_0) = u_0; the time moments t_0 will vary in the next section. Equation (6) can be solved with the help of the contraction mapping principle in \( L^2( \Omega ; C(0, T; H^{\sigma }(\mathbb R)) ) . \)
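For concreteness, here is one admissible choice of the bump function θ and the rescaled cut-off θ_R (a sketch; the argument does not depend on the particular θ):

```python
# One admissible smooth cut-off: theta = 1 on [-1, 1], supported in [-2, 2].
import numpy as np

def transition(s):
    """C-infinity function equal to 0 for s <= 0 and to 1 for s >= 1."""
    s = np.clip(np.asarray(s, dtype=float), 0.0, 1.0)
    a = np.exp(-1.0 / np.where(s > 0, s, 1.0)) * (s > 0)
    b = np.exp(-1.0 / np.where(s < 1, 1.0 - s, 1.0)) * (s < 1)
    return a / (a + b)

def theta(x):
    """theta(x) = 1 for |x| <= 1, theta(x) = 0 for |x| >= 2, values in [0, 1]."""
    return transition(2.0 - np.abs(np.asarray(x, dtype=float)))

def theta_R(x, R):
    """Rescaled cut-off theta_R(x) = theta(x / R)."""
    return theta(np.asarray(x) / R)

print(theta(0.5), theta(1.5), theta(2.5))      # 1.0, a value in (0, 1), 0.0
```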

Proposition 1

Let σ > 1∕2, \( u_0 \in L^2( \Omega ; H^{\sigma }(\mathbb R) ) \) be \(\mathcal F_0\) -measurable and T_0 > 0. Then (6) has a unique adapted solution \( u \in L^2( \Omega ; C(0, T_0; H^{\sigma }(\mathbb R)) ) . \) Moreover, it depends continuously on the initial data u_0.

Proof

We set \( \mathcal Tu(t) = \mbox{RHS(6)} . \) We will show that \(\mathcal T\) is a contraction mapping in \( X_T = L^2( \Omega ; C(0, T; H^{\sigma }(\mathbb R)) ) , \) provided T > 0 is sufficiently small, depending only on R. Let u_1, u_2 be two adapted processes in X_T. Firstly, one can notice that

$$\displaystyle \begin{aligned} \left\Vert f_R( u_1 ) - f_R( u_2 ) \right\Vert _{H^{\sigma}} \leqslant C \left( 1 + R \right)^2 \left\Vert u_1 - u_2 \right\Vert _{H^{\sigma}} , \end{aligned}$$
$$\displaystyle \begin{aligned} \left\Vert g_R( u_1 ) - g_R( u_2 ) \right\Vert _{H^{\sigma}} \leqslant C R \left\Vert u_1 - u_2 \right\Vert _{H^{\sigma}} . \end{aligned}$$

Indeed, \(H^{\sigma }(\mathbb R)\) possesses the algebra property for σ > 1∕2, and \(\partial_x K\) is bounded on \(H^{\sigma }(\mathbb R)\). Then, assuming \( \left \Vert u_1 \right \Vert { }_{H^{\sigma }} \geqslant \left \Vert u_2 \right \Vert { }_{H^{\sigma }} \) without loss of generality, one deduces

$$\displaystyle \begin{aligned} \left\Vert g_R( u_1 ) - g_R( u_2 ) \right\Vert _{H^{\sigma}} &\leqslant C \left\Vert \theta_R( \left\Vert u_1 \right\Vert {}_{H^{\sigma}} ) u_1^2 - \theta_R( \left\Vert u_2 \right\Vert {}_{H^{\sigma}} ) u_2^2 \right\Vert _{H^{\sigma}} \\ & \leqslant C \theta_R( \left\Vert u_1 \right\Vert {}_{H^{\sigma}} ) \left\Vert u_1^2 - u_2^2 \right\Vert _{H^{\sigma}}\\ &\quad + \left| \theta_R( \left\Vert u_1 \right\Vert {}_{H^{\sigma}} ) - \theta_R( \left\Vert u_2 \right\Vert {}_{H^{\sigma}} ) \right| \left\Vert u_2^2 \right\Vert _{H^{\sigma}}\\ & \leqslant C R \left\Vert u_1 - u_2 \right\Vert _{H^{\sigma}} , \end{aligned} $$

where we have used the estimate \( \left | \theta _R( \left \Vert u_1 \right \Vert { }_{H^{\sigma }} ) - \theta _R( \left \Vert u_2 \right \Vert { }_{H^{\sigma }} ) \right | \leqslant \left \Vert \theta ' \right \Vert { }_{L^{\infty }} R^{-1} \left \Vert u_1 - u_2 \right \Vert { }_{H^{\sigma }} , \) which follows from the mean value theorem. The difference between f_R(u_1) and f_R(u_2) is estimated in the same way. Thus

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \left\Vert \mathcal Tu_1(t) - \mathcal Tu_2(t) \right\Vert _{H^{\sigma}} \leqslant \left\Vert \int_0^t \mathcal S(0, s) ( f_R( u_1(s) ) - f_R( u_2(s) ) ) ds \right\Vert _{H^{\sigma}} \\ & &\displaystyle \qquad \qquad \quad + \left\Vert \sum_j \gamma_j \int_0^t \mathcal S(0, s) ( g_R( u_1(s) ) - g_R( u_2(s) ) ) dW_j(s) \right\Vert _{H^{\sigma}} = I + II . \end{array} \end{aligned} $$

The first integral is estimated straightforwardly as

$$\displaystyle \begin{aligned} I \leqslant \int_0^T \left\Vert f_R( u_1(s) ) - f_R( u_2(s) ) \right\Vert _{H^{\sigma}} ds \leqslant C ( 1 + R )^2 T \left\Vert u_1 - u_2 \right\Vert _{C( 0, T; H^{\sigma} )} . \end{aligned}$$

The second one is estimated with the use of the Burkholder inequality [5] as

$$\displaystyle \begin{aligned} \mathbb E \sup_{ 0 \leqslant t \leqslant T } II^2 \leqslant C \mathbb E \int_0^T \left\Vert g_R( u_1(s) ) - g_R( u_2(s) ) \right\Vert _{H^{\sigma}}^2 ds \leqslant C R^2 T \mathbb E \left\Vert u_1 - u_2 \right\Vert _{C( 0, T; H^{\sigma} )}^2 . \end{aligned}$$

It is clear that time-continuity of \( \mathcal Tu_1 , \mathcal Tu_2 \) follows from the factorisation \( \mathcal S = S S_W \) and the estimate \( \left \Vert S_W g_R( u ) \right \Vert _{H^{\sigma }} \leqslant CR^2 , \) so we have a stochastic convolution as in [5, Lemma 3.3]. Thus

$$\displaystyle \begin{aligned} \left\Vert \mathcal Tu_1 - \mathcal Tu_2 \right\Vert _{X_T} \leqslant C \left( ( 1 + R )^2 T + R \sqrt{T} \right) \left\Vert u_1 - u_2 \right\Vert _{X_T} , \end{aligned}$$

and so there exists a small T depending only on R such that \(\mathcal T\) has a unique fixed point in X_T. Moreover, this estimate also gives continuous dependence of the solution in X_T on the initial data \( u_0 \in L^2( \Omega ; H^{\sigma }(\mathbb R) ) . \) Since T depends only on R, the solution extends to the whole interval [0, T_0] by iterating the argument on successive subintervals of length T. □

The regularisation affects the energy conservation. Indeed, in the Itô differential form Eq. (6) reads

$$\displaystyle \begin{aligned} d u &= \left( - \partial_x K u + \frac 12 \sum_j \gamma_j^2 \partial_x^2 u + f_R(u) + \sum_j \gamma_j^2 \partial_x g_R(u) \right) d t\\ &\quad + \sum_j \gamma_j \left( \partial_x u + g_R(u) \right) d W_j ,\end{aligned} $$
(7)

and so, applying the Itô formula to the energy functional \(\mathcal H(u(t))\) defined by (2) and using (7), one can easily obtain

$$\displaystyle \begin{aligned} d \mathcal H(u) = \left( \left( \theta_R - 1 \right) \int u^2 \partial_x K u dx + \theta_R \left( \theta_R - 1 \right) \sum_j \gamma_j^2 \int \left( \frac 12 g(u) K^{-1} g(u) + u g^2(u) \right) dx \right) dt . \end{aligned} $$
(8)

Indeed, assuming \(\sigma \geqslant \sigma _0 + 2\) at first, we notice that the solution u given by Proposition 1 solves Eq. (7). Let us introduce the following notation

$$\displaystyle \begin{aligned} \Psi(t)dt + \Phi(t)dW = \Psi(t)dt + \sum_j \gamma_j \Phi(t) e_j d W_j = \mbox{RHS(7)} .\end{aligned} $$

Then Itô’s formula reads

$$\displaystyle \begin{aligned} \mathcal H(u(t)) & = \mathcal H(u_0) + \int_0^t \partial_u \mathcal H(u(s)) \Psi(s) ds + \int_0^t \partial_u \mathcal H(u(s)) \Phi(s) dW(s) \\ & \quad + \frac 12 \int_0^t \operatorname{\mathrm{tr}} \partial_u^2 \mathcal H(u(s)) ( \Phi(s), \Phi(s) ) ds ,\end{aligned} $$

where the Fréchet derivatives are defined by

$$\displaystyle \begin{aligned} \partial_u \mathcal H(u) \phi = \int_{\mathbb R} \left( K^{-1/2}u K^{-1/2}\phi + u^2 \phi \right) dx , \end{aligned}$$
$$\displaystyle \begin{aligned} \partial_u^2 \mathcal H(u) ( \phi, \psi) = \int_{\mathbb R} \left( K^{-1/2}\phi K^{-1/2}\psi + 2u \phi \psi \right) dx \end{aligned}$$

at every \( \phi , \psi \in H^{\sigma _0}(\mathbb R) . \) Substituting these expressions, together with the definitions of Φ and Ψ, into Itô's formula one obtains (8). Let us, for example, calculate the stochastic integral

$$\displaystyle \begin{aligned} \int_0^t \partial_u \mathcal H(u(s)) \Phi(s) dW(s) &= \sum_j \gamma_j \int_0^t \int_{\mathbb R} \left( K^{-1/2}u K^{-1/2} + u^2 \right)\\ &\quad \times \left( \partial_x u + \theta_R( \left\Vert u \right\Vert {}_{H^{\sigma}} ) \partial_x K u^2 \right) dx \, d W_j \end{aligned} $$

that equals zero, as one can see by integrating by parts in the space integral. Similarly, one calculates the other two integrals in the Itô formula. Thus we have proved (8) for \(\sigma \geqslant \sigma _0 + 2\). In order to lower the bound on σ, one would like to argue by approximating the initial value u_0 with smooth functions and appealing to the continuous dependence on u_0; however, there is a problem here, since θ_R in (8) depends on σ through the \(H^{\sigma}\)-norm. So even for smooth initial data the corresponding solution lies a priori only in \(H^{\sigma}\). This difficulty is overcome in the next statement, where we argue similarly to [3].
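For the reader's convenience we record the cancellations behind this computation (our own bookkeeping): moving one factor \(K^{-1/2}\) by self-adjointness, the four pairings arising from the integrand above are

$$\displaystyle \begin{aligned} \int_{\mathbb R} (K^{-1}u) \, \partial_x u \, dx &= - \int_{\mathbb R} u \, \partial_x (K^{-1}u) \, dx = - \int_{\mathbb R} (K^{-1}u) \, \partial_x u \, dx = 0 , \\ \int_{\mathbb R} (K^{-1}u) \, \partial_x K u^2 \, dx &= - \int_{\mathbb R} (\partial_x u) \, u^2 \, dx = - \frac 13 \int_{\mathbb R} \partial_x (u^3) \, dx = 0 , \\ \int_{\mathbb R} u^2 \, \partial_x u \, dx &= \frac 13 \int_{\mathbb R} \partial_x (u^3) \, dx = 0 , \qquad \int_{\mathbb R} u^2 \, \partial_x K u^2 \, dx = 0 , \end{aligned} $$

the last equality holding by skew-adjointness of \(\partial_x K\) (its symbol iξK(ξ) is odd). Multiplying by the prefactors γ_j and θ_R and summing, the whole integrand vanishes.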

Proposition 2

Let σ_0 > 1∕2 and \(\sigma \geqslant \max \{ \sigma _0, 1 \}\). Then (8) holds almost surely for u satisfying Eq. (6) given by Proposition 1.

Proof

The main idea is to cut off the high frequencies of the differential operator \(\partial_x\) in (7) as follows. Let P_λ, λ > 0, be the Fourier multiplier with symbol θ_λ, defined by the expression \( \mathfrak F ( P_{\lambda } \psi ) = \theta _{\lambda } \widehat \psi . \) Instead of (7) we now consider the following regularisation

$$\displaystyle \begin{aligned} d u &= \left( - \partial_x K u + \frac 12 \sum_j \gamma_j^2 \partial_x^2 P_{\lambda}^2 u + f_R(u) + \sum_j \gamma_j^2 \partial_x P_{\lambda} g_R(u) \right) d t \\ & \quad + \sum_j \gamma_j \left( \partial_x P_{\lambda} u + g_R(u) \right) d W_j \end{aligned} $$
(9)

that has a strong solution. Indeed, it contains only bounded operators, and the corresponding mild equation has exactly the same form as Eq. (6), with \( \mathcal S^{\lambda } = S S_W^{\lambda } \) now in place of \(\mathcal S\), where

$$\displaystyle \begin{aligned} S_W^{\lambda} = \exp \left[ \sum_j \gamma_j \partial_x P_{\lambda} ( W_j(t) - W_j(t_0) ) \right] . \end{aligned}$$

So we can actually apply Proposition 1 to obtain u = u_λ solving (9). Let u = u_∞ stand for the solution of the original Eq. (6). Firstly, we will check that u_λ → u_∞ in \( L^2( \Omega ; L^2(0, T_0; H^{\sigma }(\mathbb R)) ) \) for any σ > 1∕2 as λ → ∞.

Let \( 0 \leqslant t \leqslant T \leqslant T_0 , \) where a sufficiently small positive time T is to be chosen below. Then

$$\displaystyle \begin{aligned} \left\Vert u_{\lambda}(t) - u_{\infty}(t) \right\Vert _{H^{\sigma}} &= \left\Vert \mathcal T^{\lambda}u_{\lambda}(t) - \mathcal T^{\infty}u_{\infty}(t) \right\Vert _{H^{\sigma}} \\ & \leqslant \left\Vert \left( \mathcal S^{\lambda}(t, 0) - \mathcal S^{\infty}(t, 0) \right) u_0 \right\Vert _{H^{\sigma}} \\ &\quad + \left\Vert \int_0^t \left( \mathcal S^{\lambda}(t, s) - \mathcal S^{\infty}(t, s) \right) f_R( u_{\infty}(s) ) ds \right\Vert _{H^{\sigma}}\\ &\quad + \left\Vert \int_0^t \mathcal S^{\lambda}(t, s) ( f_R( u_{\lambda}(s) ) - f_R( u_{\infty}(s) ) ) ds \right\Vert _{H^{\sigma}} \\ &\quad + \left\Vert \left( \mathcal S^{\lambda}(t, 0) - \mathcal S^{\infty}(t, 0) \right) \sum_j \gamma_j \int_0^t \mathcal S^{\infty}(0, s) g_R( u_{\infty}(s) ) dW_j(s) \right\Vert _{H^{\sigma}} \\ &\quad + \left\Vert \sum_j \gamma_j \int_0^t \left( \mathcal S^{\lambda}(0, s) - \mathcal S^{\infty}(0, s) \right) g_R( u_{\infty}(s) ) dW_j(s) \right\Vert _{H^{\sigma}} \\ &\quad + \left\Vert \sum_j \gamma_j \int_0^t \mathcal S^{\lambda}(0, s) ( g_R( u_{\lambda}(s) ) - g_R( u_{\infty}(s) ) ) dW_j(s) \right\Vert _{H^{\sigma}}\\ & = I_1 + \ldots + I_6. \end{aligned} $$

The terms I_3 and I_6 are estimated exactly as the analogous integrals I and II in the proof of Proposition 1, namely,

$$\displaystyle \begin{aligned} I_3 \leqslant C ( 1 + R )^2 \sqrt T \left\Vert u_{\lambda} - u_{\infty} \right\Vert _{L^2( 0, T; H^{\sigma} )} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \mathbb E \sup_{ 0 \leqslant t \leqslant T } I_6^2 & \leqslant C \mathbb E \int_0^T \left\Vert g_R( u_{\lambda}(s) ) - g_R( u_{\infty}(s) ) \right\Vert _{H^{\sigma}}^2 ds\\ & \leqslant C R^2 \mathbb E \left\Vert u_{\lambda} - u_{\infty} \right\Vert _{L^2( 0, T; H^{\sigma} )}^2 . \end{aligned} $$

Thus

$$\displaystyle \begin{aligned} \mathbb E \int_0^T \left( I_3^2 + I_6^2 \right) dt \leqslant C \left( ( 1 + R )^4 T^2 + R^2 T \right) \mathbb E \left\Vert u_{\lambda} - u_{\infty} \right\Vert _{L^2( 0, T; H^{\sigma} )}^2 , \end{aligned}$$

and so there exists a small T > 0 depending only on R such that

$$\displaystyle \begin{aligned} \mathbb E \left\Vert u_{\lambda} - u_{\infty} \right\Vert _{L^2( 0, T; H^{\sigma} )}^2 \leqslant C \mathbb E \int_0^T \left( I_1^2 + I_2^2 + I_4^2 + I_5^2 \right) dt . \end{aligned}$$

One needs to show that the right-hand side of this expression tends to zero as λ → ∞. All four integrals are treated similarly. Indeed, let us look more closely at the first one

$$\displaystyle \begin{aligned} I_1^2 = \int \left| \exp \left( i\xi \theta_{\lambda}(\xi) \sum_j \gamma_j W_j(t) \right) - \exp \left( i\xi \sum_j \gamma_j W_j(t) \right) \right|{}^2 \left| \widehat{u_0} (\xi) \right|{}^2 \left( 1 + \xi^2 \right)^{\sigma} d\xi \end{aligned}$$

that obviously tends to zero as λ → ∞ for a.e. ω and any t. Hence \( \mathbb E \int _0^T I_1^2 dt \to 0 \) by the dominated convergence theorem, since \( I_1 \leqslant 2 \left \Vert u_0 \right \Vert { }_{H^{\sigma }} . \) The integral of \(I_4^2\) is estimated in exactly the same manner, with the stochastic integral of \( \mathcal S^{\infty } g_R( u_{\infty } ) \) standing in place of u_0. The second integral

$$\displaystyle \begin{aligned} \mathbb E \int_0^T I_2^2 dt \leqslant T \mathbb E \int_0^T \int_0^T \left\Vert \left( \mathcal S^{\lambda}(t, s) - \mathcal S^{\infty}(t, s) \right) f_R( u_{\infty}(s) ) \right\Vert _{H^{\sigma}}^2 ds dt \to 0 \end{aligned}$$

by the dominated convergence theorem, since \( \left \Vert \ldots \right \Vert _{H^{\sigma }}^2 \leqslant CR^2 (1 + R)^4 . \) Finally, the last integral

$$\displaystyle \begin{aligned} \mathbb E \int_0^T I_5^2 dt \leqslant T \mathbb E \sup _{ t \in [0, T] } I_5^2 \leqslant C T \mathbb E \int_0^T \left\Vert \left( \mathcal S^{\lambda}(0, s) - \mathcal S^{\infty}(0, s) \right) g_R( u_{\infty}(s) ) \right\Vert _{H^{\sigma}}^2 ds \to 0 \end{aligned}$$

by the Burkholder inequality and the dominated convergence theorem, since \( \left \Vert \ldots \right \Vert _{H^{\sigma }}^2 \leqslant CR^4 . \)

Repeating this argument iteratively on subintervals of [0, T_0] of size T, one obtains that u_λ → u_∞ in \( L^2( \Omega \times [0, T_0] ; H^{\sigma }(\mathbb R) ) . \)

Let us calculate each term in the Itô formula for u = u_λ. As we shall see, the corresponding stochastic integral is not zero; moreover, it is difficult to pass to the limit λ → ∞ in the stochastic part. So instead of \(\mathcal H\) we first consider a sequence \(\mathcal H_n\), \(n \in \mathbb N\), with the cubic term cut off in the following way

$$\displaystyle \begin{aligned} \mathcal H_n(u) = \left\Vert u \right\Vert {}_{\mathcal H}^2 + \frac 13 \theta_n \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^3 dx \end{aligned}$$

that clearly tends to \(\mathcal H(u)\) almost surely at any fixed time moment. The corresponding Fréchet derivatives are defined by

$$\displaystyle \begin{aligned} \partial_u \mathcal H_n(u) \phi = \int_{\mathbb R} \left[ \left(1 + \frac 13\theta_n^{\prime} \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^3 dy \right) K^{-1/2}u K^{-1/2}\phi + \theta_n \left( \left\Vert u \right\Vert {}_{\mathcal H}^2\right) u^2 \phi \right] dx , \end{aligned} $$
$$\displaystyle \begin{aligned} \partial_u^2 \mathcal H_n(u) ( \phi, \psi) & = \int_{\mathbb R} \left[ \left( 1 + \frac 13 \theta_n^{\prime} \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^3 dx \right) K^{-1/2} \phi K^{-1/2}\psi {+} 2 \theta_n \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) u \phi \psi \right] dx \\ &\quad + \theta_n^{\prime} \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^2 \phi dx \int K^{-1/2} u K^{-1/2}\psi dy \\ &\quad + \frac 13 \theta_n^{\prime\prime} \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^3 dx \int K^{-1/2} u K^{-1/2}\phi dy\\ &\quad \int K^{-1/2} u K^{-1/2}\psi dz \end{aligned} $$

at every \( \phi , \psi \in H^{\sigma _0}(\mathbb R) . \) Substituting this into the stochastic integral, one obtains the following expression, which can be simplified by integration by parts

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \int_0^t \partial_u \mathcal H_n(u(s)) \Phi(s) dW(s) \\ & &\displaystyle = \sum_j \gamma_j \int_0^t \int_{\mathbb R} \left[ \left( 1 + \frac 13 \theta_n^{\prime} \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int u^3 dy \right) K^{-1/2}u K^{-1/2} + \theta_n \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) u^2 \right] \\ & &\displaystyle \left( \partial_x P_{\lambda} u {+} \theta_R( \left\Vert u \right\Vert {}_{H^{\sigma}} ) \partial_x K u^2 \right) dx d W_j {=} \sum_j \gamma_j \int_0^t \theta_n \left( \left\Vert u \right\Vert {}_{\mathcal H}^2 \right) \int_{\mathbb R} u^2 \partial_x P_{\lambda} u dx d W_j , \end{array} \end{aligned} $$

where u = u_λ. We will show that this integral tends to zero as λ → ∞; this is exactly the place where we need the cut-off θ_n. Applying some algebraic manipulations to the space integral and the Burkholder inequality to the stochastic integral, one deduces the estimate

$$\displaystyle \begin{aligned} & \mathbb E \sup _{0 \leqslant t \leqslant T_0} \left| \int_0^t \partial_u \mathcal H_n(u(s)) \Phi(s) dW(s) \right|{}^2\\ &\leqslant C \mathbb E \int_0^{T_0} \theta_n^2 \left( \left\Vert u_{\lambda}(t) \right\Vert {}_{\mathcal H}^2 \right) \left( \int_{\mathbb R} u_{\lambda}^2(t) \partial_x ( P_{\lambda} - 1 ) u_{\lambda}(t) dx \right) ^2 dt \\ & \leqslant C \mathbb E \int_0^{T_0} \theta_n^2 \left( \left\Vert u_{\lambda}(t) \right\Vert {}_{\mathcal H}^2 \right) \left\Vert u_{\lambda}(t) \right\Vert {}_{\mathcal H}^4\\ &\quad \left( \left\Vert ( P_{\lambda} - 1 ) u_{\infty}(t) \right\Vert {}_{H^{1/2}}^2 + \left\Vert ( P_{\lambda} - 1 ) ( u_{\lambda}(t) - u_{\infty}(t) ) \right\Vert {}_{H^{1/2}}^2 \right) dt \\ & \leqslant C n^4 \mathbb E \int_0^{T_0} \left( \left\Vert ( P_{\lambda} - 1 ) u_{\infty}(t) \right\Vert {}_{H^{1/2}}^2 + \left\Vert ( u_{\lambda}(t) - u_{\infty}(t) ) \right\Vert {}_{H^{1/2}}^2 \right) dt \to 0 \end{aligned} $$

as λ → ∞ for each fixed \(n \in \mathbb N\). Note that the use of the functional \(\mathcal H_n\) instead of \(\mathcal H\) is important here. Similarly, we calculate the remaining two terms in the Itô formula

$$\displaystyle \begin{aligned} & \partial_u \mathcal H_n(u) \Psi + \frac 12 \operatorname{\mathrm{tr}} \partial_u^2 \mathcal H_n(u) ( \Phi, \Phi )\\ &= \left( \theta_R - \theta_n \right) \int u^2 \partial_x K u dx + \theta_n \theta_R \left( \theta_R - 1 \right) \sum_j \gamma_j^2 \int u g^2(u) dx \\ &\quad + \frac { \theta_R (\theta_R - 1) }2 \sum_j \gamma_j^2 \int g(u) K^{-1} g(u) dx\\ &\quad + \frac {\theta_n}2 \sum_j \gamma_j^2 \int \left( u^2 \partial_x^2 P_{\lambda}^2 u + 2u ( \partial_x P_{\lambda} u )^2 \right) dx \\ &\quad + \theta_n \theta_R \sum_j \gamma_j^2 \left( 2 \int u( \partial_x P_{\lambda} u ) g(u) dx - \int g(u) P_{\lambda} K^{-1} g(u) dx \right) \\ &\quad + \frac 13 \theta_R \theta_n^{\prime} \int u^3 dy \left( \frac {\theta_R - 1}2 \sum_j \gamma_j^2 \int g(u) K^{-1} g(u) dx - \int u g(u) dx \right)\\ & = J_1 + \ldots + J_6 , \end{aligned} $$

where, as above, u = u_λ. One can prove that for a.e. ω ∈ Ω and t ∈ [0, T_0] the first three terms J_1 + J_2 + J_3 tend to the integrand on the right-hand side of (8) in the successive limits, first as λ → ∞ and then as n → ∞. Both J_4 and J_5 tend to zero as λ → ∞. Meanwhile, the last term J_6 stays bounded by a constant C_n uniformly in λ, and after the limit λ → ∞ it vanishes for all \( n \geqslant \left\Vert u_{\infty}(t) \right\Vert {}_{\mathcal H}^2 \), since then \(\theta_n^{\prime}\) vanishes; hence \( \lim _{n \to \infty } \lim _{\lambda \to \infty } J_6 = 0 . \) Let us show, for example, that J_4 → 0; this is the most troublesome term in the sum, since this is the only place in the paper where we make use of the fact \(\sigma \geqslant 1\). The rest are treated similarly, without this additional restriction. Indeed,

$$\displaystyle \begin{aligned} J_4 & \leqslant C \left| \int \left( u \partial_x P_{\lambda} u - P_{\lambda} ( u \partial_x u ) \right) ( P_{\lambda} - 1 ) \partial_x u dx \right|\\ & \leqslant C \left\Vert u_{\lambda} \right\Vert {}_{H^1}^2 \left( \left\Vert ( P_{\lambda} - 1 ) u_{\infty} \right\Vert {}_{H^1} + \left\Vert u_{\lambda} - u_{\infty} \right\Vert {}_{H^1} \right) \end{aligned} $$

that obviously tends to zero as λ → ∞. This concludes the proof. □

At this stage one cannot yet claim the energy conservation, so we prove a weaker result that will be sharpened later. Note that there exists \(C_{\mathcal H} > 0\) such that

$$\displaystyle \begin{aligned} \left\Vert u \right\Vert {}_{ \mathcal H }^2 ( 1 - C_{\mathcal H} \left\Vert u \right\Vert {}_{ \mathcal H } ) \leqslant \mathcal H(u) \leqslant \left\Vert u \right\Vert {}_{ \mathcal H }^2 ( 1 + C_{\mathcal H} \left\Vert u \right\Vert {}_{ \mathcal H } ) , \end{aligned} $$
(10)

following from the well-known embedding \( H^{\sigma _0} (\mathbb R) \hookrightarrow L^{\infty } (\mathbb R) \); recall that σ_0 > 1∕2.
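Explicitly (our own bookkeeping), (10) follows from

$$\displaystyle \begin{aligned} \left| \mathcal H(u) - \left\Vert u \right\Vert {}_{ \mathcal H }^2 \right| = \frac 13 \left| \int_{\mathbb R} u^3 dx \right| \leqslant \frac 13 \left\Vert u \right\Vert {}_{L^{\infty}} \left\Vert u \right\Vert {}_{L^2}^2 \leqslant C \left\Vert u \right\Vert {}_{H^{\sigma_0}}^3 \leqslant C_{\mathcal H} \left\Vert u \right\Vert {}_{ \mathcal H }^3 , \end{aligned} $$

using the embedding together with the equivalence of \(\left\Vert \cdot \right\Vert {}_{\mathcal H}\) and \(\left\Vert \cdot \right\Vert {}_{H^{\sigma_0}}\).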

Lemma 1

There exists a constant T_1 > 0, independent of ω, such that if u solving Eq. (6) has \( \left \Vert u \right \Vert { }_{ \mathcal H } \leqslant \frac 1{ 2 C_{\mathcal H} } \) on some interval [0, τ], then \( \mathcal H(u) \leqslant 2 \mathcal H(u(0)) \) on [0, T_1 ∧ τ].

Proof

At first one can notice that as long as \(\left \Vert u \right \Vert { }_{ \mathcal H }\) stays bounded by \((2 C_{\mathcal H})^{-1}\), we have

$$\displaystyle \begin{aligned} \frac 12 \left\Vert u \right\Vert {}_{ \mathcal H }^2 \leqslant \mathcal H(u) \leqslant \frac 32 \left\Vert u \right\Vert {}_{ \mathcal H }^2 . \end{aligned}$$

Moreover, one easily deduces from (8) the following bound

$$\displaystyle \begin{aligned} \mathcal H(u(t)) \leqslant \mathcal H(u(0)) + C \int_0^t \mathcal H(u(s)) ds , \end{aligned}$$

and so the proof is concluded by Grönwall's lemma: \( \mathcal H(u(t)) \leqslant \mathcal H(u(0)) e^{Ct} \), so that, e.g., \( T_1 = C^{-1} \log 2 \) works. □

3 Proof of the Main Result

We construct a solution u of (5) iteratively on the intervals [0, T_1], [T_1, 2T_1] and so on, where the interval size T_1 is given by Lemma 1. Under the assumptions of Theorem 1, we denote by u_m the solutions of Eq. (6) with \(R = m \in \mathbb N\) given by Proposition 1, where we successively set t_0 = 0, T_1, 2T_1, …. We define the stopping times

$$\displaystyle \begin{aligned} \tau_m = \tau_m^{t_0} = \inf \left \{ t \in [t_0, T_0] : \left\Vert u_m(t) \right\Vert {}_{H^{\sigma}} > m \right \} \end{aligned} $$
(11)

with the convention \(\inf \emptyset = T_0\). Starting with t_0 = 0, we first show the following result.

Lemma 2

For a.e. ω ∈ Ω, any \( m \in \mathbb N \) and each t ∈ [0, τ] with \( \tau (\omega ) = \min \{ \tau _m(\omega ), \tau _{m+1}(\omega ) \} , \) it holds true that u_m(t) = u_{m+1}(t).

Proof

We define

$$\displaystyle \begin{aligned} \widetilde u_i(t) = \left \{ \begin{aligned} & u_i(t) & \mbox{if } & t \in [0, \tau] \\ & \mathcal S(t, \tau) u_i(\tau) & \mbox{if } & t \in [\tau, T_0] \end{aligned} \right. , \quad i = m, m+1 . \end{aligned}$$

At first we will show that \( \widetilde u_m \) and \( \widetilde u_{m+1} \) coincide in X_T provided T is sufficiently small. Then we will finish the proof by an iteration procedure. Since the truncations are inactive on [0, τ] (there \( \left\Vert u_m \right\Vert {}_{H^{\sigma}} \leqslant m \) and \( \left\Vert u_{m+1} \right\Vert {}_{H^{\sigma}} \leqslant m + 1 \)), the difference of these functions has the form

$$\displaystyle \begin{aligned} \widetilde u_{m+1}(t) - \widetilde u_m(t) & = \mathcal S(t, 0) \int_0^t \chi_{ \{ s \leqslant \tau \} }(s) \mathcal S(0, s) \left( f(\widetilde u_{m+1}(s)) - f(\widetilde u_m(s)) \right) ds \\ &\quad + \mathcal S(t, 0) \sum_j \gamma_j \int_0^t \chi_{ \{ s \leqslant \tau \} }(s) \mathcal S(0, s) \left( g(\widetilde u_{m+1}(s)) - g(\widetilde u_m(s)) \right) dW_j(s) , \end{aligned} $$
where the stochastic integral is estimated via

$$\displaystyle \begin{aligned} \begin{array}{rcl} & \mathbb E \sup_{ 0 \leqslant t \leqslant T } \left\Vert S_W(t, 0) \sum_j \gamma_j \int_0^t S(t - s) \chi_{ \{ s \leqslant \tau \} }(s) S_W(0, s) \left( g(\widetilde u_{m+1}(s)) - g(\widetilde u_m(s)) \right) dW_j(s) \right\Vert _{ H^{\sigma} }^2&\displaystyle \\ & \leqslant C \mathbb E \int_0^T \chi_{ \{ s \leqslant \tau \} }(s) \left\Vert S_W(0, s) \left( g(\widetilde u_{m+1}(s)) - g(\widetilde u_m(s)) \right) \right\Vert _{ H^{\sigma} }^2 ds&\displaystyle \\ & \leqslant C \mathbb E \int_0^T \chi_{ \{ s \leqslant \tau \} }(s) \left( \left\Vert \widetilde u_{m+1}(s) \right\Vert _{ H^{\sigma} } + \left\Vert \widetilde u_m(s) \right\Vert _{ H^{\sigma} } \right) ^2 \left\Vert \widetilde u_{m+1}(s) - \widetilde u_m(s) \right\Vert _{ H^{\sigma} }^2 ds&\displaystyle \\ & \leqslant C ( 2m + 1 )^2 T \mathbb E \sup_{[0, T]} \left\Vert \widetilde u_{m+1} - \widetilde u_m \right\Vert _{ H^{\sigma} }^2&\displaystyle \end{array} \end{aligned} $$

with the help of the Burkholder inequality for convolutions with the unitary group S, see [5, Lemma 3.3]. The first integral is estimated more straightforwardly (compare with the similar argument employed for I in the proof of Proposition 1), and so one obtains

$$\displaystyle \begin{aligned} \left\Vert \widetilde u_{m+1} - \widetilde u_m \right\Vert _{ X_T } \leqslant C(m) \sqrt{T} \left\Vert \widetilde u_{m+1} - \widetilde u_m \right\Vert _{ X_T } . \end{aligned}$$

Hence \( \widetilde u_{m+1} = \widetilde u_m \) on [0, T] for a.e. ω ∈ Ω, provided T is chosen sufficiently small depending only on m. We can thus iterate this procedure to show that \( \widetilde u_{m+1} = \widetilde u_m \) on the whole interval [0, T_0], which concludes the proof of the lemma. □

Our goal is to bound \( \left \Vert u_m \right \Vert _{ L^2 C( 0, T_1; H^{\sigma } ) } \) by a constant independent of \(m \in \mathbb N\); in particular, we will need to estimate \( \left \Vert f(u_m) \right \Vert { }_{H^{\sigma }} \) and \( \left \Vert g(u_m) \right \Vert { }_{H^{\sigma }} . \) This can easily be done with the help of the product estimate

$$\displaystyle \begin{aligned} \left\Vert \phi \psi \right\Vert {}_{H^{\sigma}} \leqslant C( \sigma, \sigma_0 ) \left( \left\Vert \phi \right\Vert {}_{H^{\sigma}} \left\Vert \psi \right\Vert {}_{H^{\sigma_0}} + \left\Vert \phi \right\Vert {}_{H^{\sigma_0}} \left\Vert \psi \right\Vert {}_{H^{\sigma}} \right) \end{aligned}$$

which holds for any \(\sigma \geqslant 0\) and σ_0 > 1∕2, see for example [7, Estimate (3.12)].

For a.e. ω ∈ Ω and any \(m \in \mathbb N\), t ∈ [0, T 0] we have

$$\displaystyle \begin{aligned} \left\Vert u_m(t) \right\Vert {}_{H^{\sigma}} \leqslant \left\Vert u_0 \right\Vert {}_{H^{\sigma}} + \int_0^t \left\Vert f(u_m(s)) \right\Vert {}_{H^{\sigma}} ds + \left\Vert \sum_j \gamma_j \int_0^t \mathcal S(0, s) g_m(u_m(s)) dW_j(s) \right\Vert _{H^{\sigma}} , \end{aligned}$$

where \( \left \Vert f(u_m(s)) \right \Vert { }_{H^{\sigma }} \leqslant C \left ( \left \Vert u_m(s) \right \Vert { }_{H^{\sigma _0}} + \left \Vert u_m(s) \right \Vert { }_{H^{\sigma _0}}^2 \right ) \left \Vert u_m(s) \right \Vert { }_{H^{\sigma }} . \) Now taking into account that \( \left \Vert \mathcal S(0, s) g_m(u_m(s)) \right \Vert _{H^{\sigma }} \leqslant C \left \Vert u_m(s) \right \Vert { }_{H^{\sigma _0}} \left \Vert u_m(s) \right \Vert { }_{H^{\sigma }} , \) the stochastic integral can be estimated by the Burkholder inequality, and so we obtain for any \(0 < T \leqslant T_0\) the following inequality

$$\displaystyle \begin{aligned} \mathbb E \sup_{ t \in [0, T] } \left\Vert u_m(t) \right\Vert {}_{H^{\sigma}}^2 \leqslant 3 \mathbb E \left\Vert u_0 \right\Vert {}_{H^{\sigma}}^2 + C \mathbb E \int_0^T \left( \left\Vert u_m(t) \right\Vert {}_{H^{\sigma_0}}^2 + \left\Vert u_m(t) \right\Vert {}_{H^{\sigma_0}}^4 \right) \left\Vert u_m(t) \right\Vert {}_{H^{\sigma}}^2 dt , \end{aligned} $$
(12)

where C depends only on σ_0, σ, T_0 and \(\sum _j \gamma _j^2\). We will use this inequality iteratively on the intervals [0, T_0 ∧ kT_1], \(k \in \mathbb N\), with T_1 found in Lemma 1. Let \( \left \Vert u_0 \right \Vert { }_{ \mathcal H } \leqslant ( 5 C_{ \mathcal H } )^{-1} \) a.e. on Ω. Consider the following stopping time

$$\displaystyle \begin{aligned} T_2^m = \inf \left \{ t \in [0, T_0] : \left\Vert u_m(t) \right\Vert {}_{ \mathcal H } > ( 2 C_{ \mathcal H } )^{-1} \right \} . \end{aligned}$$

Then \(T_1 \leqslant T_2^m\) a.e. Indeed, assuming the contrary, \(T_1 > T_2^m\), one can deduce from (10) and Lemma 1 that

$$\displaystyle \begin{aligned} \left\Vert u_m(T_2^m) \right\Vert {}_{ \mathcal H } &\leqslant \sqrt{ 2 \mathcal H( u_m(T_2^m) ) } \leqslant 2 \sqrt{ \mathcal H( u_0 ) } \leqslant 2 \sqrt{ 1 + C_{\mathcal H} \left\Vert u_0 \right\Vert {}_{ \mathcal H } } \left\Vert u_0 \right\Vert {}_{ \mathcal H }\\ & \leqslant \sqrt{ \frac{24}{125} } C_{ \mathcal H }^{-1} < ( 2 C_{ \mathcal H } )^{-1} ,\end{aligned} $$

which contradicts the definition of the stopping time \(T_2^m\), due to the continuity of \(\left \Vert u_m \right \Vert { }_{ \mathcal H }\). As a result, \(\left \Vert u_m \right \Vert { }_{ \mathcal H }\) stays bounded by \(( 2 C_{ \mathcal H } )^{-1}\) on the interval [0, T_1] for a.e. ω, and this simplifies (12) in the following way

$$\displaystyle \begin{aligned} \mathbb E \sup_{ t \in [0, T] } \left\Vert u_m(t) \right\Vert {}_{H^{\sigma}}^2 \leqslant 3 \mathbb E \left\Vert u_0 \right\Vert {}_{H^{\sigma}}^2 + C \int_0^T \mathbb E \sup_{s \in[0, t]} \left\Vert u_m(s) \right\Vert {}_{H^{\sigma}}^2 dt \end{aligned}$$

holding true for any \(0 < T \leqslant T_1\). Hence by Grönwall’s lemma we obtain

$$\displaystyle \begin{aligned} \left\Vert u_m \right\Vert _{ L^2 C( 0, T_1; H^{\sigma} ) }^2 \leqslant 3 \left\Vert u_0 \right\Vert {}_{L^2 H^{\sigma}}^2 e^{CT_1} = M , \end{aligned}$$

where M does not depend on \(m \in \mathbb N\). Hence

$$\displaystyle \begin{aligned} \mathbb P ( \tau_m \geqslant T_1 ) = \mathbb P \left( \left\Vert u_m \right\Vert _{ C( 0, T_1; H^{\sigma} ) } \leqslant m \right) \geqslant 1 - \frac 1{m^2} \mathbb E \left\Vert u_m \right\Vert _{ C( 0, T_1; H^{\sigma} ) }^2 \geqslant 1 - \frac M{m^2} , \end{aligned}$$

and so \( [0, T_1] \subset \cup _{m \in \mathbb N} [0, \tau _m(\omega )] \) for a.e. ω ∈ Ω. Thus we can define u on [0, T_1] by setting u = u_m on [0, τ_m]; by Lemma 2 this definition is consistent. This is clearly a solution of (5) on [0, T_1] satisfying \(d \mathcal H(u) = 0\) and \( \left \Vert u \right \Vert { }_{ \mathcal H } < ( 2 C_{ \mathcal H } )^{-1} \) for a.e. ω ∈ Ω.

Now one can repeat the argument on [T_1, 2T_1] by constructing new solutions u_m of Eq. (6) with the initial data u(T_1) given at the time moment t_0 = T_1. The stopping times τ_m are defined by (11) with t_0 = T_1. The fact that \(\left \Vert u_m \right \Vert { }_{ \mathcal H }\) does not exceed the level \(( 2 C_{ \mathcal H } )^{-1}\) is guaranteed by the energy conservation, namely by \( \mathcal H(u(T_1)) = \mathcal H(u_0) , \) in the same manner as above. The rest is similar, and so we obtain a solution on [T_1, 2T_1] with constant energy equal to \(\mathcal H(u_0)\). After finitely many repetitions of the argument we construct a solution on [0, T_0].

It remains to prove the uniqueness. Let \( u_1, u_2 \in L^2( \Omega ; C(0, T_0; H^{\sigma }(\mathbb R)) ) \) solve Eq. (5). For R > 0 we introduce

$$\displaystyle \begin{aligned} \tau_R = \inf \left \{ t \in [0, T_0] : \max_{ i = 1,2 } \left\Vert u_i(t) \right\Vert {}_{H^{\sigma}} > R \right \} . \end{aligned}$$

Clearly, for a.e. ω ∈ Ω both u_1 and u_2 are solutions of (6) on [0, τ_R]. By Proposition 1 it holds true that u_1 = u_2 on [0, τ_R] for a.e. ω ∈ Ω. Taking \(R \in \mathbb N\) and exploiting the time-continuity of u_1, u_2, one obtains u_1 = u_2 on [0, lim_{R→∞} τ_R] for a.e. ω ∈ Ω. Now from sub-additivity and Chebyshev's inequality we deduce

$$\displaystyle \begin{aligned} \mathbb P ( \tau_R \geqslant T_0 ) & = \mathbb P \left( \max_{ i = 1,2 } \left\Vert u_i \right\Vert {}_{C(0, T_0; H^{\sigma})} \leqslant R \right)\\ & \geqslant 1 - \frac 1{R^2} \mathbb E \left( \left\Vert u_1 \right\Vert {}_{C(0, T_0; H^{\sigma})}^2 + \left\Vert u_2 \right\Vert {}_{C(0, T_0; H^{\sigma})}^2 \right) \to 1 \end{aligned} $$

as R → ∞, proving u_1 = u_2 on [0, T_0]. This concludes the proof of Theorem 1.