1 Introduction

Motivations

Uniqueness is a problem with many facets for PDEs and different problems may require different approaches. When turning to stochastic PDEs, the problem acquires new levels of complexity, as uniqueness for stochastic processes can be understood in several ways. We refer to [25] for a recent review.

A prototypical example of a PDE for which uniqueness is open is given by the Navier–Stokes equations, where the issue of uniqueness is intertwined with the issues of regularity and emergence of singularities [22]. The stochastic version shares the same problems. In recent years, by means of a clever way of solving the Kolmogorov equation, Da Prato and Debussche [13, 19] have shown existence of Markov families of solutions. Moreover, such Markov families admit a unique invariant measure, with exponential convergence rate [38]. In [26, 28] similar results have been obtained with a completely different method, based on the Krylov selection method [35]. Related results can be found in [1, 14, 24, 27, 40–44]. Both methods apply equally well in more general situations [8].

The purpose of this paper is to analyse uniqueness and emergence of blow-up in a much simpler infinite dimensional stochastic equation. We look for a model that retains some characteristics of the original problem and is amenable to the analysis of [13, 28]. The main point is the choice of the non-linearity.

The Navier–Stokes non-linearity on the torus with \(2\pi \)-periodic boundary conditions reads in Fourier series as

$$\begin{aligned} (u\cdot \nabla )u = \mathrm{i} \sum _{k} \sum _{n+m=k} (u_n\cdot m)\,u_m\,\mathrm{e}^{\mathrm{i}k\cdot x}. \end{aligned}$$

Here the \(k\)th mode interacts with almost every other mode. The most reasonable simplification is to reduce the interaction to a finite number of modes, while keeping the orthogonality property in the energy estimate. The simplest possible choice is nearest-neighbour interaction, and this gives the dyadic model.

The dyadic model

The dyadic model was introduced in [29, 33] as a model for the exchange of energy of an inviscid fluid among different packets of wave-modes (shells). It has since been studied in [4, 6, 12, 34, 46], and in the inviscid and stochastically forced case in [3, 5, 9].

The viscous version has been studied in [10, 11, 30]. Blow-up of positive solutions with non-linearity of strong intensity is proved in [10]. In [7] the authors prove well-posedness and convergence to the inviscid limit, again for positive solutions, with non-linearity of intensity of “Navier–Stokes” type.

In this paper we study the dyadic model with additive noise,

$$\begin{aligned} dX_n = \bigl (- \nu \lambda _n^2 X_n + \lambda _{n-1}^\beta X_{n-1}^2 - \lambda _n^\beta X_n X_{n+1}\bigr )\,dt + \sigma _n\,dW_n, \quad n\ge 1, \end{aligned}$$
(1.1)

where \(\lambda _n = 2^n\) and \(X_0\equiv 0\). The noise coefficients satisfy suitable assumptions and the parameter \(\beta \) measures the relative intensity of the non-linearity with respect to the linear term. Throughout the paper we consider the viscous problem, namely \(\nu >0\). The inviscid limit will be addressed in a future work.

The non-linear term cancels out as in Navier–Stokes providing an a-priori bound in \(\ell ^2(\mathbf R )\) independent of \(\beta \). If \(\lambda _n^2 X_n\gtrsim \lambda _n^{\beta } X_n^2\), the linear term dominates the non-linear term. This is the heuristic reason why local strong solutions exist when the initial condition decays at least as \(\lambda _n^{{-(\beta -2)}}\). If \(\beta \le 2\) this is always true due to the \(\ell ^2\)-bound, and the non-random problem has a unique global solution [10]. Likewise, uniqueness holds with noise when \(\beta \le 2\).
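The cancellation can be made explicit by a formal computation: multiplying the \(n\)th non-linear term by \(X_n\) and summing (recall \(X_0\equiv 0\)), the two series telescope,

$$\begin{aligned} \sum _{n\ge 1} X_n\bigl (\lambda _{n-1}^\beta X_{n-1}^2 - \lambda _n^\beta X_n X_{n+1}\bigr ) = \sum _{n\ge 2}\lambda _{n-1}^\beta X_{n-1}^2 X_n - \sum _{n\ge 1}\lambda _n^\beta X_n^2 X_{n+1} = 0, \end{aligned}$$

since shifting the index in the first sum reproduces the second sum term by term. Hence, formally, the non-linearity gives no contribution to \(\tfrac{d}{dt}\Vert X\Vert _H^2\), which is the source of the \(\ell ^2\)-bound (the rearrangement is justified for sequences decaying fast enough).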

By a scaling argument (see for instance [10]), one can “morally” identify the dyadic model with the Navier–Stokes equations when \(\beta \approx \tfrac{5}{2}\). In [7] well-posedness is proved in a range which includes the value \(\tfrac{5}{2}\), but only for positive solutions. Positivity is preserved by the unforced dynamics, but it is clearly destroyed by the additive random perturbation.

Main results

This paper contains a thorough analysis of the case \(\beta >2\), which can be roughly summarised in the table below.

 

             \(\beta \le 2\)    \(2<\beta \le 3\)         \(\beta >3\)

Blow-up      NO                 NO\({}^\mathrm{a}\)       YES

Uniqueness   YES                YES                       ?

\(^\mathrm{a}\) Absence of blow-up is proved up to \(\beta _c<3\)

We prove pathwise uniqueness in the range \(\beta \in (2,3]\) by adapting an idea developed for positive solutions in [7]. The solution is decomposed into a quasi-positive component and a residual term. Quasi-positivity means that the components admit a lower bound decaying as a negative power of \(\lambda _n\). This bound is preserved by the system as long as the random perturbation is not too strong. Under the same conditions the residual term is small.

Quasi-positivity and the invariant area argument of [7] together imply smoothness of the solution. Here by smoothness we mean that \((\lambda _n^\gamma X_n)_{n\ge 1}\) is bounded for every \(\gamma \). This result holds for \(\beta \in (2,\beta _c)\), where \(\beta _c\in (2,3]\) is the value identified in [7].

When \(\beta >3\) we use an idea of [10] for positive solutions. We are able to identify a set of initial conditions that lead to blow-up with positive probability.

Emergence of blow-up has already been proved in several stochastic models: see for instance [16, 17] for the Schrödinger equation and [21, 36, 37] for the nonlinear heat equation (the result of [23] is essentially one dimensional and involves no ideas for infinite dimensional systems). All such results only ensure that blow-up occurs with positive probability.

We first state some general conditions that ensure that blow-up occurs with probability one. Roughly speaking, one first needs to identify a set of initial states that lead to blow-up with positive probability. In general, this is not sufficient (see Example 5.6). The crucial idea is to prove that such sets are recurrent for the evolution, conditionally on the non-occurrence of blow-up. We believe that these general results may be of independent interest.

Our main result on blow-up for the dyadic model ensures that if at least one component is forced by noise, then blow-up occurs with full probability. The result holds as long as the initial state satisfies \(\lambda _n^\alpha X_n(0) = O(1)\) for some \(\alpha >\beta -2\). This is optimal, since the same condition ensures the existence of a local smooth solution. In other words, “smoothness” is transient.

The main ingredient to prove recurrence for the sets leading to blow-up is a stronger form of quasi-positivity. This ensures that the negative parts of the solution become smaller in a finite time, depending only on the size of the initial condition in \(H\) and on the size of the random perturbation. We remark that recurrence is not at all obvious, since for \(\beta >3\) the dissipation of the system is not strong enough to provide existence of a stationary solution.

It remains open to understand uniqueness for \(\beta >3\), since blow-up rules out the use of smooth solutions, making pathwise uniqueness a harder problem. Uniqueness in law may still be achievable.

2 Preliminary results and definitions

The following assumption on the intensity of the noise will be in force throughout the whole paper.

Assumption 2.1

There is \(\alpha _0>\max \{\tfrac{1}{2}(\beta -3), \beta -3\}\) such that

$$\begin{aligned} \sup _{n\ge 1}\bigl (\lambda _n^{\alpha _0}\sigma _n\bigr ) < \infty . \end{aligned}$$
(2.1)

2.1 Notations

Set \(\lambda = 2\) and \(\lambda _n = \lambda ^n\). For \(\alpha \in \mathbf R \) let \(V_\alpha \) be the (Hilbert) space

$$\begin{aligned} V_\alpha = \left\{ (x_n)_{n\ge 1}: \sum _{n=1}^\infty (\lambda _n^\alpha x_n)^2<\infty \right\} , \end{aligned}$$

with scalar product \(\langle x,y\rangle _\alpha = \sum _{n=1}^\infty \lambda _n^{2\alpha } x_n y_n\) and norm \(\Vert \cdot \Vert _\alpha = \langle \cdot ,\cdot \rangle _\alpha ^{1/2}\). Set in particular \(H = V_0\) and \(V = V_1\).
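For concreteness, the \(V_\alpha \) norm of a truncated sequence is easy to compute numerically; the following sketch (our own illustration, not part of the paper) shows how the weight \(\lambda _n^\alpha \) enters.

```python
import numpy as np

def norm_alpha(x, alpha):
    """Truncated V_alpha norm: sqrt(sum_n (lambda_n^alpha x_n)^2), lambda_n = 2^n."""
    x = np.asarray(x, dtype=float)
    lam = 2.0 ** np.arange(1, len(x) + 1)  # lambda_1, ..., lambda_{len(x)}
    return float(np.sqrt(np.sum((lam ** alpha * x) ** 2)))

# A sequence x_n = lambda_n^{-gamma} belongs to V_alpha exactly when alpha < gamma:
# the truncated norms stay bounded for alpha < gamma and diverge otherwise.
```

For instance, with \(x=(1,0,0,\dots )\) and \(\alpha =1\) the norm is \(\lambda _1 = 2\), and the \(H\)-norm is recovered with \(\alpha =0\).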

2.2 Definitions of solution

We turn to the definition of solution. We first consider strong solutions, which are unique and regular, but defined on a (possibly) random time interval. We then consider weak solutions, which are global in time.

2.2.1 Strong solutions

We first discuss local strong solutions.

Definition 2.2

(Strong solution) Let \(\mathcal W \) be a Hilbert subspace of \(H\). Given a probability space \((\Omega , \fancyscript{F}, \mathbb P )\) and a cylindrical Wiener process \((W_t, \fancyscript{F}_t)_{t\ge 0}\) on \(H\), a strong solution in \(\mathcal W \) with initial condition \(x\in \mathcal W \) is a pair \((X(\cdot ;x),\tau _x^\mathcal W )\) such that

  • \(\tau _x^\mathcal W \) is a stopping time with \(\mathbb P [\tau _x^\mathcal W >0]=1\),

  • \(X(\cdot ;x)\) is a process defined on \([0,\tau _x^\mathcal W )\) with \(\mathbb P [X(0;x)=x]=1\),

  • \(X(\cdot ;x)\) is continuous with values in \(\mathcal W \) for \(t<\tau _x^\mathcal W \),

  • \(\Vert X(t;x)\Vert _\mathcal{W }\rightarrow \infty \) as \(t\uparrow \tau _x^\mathcal W \), \(\mathbb P -\)a. s.,

  • \(X(\cdot ;x)\) is solution of (1.1) on \([0,\tau _x^\mathcal W )\).

The strong solution turns out to be a Markov process (and even a strong Markov process, but we do not need this fact here) in the following sense (see [31] for further details). Set \(\mathcal W ^{\prime } = \mathcal W \cup \{\Delta \}\), where the terminal state \(\Delta \) is an isolated point. Define the set \(\overline{W}(\mathcal W ^{\prime })\) of all paths \(\omega :[0,\infty )\rightarrow \mathcal W ^{\prime }\) such that there exists a time \(\zeta (\omega )\in [0,\infty ]\) with \(\omega \) continuous with values in \(\mathcal W \) on \([0,\zeta (\omega ))\) and \(\omega (t)=\Delta \) for \(t\ge \zeta (\omega )\). The strong solution defined above can be extended in a canonical way to a process on \([0,\infty )\) with values in \(\mathcal W ^{\prime }\), taking the value \(\Delta \) for \(t\ge \tau _x^\mathcal{W }\). We say that the strong solution is Markov when the process on the extended state space \(\mathcal W ^{\prime }\) is a Markov process.

Theorem 2.3

Let \(\beta >2\) and assume (2.1). Let \(\alpha \in (\beta -2,\alpha _0+1)\). Then for every \(x\in V_\alpha \) there exists a strong solution \((X(\cdot ;x), \tau _x^\alpha )\) with initial condition \(x\). Moreover, the solution is unique, in the sense that if \((X(\cdot ;x),\tau _x)\) and \((X^{\prime }(\cdot ;x),\tau _x^{\prime })\) are two strong solutions, then \(\mathbb P [\tau _x=\tau _x^{\prime }]=1\) and \(X(\cdot ;x)=X^{\prime }(\cdot ;x)\) for \(t<\tau _x\). Finally, the family \((X(\cdot ;x))_{x\in V_\alpha }\) is Markov, in the sense given above.

Proof

Existence and uniqueness are essentially based on the same ideas as [42, Theorem 5.1], but with simpler estimates. We give a quick sketch of the proof to introduce some of the definitions we will use later. Let \(\chi \in C^\infty ([0,\infty ))\) be non-increasing and such that \(\chi (u)=1\) for \(u\le 1\) and \(\chi (u)=0\) for \(u\ge 2\), and set \(\chi _R(u)=\chi (u/R)\). Consider the problem

$$\begin{aligned} dX_n^R = - \nu \lambda _n^2 X_n^R\,dt + \chi _R(\Vert X^R\Vert _\alpha )\bigl (\lambda _{n-1}^\beta (X_{n-1}^R)^2 - \lambda _n^\beta X_n^R X_{n+1}^R\bigr )\,dt + \sigma _n\,dW_n. \end{aligned}$$
(2.2)

The above equation has a (pathwise) unique global solution for every \(x\in V_\alpha \), which is continuous in time with values in \(V_\alpha \). Given \(x\in V_\alpha \), define \(\tau _x^{\alpha ,R}\) as the first time \(t\) at which \(\Vert X^R(t)\Vert _\alpha =R\). Then \(\tau _x^\alpha = \sup _{R>0}\tau _x^{\alpha ,R}\) and the strong solution \(X(t;x)\) coincides with \(X^R(t;x)\) for \(t\le \tau _x^{\alpha ,R}\). By uniqueness this definition makes sense. Markovianity follows from the Markovianity of each \(X^R\). \(\square \)
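The proof only requires the existence of such a cutoff \(\chi \); for concreteness, a standard partition-of-unity construction (our choice, one of many) is sketched below.

```python
import numpy as np

def chi(u):
    """Smooth non-increasing cutoff: chi = 1 on [0, 1], chi = 0 on [2, inf)."""
    u = np.asarray(u, dtype=float)
    def f(t):
        # classical bump-function helper: smooth, vanishing for t <= 0
        return np.where(t > 0.0, np.exp(-1.0 / np.maximum(t, 1e-12)), 0.0)
    # denominator is bounded away from zero for every u >= 0
    return f(2.0 - u) / (f(2.0 - u) + f(u - 1.0))

def chi_R(u, R):
    """Rescaled cutoff chi(u / R), as used in the truncated equation (2.2)."""
    return chi(np.asarray(u, dtype=float) / R)
```

Any smooth monotone interpolation works equally well; only the plateau values and monotonicity matter for the truncation argument.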

By pathwise uniqueness, if \(x\in V_\alpha \), then \(\tau _x^\alpha \le \tau _x^{\alpha ^{\prime }}\) for every \(\alpha ^{\prime }\in (\beta -2,\alpha )\). We will be able to deduce that \(\tau _x^\alpha =\tau _x^{\alpha ^{\prime }}\) as a consequence of Proposition 4.3.

2.2.2 Weak martingale solutions

Whether the blow-up time \(\tau _x^\alpha \) associated to a strong solution is finite or infinite is the main topic of the paper. To consider global solutions we introduce weak solutions.

Given a sequence of independent one-dimensional standard Brownian motions \((W_n)_{n\ge 1}\), let \(Z =(Z_n)_{n\ge 1}\) be the solution of

$$\begin{aligned} dZ_n + \nu \lambda _n^2 Z_n\,dt = \sigma _n\,dW_n, \quad n\ge 1, \end{aligned}$$
(2.3)

with \(Z_n(0)=0\) for all \(n\ge 1\). Define the functional \(\mathcal G _t\) as

$$\begin{aligned} \mathcal G _t(y,z) = \Vert y(t)\Vert _H^2 + 2\int \limits _0^t\left( \nu \Vert y(s)\Vert _V^2 - \sum _{n=1}^\infty \lambda _n^\beta (y_n+z_n)\bigl (y_{n+1}z_n - y_n z_{n+1}\bigr )\right) \,ds. \end{aligned}$$

If \(Y\in L^\infty _\mathrm{\tiny loc }(0,\infty ;H)\cap L^2_\mathrm{\tiny loc }(0,\infty ;V)\) and Assumption 2.1 holds, then by the lemma below \(\mathcal G _t(Y,Z)\) is finite and jointly measurable in the variables \((t,y,z)\) (see [8, 41] for a related problem). The following regularity result for \(Z\) is standard [15].

Lemma 2.4

Assume (2.1) with \(\alpha _0\in \mathbf R \). Given \(\alpha <\alpha _0+1\), then almost surely \(Z\in C([0,T];V_\alpha )\) for every \(T>0\). Moreover, for every \(\epsilon \in (0,1]\) with \(\epsilon <\alpha _0+1-\alpha \), there are \(c_{2.4-1.\epsilon }>0\) and \(c_{2.4-2.\epsilon }>0\) such that for every \(T>0\),

$$\begin{aligned} \mathbb E \left[ \exp \left( \frac{c_{2.4-2.\epsilon }}{T^\epsilon } \sup _{[0,T]}\Vert Z(t)\Vert _\alpha ^2\right) \right] \le c_{2.4-1.\epsilon }. \end{aligned}$$
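Since each component of (2.3) is an independent one-dimensional Ornstein–Uhlenbeck process with mean-reversion rate \(\nu \lambda _n^2\), sample paths of \(Z\) can be generated by exact transition sampling. The sketch below is ours, and the choice \(\sigma _n = \lambda _n^{-\alpha _0}\) is a hypothetical example compatible with Assumption 2.1.

```python
import numpy as np

def simulate_Z(n_modes, T, steps, nu=1.0, alpha0=1.0, seed=0):
    """Exact sampling of dZ_n = -nu lambda_n^2 Z_n dt + sigma_n dW_n, Z_n(0) = 0."""
    rng = np.random.default_rng(seed)
    lam = 2.0 ** np.arange(1, n_modes + 1)
    sigma = lam ** (-alpha0)   # illustrative decay satisfying (2.1)
    a = nu * lam ** 2          # per-mode mean-reversion rates
    h = T / steps
    decay = np.exp(-a * h)
    # exact one-step transition: Gaussian with this standard deviation
    std = sigma * np.sqrt((1.0 - decay ** 2) / (2.0 * a))
    Z = np.zeros((steps + 1, n_modes))
    for k in range(steps):
        Z[k + 1] = decay * Z[k] + std * rng.standard_normal(n_modes)
    return Z
```

High modes equilibrate essentially instantaneously and have stationary standard deviation \(\sigma _n/\sqrt{2\nu \lambda _n^2}\), decaying like \(\lambda _n^{-(\alpha _0+1)}\); this is the quantitative content behind \(Z\in C([0,T];V_\alpha )\) for \(\alpha <\alpha _0+1\).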

Definition 2.5

(Energy martingale solution) A weak martingale solution starting at \(x\in H\) is a pair \((X,W)\) on a filtered probability space \((\Omega ,\fancyscript{F},(\fancyscript{F}_t)_{t\ge 0},\mathbb P )\) such that \(W = (W_n)_{n\ge 1}\) is a sequence of independent standard Brownian motions and \(X = (X_n)_{n\ge 1}\) is component-wise a solution of (1.1) with \(X(0)=x\).

A weak solution is an energy solution if \(Y=X-Z\in L^\infty _\mathrm{\tiny loc }(0,\infty ;H)\cap L^2_\mathrm{\tiny loc }(0,\infty ;V)\) with probability one and there is a set \(T_\mathbb P \subset (0,\infty )\) of null Lebesgue measure such that for every \(s\not \in T_\mathbb P \) and every \(t>s\) the following energy inequality holds,

$$\begin{aligned} \mathbb P [\mathcal G _t(Y,Z)\le \mathcal G _s(Y,Z)] = 1. \end{aligned}$$

Remark 2.6

Let \(\Omega _\beta = C([0,\infty );V_{-\beta })\) and define on \(\Omega _\beta \) the canonical process \(\xi \) as \(\xi _t(\omega ) = \omega (t)\) for all \(t>0\) and \(\omega \in \Omega _\beta \). It is a standard interpretation [24] that a weak solution can be seen as a probability on the path space \(\Omega _\beta \). Namely, if \(\mathbb P _x\) is the law of a weak solution starting at \(x\in H\), then \(\xi \) is a weak solution on \((\Omega _\beta ,\mathbb P _x)\). This interpretation will be used in the rest of the paper.

Remark 2.7

The process \(Y = X - Z\) satisfies the equations

$$\begin{aligned} \dot{Y}_n + \nu \lambda _n^2 Y_n = \lambda _{n-1}^\beta (Y_{n-1}+Z_{n-1})^2 - \lambda _n^\beta (Y_n+Z_n)(Y_{n+1}+Z_{n+1}), \end{aligned}$$
(2.4)

\(\mathbb P \)–almost surely, for every \(n\ge 1\) and \(t>0\).

Given \(\alpha >\beta -2\) and \(R>0\), define the following random times on \(\Omega _\beta \),

$$\begin{aligned} \tau _\infty ^\alpha = \inf \{t\ge 0: \Vert \omega (t)\Vert _\alpha =\infty \}, \qquad \tau _\infty ^{\alpha ,R} = \inf \{t\ge 0: \Vert \omega (t)\Vert _\alpha >R\}, \end{aligned}$$
(2.5)

and each random time is \(\infty \) if the corresponding set is empty. The energy inequality required in Definition 2.5 ensures that all weak solutions with the same initial condition coincide with the strong solution up to the blow-up time \(\tau _\infty ^\alpha \).

Theorem 2.8

Let \(\beta >2\) and assume (2.1). Then for every \(x\in H\) there exists at least one energy martingale solution \(\mathbb P _x\). Moreover,

  • if \(\alpha \in (\beta -2,1+\alpha _0)\), \(x\in V_\alpha \) and \(\mathbb P _x\) is an energy martingale solution with initial condition \(x\), then \(\tau _x^\alpha = \tau _\infty ^\alpha \) under \(\mathbb P _x\) and for every \(t>0\),

    $$\begin{aligned} \xi _s = X(s;x), \quad s\le t,\quad \qquad \mathbb P _x-a.s.\quad \text{ on } \{\tau _x^\alpha >t\}, \end{aligned}$$

    where \((X(\cdot ;x),\tau _x^\alpha )\) is the strong solution with initial condition \(x\) defined on \(\Omega _\beta \).

  • There exists at least one family \((\mathbb P _x)_{x\in H}\) of energy martingale solutions satisfying the almost sure Markov property. Namely for every \(x\in H\) and every bounded measurable \(\phi :H\rightarrow \mathbf R \),

    $$\begin{aligned} \mathbb E ^\mathbb{P _x}\bigl [\phi (\xi _t)|\fancyscript{B}_s] = \mathbb E ^\mathbb{P _{\omega (s)}}[\phi (\xi _{t-s})], \quad \mathbb P _x-a. s., \end{aligned}$$

    for almost every \(s\ge 0\) (including \(0\)) and for all \(t\ge s\).

Proof

The proof of the first fact can be done as in [2]. The proofs of the other two facts are entirely similar to those of Theorem 2.1 of [41] and Theorem 3.6 of [42] and we refer to these references for further details. \(\square \)

A natural way to prove existence of weak solutions (see [2]) is to use finite dimensional approximations. Consider for each \(N\ge 1\) the solution \((X_n^{(N)})_{1\le n\le N}\) of the following finite dimensional system,

$$\begin{aligned} {\left\{ \begin{array}{ll} dX_1^{{(N)}} = \bigl (-\nu \lambda _1^2X_1^{{(N)}} - \lambda _1^\beta X_1^{{(N)}} X_2^{{(N)}}\bigr )\,dt + \sigma _1\,dW_1,\\ \dots ,\\ dX_n^{{(N)}} = \bigl (- \nu \lambda _n^2X_n^{{(N)}} + \lambda _{n-1}^\beta (X_{n-1}^{{(N)}})^2 - \lambda _n^\beta X_n^{{(N)}} X_{n+1}^{{(N)}}\bigr )\,dt + \sigma _n\,dW_n,\\ \dots ,\\ dX_N^{{(N)}} = \bigl (- \nu \lambda _N^2X_N^{{(N)}} + \lambda _{N-1}^\beta (X_{N-1}^{{(N)}})^2\bigr )\,dt + \sigma _N\,dW_N. \end{array}\right. } \end{aligned}$$
(2.6)

Given \(x\in H\), let \(\mathbb P ^{{(N)}}_x\) be the probability distribution on \(\Omega _\beta \) of the solution of the above system with initial condition \(x^{{(N)}}=(x_1,x_2,\dots ,x_N)\).
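For intuition only, the Galerkin system (2.6) can be integrated by the Euler–Maruyama method. The sketch below is our own illustration: the parameters and the noise decay \(\sigma _n=\lambda _n^{-2}\) are hypothetical choices, and this explicit scheme requires a step size small compared with \(1/(\nu \lambda _N^2)\).

```python
import numpy as np

def galerkin_em(x0, T, steps, nu=1.0, beta=2.5, seed=0):
    """Euler-Maruyama for the N-mode truncation (2.6), with N = len(x0)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(x0, dtype=float).copy()
    N = len(X)
    lam = 2.0 ** np.arange(1, N + 1)              # lambda_1, ..., lambda_N
    lam_prev = np.concatenate(([1.0], lam[:-1]))  # lambda_{n-1}, with lambda_0 = 1
    sigma = lam ** (-2.0)                         # hypothetical noise coefficients
    h = T / steps
    for _ in range(steps):
        Xm = np.concatenate(([0.0], X[:-1]))      # X_{n-1}, with X_0 = 0
        Xp = np.concatenate((X[1:], [0.0]))       # X_{n+1}, truncated: X_{N+1} = 0
        drift = -nu * lam ** 2 * X + lam_prev ** beta * Xm ** 2 - lam ** beta * X * Xp
        X = X + h * drift + sigma * np.sqrt(h) * rng.standard_normal(N)
    return X
```

For small initial data the damping \(-\nu \lambda _n^2 X_n\) dominates, while the quadratic terms transfer energy towards higher modes.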

Definition 2.9

(Galerkin martingale solution) Given \(x\in H\), a Galerkin martingale solution is any limit point in \(\Omega _\beta \) of the sequence \((\mathbb P ^{{(N)}}_x)_{N\ge 1}\).

It is easy to verify (this is the existence part of the proof of Theorem 2.8; see [2] for details in a similar problem) that Galerkin martingale solutions are energy solutions.

Remark 2.10

All results of this section hold for any polynomial non-linearity with finite modes interaction. On the other hand the rest of the paper is strongly based on the structure of the non-linearity. At least for nearest-neighbour interaction, we are dealing with the difficult case. Indeed every nearest-neighbour interaction can be written [34] as \(a_1 B_n^1(X) + a_2 B_n^2(X)\), where \(B_n^1\) is the non-linearity of the dyadic model and \(B_n^2(x) = \lambda _{n+1}^\beta x_{n+1}^2 - \lambda _n^\beta x_{n-1} x_n\). In [34] the authors prove that the inviscid problem with non-linearity \(B_n^2\) is well-posed.

3 Control of the negative components

Given \(\beta >2\), \(\alpha \in \mathbf R \) and \(c_0>0\), consider the solution \(Z\) of (2.3) and define the following process,

$$\begin{aligned} N_{\alpha ,c_0}(t) = \min \bigl \{m\ge 1: |Z_n(s)|\le c_0\nu \lambda _{n-1}^{-\alpha } \text{ for } s\in [0,t] \text{ and } n\ge m\bigr \}, \end{aligned}$$
(3.1)

with \(N_{\alpha ,c_0}(t) = \infty \) if the set is empty.

Lemma 3.1

(Moments of \(N_{\alpha ,c_0}\)) Given \(\beta >2\), assume (2.1) and let \(\alpha <\alpha _0+1\). Then for every \(\gamma \in (0,\alpha _0+1-\alpha )\) and \(\epsilon \in (0,1]\), with \(\epsilon <\alpha _0+1-\alpha -\gamma \), there are two numbers \(c_{3.1-1}>0\) and \(c_{3.1-2}>0\), depending only on \(\epsilon \), \(\gamma \) and \(\alpha _0\), such that

$$\begin{aligned} \mathbb P [N_{\alpha ,c_0}(t) > n] \le c_{3.1-1} \exp \left( -c_{3.1-2}\frac{c_0\nu }{t^\epsilon }\lambda _n^\gamma \right) , \end{aligned}$$

for every \(t>0\) and \(n\ge 1\). In particular, \(\mathbb P [N_{\alpha ,c_0}(t) = n] > 0\) for every \(n\ge 1\) and

$$\begin{aligned} \mathbb E \bigl [\exp \bigl (\lambda _{N_{\alpha ,c_0}(t)}^\gamma \bigr )\bigr ] <\infty . \end{aligned}$$

Proof

For \(n\ge 1\),

$$\begin{aligned} \{N_{\alpha ,c_0}(t)\le n\} = \left\{ \sup _{k\ge n}\sup _{[0,t]}\lambda _{k-1}^\alpha |Z_k(s)|\le c_0\nu \right\} . \end{aligned}$$

Hence if \(\gamma <\alpha _0+1-\alpha \) and \(k\ge n\),

$$\begin{aligned} \sup _{[0,t]}\lambda _{k-1}^\alpha |Z_k(s)| \le \lambda _{n-1}^{-\gamma }\sup _{[0,t]}\Vert Z(s)\Vert _{\alpha +\gamma }. \end{aligned}$$

Therefore by Chebychev’s inequality and Lemma 2.4,

$$\begin{aligned} \mathbb P [N_{\alpha ,c_0}(t)>n] \le \mathbb P \left[ \sup _{[0,t]}\Vert Z(s)\Vert _{\alpha +\gamma } > c_0\nu \lambda _{n-1}^\gamma \right] \le c_{2.4-1.\epsilon }\exp \left( -c_{2.4-2.\epsilon }\frac{c_0\nu }{t^\epsilon }\lambda _{n-1}^\gamma \right) , \end{aligned}$$

for every \(\epsilon \in (0,1]\) with \(\epsilon <\alpha _0+1-\alpha -\gamma \). The double-exponential moment follows from this estimate.

We finally prove that \(\mathbb P [N_{\alpha ,c_0}(t)=n]>0\). We prove it for \(n=1\) and all other cases follow similarly. By independence,

$$\begin{aligned} \mathbb P [N_{\alpha ,c_0}(t)=1] = \exp \left( \sum _{k=1}^\infty \log \mathbb P \bigl [\sup _{[0,t]}\lambda _{k-1}^\alpha |Z_k(s)|\le c_0\nu \bigr ]\right) , \end{aligned}$$

and it is sufficient to show that the series above is convergent. By (2.1),

$$\begin{aligned} \mathbb P \left[ \sup _{[0,t]}\lambda _{k-1}^\alpha |Z_k(s)|\le c_0\nu \right] \ge \mathbb P \left[ \sup _{[0,t]}|\zeta (\lambda _k^2s)|\le 2^\alpha c_0\nu \lambda _k^{\alpha _0+1-\alpha }\right] , \end{aligned}$$

where \(\zeta \) is the solution of the one dimensional SDE \(d\zeta + \nu \zeta \,dt = dW\), with \(\zeta (0) = 0\). The conclusion follows by standard tail estimates on the one dimensional Ornstein–Uhlenbeck process (see for instance [20]), since \(\alpha <1+\alpha _0\). \(\square \)

The lemma below is the crucial result of the paper. To formulate its statement, we introduce suitable finite dimensional approximations. Consider for each integer \(N\ge 1\) the finite dimensional approximations of (2.4),

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{Y}_1^{{(N)}} = - \nu \lambda _1^2Y_1^{{(N)}} - \lambda _1^\beta X_1^{{(N)}} X_2^{{(N)}},\\ \dots ,\\ \dot{Y}_n^{{(N)}} = - \nu \lambda _n^2Y_n^{{(N)}} + \lambda _{n-1}^\beta (X_{n-1}^{{(N)}})^2 - \lambda _n^\beta X_n^{{(N)}} X_{n+1}^{{(N)}},\\ \dots ,\\ \dot{Y}_N^{{(N)}} = - \nu \lambda _N^2Y_N^{{(N)}} + \lambda _{N-1}^\beta (X_{N-1}^{{(N)}})^2. \end{array}\right. } \end{aligned}$$
(3.2)

In the above system we have set \(X_n^{{(N)}} = Y_n^{{(N)}} + Z_n\) for \(n=1,\dots ,N\). It is easy to verify that the above SDE admits a unique global solution.

Lemma 3.2

(Main lemma) Let \(\beta >2\), \(N\ge 1\) and \(T>0\), and assume (2.1). Let \(\alpha \in [\beta -2,1+\alpha _0)\) and consider \(c_0>0\), \(a_0>0\) and \(n_0\ge 1\) such that

$$\begin{aligned} c_0 \le a_0 \quad \text{ and }\quad c_0 < \sqrt{a_0}\bigl (\lambda _{n_0}^{\frac{1}{2}(\alpha +2-\beta )}-\sqrt{a_0}\bigr ). \end{aligned}$$
(3.3)

Assume that \(\lambda _{n-1}^\alpha X_n^{{(N)}}(0)\ge - a_0\nu \) for all \(n=n_0,\dots ,N\). If \(N > N_{\alpha ,c_0}(T)\), then \(Y_n^{{(N)}}(t)\ge -a_0\nu \lambda _{n-1}^{-\alpha }\) for all \(t\in [0,T]\) and all \(n\ge n_0\vee N_{\alpha ,c_0}(T)\).

Proof

For simplicity we drop the superscript \({}^{{(N)}}\). We may first assume that \(\lambda _{n-1}^\alpha Y_n(0)>-\nu a_0\) for \(n=n_0,\dots ,N\) (the case of equality follows by continuity). Then the same is true in a neighbourhood of \(t=0\). Let \(t_0>0\) be the first time when \(\lambda _{n-1}^\alpha Y_n(t_0) = -\nu a_0\) for at least one \(n\). Let \(n\ge n_0\vee N_{\alpha ,c_0}(T)\) be one of these indices. Then

$$\begin{aligned} \dot{Y}_n(t_0)&\ge -\nu \lambda _n^2 Y_n(t_0) - \lambda _n^\beta (Y_n(t_0)+Z_n(t_0))(Y_{n+1}(t_0)+Z_{n+1}(t_0))\\&\ge a_0\nu ^2\lambda ^2\lambda _{n-1}^{2-\alpha } + \lambda _n^\beta \bigl (a_0\nu \lambda _{n-1}^{-\alpha }-Z_n(t_0)\bigr )(Y_{n+1}(t_0)+Z_{n+1}(t_0))\\&\ge a_0\nu ^2\lambda ^2\lambda _{n-1}^{2-\alpha } - \lambda _n^\beta \bigl (a_0\nu \lambda _{n-1}^{-\alpha }-Z_n(t_0)\bigr )(Y_{n+1}(t_0)+Z_{n+1}(t_0))_-, \end{aligned}$$

since \(\nu a_0\lambda _{n-1}^{-\alpha }-Z_n(t_0)\ge 0\) for \(n\ge N_{\alpha ,c_0}(T)\). Here \(x_- = \max (-x,0)\). We also know that \(Y_{n+1}(t_0)\ge -a_0\nu \lambda _n^{-\alpha }\), hence \(Y_{n+1}(t_0)+Z_{n+1}(t_0)\ge -\nu (a_0+c_0)\lambda _n^{-\alpha }\) and \((Y_{n+1}(t_0)+Z_{n+1}(t_0))_-\le \nu (a_0+c_0)\lambda _n^{-\alpha }\). We also have \(a_0\nu \lambda _{n-1}^{-\alpha }-Z_n(t_0)\le \nu (a_0+c_0)\lambda _{n-1}^{-\alpha }\), so in conclusion

$$\begin{aligned} \dot{Y}_n(t_0) \ge \nu ^2\lambda ^2\lambda _{n-1}^{2-\alpha } \bigl (a_0 - \lambda _n^{\beta -2-\alpha }(a_0+c_0)^2\bigr ) >0, \end{aligned}$$

where the last inequality follows from (3.3), since \((a_0+c_0)^2 < a_0\lambda _{n_0}^{\alpha +2-\beta }\le a_0\lambda _n^{\alpha +2-\beta }\) for \(n\ge n_0\) (recall that \(\alpha \ge \beta -2\)). This contradicts the minimality of \(t_0\). \(\square \)

The next theorem shows that the process can diverge only through its positive components.

Theorem 3.3

Given \(\beta >2\), assume (2.1). Let \(\alpha \in (\beta -2,\alpha _0+1)\) and \(x\in V_\alpha \), and let \((X(\cdot ;x),\tau _x^\alpha )\) be the strong solution in \(V_\alpha \) with initial condition \(x\). Then

$$\begin{aligned} \mathbb E \left[ \sup _{n\ge 1}\sup _{t\in [0,T\wedge \tau _x^\alpha ]} \bigl (\lambda _{n-1}^\alpha \bigl (X_n(t)\bigr )_-\bigr )^p\right] <\infty , \end{aligned}$$

for every \(T>0\) and \(p\ge 1\). In particular,

$$\begin{aligned} \inf _{n\ge 1}\inf _{t\in [0,\tau _x^\alpha \wedge T]}\lambda _{n-1}^\alpha X_n > -\infty , \qquad \mathbb P -\text{ a. } \text{ s. } \end{aligned}$$

Proof

Fix \(x\in V_\alpha \) and \(T>0\). Set \(a_0=\tfrac{1}{4}\) and \(c_0=\tfrac{1}{6}\), so that condition (3.3) holds for any \(n_0\). Choose \(n_0\ge 1\) as the smallest integer such that \(\lambda _{n-1}^\alpha x_n\ge -\frac{1}{4}\nu \) for all \(n\ge n_0\). With the choice \(c_0=\tfrac{1}{6}\), define the event \(\mathcal Z _{\alpha ,T} = \{N_{\alpha ,1/6}(T)<\infty \}\). By Lemma 3.1 \(\mathcal Z _{\alpha ,T}\) has probability one. Lemma 3.2 implies that on \(\{\tau _x^\alpha >T\}\),

$$\begin{aligned} Y_n(t)\ge -\tfrac{1}{4}\nu \lambda _{n-1}^{-\alpha }, \quad \text{ for } n\ge n_0\vee N_{\alpha ,\frac{1}{6}}(T). \end{aligned}$$

Indeed, we can set \(x^{{(N)}}=(x_1,\dots ,x_N)\) and notice that on the event \(\{\tau _x^\alpha >T\}\), problem (2.4) has a unique solution. Hence for every \(N\) the solution of (3.2) with initial condition \(x^{{(N)}}\) converges to the solution of (2.4) with initial condition \(x\). Here the convergence is component-wise uniform in time on \([0,T]\).

Let \(N_1 = n_0\vee N_{\alpha ,1/6}(T)\). It is clear that \(N_1\) has the same finite moments as \(N_{\alpha ,1/6}(T)\). Moreover, on \(\{\tau _x^\alpha >T\}\),

$$\begin{aligned} \lambda _{n-1}^\alpha X_n(t) \ge {\left\{ \begin{array}{ll} -\lambda _{N_1-1}^\alpha \sup _{t\in [0,T]}\Vert X(t)\Vert _H, &{} n<N_1,\\ -\frac{5}{12}\nu , &{} n\ge N_1, \end{array}\right. } \end{aligned}$$

for every \(n\ge 1\). Therefore

$$\begin{aligned} \sup _{n\ge 1}\sup _{t\in [0,T]}\lambda _{n-1}^\alpha \bigl (X_n(t)\bigr )_- \le \nu + \lambda _{N_1-1}^\alpha \sup _{t\in [0,T]}\Vert X(t)\Vert _H. \end{aligned}$$

From Lemma 3.1 and the fact that \(\mathbb E [\sup _{[0,T]}\Vert X(t)\Vert _H^p]\) is finite for every \(p\ge 1\), the estimate in the statement of the theorem readily follows. \(\square \)

Remark 3.4

Given an initial condition \(x\in V_\alpha \), if we set

$$\begin{aligned} \tau _{x,\pm }^\alpha = \sup \{t: \sup \nolimits _{n\ge 1}\lambda _n^\alpha (X_n)_\pm <\infty \}, \end{aligned}$$

then \(\tau _x^\alpha = \min (\tau _{x,+}^\alpha ,\tau _{x,-}^\alpha )\), and the previous theorem implies that \(\tau _x^\alpha =\tau _{x,+}^\alpha \).

Corollary 3.5

Let \(\beta >2\), \(\alpha \in (\beta -2,\alpha _0+1)\) and \(x\in V_\alpha \), and assume (2.1). If either problem (2.4), with initial state \(x\), admits a unique solution for almost every realisation of \(Z\), or we are dealing with a Galerkin solution, then

$$\begin{aligned} \mathbb E \left[ \sup _{n\ge 1}\sup _{t\in [0,T]} \bigl (\lambda _{n-1}^\alpha \bigl (X_n(t)\bigr )_-\bigr )^p\right] <\infty , \end{aligned}$$

for every \(T>0\) and \(p\ge 1\). In particular,

$$\begin{aligned} \inf _{n\ge 1}\inf _{t\in [0,T]}\lambda _{n-1}^\alpha X_n > -\infty , \quad \mathbb P -\text{ a. } \text{ s. } \end{aligned}$$

Proof

We simply notice that in the proof of the theorem above we have used the event \(\{\tau _x^\alpha >T\}\) only to ensure that (2.4) admits a unique solution.

On the other hand, if we are dealing with a Galerkin solution, then up to a sub-sequence we still have component-wise uniform convergence in time. \(\square \)

4 Uniqueness and regularity for \(2<\beta \le 3\)

In this section we prove two extensions of results given in the non-random case. The first concerns path-wise uniqueness, the second is about absence of blow-up. Both extensions are based on the control of negative components shown in Sect. 3.

Theorem 4.1

(Pathwise uniqueness) Let \(\beta \in (2,3]\) and assume that (2.1) holds. If \(X(0)\in V_{\beta -2}\), then there exists a (pathwise) unique solution of (1.1) with initial condition \(X(0)\), in the class of Galerkin martingale solutions.

We do not know whether uniqueness holds in some larger class (energy or weak martingale solutions), nor do we know whether a Galerkin solution develops blow-up. By slightly restricting the range of values of \(\beta \), we obtain an improvement.

Theorem 4.2

(Smoothness) There exists \(\beta _c\in (\tfrac{5}{2},3]\) such that the following statement holds. Assume (2.1) and let \(\beta \in (2,\beta _c)\) and \(\alpha \in (\beta -2,1+\alpha _0)\). Then \(\tau _x^\alpha =\infty \) for all \(x\in V_\alpha \) and path-wise uniqueness holds in the class of energy martingale solutions.

4.1 The proof of Theorem 4.1

The proof is based on [7, Proposition 3.2], which in turn builds on an idea in [4]. Both results hold for positive solutions and no noise.

Proof of Theorem 4.1

Fix \(T>0\). It is sufficient to show uniqueness on \([0,T]\). We will use Lemma 3.2 with \(c_0=\tfrac{1}{6}\) and \(a_0=\tfrac{1}{4}\). With these values (3.3) holds for any \(n_0\). Moreover, the bounds of Lemma 3.2 hold for Galerkin solutions, since they are the component-wise limit of finite dimensional approximations.

Let \(n_0\) be the smallest integer such that \(\inf _{n\ge n_0}\lambda _n^{\beta -2}X_n(0)\ge -\tfrac{1}{4}\nu \) and set \(N_0 = 1 + n_0\vee N_{\beta -2,1/6}(T)\). Let \(X^1\), \(X^2\) be two solutions with the same initial condition \(X(0)\). By Lemma 3.2, \(X_n^i(t)\ge Z_n(t) - \tfrac{1}{4}\nu \lambda _{n-1}^{2-\beta }\) for \(n\ge N_0\), \(t\in [0,T]\), and \(i=1,2\). Set \(A_n = X_n^1 - X_n^2\), \(B_n = X_n^1 + X_n^2\), \(D_n = \tfrac{1}{2}\nu \lambda _{n-1}^{2-\beta } - 2 Z_n\), and

$$\begin{aligned} \psi _\ell (t) = \sum _{n=1}^{N_0-1}\frac{A_n^2}{\lambda _n}, \quad \psi _{h,N}(t) = \sum _{N_0}^N\frac{A_n^2}{\lambda _n}, \quad \psi _N(t) = \psi _\ell (t) + \psi _{h,N}(t). \end{aligned}$$

Notice that \(B_n + D_n\ge 0\) if \(t\in [0,T]\) and \(n\ge N_0\). A simple computation yields a differential inequality for \(\psi _N\), for \(N > N_0\). For the first term we notice that \(D_{n+1}\le \tfrac{5}{6}\nu \lambda _n^{2-\beta }\). The second term is a. s. finite, since \(Z\in C([0,T];V_{(\beta -1)/2})\) by Lemma 2.4, \(Y^i\in L^2([0,T];V)\) and \(\beta \le 3\), and it vanishes a. s. as \(N\rightarrow \infty \); the remaining term vanishes likewise. Set \(\psi (t) = \Vert A(t)\Vert _{-1/2}^2\), so that \(\psi _N\uparrow \psi \). Integrating the inequality for \(\psi _N\) in time and letting \(N\uparrow \infty \), we get

$$\begin{aligned} \psi (t) \le \lambda _{N_0-1}^\beta \left( \sup _{[0,T]}\Vert X^1 + X^2\Vert _H\right) \int \limits _0^t\psi (s)\,ds. \end{aligned}$$

By Gronwall’s lemma \(\psi (t)=0\) a. s. for all \(t\in [0,T]\). \(\square \)
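
For completeness, the Gronwall step can be written out: set \(C = \lambda _{N_0-1}^\beta \sup _{[0,T]}\Vert X^1 + X^2\Vert _H\) and \(\varphi (t) = \int _0^t\psi (s)\,ds\); then

$$\begin{aligned} \frac{d}{dt}\bigl (\mathrm{e }^{-Ct}\varphi (t)\bigr ) = \mathrm{e }^{-Ct}\bigl (\psi (t) - C\varphi (t)\bigr ) \le 0, \qquad \varphi (0)=0, \end{aligned}$$

so \(\varphi \equiv 0\) on \([0,T]\), and \(\psi = \varphi ^{\prime }\equiv 0\) as well.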

4.2 The proof of Theorem 4.2

We give a minimal requirement for smoothness of solutions of (1.1). This is analogous to the criterion developed in [7] without noise. Given \(T>0\) define the subspace \(K_T\) of \(\Omega _\beta \) as

$$\begin{aligned} K_T = \left\{ \omega \in \Omega _\beta : \lim _n\left( \max _{t\in [0,T]}\lambda _n^{\beta -2}|\omega _n(t)|\right) = 0\right\} . \end{aligned}$$

Proposition 4.3

Assume (2.1), and let \(\beta >2\), \(\alpha \in (\beta -2,1+\alpha _0)\). Let \(x\in V_\alpha \) and \(\mathbb P _x\) be an energy martingale solution starting at \(x\). If \(\tau _\infty ^\alpha \) is the random time defined in (2.5), then \(\{\tau _\infty ^\alpha >T\} = K_T\) under \(\mathbb P _x\), for every \(T>0\).

Proof

Fix \(\alpha \in (\beta -2,\alpha _0+1)\), \(x\in V_\alpha \) and a solution \(\mathbb P _x\) starting at \(x\), and let \(\tau _\infty ^\alpha \), \(\tau _\infty ^{\alpha ,R}\) be the random times defined in (2.5). Assume \(\tau _\infty ^\alpha (\omega )>T\), then \(\tau _\infty ^{\alpha ,R_0}(\omega )>T\) for some \(R_0>\Vert x\Vert _\alpha \). In particular \(\Vert \xi _t(\omega )\Vert _\alpha \le R_0\) for \(t\in [0,T]\). Hence,

$$\begin{aligned} \lambda _n^{\beta -2}\max _{[0,T]}|\xi _{n,t}(\omega )| \le \lambda _n^{\beta -2-\alpha }\sup _{[0,T]}\Vert \xi _t(\omega )\Vert _\alpha \le R_0\lambda _n^{\beta -2-\alpha }, \end{aligned}$$

and \(\omega \in K_T\). Vice versa, let \(\omega \in K_T\) and choose \((M_n)_{n\ge 1}\) such that \(M_n\downarrow 0\) and \(\lambda _n^{\beta -2}\max _{[0,T]}|\xi _n(t)|\le M_n\). Set \(u_n = \lambda _n^\alpha \xi _n\) and \(m_n = \max _{[0,T]}|u_n(t)|\), then

$$\begin{aligned} |u_n(t)| \le |u_n(0)| + \left( \sup _{[0,T]}\lambda _n^\alpha |Z_n(t)|\right) + \nu ^{-1}\lambda ^{\alpha -2}M_{n-1}m_{n-1} + \nu ^{-1}\lambda ^{2-\beta }M_{n-1}m_n, \end{aligned}$$

and

$$\begin{aligned} (1 - \nu ^{-1}\lambda ^{2-\beta }M_{n-1})m_n \le |u_n(0)| + \bigl (\sup \nolimits _{[0,T]}\lambda _n^\alpha |Z_n(t)|\bigr ) + \nu ^{-1}\lambda ^{\alpha -2}M_{n-1}m_{n-1}. \end{aligned}$$

Set \(A_n = 2\lambda _n^\alpha |x_n| + 2 \bigl (\sup _{[0,T]}\lambda _n^\alpha |Z_n(t)|\bigr )\). By Lemma 2.4 applied with an \(\alpha ^{\prime }>\alpha \), we know that \(\sum _n A_n^2<\infty \) with probability one. For \(n\) large enough (depending only on \(\lambda \), \(\nu \) and \(\beta \)), the above inequality reads \(m_n\le A_n + \tfrac{1}{2} m_{n-1}\). By solving the recursion we get \(\sum _n m_n^2<\infty \), and in particular \(\tau _\infty ^\alpha (\omega )>T\). \(\square \)
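
The "solving the recursion" step can be made explicit. If \(m_n\le A_n + \tfrac{1}{2}m_{n-1}\) for all \(n\ge n_1\), then unwinding,

$$\begin{aligned} m_n \le \sum _{k=n_1}^{n} 2^{-(n-k)}A_k + 2^{-(n-n_1)}m_{n_1}, \end{aligned}$$

and Young's inequality for the convolution \(\ell ^1 * \ell ^2\subset \ell ^2\) gives \(\sum _n m_n^2\le 8\sum _k A_k^2 + \tfrac{8}{3}m_{n_1}^2<\infty \), since \(\sum _k A_k^2<\infty \) with probability one.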

The basic idea of the proof of Theorem 4.2 is that given a smooth initial state \(x\), there is a solution \(\mathbb P _x\) that satisfies \(\mathbb P _x[K_T]=1\). Hence it is the unique solution.

Proof of Theorem 4.2

Fix \(\alpha \in (\beta -2,\alpha _0+1)\), \(x\in V_\alpha \), \(T>0\) and an energy martingale solution \(\mathbb P _x\) starting at \(x\), and let \(\tau _\infty ^\alpha \) be the random time defined in (2.5). There is no loss of generality in assuming that \(\mathbb P _x\) is a Galerkin solution. Indeed, by Theorem 2.8, \(\tau _\infty ^\alpha \) is equal a. s. to the lifespan \(\tau _x^\alpha \) of the strong solution with the same initial state.

Since \(\mathbb P _x\) is a Galerkin solution, there are \(x^{{(N_k)}}\) and the solution \(\mathbb P ^{{(N_k)}}\) with initial state \(x^{{(N_k)}}\) of (2.6) with dimension \(N_k\), such that \(x^{{(N_k)}}\rightarrow x\) in \(H\) and \(\mathbb P ^{{(N_k)}}\rightharpoonup \mathbb P _x\) in \(\Omega _\beta \). By definition we also have that \(x_n^{{(N_k)}}=x_n\) for \(n\le N_k\).

By a standard argument (Skorokhod’s theorem) there are a common probability space \(({\bar{\Omega }},{\bar{\fancyscript{F}}},{\bar{\mathbb{P }}})\) and random variables \(X^{{(N_k)}}, X\) on \({\bar{\Omega }}\) with laws \(\mathbb P ^{{(N_k)}}\), \(\mathbb P _x\) respectively, such that \(X_n^{{(N_k)}}\rightarrow X_n\), \({\bar{\mathbb{P }}-}\)a. s., uniformly on \([0,T]\) for all \(n\ge 1\).

Let \(\epsilon >0\) be such that \(\alpha >\beta -2\,+\,2\epsilon \) and \(6\,-\,2\beta \,-\,3\epsilon >0\). We will use Lemma 3.2 with \(a_0<\tfrac{1}{2}\) (to be chosen later in the proof) and \(c_0=\tfrac{1}{3} a_0\). Let \({\bar{n}}\) be the smallest integer such that \(\lambda _{n-1}^\alpha |x_n|\le a_0\nu \) for all \(n\ge {\bar{n}}\), and set \(N_0 = {\bar{n}}\vee N_{\beta -2+2\epsilon ,c_0}(T)\).

For each integer \(n_0\ge 1\) and real \(M>0\) define the event

$$\begin{aligned} A_M(n_0) = \left\{ \sup _{[0,T]}\bigl (|X_{n_0-1}^{{(N_k)}}| + |X_{n_0}^{{(N_k)}}|\bigr ) \le M\quad \text{ for } \text{ all } k \text{ such } \text{ that } N_k\ge n_0\right\} . \end{aligned}$$

Clearly \({\bar{\mathbb{P }}}\bigl [\bigcup _M A_M(n_0)\bigr ] = 1\), since \(X_n^{{(N_k)}}\rightarrow X_n\) uniformly on \([0,T]\) for each \(n\ge 1\), hence

$$\begin{aligned} {\bar{\mathbb{P }}}\left[ \bigcup _{n_0\ge 1,\ M>0} \bigl (\{N_0 = n_0\}\cap A_M(n_0)\bigr )\right] = 1. \end{aligned}$$

Fix \(n_0\ge 1\) and \(M>0\); everything boils down to proving that \(K_T\) holds on \(\{N_0 = n_0\}\cap A_M(n_0)\) for \((X_n^{{(N_k)}})_{n_0\le n\le N_k}\), uniformly in \(k\). We work pathwise for \(\omega \in \{N_0 = n_0\}\cap A_M(n_0)\) and we adapt the method in [7]. We will prove that the region \(A\) in the figure below is invariant for a suitable rescaling of \(Y\). The region

figure a

\(A\) is defined by \(c=\lambda ^{{-(6-2\beta -3\epsilon )}}\), \(\delta =\tfrac{1}{10}\), \(\theta =\tfrac{3}{5}\) and \(m=\tfrac{3}{4}\), and by \(g(x)=\min (mx+\theta ,1)\) and \(h_\eta \) specified later in (4.6). In [7] we used the value \(\eta =\delta \).

First, we change and rescale the solution. Let \(\epsilon _n = \nu \lambda _{n-1}^{-2\epsilon }\) and define

$$\begin{aligned} \delta _0^{-1}&= \max \bigl \{\delta ^{-1}, \lambda _{n_0}^{\beta -2+\epsilon }M + 2a_0\lambda _{n_0}^\epsilon \epsilon _{n_0-1}, \sup \nolimits _{n\ge n_0}\lambda _n^\epsilon (\lambda _n^{\beta -2}x_n+2a_0\epsilon _n)\bigr \},\\ U_n(t)&= \lambda _n^\epsilon \delta _0^2\bigl (\lambda _n^{\beta -2} Y_n^{{(N_k)}}(\delta _0t) + a_0\epsilon _n\bigr ), \quad V_n(t) = \lambda _n^\epsilon \bigl (a_0\epsilon _n - \lambda _n^{\beta -2}Z_n(\delta _0t)\bigr ). \end{aligned}$$

It follows by Lemma 3.2 that

$$\begin{aligned} U_n\ge 0, \quad \text{ and }\quad \tfrac{2}{3}a_0\lambda _n^\epsilon \epsilon _n \le V_n \le \tfrac{5}{3}a_0\lambda _n^\epsilon \epsilon _n, \end{aligned}$$
(4.1)

for all \(n_0\le n\le N_k\). By the choice of \(\delta _0\) it follows that \(U_n(0)\le \delta _0\le \delta \) for all \(n\ge n_0\), \(\max _{[0,T]} U_{n_0-1}\le \delta \), and \(\max _{[0,T]} U_{n_0}\le \delta \).

Consider for \(n\ge n_0\) the coupled systems in \((U_n,U_{n+1})\),

$$\begin{aligned} \frac{d}{dt} \begin{pmatrix} U_n\\ U_{n+1} \end{pmatrix} = \lambda _n^{2-\epsilon }\left( \delta _0^3\mathfrak B _n^0 + \delta _0\mathfrak B _n^1 + \frac{1}{\delta _0}\lambda ^{\beta -4+2\epsilon }\mathfrak B _n^2\right) , \end{aligned}$$
(4.2)

where

$$\begin{aligned} \mathfrak B _n^i&= \begin{pmatrix} P_n^i\\ \lambda ^{2-\epsilon }P_{n+1}^i \end{pmatrix}, \qquad i=0,1,2,\\ P_n^0&= a_0\nu \lambda _n^{2\epsilon }\epsilon _n + \lambda ^{\beta -4+2\epsilon } V_{n-1}^2 - \lambda ^{2-\beta -\epsilon } V_n V_{n+1},\\ P_n^1&= - \nu \lambda _n^\epsilon U_n - 2\lambda ^{\beta -4+2\epsilon } V_{n-1} U_{n-1} + \lambda ^{2-\beta -\epsilon } (V_{n+1} U_n + V_n U_{n+1}),\\ P_n^2&= U_{n-1}^2 - \lambda ^{6-2\beta -3\epsilon } U_n U_{n+1}. \end{aligned}$$

The goal is to prove that \((U_n(t))_{n_0\le n\le N_k}\) is uniformly bounded in \(n\) and \(t\). Indeed, we will see that \(0\le U_n(t)\le 1\) for all \(n,t\). In turn this implies that \( -\lambda _n^\epsilon \epsilon _n \le \lambda _n^{\beta -2+\epsilon }Y_n^{{(N_k)}}(t) \le \delta _0^{-2}, \) for all \(n,t\). Since \(Y_n^{{(N_k)}}\rightarrow Y_n\) uniformly on \([0,T]\) for each \(n\), the same holds for the limit \(Y\). Due to Lemma 2.4, \(X\in K_T\).

By the choice of \(\delta _0\) each pair \((U_n(t), U_{n+1}(t))\) is in the interior of \(A\) at \(t=0\). If we show that each pair stays in \(A\) for all \(t>0\), then \(U_n\le 1\). To this end it suffices to show that each vector field on the right hand side of (4.2) points inwards on the boundary of \(A\). By Lemma 3.2 it immediately follows that the normal vectors \(\varvec{n}_1\) and \(\varvec{n}_6\) point inwards. Moreover, since \(A\) is convex, it is sufficient to verify that each of the products of \(\varvec{n}_i\), \(i=2,\dots ,5\), with the vector fields \(\mathfrak B _n^0\), \(\mathfrak B _n^1\) and \(\mathfrak B _n^2\) is positive.

The vector field \(\mathfrak B ^1\). We will use (4.1), that \(U\le 1\) in \(A\) and that \(\epsilon _n\) is non-increasing. If \(a_0\) is chosen small enough (depending only on \(m\), \(\beta \) and \(\epsilon \), but not on \(M\), \(n_0\) or \(\delta _0\)), then the lower bounds we will obtain are positive numbers.

On the border with normal \(\varvec{n}_2 = (m,-1)\), \(\lambda ^2 U_{n+1} - m U_n\ge \lambda ^2\theta \), hence

$$\begin{aligned} \mathfrak B _n^1\cdot \varvec{n}_2 = m P_n^1 - \lambda ^{2-\epsilon } P_{n+1}^1 \ge \lambda _n^\epsilon \bigl (\nu \lambda ^2\theta - a_0 c(m,\beta ,\epsilon )\epsilon _{n-1}\bigr ). \end{aligned}$$
(4.3)

On the border with normal \(\varvec{n}_3 = (0, -1)\) we have \(U_{n+1} = 1\), hence

$$\begin{aligned} \mathfrak B _n^1\cdot \varvec{n}_3 = - \lambda ^{2-\epsilon } P_{n+1}^1 \ge \lambda ^{2-\epsilon } \lambda _{n+1}^\epsilon (\nu - 4a_0\lambda ^{2-\beta }\epsilon _{n+1}). \end{aligned}$$
(4.4)

Similarly, on the border with normal \(\varvec{n}_4 = (-1, 0)\) we have \(U_n = 1\), hence

$$\begin{aligned} \mathfrak B _n^1\cdot \varvec{n}_4 = - P_n^1 \ge \lambda _n^\epsilon (\nu - 4a_0\lambda ^{2-\beta }\epsilon _n). \end{aligned}$$
(4.5)

Before computing the scalar product with \(\varvec{n}_5\), let us give the definition of \(h_\eta \). For \(\eta \in (0,1)\) define \(\varphi _\eta (x) = \bigl ((x-\eta )/(1-\eta )\bigr )^{{\lambda ^{2}}}\), \(x\in [\eta ,1]\), and, for \(\eta \le \delta \),

$$\begin{aligned} h_\eta (x) = \frac{c}{1-\varphi _\eta (\delta )}\bigl (\varphi _\eta (x) - \varphi _\eta (\delta )\bigr ), \quad x\in [\delta ,1]. \end{aligned}$$
(4.6)

Each \(h_\eta \) is positive, increasing, convex, \(h_\eta (\delta )=0\), \(h_\eta (1) = c\) and \(h_\eta \rightarrow h\) in \(C^1([\delta ,1])\) as \(\eta \uparrow \delta \). Moreover, there is \(c_{\delta ,\eta }>0\) such that \(x h_\eta ^{\prime } - \lambda ^2 h_\eta \ge c_{\delta ,\eta }\). With this inequality in hand, we proceed with the estimate of \(\mathfrak B _n^1\cdot \varvec{n}_5\). On the border with normal \(\varvec{n}_5 = (-h_\eta ^{\prime }(U_n), 1)\) we have \(U_n\in [\delta ,1]\) and \(U_{n+1} = h_\eta (U_n)\). Since \(h_\eta ^{\prime }\le c\lambda ^2 /(1-2\delta )\), it follows that

$$\begin{aligned} \mathfrak B _n^1\cdot \varvec{n}_5 = \lambda ^{2-\epsilon } P_{n+1}^1 - h_\eta ^{\prime }(U_n)P_n^1 \ge \lambda _n^\epsilon \bigl (\nu c_{\delta ,\eta } - a_0 c(\beta ,\epsilon ,\delta )\epsilon _n\bigr ). \end{aligned}$$
(4.7)

The vector field \(\mathfrak B ^0\). Using (4.1) we have that \(|P_n^0|\le \lambda _n^{2\epsilon } (a_0\nu \epsilon _n + c a_0^2\epsilon _{n-1}^2)\). This quantity can be made a small fraction of \(\lambda _n^\epsilon \) if \(a_0\) is small enough. Therefore, due to formulae (4.3), (4.4), (4.5), (4.7), each product \((\mathfrak B _n^0+\mathfrak B _n^1)\cdot \varvec{n}_i\), \(i=2,\dots ,5\) is positive.

The vector field \(\mathfrak B ^2\). We have chosen the same parameters as in [7], hence the products \(\mathfrak B ^2_n\cdot \varvec{n}_3\) and \(\mathfrak B ^2_n\cdot \varvec{n}_4\) are positive. A simple computation shows that \(\mathfrak B ^2_n\cdot \varvec{n}_2\) and \(\mathfrak B ^2_n\cdot \varvec{n}_5\) are continuous functions of \(h_\eta ,h_\eta ^{\prime }\), and have positive minima for \(\eta =\delta \). Then the same is true for \(\eta \) sufficiently close to \(\delta \), since \(h_\eta \rightarrow h\) in \(C^1([\delta ,1])\).

The proof we have given works, due to the choice of the numbers \(m\), \(\theta \), \(\delta \), for \(\beta \le \tfrac{5}{2}\); since the estimates are continuous in \(\beta \), we can take \(\beta _c\) slightly larger than \(\tfrac{5}{2}\). A larger value of \(\beta _c\) may be obtained (see [7, Remark 2.2]). \(\square \)

5 The blow-up time

We analyse in more detail the blow-up time introduced in Definition 2.2. We give some general results that hold beyond the dyadic model. Such results are the key to proving, in the next section, that blow-up happens with probability one. Example 5.6 shows that the a. s. emergence of blow-up depends in general on the structure of the drift; this strongly motivates our analysis.

Let \((X(\cdot ;x), \tau _x)_{x\in \mathcal W }\) be the local strong solution of a stochastic equation on a suitable separable Hilbert space \(\mathcal W \). Having our case in mind, we assume that

  • \(\mathbb P [\tau _x>0]=1\) for all \(x\in \mathcal W \),

  • \(X(\cdot ;x)\) is continuous for \(t<\tau _x\) with values in \(\mathcal W \),

  • \(X(\cdot ;x)\) is the maximal local solution, namely either \(\tau _x=\infty \) or \(\Vert X(t;x)\Vert _\mathcal{W }\rightarrow \infty \) as \(t\uparrow \tau _x\), \(\mathbb P -\)a. s.,

  • \((X(\cdot ;x), \tau _x)_{x\in \mathcal W }\) is Markov (in the sense given in Theorem 2.3),

  • all martingale solutions coincide with the strong solution up to \(\tau _x\).

The last statement plainly implies that the occurrence of blow-up is an intrinsic property of the unique local strong solution. Define

$$\begin{aligned} \flat (t,x) = \mathbb P [\tau _x>t], \quad \text{ and }\quad \flat (x) = \inf _{t\ge 0} \flat (t,x) = \mathbb P [\tau _x=\infty ], \end{aligned}$$

for \(x\in \mathcal W \) and \(t\ge 0\). Clearly \(\flat (0,x)=1\) and \(\flat (\cdot ,x)\) is non-increasing. The next lemma shows a \(0\)–\(1\) law for the supremum of \(\flat \) over the state space.

Lemma 5.1

Consider the family of processes \((X, \tau )\) on \(\mathcal W \) as above. If there is \(x_0\in \mathcal W \) such that \(\mathbb P [\tau _{x_0}=\infty ]>0\), then

$$\begin{aligned} \sup _{x\in \mathcal W }\mathbb P [\tau _x=\infty ] = 1. \end{aligned}$$

Proof

By the Markov property,

$$\begin{aligned} \flat (t+s,x) = \mathbb P [\tau _x>t+s] = \mathbb E [{{\small 1}\!\!1}_{\{\tau _x>t\}}\flat (s,X(t;x))], \end{aligned}$$

and in the limit as \(s\uparrow \infty \), by monotone convergence,

$$\begin{aligned} \flat (x) = \mathbb E [{{\small 1}\!\!1}_{\{\tau _x>t\}}\flat (X(t;x))]. \end{aligned}$$
(5.1)

Set \(c=\sup \flat (x)\), then by the above formula,

$$\begin{aligned} \flat (x_0) = \mathbb E [{{\small 1}\!\!1}_{\{\tau _{x_0}>t\}}\flat (X(t;x_0))] \le c\mathbb E [{{\small 1}\!\!1}_{\{\tau _{x_0}>t\}}] = c \flat (t,x_0). \end{aligned}$$

As \(t\uparrow \infty \), we get \(\flat (x_0)\le c \flat (x_0)\), that is \(c\ge 1\), hence \(c=1\). \(\square \)

Remark 5.2

Something more can be said by knowing additionally that there is \(x_0\) with \(\flat (x_0) = 1\). Indeed, \({{\small 1}\!\!1}_{\{\tau _{x_0}>t\}} = 1\) a. s., and, using again formula (5.1),

$$\begin{aligned} \mathbb E [\flat (X(t;x_0))] = \mathbb E [{{\small 1}\!\!1}_{\{\tau _{x_0}>t\}}\flat (X(t;x_0))] = \flat (x_0) = 1. \end{aligned}$$

Hence \(\flat (X(t;x_0)) = 1\), a. s. for every \(t>0\). This is very close to proving that \(\flat \equiv 1\). In fact [28, Theorem 6.8] proves, although with a completely different approach, that \(\flat (x_0)=1\) implies that \(\flat \equiv 1\) on \(\mathcal W \). This holds under the assumptions of strong Feller regularity and conditional irreducibility, namely that \(\mathbb P [X(t;x)\in A,\tau _x>t]>0\) for every \(x\in \mathcal W \), \(t>0\) and every open set \(A\subset \mathcal W \).

Proposition 5.3

Consider the family \((X,\tau )\) of processes as above. Assume that, given \(x\in \mathcal W \), there exist a closed set \(B_\infty \subset \mathcal W \) with non-empty interior and three numbers \(p_0\in (0,1)\), \(T_0>0\) and \(T_1>0\) such that

  • \(\mathbb P [\sigma _{B_\infty }^{x,T_1}=\infty ,\tau _x=\infty ]=0\),

  • \(\mathbb P [\tau _y\le T_0]\ge p_0\) for every \(y\in B_\infty \),

where the (discrete) hitting time \(\sigma _{B_\infty }^{x,T_1}\) of \(B_\infty \), starting from \(x\), is defined as

$$\begin{aligned} \sigma _{B_\infty }^{x,T_1} = \min \{k\ge 0: X(kT_1;x)\in B_\infty \}, \end{aligned}$$

and \(\sigma _{B_\infty }^{x,T_1} = \infty \) if the set is empty. Then

$$\begin{aligned} \mathbb P [\tau _x<\infty ]\ge \frac{p_0}{1+p_0}. \end{aligned}$$

Remark 5.4

The first condition in the above proposition can be interpreted as recurrence in a conditional sense: knowing that the solution does not explode, it will visit \(B_\infty \) in a finite time with probability \(1\).

Proof

The first assumption says that \(\mathbb P [\sigma _{B_\infty }^{x,T_1}>n,\tau _x>nT_1]\downarrow 0\) as \(n\rightarrow \infty \). If \(\mathbb P [\tau _x=\infty ]=0\), there is nothing to prove. If on the other hand \(\mathbb P [\tau _x=\infty ]>0\), then \(\mathbb P [\tau _x>nT_1]>0\) for all \(n\ge 1\) and, since \(\mathbb P [\tau _x>nT_1]\downarrow \mathbb P [\tau _x=\infty ]\) as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb{P }[\sigma _{B_{\infty }}^{x,T_1}\le n|{\tau }_x>nT_1] = 1 - \frac{\mathbb{P }[{\sigma }_{B_{\infty }}^{x,T_1}>n,{\tau }_x>nT_1]}{\mathbb{P }[{\tau }_x>nT_1]} \longrightarrow 1, \quad n \rightarrow \infty . \end{aligned}$$

For \(n\ge 1\),

$$\begin{aligned} \mathbb P [\tau _x>nT_1+T_0] \le \mathbb P [\tau _x>nT_1+T_0,\ \sigma _{B_\infty }^{x,T_1}\le n] + \mathbb P [\sigma _{B_\infty }^{x,T_1}>n]. \end{aligned}$$

The strong solution is Markov, hence

$$\begin{aligned} \mathbb P [\tau _x>nT_1+T_0,\ \sigma _{B_\infty }^{x,T_1}\le n]&= \sum _{k=0}^n \mathbb P [\tau _x>nT_1+T_0,\ \sigma _{B_\infty }^{x,T_1} = k]\\&\le (1-p_0)\mathbb P [\sigma _{B_\infty }^{x,T_1}\le n]. \end{aligned}$$

In conclusion

$$\begin{aligned}&\mathbb P [\tau _x>nT_1+T_0] \le (1-p_0)\mathbb P [\sigma _{B_\infty }^{x,T_1}\le n] + \mathbb P [\sigma _{B_\infty }^{x,T_1}>n] \\&\quad = 1 - p_0\mathbb P [\sigma _{B_\infty }^{x,T_1}\le n] \le 1 - p_0\mathbb P [\sigma _{B_\infty }^{x,T_1}\le n|\tau _x>nT_1]\mathbb P [\tau _x>nT_1], \end{aligned}$$

and, as \(n\rightarrow \infty \), \(\mathbb P [\tau _x=\infty ]\le 1 - p_0\mathbb P [\tau _x=\infty ]\), that is \(\mathbb P [\tau _x=\infty ]\le \tfrac{1}{1+p_0}\). \(\square \)
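
The inequality \(\mathbb P [\tau _x>nT_1+T_0,\ \sigma _{B_\infty }^{x,T_1}\le n]\le (1-p_0)\mathbb P [\sigma _{B_\infty }^{x,T_1}\le n]\) in the proof above can be justified term by term: on \(\{\sigma _{B_\infty }^{x,T_1}=k,\ \tau _x>kT_1\}\) the process restarts from a state \(X(kT_1;x)\in B_\infty \), and the second assumption gives \(\mathbb P [\tau _y>T_0]\le 1-p_0\) for every \(y\in B_\infty \). In detail, for \(0\le k\le n\),

$$\begin{aligned} \mathbb P [\tau _x>nT_1+T_0,\ \sigma _{B_\infty }^{x,T_1} = k] \le \mathbb E \bigl [{{\small 1}\!\!1}_{\{\sigma _{B_\infty }^{x,T_1}=k,\ \tau _x>kT_1\}} \mathbb P [\tau _{X(kT_1;x)}>T_0]\bigr ] \le (1-p_0)\mathbb P [\sigma _{B_\infty }^{x,T_1} = k], \end{aligned}$$

since \(\{\tau _x>nT_1+T_0\}\subset \{\tau _x>kT_1+T_0\}\); summing over \(k=0,\dots ,n\) gives the bound used in the proof.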

Corollary 5.5

Assume that there are \(p_0\in (0,1)\), \(T_0>0\) and \(B_\infty \subset \mathcal W \) such that the assumptions of the previous proposition hold for every \(x\in \mathcal W \) (the time \(T_1\) may depend on \(x\)). Then for every \(x\in \mathcal W \), \(\mathbb P [\tau _x<\infty ] = 1\).

Proof

The previous proposition yields that \(\sup _{x\in \mathcal W }\mathbb P [\tau _x=\infty ]\le \tfrac{1}{1+p_0}\). By the dichotomy of Lemma 5.1, \(\mathbb P [\tau _x<\infty ] = 1\) for every \(x\in \mathcal W \). \(\square \)

Example 5.6

The following simple one dimensional example shows that the a. s. occurrence of blow-up depends on the structure of the drift. Our proofs below are elementary and mimic the proofs of the next section. Consider the SDEs,

$$\begin{aligned} dX = f_i(X)\,dt + dW, \quad i=1,2, \end{aligned}$$

with initial condition \(X(0) = x\in \mathbf R \), where

$$\begin{aligned} f_1(x) = \left\{ \begin{array}{ll} x^2, &{}\quad x\ge 0,\\ x, &{}\quad x<0, \end{array}\right. \qquad \quad f_2(x) = \left\{ \begin{array}{ll} x^2, &{}\quad x\ge 0,\\ -x, &{}\quad x<0. \end{array}\right. \end{aligned}$$

The Feller test [32, Proposition 5.22] yields \(0<\flat _1(x)<1\) for the blow-up function corresponding to the drift \(f_1\), and \(\flat _2(x)\equiv 1\) for that of the drift \(f_2\).

In view of the results proved above and the analysis of the next section (see Theorem 6.1), we notice that

  • if \(B_\infty = \{x\ge 1\}\), then for both drifts there are \(p_0>0\) and \(T_0>0\) such that \(\mathbb P [\tau _x\le T_0]\ge p_0\) for all \(x\in B_\infty \), that is the second assumption of Proposition 5.3 holds,

  • the first assumption of Proposition 5.3 holds for \(f_2\) but not for \(f_1\),

  • in both cases \(\mathbb E \bigl [\sup _{[0,T]}(X_t)_-^p\bigr ]<\infty \) for all \(T>0\) and \(p\ge 1\).

Indeed, given an initial condition \(x\in [1,\infty )\), we have that

$$\begin{aligned} \mathbb P \left[ \left\{ \sup _{t\in [0,2]}|W_t|\le \tfrac{1}{4}\right\} \cap \{\tau _x>2\}\right] =0. \end{aligned}$$

Set \(Y_t = X_t - W_t\), so that \(Y_0=x\) and \(\dot{Y} = (Y + W)^2\) as long as \(X\ge 0\); in particular \(Y\) is non-decreasing there, hence \(Y_t\ge 1\) and \(X_t\ge \tfrac{3}{4}\) on the event \(\{\sup _{t\in [0,2]}|W_t|\le \tfrac{1}{4}\}\). On this event,

$$\begin{aligned} \dot{Y} \ge Y^2 - 2|W|Y \ge Y \left( Y-\frac{1}{2}\right) \ge \frac{1}{2}Y^2, \end{aligned}$$

hence by comparison \(Y_t\) (and hence \(X_t\)) explodes before time \(\tfrac{2}{x}\le 2\).
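
A quick Euler–Maruyama simulation illustrates this dichotomy numerically. This is only an illustrative sketch, not part of the proof: the function `simulate`, the step size, horizon, noise scale and the blow-up threshold are all our own arbitrary choices.

```python
import numpy as np

def simulate(drift, x0, T=3.0, dt=1e-4, blow_up=1e6, noise=1.0, seed=0):
    """Euler-Maruyama scheme for dX = drift(X) dt + noise dW.

    Returns the first time |X| exceeds `blow_up` (a numerical proxy
    for explosion), or None if this does not happen before T.
    """
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    sqrt_dt = np.sqrt(dt)
    while t < T:
        x += drift(x) * dt + noise * rng.normal(0.0, sqrt_dt)
        t += dt
        if abs(x) > blow_up:
            return t
    return None

def f1(x):
    return x * x if x >= 0 else x    # drift f_1

def f2(x):
    return x * x if x >= 0 else -x   # drift f_2

# Deterministic comparison (noise = 0): starting from x = 1 the
# scheme explodes, in agreement with the bound 2/x of the text.
print(simulate(f2, 1.0, noise=0.0))  # blow-up time, < 2
print(simulate(f2, 1.0))             # one noisy sample path
```

With \(f_1\) a path that wanders into the negative half-line may never blow up, consistently with \(0<\flat _1<1\); with \(f_2\) blow-up is eventually a. s. (\(\flat _2\equiv 1\)), though possibly after the simulated horizon.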

6 Blow-up for \(\beta >3\)

In the first part of the section we prove that there are sets in the state space which lead to blow-up with positive probability. The idea is to use Lemma 3.2 to adapt the estimates of [10], which work only for positive solutions.

In the second part of the section we show that such sets are recurrent, when the blow-up time is conditioned to be infinite. The general result of the previous section immediately implies that blow-up occurs with full probability.

6.1 Blow-up with positive probability

Given \(\alpha >\beta -2\), \(p\in (0,\beta -3)\), \(a_0>0\) and \(M_0>0\), define the set

$$\begin{aligned} B_\infty (\alpha ,p,a_0,M_0) = \left\{ x\in V_\alpha : \Vert x\Vert _p\ge M_0 \text{ and } \inf _{n\ge 1}\bigl (\lambda _{n-1}^{\beta -2}x_n\bigr )\ge -\nu a_0\right\} . \end{aligned}$$
(6.1)

We will show that for suitable values of \(a_0\), \(M_0\), each solution of (1.1) with initial condition in the above set blows up in finite time with positive probability.

Theorem 6.1

Let \(\beta >3\) and assume (2.1). Given \(\alpha \in (\beta -2,\alpha _0+1)\), \(p\in (0,\beta -3)\), and \(a_0\in (0,\tfrac{1}{4}]\), there exist \(p_0>0\), \(T_0>0\) and \(M_0>0\) such that for each \(x\in B_\infty (\alpha ,p,a_0,M_0)\) and for every energy martingale weak solution \(\mathbb P _x\) starting at \(x\),

$$\begin{aligned} \mathbb P _x[\tau _\infty ^\alpha \le T_0] \ge p_0. \end{aligned}$$

Proof

Choose \(c_0>0\) with \(c_0\le a_0\le \sqrt{a_0}(1-\sqrt{a_0})\), and consider the random integer \(N_{\alpha ,c_0}(T_0)\) defined in (3.1). The value \(T_0\) will be specified later. Set

$$\begin{aligned} p_0 = \mathbb P _x[N_{\alpha ,c_0}(T_0)=1]. \end{aligned}$$

We recall that \(p_0>0\) by Lemma 3.1, and that its value depends only on the distribution of the solution of (2.3). The theorem will be proved if we show that

$$\begin{aligned} \mathbb P _x[\tau _\infty ^\alpha >T_0,\ N_{\alpha ,c_0}(T_0)=1] = 0. \end{aligned}$$
(6.2)

Indeed \( \mathbb P _x[\tau _\infty ^\alpha \le T_0] = 1 - \mathbb P _x[\tau _\infty ^\alpha > T_0,\ N_{\alpha ,c_0}(T_0)>1] \ge 1 - \mathbb P _x[N_{\alpha ,c_0}(T_0)>1] = p_0. \) We proceed with the proof of (6.2) and we work pathwise on the event

$$\begin{aligned} \Omega (\alpha ,T_0) = \{\tau _\infty ^\alpha >T_0\}\cap \{N_{\alpha ,c_0}(T_0)=1\}. \end{aligned}$$

Let \(Z\) be the solution of (2.3) and \(Y = X - Z\). Equation (2.4) has a unique solution on \([0,T_0]\) on \(\{\tau _\infty ^\alpha >T_0\}\). On \(\{N_{\alpha ,c_0}(T_0)=1\}\) we have \(\lambda _{n-1}^{\beta -2}|Z_n(t)|\le c_0\) for every \(t\in [0,T_0]\) and every \(n\ge 1\). Set

$$\begin{aligned} \eta _n = X_n - Z_n + a_0\nu \lambda _{n-1}^{2-\beta }. \end{aligned}$$

With this substitution, \(\eta = (\eta _n)_{n\ge 1}\) satisfies the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{\eta }_n = - \nu \lambda _n^2\eta _n + a_0\nu ^2\lambda ^2 \lambda _{n-1}^{4-\beta } + \lambda _{n-1}^\beta X_{n-1}^2 - \lambda _n^\beta X_n X_{n+1},\\ \eta _n(0) = X_n(0) + a_0\nu \lambda _{n-1}^{2-\beta }, \end{array}\right. } n\ge 1. \end{aligned}$$

Moreover, by Lemma 3.2 (with \(a_0\), \(c_0\) as fixed above), it follows that \(\eta _n(t;\omega )\ge 0\) for all \(t\in [0,T_0]\), \(n\ge 1\) and \(\omega \in \Omega (\alpha ,T_0)\).

Fix a number \(b>0\), which will be specified later, then

$$\begin{aligned} \frac{d}{dt}\bigl (\eta _n^2 + b\eta _n\eta _{n+1}\bigr )&= - 2\nu \lambda _n^2\eta _n^2 - b\nu (1+\lambda ^2)\lambda _n^2\eta _n\eta _{n+1}\\&\quad \,\, +\, a_0\lambda ^2\nu ^2(2 + b\lambda ^{4-\beta })\lambda _{n-1}^{4-\beta }\eta _n + a_0 b\lambda ^2\nu ^2\lambda _{n-1}^{4-\beta }\eta _{n+1}\\&\quad \,\, +\, 2\lambda _{n-1}^\beta X_{n-1}^2\eta _n + b\lambda _{n-1}^\beta X_{n-1}^2 \eta _{n+1} + b\lambda _n^\beta X_n^2\eta _n\\&\quad \,\, -\, 2\lambda _n^\beta X_n X_{n+1}\eta _n - b\lambda _n^\beta X_n X_{n+1}\eta _{n+1} - b\lambda _{n+1}^\beta \eta _n X_{n+1}X_{n+2}. \end{aligned}$$

Since \((a_0 + c_0)^2\le a_0\) and \(0\le (a_0\nu \lambda _{n-1}^{2-\beta } - Z_n) \le \nu (a_0+c_0)\lambda _{n-1}^{2-\beta }\), Young’s inequality and some straightforward computations yield

$$\begin{aligned} \frac{d}{dt}\bigl (\eta _n^2 + b\eta _n\eta _{n+1}\bigr ) + 2\nu \lambda _n^2\eta _n^2 + b\nu (1+\lambda ^2)\lambda _n^2\eta _n\eta _{n+1} \ge A_n + B_n + C_n, \end{aligned}$$

where

$$\begin{aligned} A_n&= b\lambda _n^\beta \eta _n^3 + \tfrac{\lambda ^{2p}}{1+\lambda ^{2p}}\lambda _{n-1}^\beta \eta _{n-1}^2\eta _n - 2\lambda _n^\beta \eta _n^2\eta _{n+1} - b\lambda _n^\beta \eta _n\eta _{n+1}^2 - b\lambda _{n+1}^\beta \eta _n\eta _{n+1}\eta _{n+2},\\ B_n&= -2b\lambda ^{\beta -2}\nu (a_0 + c_0)\lambda _n^2\eta _n^2, \quad \text{ and }\quad C_n = - 4\nu ^2(a_0+c_0)^2\tfrac{\lambda ^{2\beta +2p-4}}{\lambda ^{2p}-1}\lambda _{n-1}^{4-\beta }\eta _n. \end{aligned}$$

The term \(A_n\) is roughly the same as in the deterministic case, hence by proceeding in the same way as in [10] we have

$$\begin{aligned} \sum _{n=1}^\infty \lambda _n^{2p} A_n \ge k_1\sum _{n=1}^\infty \lambda _n^{\beta +2p}\eta _n^3 + k_1^{\prime }\sum _{n=1}^\infty \lambda _n^{\beta +2p}\eta _n^2\eta _{n+1} = k_1 \sum _{n=1}^\infty \lambda _n^{\beta +2p}\eta _n^3, \end{aligned}$$

where we have chosen \(b\) so that \(k_1^{\prime }= \lambda ^{2p} - 1 - 4b(2+2\lambda ^\beta +\lambda ^{-2p})=0\). The other two terms are simpler, indeed

$$\begin{aligned} \sum _{n=1}^\infty \lambda _n^{2p} B_n = -2b\lambda ^{\beta -2}\nu (a_0+c_0)\sum _{n=1}^\infty \lambda _n^{2+2p}\eta _n^2 = - k_2 \Vert \eta \Vert _{1+p}^2, \end{aligned}$$

and, by the Cauchy–Schwarz inequality and the fact that \(p<\beta -3\),

$$\begin{aligned} \sum _{n=1}^\infty \lambda _n^{2p} C_n \ge -4\nu ^2(a_0+c_0)^2\frac{\lambda ^{3\beta +2p-8}}{\lambda ^{2p}-1} \bigl (\lambda ^{2(\beta -3-p)} - 1\bigr )^{-\frac{1}{2}}\Vert \eta \Vert _{1+p} = - k_3 \Vert \eta \Vert _{1+p}. \end{aligned}$$
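
The Cauchy–Schwarz step above can be spelled out (with \(\lambda _n=\lambda ^n\); the geometric series converges precisely because \(p<\beta -3\)):

$$\begin{aligned} \sum _{n=1}^\infty \lambda _n^{2p}\lambda _{n-1}^{4-\beta }\eta _n = \lambda ^{\beta -4}\sum _{n=1}^\infty \bigl (\lambda _n^{1+p}\eta _n\bigr )\lambda _n^{3-\beta +p} \le \lambda ^{\beta -4}\bigl (\lambda ^{2(\beta -3-p)}-1\bigr )^{-\frac{1}{2}}\Vert \eta \Vert _{1+p}. \end{aligned}$$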

On the other hand,

$$\begin{aligned}&2\nu \Vert \eta \Vert _{1+p}^2 + b\nu (1+\lambda ^2)\sum _{n=1}^\infty \lambda _n^{2+2p}\eta _n\eta _{n+1} \le k_4\Vert \eta \Vert _{1+p}^2,\\ \Vert \eta \Vert _{1+p}^2&= \sum _{n=1}^\infty \bigl (\lambda _n^{\frac{1}{3}(\beta +2p)}\eta _n\bigr )^2\lambda _n^{-\frac{2}{3}(\beta -3-p)} \le \left( \frac{1}{k_5}\sum _{n=1}^\infty \lambda _n^{\beta +2p}\eta _n^3\right) ^{\frac{2}{3}}. \end{aligned}$$

If we set \(H(t) = \sum _{n=1}^\infty \lambda _n^{2p} \bigl (\eta _n^2 + b\eta _n\eta _{n+1}\bigr )\) and \(\psi (t) = \Vert \eta \Vert _{1+p}^2\), the estimates obtained so far together yield

$$\begin{aligned} \dot{H} + k_4\psi \ge k_1k_5\psi ^{\frac{3}{2}} - k_2\psi - k_3\sqrt{\psi }. \end{aligned}$$

Finally, \(H\le (1+b\lambda ^{-p})\psi = k_6 \psi \), and it is easy to show by a simple argument (for instance the one in [10]) that if

$$\begin{aligned} H(0) > M_0^2 := \frac{k_6}{k_1 k_5}\bigl (k_4+k_2+\sqrt{(k_4+k_2)^2 + 2k_1k_3k_5}\bigr ) \quad \text{ and }\quad T_0 > \frac{4k_6^{\frac{3}{2}}}{k_1k_5\sqrt{H(0)}}, \end{aligned}$$

then \(H\) becomes infinite before time \(T_0\). \(\square \)
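
For completeness, here is a sketch of the comparison argument behind the last claim. When \(H>M_0^2\), the choice of \(M_0\) allows one to absorb the lower-order terms (recall \(H\le k_6\psi \)):

$$\begin{aligned} \dot{H} \ge k_1k_5\psi ^{\frac{3}{2}} - (k_2+k_4)\psi - k_3\sqrt{\psi } \ge \tfrac{1}{2}k_1k_5\psi ^{\frac{3}{2}} \ge \tfrac{1}{2}k_1k_5 k_6^{-\frac{3}{2}}H^{\frac{3}{2}}. \end{aligned}$$

In particular \(\dot{H}>0\), so this regime persists, and integrating \(H^{-3/2}\dot{H}\ge \tfrac{1}{2}k_1k_5k_6^{-3/2}\) yields \(H(t)^{-1/2}\le H(0)^{-1/2} - \tfrac{k_1k_5}{4k_6^{3/2}}\,t\), which vanishes before \(t = 4k_6^{3/2}/\bigl (k_1k_5\sqrt{H(0)}\bigr )<T_0\), that is, \(H\) blows up before \(T_0\).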

6.2 Ineluctable occurrence of the blow-up

So far we know that if the initial condition is not too negative and the noise is not too strong, then the deterministic dynamics dominates and the process diverges. In this section we show that the sets that lead to blow-up are recurrent in a conditional sense (as in Remark 5.4).

Theorem 6.2

Let \(\beta >3\) and assume (2.1). Assume moreover that the set \(\{n\ge 1: \sigma _n\ne 0\}\) is non-empty. Given \(\alpha \in (\beta -2,1+\alpha _0)\), for every \(x\in V_\alpha \) and every energy martingale solution \(\mathbb P _x\) with initial condition \(x\),

$$\begin{aligned} \mathbb P _x[\tau _\infty ^\alpha <\infty ] = 1. \end{aligned}$$

Our strategy to prove the theorem is based on Corollary 5.5. We will show that the sets (6.1) where blow-up occurs satisfy the assumptions of the corollary. Lemma 6.4 shows that the negative part of the solution becomes small. Lemma 6.5 shows that the size of the solution becomes large. Finally, Lemma 6.6 shows that, without blow-up, the sets (6.1) are visited with probability one.

Lemma 6.3

Let \(\beta >3\) and assume (2.1). There exists \(c_{6.3}>0\) such that for \(\alpha \in (\beta -2,1+\alpha _0)\), for every \(x\in V_\alpha \), every energy martingale solution \(\mathbb P _x\) starting at \(x\), every \(T>0\) and every \(c_0>0\) with \(4 c_0(1+\lambda ^{\beta -3})\le 1\),

$$\begin{aligned} \sup _{[0,T]} \Vert X(t)\Vert _H \le \Vert x\Vert _H + c_{6.3}\nu , \end{aligned}$$

\(\mathbb P _x-\)a. s. on the event \(\{\tau _\infty ^\alpha > T\}\cap \{N_{\beta -2,c_0}(T)=1\}\).

Proof

Problem (2.4) has a unique solution on \(\{\tau _\infty ^\alpha > T\}\), hence we work directly on \(Y\). We know that \(\lambda _n^{\beta -2}|Z_n(t)|\le c_0\nu \) for \(t\in [0,T]\) and \(n\ge 1\), hence

$$\begin{aligned} \frac{d}{dt}\Vert Y\Vert _H^2 + 2\nu \Vert Y\Vert _1^2&\le 2\sum _{n=1}^\infty \lambda _n^\beta (Y_nY_{n+1}Z_n + Y_{n+1}Z_n^2 - Y_n^2Z_{n+1} - Y_nZ_nZ_{n+1})\\&\le 4c_0\nu (1+\lambda ^{\beta -3})\Vert Y\Vert _1^2 + \tfrac{\lambda ^{2\beta -4}}{2(\lambda ^{\beta -3}-1)}c_0^3\nu ^3. \end{aligned}$$

The assumption on \(c_0\) and the inequality \(\Vert Y\Vert _1\ge \lambda \Vert Y\Vert _H\) yield

$$\begin{aligned} \frac{d}{dt}\Vert Y\Vert _H^2 + \nu \lambda ^2\Vert Y\Vert _H^2 \le k_0 c_0^3\nu ^3, \end{aligned}$$

where the value of \(k_0\) depends only on \(\beta \). The bound for \(Y\) follows by integrating the differential inequality. The lemma then follows using that \(X = Y+Z\) and that \(N_{\beta -2,c_0}(T)=1\). \(\square \)
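
The integration step can be spelled out: multiplying by \(\mathrm{e }^{\nu \lambda ^2 t}\) and recalling that \(Y(0)=x\) (since \(Z(0)=0\) in (2.3)),

$$\begin{aligned} \frac{d}{dt}\bigl (\mathrm{e }^{\nu \lambda ^2 t}\Vert Y\Vert _H^2\bigr ) \le k_0 c_0^3\nu ^3 \mathrm{e }^{\nu \lambda ^2 t} \quad \Longrightarrow \quad \Vert Y(t)\Vert _H^2 \le \mathrm{e }^{-\nu \lambda ^2 t}\Vert x\Vert _H^2 + \frac{k_0 c_0^3\nu ^2}{\lambda ^2}, \end{aligned}$$

hence \(\sup _{[0,T]}\Vert Y\Vert _H\le \Vert x\Vert _H + c_0^{3/2}\sqrt{k_0}\,\nu /\lambda \). On the event \(\{N_{\beta -2,c_0}(T)=1\}\) the norm \(\sup _{[0,T]}\Vert Z\Vert _H\) is bounded by a constant, depending only on \(\lambda \) and \(\beta \), times \(c_0\nu \), and the two bounds together yield \(c_{6.3}\).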

The next lemma is a slight improvement of Lemma 3.2. We prove that there is a drift towards the positive cone: regardless of the sign of the initial condition, solutions tend not to become too negative if the effect of the noise is small.

Lemma 6.4

(Contraction of the negative components) Let \(\beta >3\) and assume (2.1). For every \(M>0\), \(a_0\in (0,\frac{1}{4}]\) and \(c_0<a_0\), with \(4 c_0(1+\lambda ^{\beta -3})\le 1\), there exists \(T_M>0\) such that for every \(x\in V_\alpha \), with \(\alpha \in (\beta -2,1+\alpha _0)\) and \(\Vert x\Vert _H\le M\), and every energy martingale solution \(\mathbb P _x\),

$$\begin{aligned} \inf _{n\ge 1} \bigl (\lambda _{n-1}^{\beta -2} X_n(T_M)\bigr ) \ge -(a_0 + c_0)\nu , \end{aligned}$$

\(\mathbb P _x-\)a. s. on the event \(\{\tau _\infty ^\alpha > T_M\} \cap \{N_{\beta -2,c_0}(T_M)=1\}\).

Proof

Let \(n_0\) be the first integer such that \(\inf _{n\ge n_0}\lambda _n^{\beta -2}x_n\ge -a_0\nu \). If \(n_0=1\) there is nothing to prove, so we consider the case \(n_0>1\). Lemma 3.2 implies that \(\lambda _n^{\beta -2}Y_n(t)\ge -a_0\nu \) holds for every \(t\in [0,T_M]\) and every \(n\ge n_0\).

The idea of the proof is to show that \((Y_{n_0-1})_-\) becomes small within a time \(T_{n_0-1}\). At time \(T_{n_0-1}\) we can apply Lemma 3.2 again. The same contraction argument shows that the negative part of the component \(n_0-2\) becomes small within a further time \(T_{n_0-2}\), and so on. The sequence of times depends only on the size of the initial state in \(H\) and turns out to be summable. Therefore it suffices to prove the following claim: given \(n>1\), if we know that for some \(t_0>0\),

$$\begin{aligned} \sup _{k\ge 1}\sup _{[t_0,T]}\lambda _{k-1}^{\beta -2}|Z_k|\le c_0\nu \quad \text{ and }\quad \sup _{[t_0,T]}\lambda _n^{\beta -2}(Y_{n+1})_-\le a_0\nu , \end{aligned}$$
(6.3)

then at time \(t_0 + T_n\) we have that \(Y_n(t_0 + T_n)\ge -a_0\nu \lambda _{n-1}^{2-\beta }\). Here we have set

$$\begin{aligned} T_n(\Vert x\Vert _H,c_0,a_0) = \tfrac{2}{\nu }(\beta -2)(n-1)\lambda _n^{-2}\log \lambda + \tfrac{2}{\nu }\lambda _n^{-2}\log \bigl (1\vee \tfrac{\Vert x\Vert _H+c_{6.3}\nu }{(a_0-c_0)\nu }\bigr ). \end{aligned}$$
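Reading the two terms of \(T_n\) as \(\tfrac{2}{\nu }(\beta -2)(n-1)\lambda _n^{-2}\log \lambda \) and \(\tfrac{2}{\nu }\lambda _n^{-2}\log (\cdots )\), both are of order \(n\lambda ^{-2n}\) when \(\lambda _n=\lambda ^n\), which gives the summability used below:

```latex
% Summability of the waiting times (lambda_n = lambda^n, lambda > 1):
T_n \le \frac{2}{\nu}\,\lambda^{-2n}
      \Bigl((\beta-2)(n-1)\log\lambda
        + \log\bigl(1\vee\tfrac{\Vert x\Vert_H + c_{6.3}\nu}{(a_0-c_0)\nu}\bigr)\Bigr)
  = O\bigl(n\,\lambda^{-2n}\bigr),
\qquad \sum_{n\ge 1} T_n < \infty.
```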

We first notice that \(\sum _n T_n<\infty \), hence we can choose \(T_M\) as the sum \(T_M = \sum _n T_n(M,c_0,a_0)\). We turn to the proof of the above claim. Set

$$\begin{aligned} \eta _n = Y_n + c_0\nu \lambda _{n-1}^{2-\beta }. \end{aligned}$$

Then \(X_n = \eta _n - (c_0\nu \lambda _{n-1}^{2-\beta } - Z_n)\) and

$$\begin{aligned} \dot{\eta }_n&= -\nu \lambda _n^2\eta _n + c_0\nu ^2\lambda ^2\lambda _{n-1}^{4-\beta } + \lambda _{n-1}^\beta X_{n-1}^2 - \lambda _n^\beta X_n X_{n+1}\\&\ge -(\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\eta _n + c_0\nu ^2\lambda ^2\lambda _{n-1}^{4-\beta } - \lambda _n^\beta (c_0\nu \lambda _{n-1}^{2-\beta } - Z_n)(X_{n+1})_-. \end{aligned}$$

By (6.3), \((X_{n+1})_-\le (a_0+c_0)\nu \lambda _n^{2-\beta }\) and \((c_0\nu \lambda _{n-1}^{2-\beta } - Z_n)\le 2c_0\nu \lambda _{n-1}^{2-\beta }\), hence \(\dot{\eta }_n\ge - (\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\eta _n\). Since \(a_0+c_0\le \tfrac{1}{2}\), it follows that

$$\begin{aligned} \nu \lambda _n^2 + \lambda _n^\beta X_{n+1} \ge \nu \lambda _n^2 - \nu (a_0+c_0)\lambda _n^2 = \nu \lambda _n^2\bigl (1 - (a_0+c_0)\bigr ) \ge \frac{1}{2}\nu \lambda _n^2. \end{aligned}$$

Therefore for \(t\ge t_0\),

$$\begin{aligned} \eta _n(t) \ge \eta _n(t_0)\exp \left( -\int \limits _{t_0}^t(\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\,ds\right) \ge -(\eta _n(t_0))_-\mathrm{e }^{-\frac{1}{2}\nu \lambda _n^2(t-t_0)}. \end{aligned}$$

Finally, by Lemma 6.3, \((\eta _n(t_0))_- \le (Y_n(t_0))_- \le \Vert Y(t_0)\Vert _H \le \Vert x\Vert _H + c_{6.3}\nu \). It is elementary now to check that at time \(t_0+T_n\),

$$\begin{aligned} Y_n(t_0+T_n) = \eta _n(t_0+T_n) - c_0\nu \lambda _{n-1}^{2-\beta } \ge -a_0\nu \lambda _{n-1}^{2-\beta }. \end{aligned}$$
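The check is indeed elementary; with \(T_n\) as defined above (a sketch):

```latex
% By the choice of T_n:
%   (1/2) nu lambda_n^2 T_n
%     = (beta-2)(n-1) log(lambda) + log(1 v (||x||_H + c_{6.3} nu)/((a_0 - c_0) nu)),
% so the exponential factor absorbs both the size of the data and lambda_{n-1}^{beta-2}:
\mathrm{e}^{-\frac12\nu\lambda_n^2 T_n}\bigl(\Vert x\Vert_H + c_{6.3}\nu\bigr)
  \le (a_0-c_0)\nu\,\lambda_{n-1}^{2-\beta},
\qquad\text{hence}\qquad
\eta_n(t_0+T_n) \ge -(a_0-c_0)\nu\,\lambda_{n-1}^{2-\beta},
```

and subtracting \(c_0\nu \lambda _{n-1}^{2-\beta }\) gives the stated bound.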

\(\square \)

The last ingredient needed to show that the hitting time of the sets (6.1) is finite is the fact that the solution can grow large while remaining not too negative. At this stage the noise is crucial, although one randomly perturbed component is enough for our purposes. The underlying ideas of the following lemma come from control theory. We do not need sophisticated results [39, 45], though, because a quick and strong impulse turns out to be sufficient.

Lemma 6.5

(Expansion in \(H\)) Under the assumptions of Theorem 6.2, let \(m=\min \{n\ge 1:\sigma _n\ne 0\}\). Let \(M_1>0\), \(M_2>0\), and \(a_0,a_0^{\prime },c_0>0\) be such that \(c_0<a_0<a_0^{\prime }<\tfrac{1}{4}\) and \(c_0+a_0<a_0^{\prime }\). For every \(X(0)\in V_\alpha \), with \(\Vert X(0)\Vert _H\le M_1\) and \(\inf _{n\ge 1}\lambda _{n-1}^{\beta -2} X_n(0)\ge -a_0\nu \), there exists \(T= T(M_1,M_2,c_0,a_0,a_0^{\prime },m)>0\) such that

  • \(\lambda _{n-1}^{\beta -2} X_n(t) \ge -(a_0^{\prime } + c_0)\nu \) for every \(n\ge 1\) and \(t\in [0,T]\),

  • \(\Vert X(T)\Vert _H\ge M_2\),

on the event

$$\begin{aligned} \{\tau _\infty ^\alpha \!>\!T\} \cap \left\{ \sup _{[0,T]}\lambda _{n-1}^{\beta -2}|Z_n(t)|\le c_0\nu \text{ for } n\ne m\right\} \cap \left\{ \sup _{[0,T]}\lambda _{m-1}^{\beta -2}|Z_m(t) - \psi (t)|\le c_0\nu \right\} . \end{aligned}$$

Here \(\psi :[0,T]\rightarrow \mathbf R \) is a non-decreasing continuous function such that \(\psi (0)=0\) and \(\psi (T)\) large enough depending on the above given data (its value is given in the proof).

Proof

We work on the event given in the statement of the lemma.

Step 1: estimate in \(H\). Set \({\bar{\psi }}=\sup _{[0,T]}|Z_m(t)|\le \psi (T) + c_0\nu \); then, as in Lemma 6.3,

$$\begin{aligned} \frac{d}{dt}\Vert Y\Vert _H^2 \!+\! 2\nu \Vert Y\Vert _1^2&\le 2\sum _{n=1}^\infty \lambda _n^\beta \bigl (|Z_n Y_n Y_{n+1}| \!+\! Z_n^2 |Y_{n+1}| \!+\! |Z_{n+1}|Y_n^2 \!+\! |Z_n Z_{n+1}Y_n|\bigr )\\&\quad +\, 2\lambda _{m-1}^\beta \bigl (|Z_m|Y_{m-1}^2 + |Z_{m-1}Z_mY_{m-1}|\bigr )\\&\quad +\, 2\lambda _m^\beta \bigl (|Z_mY_mY_{m+1}| + Z_m^2 |Y_{m+1}| + |Z_mZ_{m+1}Y_m|\bigr )\\&\le \nu \Vert Y\Vert _1^2 + k_0\nu ^3 + 16\lambda _m^\beta (1+\nu )(1+{\bar{\psi }}^2)(1+\Vert Y\Vert _H^2). \end{aligned}$$

If \(k_1=k_0\nu ^3\), \(k_2=16\lambda _m^\beta (1+\nu )\) and \(M_3(T,\psi (T))^2 = (M_1^2 + k_1/k_2)\exp \bigl (k_2T(1+{\bar{\psi }}^2)\bigr )\), it follows from Gronwall’s lemma that \(\sup _{[0,T]}\Vert Y(t)\Vert _H^2\le M_3(T,\psi (T))^2\). Since on the given event we have that \(\Vert Z(t)\Vert _H\le c_0\nu \tfrac{\lambda ^{\beta -2}}{\lambda ^{\beta -2}-1} + {\bar{\psi }}\) for every \(t\in [0,T]\), we finally have that

$$\begin{aligned} \sup _{[0,T]}\Vert X(t)\Vert _H \le c_0\nu + c_0\nu \tfrac{\lambda ^{\beta -2}}{\lambda ^{\beta -2}-1} + \psi (T) + M_3(T,\psi (T)) =: M_4(T,\psi (T)). \end{aligned}$$

Step 2: large size at time \(T\). Using the previous estimate we have

$$\begin{aligned} X_m(t)&= \mathrm{e }^{-\nu \lambda _m^2t}X_m(0) + Z_m(t) + \int \limits _0^t \mathrm{e }^{-\nu \lambda _m^2(t-s)}\bigl ( \lambda _{m-1}^\beta X_{m-1}^2 - \lambda _m^\beta X_m X_{m+1}\bigr )\,ds\\&\ge - a_0\nu \lambda _{m-1}^{2-\beta } + \bigl (\psi (t) - c_0\nu \lambda _{m-1}^{2-\beta }\bigr ) - \lambda _m^\beta t \sup _{[0,T]}\Vert X\Vert _H^2. \end{aligned}$$

For \(t=T\) we have \(X_m(T) \ge \psi (T) - \nu - \lambda _m^\beta M_4(T,\psi (T))^2 T\). If we choose \(\psi (T) = M_2 + 2\nu \), then \(M_4(T,M_2+2\nu )^2 T\rightarrow 0\) as \(T\downarrow 0\). Therefore we can choose \(T\) small enough so that \(\lambda _m^\beta M_4(T,\psi (T))^2 T\le \nu \), hence \(X_m(T)\ge M_2\) and \(\Vert X(T)\Vert _H\ge M_2\).

Step 3: Bound from below for \(n=m\). The choice of \(\psi (T)\) and the computations in the above step yield

$$\begin{aligned} \lambda _{m-1}^{\beta -2} X_m(t) \ge - (a_0 + c_0)\nu - \lambda ^{2-\beta }M_4(T, M_2+2\nu )^2\lambda _m^{2\beta -2} T, \end{aligned}$$

since \(\psi \) is non-negative. By assumption we have that \(a_0+c_0<a_0^{\prime }\), hence, possibly fixing a smaller value of \(T\) than the one chosen in the previous step, we can ensure that \(X_m\ge -a_0^{\prime }\nu \lambda _{m-1}^{2-\beta }\) on \([0,T]\).

Step 4: Bound from below for \(n\ne m\). If \(n>m\), the proof proceeds as in Lemma 3.2, since \(X_m\) appears in the system of equations for \((Y_n)_{n>m}\) only through the positive term \(\lambda _m^\beta X_m^2\) in the equation for the \((m+1)\text{ th }\) component.

If \(n<m\), the proof follows by finite induction. For \(n=m\) the lower bound is true by the previous step. Let now \(n<m\) and assume that \(\lambda _n^{\beta -2} Y_{n+1}\ge -a_0^{\prime }\nu \) on \([0,T]\). We prove that \(\lambda _{n-1}^{\beta -2} Y_n\ge -a_0^{\prime }\nu \) as in Lemma 6.4. Set \(\eta _n = Y_n + a_0^{\prime }\nu \lambda _{n-1}^{2-\beta }\). Since \(\lambda _{n-1}^{\beta -2}|Z_n|\le c_0\nu \) and \((X_{n+1})_-\le (a_0^{\prime }+c_0)\nu \lambda _n^{2-\beta }\) on \([0,T]\),

$$\begin{aligned} \dot{\eta }_n&\ge - (\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\eta _n + a_0^{\prime }\nu ^2\lambda ^{\beta -2}\lambda _n^{4-\beta } - \lambda _n^\beta (a_0^{\prime }\nu \lambda _{n-1}^{2-\beta } - Z_n)(X_{n+1})_-\\&\ge - (\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\eta _n + \nu ^2\lambda ^{\beta -2}\bigl (a_0^{\prime } - (a_0^{\prime }+c_0)^2\bigr )\lambda _n^{4-\beta }\\&\ge - (\nu \lambda _n^2 + \lambda _n^\beta X_{n+1})\eta _n. \end{aligned}$$

The fact that \(\eta _n(0)\ge 0\) implies that \(\eta _n(t)\ge 0\). \(\square \)

We systematize the random perturbation that, by Lemmas 6.4 and 6.5, moves the solution from a ball in \(H\) to the sets (6.1). Let \(c_0>0\), \(t_0>0\), \(T_c>0\), \(T_e>0\) and let \(\psi :[0,T_e]\rightarrow \mathbf R \) be a non-negative non-decreasing function, and define

$$\begin{aligned} \mathcal N (t_0;c_0,T_c,T_e,\psi ) = \mathcal N _c(c_0,t_0,T_c)\cap \mathcal N _e(c_0,t_0+T_c,T_e,\psi ), \end{aligned}$$

where

$$\begin{aligned} \mathcal N _c(c_0,t_1,t_2)&= \bigl \{ \lambda _{n-1}^{\beta -2}|Z_n^c(t)|\le c_0\nu \text{ for } \text{ all } n\ge 1 \text{ and } t\in [t_1,t_1+t_2] \bigr \},\\ \mathcal N _e(c_0,t_1,t_2,\psi )&= \bigl \{ \lambda _{m-1}^{\beta -2}|Z_m^e(t)-\psi _{t_1}(t)| \le c_0\nu \text{ for } \text{ all } t\in [t_1,t_1+t_2] \bigr \}\\&\quad \cap \bigl \{\lambda _{n-1}^{\beta -2}|Z_n^e(t)|\le c_0\nu \text{ for } \text{ all } n\ne m \text{ and } t\in [t_1,t_1+t_2] \bigr \}. \end{aligned}$$

Here \(\psi _s:[s,s+T_e]\rightarrow \mathbf R \) is defined for \(s\ge 0\) as \(\psi _s(t) = \psi (t-s)\), for \(t\in [s,s+T_e]\), \(m\) is the smallest integer of the set \(\{n:\sigma _n\ne 0\}\), and for every \(n\ge 1\),

$$\begin{aligned} Z^c_n(t)&= \sigma _n\int \limits _{t_0}^t \mathrm{e }^{-\nu \lambda _n^2(t-s)}\,dW_n(s), \quad t\in [t_0,t_0+T_c],\\ Z^e_n(t)&= \sigma _n\int \limits _{t_0+T_c}^t \mathrm{e }^{-\nu \lambda _n^2(t-s)}\,dW_n(s), \quad t\in [t_0+T_c,t_0+T_c+T_e]. \end{aligned}$$

Under a martingale solution \(\mathbb P _x\) starting at \(x\), the two events \(\mathcal N _c(c_0,t_0,T_c)\) and \(\mathcal N _e(c_0,t_0+T_c,T_e,\psi )\) are independent, have positive probability (by Lemma 3.1), and their probabilities do not depend on \(t_0\). Moreover, if \(t_0,T_c,T_e\) and \(t_0^{\prime }\) are given such that \(t_0+T_c+T_e\le t_0^{\prime }\), then the events \(\mathcal N (t_0;c_0,T_c,T_e,\psi )\) and \(\mathcal N (t_0^{\prime };c_0,T_c,T_e,\psi )\) are independent.

Lemma 6.6

Assume (2.1) and let \(\beta >3\) and \(\alpha \in (\beta -2,\alpha _0+1)\). There exists \(c_{6.6}>0\) such that if \(M>0\), \(T_c>0\), \(T_e>0\), \(c_0>0\), and \(\psi :[0,T_e]\rightarrow \mathbf R \) is a non-negative non-decreasing function, with

$$\begin{aligned} \frac{c_{6.6}}{M^2} + \mathrm{e }^{-\nu \lambda ^2T} < 1, \quad (T = T_c + T_e), \end{aligned}$$

then for every \(x\in V_\alpha \) and every energy martingale solution \(\mathbb P _x\) starting at \(x\),

$$\begin{aligned} \mathbb P _x\left[ \{\tau _\infty ^\alpha =\infty \} \cap \bigcap _{k\ge 1}\left( \{\Vert X(kT)\Vert _H\le M\}\cap \mathcal N (k T;c_0,T_c,T_e,\psi )\right) ^c\right] = 0. \end{aligned}$$

Proof

We first obtain a quantitative estimate on the return time in balls of \(H\) of the Markov process \(X^R(\cdot ;x)\), solution of problem (2.2), starting at \(x\in V_\alpha \). The same estimate will hold for the strong solution and the lemma will follow.

Step 1. Standard computations with Itô’s formula and Gronwall’s lemma yield

$$\begin{aligned} \mathbb E [\Vert X^R(t;x)\Vert _H^2] \le \Vert x\Vert _H^2\mathrm{e }^{-2\nu \lambda ^2t} + c_{6.6}, \end{aligned}$$
(6.4)

where \(c_{6.6}=(2\nu \lambda ^2)^{-1}\sum _{n=1}^\infty \sigma _n^2\). The series converges due to (2.1) and \(\alpha _0>\beta -3\).
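The standard computation behind (6.4) is, schematically, the following; this is a sketch which uses that the non-linearity is orthogonal to \(X^R\) in \(H\), so it does not contribute to the energy balance, and that \(\Vert X^R\Vert _1\ge \lambda \Vert X^R\Vert _H\):

```latex
% Ito's formula for ||X^R||_H^2; the nonlinear term cancels in H:
d\Vert X^R\Vert_H^2
  = \Bigl(-2\nu\Vert X^R\Vert_1^2 + \sum_{n\ge1}\sigma_n^2\Bigr)dt
    + 2\sum_{n\ge1}\sigma_n X^R_n\,dW_n,
\qquad
\frac{d}{dt}\,\mathbb E\Vert X^R\Vert_H^2
  \le -2\nu\lambda^2\,\mathbb E\Vert X^R\Vert_H^2 + \sum_{n\ge1}\sigma_n^2,
```

and Gronwall’s lemma yields (6.4) with \(c_{6.6}=(2\nu \lambda ^2)^{-1}\sum _n\sigma _n^2\).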

Step 2. We use the previous estimate to show that

$$\begin{aligned} \mathbb P \bigl [\Vert X^R(kT;x)\Vert _H\ge M \text{ for } k=1,\dots ,n\bigr ] \le \bigl (\mathrm{e }^{-\nu \lambda ^2T} + \tfrac{c_{6.6}}{M^2}\bigr )^{n-1}. \end{aligned}$$
(6.5)

We proceed as in [18, Lemma III.2.4]. Define, for \(k\) integer, \(C_k = \{\Vert X^R(kT;x)\Vert _H\ge M\}\) and \(B_k = \bigcap _{j=0}^k C_j\). Set \(\alpha _k = \mathbb E [\mathbf 1 _{B_k}\Vert X^R(kT;x)\Vert _H^2]\) and \(p_k = \mathbb P [B_k]\). By the Markov property, Chebyshev’s inequality and (6.4),

$$\begin{aligned} \mathbb P [C_{k+1}|\mathcal F _{kT}] \le \tfrac{1}{M^2}\mathrm{e }^{-\nu \lambda ^2T}\Vert X^R(kT;x)\Vert _H^2 + \tfrac{c_{6.6}}{M^2}, \end{aligned}$$

hence

$$\begin{aligned} p_{k+1} = \mathbb E \bigl [\mathbf 1 _{B_k}\mathbb P [C_{k+1}|\mathcal F _{kT}]\bigr ] \le \tfrac{1}{M^2}\mathrm{e }^{-\nu \lambda ^2T}\alpha _k + \tfrac{c_{6.6}}{M^2}p_k. \end{aligned}$$

On the other hand, by integrating (6.4) on \(B_k\), we get

$$\begin{aligned} \alpha _{k+1} \le \mathbb E [\mathbf 1 _{B_k}\Vert X^R((k+1)T;x)\Vert _H^2] \le \mathrm{e }^{-\nu \lambda ^2T}\alpha _k + c_{6.6}p_k. \end{aligned}$$

Let \(({\bar{\alpha }}_k)_{k\in \mathbf N }\) and \(({\bar{p}}_k)_{k\in \mathbf N }\) be the solutions to the recurrence system

$$\begin{aligned} {\left\{ \begin{array}{ll} {\bar{\alpha }}_{k+1} = \mathrm{e }^{-\nu \lambda ^2T}{\bar{\alpha }}_k + c_{6.6}{\bar{p}}_k,\\ {\bar{p}}_{k+1} = \frac{1}{M^2}\mathrm{e }^{-\nu \lambda ^2T}{\bar{\alpha }}_k + \frac{c_{6.6}}{M^2}{\bar{p}}_k, \end{array}\right. } k\ge 1, \end{aligned}$$

with \({\bar{p}}_1 = p_1\) and \({\bar{\alpha }}_1 = \alpha _1\). Then \({\bar{\alpha }}_k = M^2{\bar{p}}_k\) for \(k\ge 2\) and \(\alpha _k\le {\bar{\alpha }}_k\), \(p_k\le {\bar{p}}_k\) for all \(k\ge 1\). The inequality (6.5) easily follows.
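Indeed, for \(k\ge 2\) the identity \({\bar{\alpha }}_k = M^2{\bar{p}}_k\) reduces the system to a scalar recursion (a sketch):

```latex
% Scalar recursion for the majorizing sequence:
\bar p_{k+1}
  = \frac{1}{M^2}\mathrm{e}^{-\nu\lambda^2 T}\bar\alpha_k
    + \frac{c_{6.6}}{M^2}\bar p_k
  = \Bigl(\mathrm{e}^{-\nu\lambda^2 T} + \frac{c_{6.6}}{M^2}\Bigr)\bar p_k,
\qquad k\ge 2,
```

so that \(p_n\le {\bar{p}}_n\) decays geometrically with ratio \(\mathrm{e }^{-\nu \lambda ^2T}+c_{6.6}/M^2\), which gives (6.5).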

Step 3. We recall that \(\tau _x^\alpha = \sup _{R>0}\tau _x^{\alpha ,R}\), hence by (6.5),

$$\begin{aligned}&\mathbb P [\Vert X(kT;x)\Vert _H\!\ge \! M, k\!\le \! n,\tau _x^\alpha \!=\!\infty ]\\&\quad \le \lim _{R\uparrow \infty }\mathbb P [\Vert X(kT;x)\Vert _H\!\ge \! M, k\!\le \! n,\tau _x^{\alpha ,R}\!>\!nT]\\&\quad = \lim _{R\uparrow \infty }\mathbb P [\Vert X^R(kT;x)\Vert _H\!\ge \! M, k\!\le \! n,\tau _x^{\alpha ,R}\!>\!nT]\\&\quad \le \left( \mathrm{e }^{-\nu \lambda ^2T} \!+\! \frac{c_{6.6}}{M^2}\right) ^{n-1}. \end{aligned}$$

Define the hitting time \(K_1 = \min \{k\ge 0: \Vert X(kT;x)\Vert _H\le M\}\) of the ball \(B_M(0)\) in \(H\) (\(K_1=\infty \) if the set is empty). Clearly \(K_1<\infty \) on \(\{\tau _x^\alpha =\infty \}\). Likewise, define the return times \(K_j = \min \{k>K_{j-1}: \Vert X(kT;x)\Vert _H\le M\}\), \(j\ge 2\) (\(K_j=\infty \) if the set is empty). By the previous step, \(K_j<\infty \) on \(\{\tau _x^\alpha =\infty \}\) for each \(j\ge 1\).

Step 4. Consider for \(k\ge 1\) the events \(\mathcal N _k = \mathcal N (kT;c_0,T_c,T_e,\psi )\). We know that \(\mathbb P [\mathcal N _k]\) is constant in \(k\), so we set \(p=\mathbb P [\mathcal N _k]\). Moreover, by the choice of \(T\), it turns out that \(\mathcal N _1, \mathcal N _2, \dots , \mathcal N _k, \dots \) are independent. Set \(\mathcal N _\infty = \emptyset \) and define the time

$$\begin{aligned} L_0 = \min \{j\ge 1: \mathbf 1 _{\mathcal N _{K_j}} = 1\}, \end{aligned}$$

with \(L_0 = \infty \) if the set is empty. Notice that if \(L_0\) is finite, then \(\Vert X(K_{L_0}T;x)\Vert _H\le M\) and the random perturbation leads the system into one of the sets (6.1) within time \(K_{L_0}T + T_c + T_e\). Hence the lemma is proved if we show that

$$\begin{aligned} \mathbb P [L_0 = \infty , \tau _x^\alpha =\infty ] = 0. \end{aligned}$$
(6.6)

Step 5. Given an integer \(\ell \ge 1\), we have that

$$\begin{aligned} \mathbb P [L_0>\ell , \tau _x^\alpha =\infty ]&= \mathbb P [\mathcal N _{K_1}^c\cap \dots \cap \mathcal N _{K_\ell }^c\cap \{\tau _x^\alpha =\infty \}]\\&= \sum _{k_1=1}^\infty \dots \sum _{k_\ell =k_{\ell -1}+1}^\infty \mathbb P [S_\ell (k_1,\dots ,k_\ell )\cap \{\tau _x^\alpha =\infty \}], \end{aligned}$$

where \(S_\ell (k_1,\dots ,k_\ell ) = \mathcal N _{K_1}^c\cap \dots \cap \mathcal N _{K_\ell }^c \cap \{K_1=k_1,\dots ,K_\ell =k_\ell \}\). Notice that \(S_{\ell }(k_1,\dots ,k_\ell )\in \mathcal F _{(k_\ell +1)T}\), hence by the Markov property,

$$\begin{aligned}&\mathbb P [S_\ell (k_1,\dots ,k_\ell )\cap \{\tau _x^\alpha >(k_\ell +1)T\}]\\&\qquad \quad = \mathbb E \bigl [\mathbf 1 _{S_{\ell -1}(k_1,\dots ,k_{\ell -1})} \mathbf 1 _{\{\tau _x^\alpha >(k_{\ell -1}+1)T\}} \mathbf 1 _{\{K_\ell =k_\ell \}} \mathbb P [\mathcal N _{k_\ell }^c\cap \{\tau _{X(k_\ell T;x)}^\alpha >T\}|\mathcal F _{k_\ell T}]\bigr ]\\&\qquad \quad \le (1-p)\mathbb P [S_{\ell -1}(k_1,\dots ,k_{\ell -1})\cap \{\tau _x^\alpha >(k_{\ell -1}+1)T\}\cap \{K_\ell =k_\ell \}]. \end{aligned}$$

By summing up over \(k_\ell \), we have

$$\begin{aligned}&\sum _{k_\ell =k_{\ell -1}+1}^\infty \mathbb P [S_\ell (k_1,\dots ,k_\ell )\cap \{\tau _x^\alpha >(k_\ell +1)T\}]\\&\qquad \qquad \qquad \le (1-p) \mathbb P [S_{\ell -1}(k_1,\dots ,k_{\ell -1})\cap \{\tau _x^\alpha >(k_{\ell -1}+1)T\}]. \end{aligned}$$

By iteration, \(\mathbb P [L_0>\ell , \tau _x^\alpha =\infty ]\le (1-p)^\ell \) and (6.6) follows. \(\square \)

Proof of Theorem 6.2

Fix \(\alpha \in (\beta -2,1+\alpha _0)\), \({\bar{p}}\in (0,\beta -3)\) and \({\bar{a}}_0\in (0,\tfrac{1}{4}]\). Let \({\bar{p}}_0>0\) and \({\bar{M}}_0>0\) be the values given by Theorem 6.1. In view of Corollary 5.5, it suffices to prove that the (sampled) arrival time to \(B_\infty (\alpha ,{\bar{p}}, \bar{a}_0, {\bar{M}}_0)\) is finite on \(\{\tau _x^\alpha =\infty \}\), for all \(x\in V_\alpha \). By virtue of Lemma 6.6, it is sufficient to prove that there are \(M,T_c,T_e,c_0>0\) and \(\psi \) such that \(\mathrm{e }^{-\nu \lambda ^2T_c} +\frac{c_{6.6}}{M^2}<1\) and

$$\begin{aligned} \left. \begin{array}{c} \Vert X(t_0;x)\Vert _H\le M\\ \mathcal N (t_0;c_0,T_c,T_e,\psi ) \end{array} \right\} \Rightarrow X(t_0+T_c+T_e;x)\in B_\infty (\alpha ,{\bar{p}}, \bar{a}_0, {\bar{M}}_0).\qquad \end{aligned}$$
(6.7)

Indeed, the left-hand side of the above implication happens almost surely on \(\{\tau _x^\alpha =\infty \}\) for some integer \(k\) such that \(t_0 = k(T_c+T_e)\). Hence the right-hand side happens with probability one as well and \(\mathbb P [\sigma _{B_\infty }^{{x,T_c+T_e}}=\infty ,\tau _x^\alpha =\infty ] = 0\).

We finally prove (6.7). We first notice that in Lemma 6.4, the larger we choose \(M\), the larger is the time \(T_c\). Hence we apply Lemma 6.4 with \(a_0 = {\bar{a}}_0/8\), \(c_0<\min \{{\bar{a}}_0/8, (4(1+\lambda ^{\beta -3}))^{-1}\}\) and \(M>0\) large enough so that the time \(T_c\) satisfies \(\mathrm{e }^{-\nu \lambda ^2T_c} +\frac{c_{6.6}}{M^2}<1\). Moreover we know that

  • \(\inf _{n\ge 1}\bigl (\lambda _{n-1}^{\beta -2}X_n(t_0+T_c)\bigr ) \ge -(a_0 + c_0)\nu \ge -\frac{1}{4} {\bar{a}}_0\nu \) on \(\{\tau _x^\alpha =\infty \}\cap \mathcal N _c(c_0,t_0,T_c)\),

  • \(\Vert X(t_0+T_c)\Vert _H\le \Vert X(t_0)\Vert _H + c_{6.3}\nu \le M + c_{6.3}\nu \).

The second statement follows from Lemma 6.3. By Lemma 6.5 with \(M_1=M + c_{6.3}\nu \), \(M_2={\bar{M}}_0\), \(a_0 = {\bar{a}}_0/4\), \(a_0^{\prime } = 2a_0\) and \(c_0\) as above, there is \(T_e>0\) such that

  • \(\inf _{n\ge 1}\bigl (\lambda _{n-1}^{\beta -2}X_n(t_0+T_c+T_e)\bigr ) \ge - {\bar{a}}_0\nu \) on \(\{\tau _x^\alpha =\infty \}\cap \mathcal N _e(c_0,t_0+T_c,T_e,\psi )\),

  • \(\Vert X(t_0+T_c+T_e)\Vert _{{\bar{p}}}\ge \lambda ^{{\bar{p}}}\Vert X(t_0+T_c+T_e)\Vert _H \ge {\bar{M}}_0\),

that is \(X(t_0+T_c+T_e)\in B_\infty (\alpha ,{\bar{p}},\bar{a}_0, {\bar{M}}_0)\). \(\square \)