1 Introduction

A recent paper [9] studies the stochastic heat equation for \((t,x) \in (0,\infty )\times \mathbb {R}\)

$$\displaystyle \begin{aligned} \frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x ^2}+\sigma(u)\,\dot W\,, \end{aligned} $$
(1)

where \(\dot {W}\) is a centered Gaussian noise which is white in time and behaves in space like a fractional Brownian motion with Hurst parameter \(\frac 14 < H < \frac 12\), and σ may be a nonlinear function with some smoothness.

However, the specific case σ(u) = u, i.e.

$$\displaystyle \begin{aligned} \frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x ^2}+u\,\dot W \end{aligned} $$
(2)

deserves some specific treatment due to its simplicity. Indeed, this linear equation turns out to be a continuous version of the parabolic Anderson model, and is related to challenging systems in random environments such as the KPZ equation [3, 6] or polymers [1, 4]. The localization and intermittency properties of (2) have thus been thoroughly studied for equations driven by a space-time white noise (see [13] for a nice survey), while a recent trend consists in extending this kind of result to equations driven by very general Gaussian noises [5, 8, 10, 11]. However, the rough noise \(\dot W\) considered in this work is not covered by the aforementioned references.

To fill this gap, we first tackle the existence and uniqueness problem. Although the existence and uniqueness of the solution in the general nonlinear case (1) has been established in [9], in this linear case (2) one can implement a rather simple procedure involving Fourier transforms. Since this point of view is interesting in its own right and is short enough, we develop it in Sect. 3.1. In Sect. 3.2, we study the random field solution using chaos expansion. Following the approach introduced in [8, 10], we obtain an explicit formula for the kernels of the Wiener chaos expansion and we show its convergence, and thus obtain the existence and uniqueness of the solution. It is worth noting that these methods treat different classes of initial data, which are more general than in [9] and different from [2].

We then move to a Feynman-Kac type representation for the moments of the solution. In fact, we cannot expect a Feynman-Kac formula for the solution, because the covariance is rougher than the space-time white noise case, and this type of formula requires smoother covariance structures (see, for instance, [11]). However, by means of Fourier analysis techniques as in [8, 10], we are able to obtain a Feynman-Kac formula for the moments that involves a fractional derivative of the Brownian local time.

Finally, the previous considerations allow us to handle, in the last section of the paper, the intermittency properties of the solution. More precisely, we show sharp lower bounds for the moments of the solution of the form \(\mathbf {E} [u(t,x)^n]\ge \exp (C n^{1+\frac 1H} t)\), for all t ≥ 0, \(x\in \mathbb {R} \) and n ≥ 2, where C is independent of t ≥ 0, \(x\in \mathbb {R} \) and n. These bounds entail the intermittency phenomenon and match the corresponding estimates for the case \(H>\frac 12\) obtained in [10]. After the completion of this work, three of the authors have studied the parabolic Anderson model in more detail in [12]. Existence and uniqueness results are extended to a wider class of initial data. In particular, exact long term asymptotics for the moments of the solution of the form \(\limsup _{t\to \infty }\frac {1}{t} \sup _{|x|>\alpha t} \log \mathbf {E} (|u(t,x)|{ }^p)\) are obtained.

2 Preliminaries

Let us start by introducing our basic notation on Fourier transforms of functions. The space of Schwartz functions is denoted by \(\mathcal {S}\). Its dual, the space of tempered distributions, is \(\mathcal {S}'\). The Fourier transform of a function \(u \in \mathcal {S}\) is defined with the normalization

$$\displaystyle \begin{aligned} \mathcal{F}u ( \xi) = \int_{\mathbb{R}} e^{- i \xi x } u ( x) d x, \end{aligned}$$

so that the inverse Fourier transform is given by \(\mathcal {F}^{- 1} u ( \xi ) = ( 2 \pi )^{- 1} \mathcal {F}u ( - \xi )\). The Fourier transform of a tempered distribution can also be defined (see [18]).

Let \( \mathcal {D}((0,\infty )\times \mathbb {R})\) denote the space of real-valued infinitely differentiable functions with compact support on \((0, \infty ) \times \mathbb {R}\). Taking into account the spectral representation of the covariance function of the fractional Brownian motion in the case \(H<\frac 12\) proved in [17, Theorem 3.1], we represent our noise W by a zero-mean Gaussian family \(\{W(\varphi ) ,\, \varphi \in \mathcal {D}((0,\infty )\times \mathbb {R})\}\) defined on a complete probability space \((\Omega ,\mathcal F,\mathbf {P})\), whose covariance structure is given by

$$\displaystyle \begin{aligned} \mathbf{E}\left[ W(\varphi) \, W(\psi) \right] = c_{1,H}\int_{\mathbb{R}_{+}\times\mathbb{R}} \mathcal F\varphi(s,\xi) \, \overline{\mathcal F\psi(s,\xi)} \, |\xi|{}^{1-2H} \, ds d\xi, \end{aligned} $$
(3)

where the Fourier transforms \(\mathcal F\varphi ,\mathcal F\psi \) are understood as Fourier transforms in space only and

$$\displaystyle \begin{aligned} c_{1,H}= \frac 1 {2\pi} \Gamma(2H+1)\sin{}(\pi H) \,. \end{aligned} $$
(4)
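As a numerical sanity check (an illustration we add here, not part of the original argument), one can verify that \(c_{1,H}\) is precisely the constant for which the spectral representation of [17] recovers the fractional Brownian variance, i.e. \(c_{1,H}\int _{\mathbb {R}}|e^{it\xi }-1|{ }^2\,|\xi |{ }^{-1-2H}\,d\xi =t^{2H}\). The check below uses the classical integral \(\int _0^\infty (1-\cos u)\,u^{-1-2H}\,du=\Gamma (1-2H)\cos (\pi H)/(2H)\); the helper names are ours.

```python
import math

def c1H(H):
    # Constant (4): c_{1,H} = Gamma(2H+1) sin(pi H) / (2 pi)
    return math.gamma(2 * H + 1) * math.sin(math.pi * H) / (2 * math.pi)

def spectral_mass(H, t=1.0):
    # Closed form of int_R |e^{i t xi} - 1|^2 |xi|^{-1-2H} d xi for 0 < H < 1/2,
    # using int_0^inf (1 - cos u) u^{-1-2H} du = Gamma(1-2H) cos(pi H) / (2H)
    return 2 * t ** (2 * H) * math.gamma(1 - 2 * H) * math.cos(math.pi * H) / H

for H in (0.26, 0.3, 0.35, 0.4, 0.45):
    # c_{1,H} normalizes the spectral density so that E[|B^H_t|^2] = t^{2H}
    assert abs(c1H(H) * spectral_mass(H) - 1.0) < 1e-12
```

This is exactly the normalization that makes the covariance (3) behave, in the space variable, like a fractional Brownian motion with Hurst parameter H.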

We denote by \(\mathfrak H \) the Hilbert space obtained by completion of \( \mathcal {D}((0,\infty )\times \mathbb {R})\) with respect to the inner product

$$\displaystyle \begin{aligned} \langle\varphi, \psi \rangle_{ \mathfrak H}=c_{1, H}\int_{\mathbb{R}_+\times \mathbb{R}}\mathcal{F}\varphi(s,\xi)\overline{\mathcal{F}\psi(s,\xi)}|\xi|{}^{1-2H }d\xi ds\,. \end{aligned} $$
(5)

The next proposition is from Theorem 3.1 and Proposition 3.4 in [17].

Proposition 2.1

If \(\mathfrak H_0\) denotes the class of functions \(\varphi \in L^2( \mathbb {R}_+\times \mathbb {R})\) such that

$$\displaystyle \begin{aligned}\int_{\mathbb{R}_+\times \mathbb{R}} |\mathcal{F}\varphi(s,\xi)|{}^2|\xi|{}^{1-2H}d\xi ds < \infty\,,\end{aligned}$$

then \( \mathfrak H_0\) is not complete and the inclusion \(\mathfrak H_0 \subset \mathfrak H\) is strict.

We recall that the Gaussian family W can be extended to \(\mathfrak H\) and this produces an isonormal Gaussian process, for which Malliavin calculus can be applied. We refer to [16] and [7] for a detailed account of the Malliavin calculus with respect to a Gaussian process. On our Gaussian space, the smooth and cylindrical random variables F are of the form

$$\displaystyle \begin{aligned} F=f(W(\phi_1),\dots,W(\phi_n))\,, \end{aligned}$$

with \(\phi _i \in \mathfrak H\), \(f \in C^{\infty }_p (\mathbb {R}^n)\) (namely f and all its partial derivatives have polynomial growth). For this kind of random variable, the derivative operator D in the sense of Malliavin calculus is the \(\mathfrak H\)-valued random variable defined by

$$\displaystyle \begin{aligned} DF=\sum_{j=1}^n\frac{\partial f}{\partial x_j}(W(\phi_1),\dots,W(\phi_n))\phi_j\,. \end{aligned}$$

The operator D is closable from \(L^2(\Omega)\) into \(L^2(\Omega ; \mathfrak H)\) and we define the Sobolev space \(\mathbb {D}^{1,2}\) as the closure of the space of smooth and cylindrical random variables under the norm

$$\displaystyle \begin{aligned} \|F\|{}_{1,2}=\sqrt{\mathbf{E} [F^2]+\mathbf{E} [\|DF\|{}^2_{\mathfrak H} ]}\,. \end{aligned}$$

We denote by δ the adjoint of the derivative operator (called divergence operator) given by the duality formula

$$\displaystyle \begin{aligned} \mathbf{E} \left[ \delta (u)F \right] =\mathbf{E} \left[ \langle DF,u \rangle_{\mathfrak H}\right] , \end{aligned} $$
(6)

for any \(F \in \mathbb {D}^{1,2}\) and any element \(u \in L^2(\Omega ; \mathfrak H)\) in the domain of δ.

For any integer \(n \ge 0\) we denote by \(\mathcal H_n\) the nth Wiener chaos of W. We recall that \(\mathcal H_0\) is simply \(\mathbb {R}\) and for \(n \ge 1\), \(\mathcal H_n\) is the closed linear subspace of \(L^2(\Omega)\) generated by the random variables \(\{ H_n(W(\phi )),\phi \in \mathfrak H, \|\phi \|{ }_{\mathfrak H}=1 \}\), where \(H_n\) is the nth Hermite polynomial. For any \(n \ge 1\), we denote by \(\mathfrak H^{\otimes n}\) (resp. \(\mathfrak H^{\odot n}\)) the nth tensor product (resp. the nth symmetric tensor product) of \(\mathfrak H\). Then, the mapping \(I_n(\phi ^{\otimes n}) = n!\,H_n(W(\phi ))\) can be extended to a linear isometry between \(\mathfrak H^{\odot n}\) (equipped with the modified norm \(\sqrt {n!}\| \cdot \|{ }_{\mathfrak H^{\otimes n}}\)) and \(\mathcal H_n\).
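As an aside, the normalization of the Hermite polynomials matters here. With the convention of [16], \(H_n(x)=\frac {(-1)^n}{n!}e^{x^2/2}\frac {d^n}{dx^n}e^{-x^2/2}\), one has \(\mathbf {E}[H_n(X)H_m(X)]=\delta _{nm}/n!\) for \(X\sim N(0,1)\), which is consistent with \(\mathbf {E}[I_n(\phi ^{\otimes n})^2]=n!\) when \(\|\phi \|{ }_{\mathfrak H}=1\). A small numerical check (an illustration we add; the helper names are ours, and the recurrence \((n+1)H_{n+1}(x)=xH_n(x)-H_{n-1}(x)\) follows from this normalization):

```python
import math

def hermite(n):
    # Coefficient lists (lowest degree first) for the normalized Hermite
    # polynomials, via (k+1) H_{k+1}(x) = x H_k(x) - H_{k-1}(x), H_0=1, H_1=x
    polys = [[1.0], [0.0, 1.0]]
    for k in range(1, n):
        shifted = [0.0] + polys[k]                       # x * H_k
        prev = polys[k - 1] + [0.0] * (len(shifted) - len(polys[k - 1]))
        polys.append([(a - b) / (k + 1) for a, b in zip(shifted, prev)])
    return polys[n]

def gaussian_moment(k):
    # E[X^k] for X ~ N(0,1): 0 for odd k, (k-1)!! for even k
    return 0.0 if k % 2 else math.prod(range(1, k, 2))

def expect_product(p, q):
    # E[p(X) q(X)] computed from the polynomial product and normal moments
    prod = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return sum(c * gaussian_moment(k) for k, c in enumerate(prod))

# Orthogonality matching the chaos isometry: E[H_n(X) H_m(X)] = delta_{nm}/n!
for n in range(6):
    for m in range(6):
        target = 1.0 / math.factorial(n) if n == m else 0.0
        assert abs(expect_product(hermite(n), hermite(m)) - target) < 1e-9
```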

Consider now a random variable \(F \in L^2(\Omega)\) which is measurable with respect to the σ-field \(\mathcal F\) generated by W. This random variable can be expressed as

$$\displaystyle \begin{aligned} F= \mathbf{E} \left[ F\right] + \sum_{n=1} ^\infty I_n(f_n), \end{aligned} $$
(7)

where the series converges in \(L^2(\Omega)\), and the elements \(f_n \in \mathfrak H ^{\odot n}\), n ≥ 1, are determined by F. This identity is called the Wiener chaos expansion of F.

The Skorohod integral (or divergence) of a random field u can be computed by using the Wiener chaos expansion. More precisely, suppose that \(u=\{u(t,x) , (t,x) \in \mathbb {R}_+ \times \mathbb {R}\}\) is a random field such that for each (t, x), u(t, x) is an \(\mathcal F\)-measurable and square-integrable random variable. Then, for each (t, x) we have a Wiener chaos expansion of the form

$$\displaystyle \begin{aligned} u(t,x) = \mathbf{E} \left[ u(t,x) \right] + \sum_{n=1}^\infty I_n (f_n(\cdot,t,x)). \end{aligned} $$
(8)

Suppose that \(\mathbf {E} [\|u\|{ }_{ \mathfrak H}^{2}]\) is finite. Then, we can interpret u as a square-integrable random function with values in \(\mathfrak H\) and the kernels \(f_n\) in the expansion (8) are functions in \(\mathfrak H ^{\otimes (n+1)}\) which are symmetric in the first n variables. In this situation, u belongs to the domain of the divergence operator (that is, u is Skorohod integrable with respect to W) if and only if the following series converges in \(L^2(\Omega)\):

$$\displaystyle \begin{aligned} \delta(u)= \int_0 ^\infty \int_{\mathbb{R}} u(t,x) \, \delta W(t,x) = W(\mathbf{E} [u]) + \sum_{n=1}^\infty I_{n+1} (\widetilde{f}_n), \end{aligned} $$
(9)

where \(\widetilde {f}_n\) denotes the symmetrization of \(f_n\) in all its n + 1 variables.

For each t ≥ 0, let \(\mathcal {F}_t\) be the σ-field generated by W up to time t. Define the predictable σ-field as the σ-field of subsets of \(\Omega \times \mathbb {R}_+ \times \mathbb {R}\) generated by the collection of sets \(\{A \times (s, t] \times B\}\), where 0 ≤ s < t, \(A\in \mathcal {F}_s\) and B is a Borel set in \(\mathbb {R}\). Denote by \(\Lambda _H\) the space of predictable processes g defined on \(\mathbb {R}_+\times \mathbb {R}\) such that almost surely \(g\in \mathfrak H\) and \(\mathbf {E} [\|g\|{ }^2_{\mathfrak H}] < \infty \). Then, if \(g \in \Lambda _H\), the Skorohod integral of g with respect to W coincides with the Itô integral defined in [9] and we have the isometry

$$\displaystyle \begin{aligned} \mathbf{E} \left [\left( \int_{\mathbb{R}_+} \int_{\mathbb{R}} g(s,x) W(ds,dx) \right)^2 \right] = \mathbf{E}\left[ \|g\|{}^2_{\mathfrak H}\right]\,. \end{aligned} $$
(10)

Now we are ready to state the definition of the solution to Eq. (2). Denote by \(p_t(x)\) the heat kernel on the real line associated with \(\frac {\kappa }{2}\Delta \). We denote by ∗ the convolution operation.

Definition 2.2

Let \(u=\{u(t,x), 0 \leq t \leq T, x \in \mathbb {R}\}\) be a real-valued predictable stochastic process such that for all t ∈ [0, T] and \(x\in \mathbb {R}\) the process \(\{p_{t-s}(x-y)u(s,y) {\mathbf {1}}_{[0,t]}(s), s\ge 0 , y \in \mathbb {R}\}\) belongs to \(\Lambda _H\). We say that u is a mild solution of (2) if for all t ∈ [0, T] and \(x\in \mathbb {R}\) we have

$$\displaystyle \begin{aligned} u(t,x)= p_t*u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y)u(s,y) W(ds,dy) \quad a.s., \end{aligned} $$
(11)

where and in what follows the stochastic integral is always understood in the sense of Itô and coincides with the Skorohod integral defined by (6).

3 Existence and Uniqueness

In this section we prove the existence and uniqueness result for the solution to Eq. (2) by means of two different methods: one is via Fourier transform and the other is via chaos expansion.

3.1 Existence and Uniqueness via Fourier Transform

In this subsection we discuss the existence and uniqueness of Eq. (2) using techniques of Fourier analysis.

Let \(\dot {H}^{\frac 12-H}_0\) be the set of functions \(f\in L^2(\mathbb {R})\) such that \(\int _{\mathbb {R}} | \mathcal F f(\xi )|{ }^2 |\xi |{ }^{1-2H} d\xi <\infty \). This space is the time-independent analogue of the space \(\mathfrak H_0\) introduced in Proposition 2.1. We know that \(\dot {H}^{\frac 12-H}_0\) is not complete with respect to the seminorm \( \left [ \int _{\mathbb {R}} | \mathcal F f(\xi )|{ }^2 |\xi |{ }^{1-2H} d\xi \right ] ^{\frac {1}{2}}\) (see [17]). However, it is not difficult to check that the space \(\dot {H}^{\frac 12-H}_0\) is complete for the norm \(\|f\|{ }_{\mathcal V(H)} ^2:=\int _{\mathbb {R}} | \mathcal F f(\xi )|{ }^2 (1+|\xi |{ }^{1-2H} )d\xi \).

In the next theorem we show the existence and uniqueness result assuming that the initial condition belongs to \(\dot {H}^{\frac 12-H}_0\) and using estimates based on the Fourier transform in the space variable. To this purpose, we introduce the space \(\mathcal V_T(H)\) as the completion of the set of elementary \(\dot {H}^{\frac 12-H}_0\)-valued stochastic processes

$$\displaystyle \begin{aligned} u(t)=\sum_{i=0}^{n-1} { \mathbf{ 1}_{(t_i, t_{i+1}]}}(t) u_i\,, \quad t\in [0,T]\,, \end{aligned}$$

where \(0 = t_0 < t_1 < \cdots < t_n = T\) is a partition of [0, T] and \(u_i\in \dot {H}^{\frac 12-H}_0\), with respect to the seminorm

$$\displaystyle \begin{aligned} \|u\|{}_{\mathcal V_{T}(H)}^{2}:=\sup_{t\in [0,T]} \mathbf{E} \| u(t,\cdot)\|{}_{\mathcal V(H)}^{2}.\end{aligned} $$
(12)

We now state a convolution estimate.

Proposition 3.1

Let \(\frac {1}{4}<H<\frac {1}{2}\) and consider a function \(u_{0}\in \dot {H}^{\frac 12-H}_0\) . For any \(v\in \mathcal V_{T}(H)\) we define Γ(v) = V in the following way:

$$\displaystyle \begin{aligned} \Gamma(v):=V(t,x)=p_t*u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y) v(s,y) W(ds,dy), \quad t\in[0,T], \, x\in\mathbb{R}.\end{aligned} $$

Then Γ is well-defined as a map from \(\mathcal V_{T}(H)\) to \(\mathcal V_{T}(H)\) . Furthermore, there exist two positive constants \(c_1, c_2\) such that the following estimate holds true on [0, T]:

$$\displaystyle \begin{aligned} { \|V(t,\cdot)\|{}_{\mathcal V(H)}^{2} \le c_{1} \, \|u_0\|{}_{\mathcal V(H)}^{2} +c_{2}\int_0^t (t-s)^{2H-3/2} \|v(s,\cdot)\|{}_{\mathcal V(H)}^{2} \, ds\,.}\end{aligned} $$
(13)

Proof

Let v be a process in \(\mathcal V_{T}(H)\) and set V =  Γ(v). The stochastic integral appearing in the definition of Γ(v) exists as an Itô (or Skorohod) integral, because the process \(\{ p_{t-s}(x-y) v(s,y) {\mathbf {1}}_{[0,t]}(s), s\ge 0, y\in \mathbb {R}\}\) is predictable and square integrable. We focus on the bound (13) for V .

Notice that the Fourier transform of V can be computed easily. Indeed, setting \(v_0(t, x) = p_t * u_0(x)\) and invoking a stochastic version of Fubini’s theorem, which can be easily proved in our framework, we get

$$\displaystyle \begin{aligned} \mathcal{F}V(t,\xi)=\mathcal{F}v_0(t,\xi) +\int_0^t\int_{\mathbb{R}} \left( \int_{\mathbb{R}} e^{-i x \xi} \, p_{t-s}(x-y) \, dx \right) v(s,y)W(ds,dy)\,.\end{aligned} $$

According to the expression of \(\mathcal F p_{t}\), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{F}V(t,\xi)=\mathcal{F}v_0(t,\xi)+\int_0^t\int_{\mathbb{R}}e^{-i\xi y} e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\,. \end{array} \end{aligned} $$
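This step uses the identity \(\mathcal {F}p_t(\xi )=e^{-\frac {\kappa }{2}t\xi ^2}\) under the Fourier convention of Sect. 2. A quick numerical confirmation (an illustration we add; the helper names are ours):

```python
import cmath, math

def heat_kernel(x, t, kappa):
    # p_t(x): Gaussian kernel generated by (kappa/2) * Laplacian
    return math.exp(-x * x / (2 * kappa * t)) / math.sqrt(2 * math.pi * kappa * t)

def fourier(f, xi, L=30.0, n=60000):
    # Midpoint Riemann-sum approximation of F f(xi) = int e^{-i xi x} f(x) dx
    h = 2 * L / n
    return sum(cmath.exp(-1j * xi * (-L + (k + 0.5) * h)) * f(-L + (k + 0.5) * h)
               for k in range(n)) * h

t, kappa = 1.0, 2.0
for xi in (0.0, 0.7, 1.3):
    approx = fourier(lambda x: heat_kernel(x, t, kappa), xi)
    exact = math.exp(-kappa * t * xi * xi / 2)   # F p_t(xi) = e^{-kappa t xi^2 / 2}
    assert abs(approx - exact) < 1e-6
```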

We now evaluate the quantity \(\mathbf {E}[\int _{\mathbb {R}}|\mathcal {F}V(t,\xi )|{ }^2|\xi |{ }^{1-2H}d\xi ]\) in the definition of \(\|V\|{ }_{\mathcal V_{T}(H)}\) given by (12). We thus write

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \ \ \mathbf{E}\left[ \int_{\mathbb{R}}|\mathcal{F}V(t,\xi)|{}^2|\xi|{}^{1-2H}d\xi \right] \leq 2 \, \int_{\mathbb{R}}|\mathcal{F}v_0(t,\xi)|{}^2|\xi|{}^{1-2H}d\xi \\ &\displaystyle &\displaystyle +2 \, \int_{\mathbb{R}}\mathbf{E}\left[\Big|\int_0^t\int_{\mathbb{R}}e^{-i\xi y}e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\Big|{}^2 \right] |\xi|{}^{1-2H}d\xi := 2\left( I_{1} + I_{2} \right) \, , \end{array} \end{aligned} $$

and we handle the terms \(I_1\) and \(I_2\) separately.

The term \(I_1\) can be easily bounded by using that \(u_0 \in \dot {H}^{\frac 12-H}_0\) and recalling \(v_0 = p_t * u_0\). That is,

$$\displaystyle \begin{aligned} I_1 = \int_{\mathbb{R}}| \mathcal{F}u_0(\xi) |{}^2e^{-\kappa t|\xi|{}^2} |\xi|{}^{1-2H} d\xi \le C \, \|u_{0}\|{}_{\mathcal V(H)}^{2}. \end{aligned}$$

We thus focus on the estimation of \(I_2\), and we set \(f_{\xi }(s,\eta )=e^{-i\xi \eta }e^{-\frac {\kappa }{2}(t-s)\xi ^2}v(s,\eta )\). Applying the isometry property (10) we have:

$$\displaystyle \begin{aligned} \mathbf{E}\left[\Big|\int_0^t\int_{\mathbb{R}} e^{-i\xi y}e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\Big|{}^2 \right] =c_{1,H} \int_0^t \int_{\mathbb{R}} \mathbf{E}\left[ |\mathcal F_{\eta}f_{\xi}(s,\eta) |{}^{2}\right] |\eta|{}^{1-2H} \, ds d\eta, \end{aligned}$$

where \(\mathcal F_{\eta }\) is the Fourier transform with respect to η. It is readily checked that the Fourier transform of \(y \mapsto e^{-i\xi y} v(s,y)\) is \(\mathcal {F} v(s,\eta +\xi )\). Thus we have

$$\displaystyle \begin{aligned} I_{2}&= C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta+\xi)|{}^2 \right] |\eta|{}^{1-2H}|\xi|{}^{1-2H} \, d\eta d\xi ds\\ &= C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta )|{}^2 \right] |\eta-\xi|{}^{1-2H}|\xi|{}^{1-2H} \, d\eta d\xi ds\, . \end{aligned} $$

We now bound \(|\eta -\xi |{ }^{1-2H}\) by \(|\eta |{ }^{1-2H} + |\xi |{ }^{1-2H}\), which yields \(I_2 \le I_{21} + I_{22}\) with:

$$\displaystyle \begin{aligned} I_{21}&=C \int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}} e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta)|{}^2 \right] |\eta|{}^{1-2H}|\xi|{}^{1-2H} \, d\eta d\xi ds \\ I_{22}&=C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta)|{}^2 \right] |\xi|{}^{2-4H} \, d\eta d\xi ds\,. \end{aligned} $$

Performing the change of variable \(\xi \to (t-s)^{-1/2}\, \xi \) and then trivially bounding the integrals of the form \(\int _{\mathbb {R}}|\xi |{ }^{\beta } e^{-\kappa \xi ^{2}} d\xi \) by constants, we end up with

$$\displaystyle \begin{aligned} I_{21}&\leq C \int_0^t (t-s)^{H-1} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\eta)|{}^2 \right] |\eta|{}^{1-2H} \, d\eta \, ds \\ I_{22}&\leq C \int_0^t (t-s)^{2H-3/2} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\eta)|{}^2 \right] \, d\eta \, ds . \end{aligned} $$

Observe that for \(H\in (\frac 14, \frac 12)\) the term \((t-s)^{2H-3/2}\) is more singular than \((t-s)^{H-1}\), but we still have \(2H-\frac 32>-1\) (this is where we need to impose H > 1∕4). Summarizing our considerations up to now, we have thus obtained

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \ \ \int_{\mathbb{R}}\mathbf{E}\left[ |\mathcal{F}V(t,\xi)|{}^2 \right] |\xi|{}^{1-2H}d\xi \\ &\displaystyle &\displaystyle \le C _{1,T} \, { \|u_{0}\|{}_{\mathcal V(H)}^{2}} + C_{2,T} \int_{0}^{t} (t-s)^{2H-3/2} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\xi)|{}^2 \right] \, (1+ |\xi|{}^{1-2H}) \, d\xi \, ds , \end{array} \end{aligned} $$
(14)

for two strictly positive constants \(C_{1,T}, C_{2,T}\).

The term \(\mathbf {E}[\int _{\mathbb {R}}|\mathcal {F}V(t,\xi )|{ }^2 d\xi ]\) in the definition of \(\|V\|{ }_{\mathcal V_{T}(H)}\) can be bounded with the same computations as above, and we find

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \ \ \int_{\mathbb{R}}\mathbf{E}\left[ |\mathcal{F}V(t,\xi)|{}^2 \right] \, d\xi \\ &\displaystyle &\displaystyle \le C_{1,T} \, { \|u_{0}\|{}_{\mathcal V(H)}^{2}} + C_{2,T} \int_{0}^{t} (t-s)^{H-1} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\xi)|{}^2 \right] \, (1+ |\xi|{}^{1-2H}) \, d\xi \, ds \,. \end{array} \end{aligned} $$
(15)

Hence, gathering our estimates (14) and (15), our bound (13) is easily obtained, which finishes the proof. □

As in the general case treated below, Proposition 3.1 is the key to the existence and uniqueness result for Eq. (2).

Theorem 3.2

Suppose that \(u_0\) is an element of \(\dot {H}^{\frac 12-H}_0\) and \(\frac {1}{4}<H<\frac {1}{2}\) . Fix T > 0. Then there is a unique process u in the space \(\mathcal V_{T}(H)\) such that for all t ∈ [0, T],

$$\displaystyle \begin{aligned} u(t,\cdot)=p_t* u_0 + \int_0^t \int_{\mathbb{R}}p_{t-s}(\cdot-y)u(s,y) W(ds,dy). \end{aligned} $$
(16)

Proof

The proof follows from the standard Picard iteration scheme, where we just set \(u_{n+1} = \Gamma (u_n)\). Details are left to the reader for the sake of conciseness. □
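As a toy illustration of why the singular kernel \((t-s)^{2H-3/2}\) in (13) is compatible with Picard iteration (this is an illustration we add, not part of the proof; the helper names are ours): iterating the Volterra bound \(a_{n+1}(t)=c\int _0^t (t-s)^{\beta }a_n(s)\,ds\) with \(a_0\equiv 1\) and \(\beta >-1\) gives, by the Beta-function identity, \(a_n(t)=c^n\Gamma (\beta +1)^n t^{n(\beta +1)}/\Gamma (n(\beta +1)+1)\), and the Gamma factor in the denominator forces convergence.

```python
import math

def iterated_bound(n, t, c, beta):
    # n-fold iterate of a |-> c * int_0^t (t-s)^beta a(s) ds starting from a_0 = 1:
    # a_n(t) = c^n Gamma(beta+1)^n t^{n(beta+1)} / Gamma(n(beta+1) + 1)
    g = beta + 1.0
    return (c * math.gamma(g)) ** n * t ** (n * g) / math.gamma(n * g + 1)

H, c, t = 0.45, 1.0, 1.0
beta = 2 * H - 1.5                     # kernel exponent in (13); beta > -1 iff H > 1/4
a1 = c * t ** (beta + 1) / (beta + 1)  # direct computation of the first iterate
assert abs(iterated_bound(1, t, c, beta) - a1) < 1e-12

terms = [iterated_bound(n, t, c, beta) for n in range(120)]
# The Gamma function in the denominator eventually crushes the geometric factor,
# which is why the Picard iterates converge:
assert terms[119] < 1e-12 * terms[1]
```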

3.2 Existence and Uniqueness via Chaos Expansions

Next, we provide another way to prove the existence and uniqueness of the solution to Eq. (2), by means of chaos expansions. This will enable us to obtain moment estimates. Before stating our main theorem in this direction, let us record an elementary lemma borrowed from [10] for further use.

Lemma 3.3

For \(m \ge 1\), let \(\alpha \in (-1 + \varepsilon , 1)^m\) for some ε > 0 and set \(|\alpha |= \sum _{i=1}^m \alpha _i \) . For t ∈ [0, T], the m-dimensional simplex over [0, t] is denoted by \(T_m(t)=\{(r_1,r_2,\dots ,r_m) \in \mathbb {R}^m: 0<r_1 <\cdots < r_m < t\}\) . Then there is a constant c > 0 such that

$$\displaystyle \begin{aligned} J_m(t, \alpha):=\int_{T_m(t)}\prod_{i=1}^m (r_i-r_{i-1})^{\alpha_i} dr \le \frac { c^m t^{|\alpha|+m } }{ \Gamma(|\alpha|+m +1)}, \end{aligned}$$

where by convention \(r_0 = 0\).
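In fact \(J_m(t,\alpha )\) admits the Dirichlet-type closed form \(J_m(t,\alpha )=t^{|\alpha |+m}\prod _{i=1}^m\Gamma (\alpha _i+1)\big /\Gamma (|\alpha |+m+1)\), valid for \(\alpha _i>-1\), from which the lemma follows with \(c=\sup _i\Gamma (\alpha _i+1)\). A numerical sketch (an illustration we add; the helper names are ours, and the brute-force quadrature is for m = 2 only):

```python
import math

def J_exact(t, alpha):
    # Dirichlet-type closed form:
    # J_m(t, alpha) = t^{|alpha|+m} prod_i Gamma(alpha_i+1) / Gamma(|alpha|+m+1)
    m, s = len(alpha), sum(alpha)
    num = math.prod(math.gamma(a + 1) for a in alpha)
    return t ** (s + m) * num / math.gamma(s + m + 1)

def J_quad(t, alpha, n=400):
    # Brute-force nested midpoint quadrature over the simplex, m = 2 only
    a1, a2 = alpha
    h = t / n
    total = 0.0
    for i in range(n):
        r2 = (i + 0.5) * h
        h1 = r2 / n
        for j in range(n):
            r1 = (j + 0.5) * h1
            total += r1 ** a1 * (r2 - r1) ** a2 * h1
    return total * h

alpha = (0.5, 0.25)
assert abs(J_quad(1.0, alpha) - J_exact(1.0, alpha)) < 1e-3
# The constant c of the lemma can be taken as sup_i Gamma(alpha_i + 1):
c = max(math.gamma(a + 1) for a in alpha)
assert J_exact(1.0, alpha) <= c ** 2 / math.gamma(sum(alpha) + 3)
```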

Let us now state a new existence and uniqueness theorem for our equation of interest (2).

Theorem 3.4

Suppose that \(\frac 14 <H<\frac 12\) and that the initial condition \(u_0\) satisfies

$$\displaystyle \begin{aligned} \int_{\mathbb{R}}(1+|\xi|{}^{\frac{1}{2}-H})|\mathcal{F}u_0(\xi)|d\xi < \infty\,. \end{aligned} $$
(17)

Then there exists a unique solution to Eq.(2), that is, there is a unique process u such that the Itô (or Skorohod) integral of the process \(\{p_{t-s}(x-y)u(s,y) {\mathbf {1}}_{[0,t]}(s), s\ge 0, y\in \mathbb {R}\} \) exists for any \((t,x)\in [0,T]\times \mathbb {R}\) and relation (11) holds true.

Remark 3.5

(i)

    The formulation of Theorem 3.4 yields the definition of our solution u for all \((t,x)\in [0,T]\times \mathbb {R}\). This is in contrast with Theorem 3.2 which gives a solution sitting in \(\dot {H}^{\frac 12-H}_0\) for every value of t, and thus defined a.e. in x only.

(ii)

Obviously a constant can be considered as a tempered distribution; in particular, condition (17) is satisfied by constant initial conditions.

Remark 3.6

In the later paper [12], the existence and uniqueness result of Theorem 3.4 is obtained under a more general initial condition. Since the proof of Theorem 3.4 under condition (17) is easier and shorter, we present it here.

Proof of Theorem 3.4

Suppose that \(u=\{u(t,x), \, t\geq 0, x \in \mathbb {R}\}\) is a solution to Eq. (11) in \(\Lambda _H\). Then according to (7), for any fixed (t, x) the random variable u(t, x) admits the following Wiener chaos expansion

$$\displaystyle \begin{aligned} u(t,x)=\sum_{n=0}^{\infty}I_n(f_n(\cdot,t,x))\,, \end{aligned} $$
(18)

where for each (t, x), \(f_n(\cdot ,t,x)\) is a symmetric element in \(\mathfrak H^{\otimes n}\). Hence, thanks to (9) and using an iteration procedure, one can find an explicit formula for the kernels \(f_n\) for n ≥ 1. Indeed, we have:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle f_n(s_1,x_1,\dots,s_n,x_n,t,x)\\ &\displaystyle &\displaystyle \ \ =\frac{1}{n!}p_{t-s_{\sigma(n)}}(x-x_{\sigma(n)})\cdots p_{s_{\sigma(2)}-s_{\sigma(1)}}(x_{\sigma(2)}-x_{\sigma(1)}) p_{s_{\sigma(1)}}u_0(x_{\sigma(1)})\,, \end{array} \end{aligned} $$
(19)

where σ denotes the permutation of {1, 2, …, n} such that \(0 < s_{\sigma (1)} < \cdots < s_{\sigma (n)} < t\) (see, for instance, formula (4.4) in [8] or formula (3.3) in [10]). Then, to show the existence and uniqueness of the solution it suffices to prove that for all (t, x) we have

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty}n!\|f_n(\cdot,t,x)\|{}^2_{\mathfrak H^{\otimes n}}< \infty\,.\end{aligned} $$
(20)

The remainder of the proof is devoted to proving relation (20).

Starting from relation (19), some elementary Fourier computations show that

$$\displaystyle \begin{aligned} \mathcal F f_n(s_1,\xi_1,\dots,s_n,\xi_n,t,x)&= \frac{c_{H}^n}{n!} \int_{\mathbb{R}} \prod_{i=1}^n e^{-\frac{\kappa}{2}(s_{\sigma(i+1)}-s_{\sigma(i)})|\xi_{\sigma(i)}+\cdots + \xi_{\sigma(1)} -\zeta|{}^2} \\ &\quad \times { e^{-ix (\xi_{\sigma(n)}+ \cdots + \xi_{\sigma(1)}-\zeta)}} \mathcal{F}u_0(\zeta) e^{-\frac {\kappa s_{\sigma(1)}|\zeta|{}^2} 2} d\zeta,\end{aligned} $$

where we have set \(s_{\sigma (n+1)} = t\). Hence, owing to formula (5) for the norm in \(\mathfrak H\) (in its Fourier mode version), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle n!\| f_n(\cdot,t,x)\|{}_{\mathfrak H^{\otimes n}}^2 =\frac{c_H^{2n} }{n!} \, \int_{[0,t]^n}\int_{\mathbb{R}^n}\bigg| \int_{\mathbb{R}} \prod_{i=1}^n e^{-\frac {\kappa}{2} (s_{\sigma(i+1)}-s_{\sigma(i)})|\xi_i+\cdots +\xi_1-\zeta |{}^2} { e^{-ix (\xi_{\sigma(n)}+ \cdots + \xi_{\sigma(1)}-\zeta)}} \\ &\displaystyle &\displaystyle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mathcal{F}u_0(\zeta) e^{-\frac {\kappa s_{\sigma(1)}|\zeta|{}^2} 2} d\zeta \bigg|{}^2 \times \prod_{i=1}^n |\xi_i |{}^{1-2H} d\xi ds\,,\vspace{-2pt} \end{array} \end{aligned} $$
(21)

where \(d\xi \) denotes \(d\xi _1\cdots d\xi _n\) and similarly for \(ds\). Then using the change of variables \(\xi _i + \cdots + \xi _1 = \eta _i\) for all i = 1, 2, …, n, and a linearization of the above expression, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle n!\| f_n(\cdot,t,x)\|{}_{\mathfrak H^{\otimes n}}^2 = \frac{c_H^{2n} }{n!}\int_{[0,t]^n}\int_{\mathbb{R}^n} \int_{\mathbb{R}^2}\prod_{i=1}^n e^{-\frac{\kappa}{2}(s_{\sigma(i+1)}-s_{\sigma(i)})(|\eta_{i}-\zeta|{}^2+|\eta_i-\zeta^{\prime}|{}^2)} \mathcal{F}u_0(\zeta) \overline{\mathcal{F}u_0(\zeta^{\prime})} \\ &\displaystyle &\displaystyle \ \ \qquad \qquad \qquad \quad \qquad \qquad \times { e^{ix(\zeta -\zeta')}}e^{-\frac{\kappa s_{\sigma(1)}(|\zeta|{}^2+|\zeta^{\prime}|{}^2)}{2}} \prod_{i=1}^n|\eta_{i}-\eta_{i-1}|{}^{1-2H} d\zeta d\zeta^{\prime} d\eta ds\,, \end{array} \end{aligned} $$

where we have set \(\eta _0 = 0\). Then we use the Cauchy-Schwarz inequality and bound the term \(\exp (-\kappa s_{\sigma (1)}(|\zeta |{ }^2+|\zeta ^{\prime }|{ }^2)/2)\) by 1 to get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \ \ n!\| f_n(\cdot,t,x)\|{}_{\mathfrak H^{\otimes n}}^2 \le \frac{c_H^{2n}}{n!} \int_{\mathbb{R}^2} \left ( \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{- \kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}-\zeta|{}^2}\prod_{i=1}^n|\eta_{i}-\eta_{i-1}|{}^{1-2H}d\eta ds \right)^{\frac{1}{2}} \\ &\displaystyle &\displaystyle \times \left ( \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{- \kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}-\zeta^{\prime}|{}^2}\prod_{i=1}^n|\eta_{i}-\eta_{i-1}|{}^{1-2H}d\eta ds \right)^{\frac{1}{2}} \left|\mathcal{F}u_0(\zeta)\right| \left|\mathcal{F}u_0(\zeta^{\prime})\right| d\zeta d\zeta^{\prime}.\hspace{-6pt} \end{array} \end{aligned} $$

Arranging the integrals again, performing the change of variables \(\eta _i:= \eta _i - \zeta \) and invoking the trivial bound \(|\eta _i - \eta _{i-1}|{ }^{1-2H} \le |\eta _{i-1}|{ }^{1-2H} + |\eta _i|{ }^{1-2H}\), we obtain

$$\displaystyle \begin{aligned} n!\| f_n(\cdot,t,x)\|{}_{\mathfrak H^{\otimes n}}^2 \le \frac{c_H^{2n} }{n!} \Bigg(\int_{\mathbb{R}} L_{n,t}^{\frac{1}{2}}(\zeta) \left |\mathcal{F}u_0(\zeta)\right|d\zeta\Bigg)^2 , \end{aligned} $$
(22)

where \(L_{n,t}(\zeta )\) is given by

$$\displaystyle \begin{aligned} \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|{}^2} (|\zeta|{}^{1-2H}+|\eta_1|{}^{1-2H}) \times \prod_{i=2}^n(|\eta_{i}|{}^{1-2H}+|\eta_{i-1}|{}^{1-2H})d\eta ds. \end{aligned}$$

Let us expand the product \(\prod _{i=2}^{n} (|\eta _{i}|{ }^{1-2H}+|\eta _{i-1}|{ }^{1-2H})\) in the integral defining \(L_{n,t}(\zeta )\). We obtain an expression of the form \(\sum _{\alpha \in D_{n}} \prod _{i=1}^{n} |\eta _{i}|{ }^{\alpha _{i}}\), where \(D_n\) is a set of multi-indices \(\alpha =(\alpha _1,\ldots ,\alpha _n)\). The complete description of \(D_n\) is omitted for the sake of conciseness, and we will just use the following facts: \(\mathrm {Card}(D_n) = 2^{n-1}\) and for any \(\alpha \in D_n\) we have

$$\displaystyle \begin{aligned} |\alpha|\equiv \sum_{i=1}^{n} \alpha_i = (n-1)(1-2H), \quad \text{and}\quad \alpha_{i} \in \{0, 1-2H, 2(1-2H)\}, \quad i=1,\ldots, n. \end{aligned}$$
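The structure of \(D_n\) can be made concrete with a few lines of code (an illustration we add; the function name is ours): each factor \(i\in \{2,\dots ,n\}\) contributes the exponent 1 − 2H either to slot i or to slot i − 1, which immediately gives \(\mathrm {Card}(D_n)=2^{n-1}\) and the two displayed properties.

```python
from itertools import product as cartesian

def exponent_multiindices(n, H):
    # Expand prod_{i=2}^n (|eta_i|^{1-2H} + |eta_{i-1}|^{1-2H}) symbolically:
    # each factor contributes the exponent 1-2H to slot i or to slot i-1
    b = 1 - 2 * H
    D = []
    for choice in cartesian(*[(i, i - 1) for i in range(2, n + 1)]):
        alpha = [0.0] * (n + 1)          # slots 1..n are used
        for slot in choice:
            alpha[slot] += b
        D.append(tuple(alpha[1:]))
    return D

n, H = 5, 0.3
D = exponent_multiindices(n, H)
b = 1 - 2 * H
assert len(D) == 2 ** (n - 1)                         # Card(D_n) = 2^{n-1}
for alpha in D:
    assert abs(sum(alpha) - (n - 1) * b) < 1e-12      # |alpha| = (n-1)(1-2H)
    assert all(min(abs(a), abs(a - b), abs(a - 2 * b)) < 1e-12 for a in alpha)
```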

This simple expansion yields the following bound

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle L_{n,t}(\zeta) \leq|\zeta|{}^{1-2H}\sum_{\alpha \in D_{n}} \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|{}^2} \prod_{i=1}^n |\eta_i|{}^{\alpha_i}d\eta ds\\ &\displaystyle &\displaystyle \qquad \qquad \quad +\sum_{\alpha \in D_{n}} \int_{[0,t]^n}\int_{\mathbb{R}^n}\prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|{}^2} |\eta_1|{}^{1-2H} \prod_{i=1}^n |\eta_i|{}^{\alpha_i}d\eta ds\,. \end{array} \end{aligned} $$

Perform the change of variable ξ i = (κ(s σ(i+1)s σ(i)))1∕2 η i in the above integral, and notice that \(\int _{\mathbb {R}} e^{- \xi ^{2}} |\xi |{ }^{\alpha _i}d\xi \) is bounded by a constant for α i > −1. Changing the integral over [0, t]n into an integral over the simplex, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} L_{n,t}(\zeta)&\displaystyle \leq&\displaystyle C |\zeta|{}^{1-2H} n! c_H^n \sum_{\alpha \in D _{n}} { \int_{T_n(t)}\prod_{i=1}^n (\kappa(s_{i+1}-s_{i}))^{-\frac{1}{2}(1+\alpha_i)}ds}\\ &\displaystyle &\displaystyle +\,C n! c_H^n \sum_{\alpha \in D _{n}} { \int_{T_n(t)}(\kappa(s_{2}-s_{1}))^{-\frac{2-2H+\alpha_1}{2}}\prod_{i=2}^n (\kappa(s_{i+1}-s_{i}))^{-\frac{1}{2}(1+\alpha_i)}ds,} \end{array} \end{aligned} $$

where we use the convention \(s_{n+1}=t\).

We observe that whenever \(\frac {1}{4}< H < \frac {1}{2}\), we have \(\frac 12(1+\alpha _{i})<1\) for all i = 2, …, n, and it is easy to see that \(\alpha _1\) is at most 1 − 2H so that \(\frac {1}{2}(2-2H+\alpha _1)<1\). The condition H > 1∕4 comes from the requirement that when \(\alpha _1 = 1-2H\), we need \(\frac {1}{2}(2-2H+\alpha _1)=\frac {1}{2}(3-4H)<1\). Thanks to Lemma 3.3 and recalling that \(\sum _{i=1}^n\alpha _i = (n-1)(1-2H)\) for all \(\alpha \in D_n\), we thus conclude that

$$\displaystyle \begin{aligned} L_{n,t}(\zeta) \leq\frac{C (1+t^{\frac{1}{2}-H}\kappa^{\frac{1}{2}-H}|\zeta|{}^{1-2H})n! c^nc_H^n t^{nH}\kappa^{nH-n}}{\Gamma(nH+1)}\,.\end{aligned} $$

Plugging this expression into (22), we end up with

$$\displaystyle \begin{aligned} n!\| f_n(\cdot,t,x)\|{}_{\mathfrak H^{\otimes n}}^2 \leq \frac{C c_H^n c^n t^{nH}\kappa^{nH-n}}{\Gamma(nH+1)}\left(\int_{\mathbb{R}}(1+t^{\frac{1}{2}-H}\kappa^{\frac{1}{2}-H}|\zeta|{}^{\frac{1}{2}-H})\left| \mathcal{F}u_0(\zeta)\right| d\zeta\right)^2.\end{aligned} $$
(23)

The proof of (20) is now easily completed thanks to the asymptotic behavior of the Gamma function and our assumption on \(u_0\). This finishes the existence and uniqueness proof. □
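To see why (23) implies (20), note that the bound is of the form \(C x^n/\Gamma (nH+1)\) with a constant x depending on t, κ, H and \(u_0\) but not on n. A quick numerical illustration (all constants are lumped into x; this is an illustration we add, and the helper name is ours):

```python
import math

def log_chaos_term(n, x, H):
    # Logarithm of the bound (23) on n! ||f_n(.,t,x)||^2, with all constants
    # lumped into the geometric factor:  log( x^n / Gamma(nH + 1) )
    return n * math.log(x) - math.lgamma(n * H + 1)

H = 0.3
for x in (0.5, 2.0, 10.0):
    logs = [log_chaos_term(n, x, H) for n in range(1, 200001, 1000)]
    # By Stirling, lgamma(nH+1) grows like nH log(nH) and eventually dominates
    # the linear term n log(x), so the chaos series (20) converges for every x.
    assert logs[-1] < -50
```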

4 Moment Formula and Bounds

In this section we derive a Feynman-Kac formula for the moments of the solution to Eq. (2), together with upper and lower bounds on those moments which allow us to conclude on the intermittency of the solution. We proceed by first obtaining an approximation result for u, and then deriving the upper and lower bounds for the approximation.

4.1 Approximation of the Solution

The approximation of the solution we consider is based on the following approximation of the noise W. For each ε > 0 and \(\varphi \in \mathfrak H \), we define

$$\displaystyle \begin{aligned} W_{\varepsilon}(\varphi) = \int_0^\infty \int_{\mathbb{R}} [\rho_{\varepsilon}*\varphi](s,y)W(ds,dy) =\int_0^\infty \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi(s,x)\rho_{\varepsilon}(x-y)W(ds,dy)dx\,,\end{aligned} $$
(24)

where \(\rho _ \varepsilon (x)=(2\pi \varepsilon )^{-\frac {1}{2}} e^{-{x^2}/{(2\varepsilon )}}\). Notice that the covariance of W ε can be read (either in Fourier or direct coordinates) as:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \mathbf{E}\left[W_{\varepsilon}(\varphi) W_{\varepsilon}(\psi) \right] &\displaystyle =&\displaystyle c_{1,H} \int_0^\infty \int_{\mathbb{R}} \mathcal{F}\varphi(s,\xi)\, \overline{\mathcal{F}\psi(s,\xi)} \, e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} d\xi ds \\ &\displaystyle =&\displaystyle c_{1,H} \int_0^\infty \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi(s,x)f_{\varepsilon}(x-y)\psi(s,y) \, dx dy ds, \notag \end{array} \end{aligned} $$
(25)

for every φ, ψ in \(\mathfrak H\), where f ε is given by \(f_{\varepsilon }(x)= \mathcal {F}^{-1}(e^{-\varepsilon |\xi |{ }^2} |\xi |{ }^{1-2H})\). In other words, W ε is still a white noise in time but its space covariance is now given by f ε. Note that f ε is a real positive-definite function, but is not necessarily positive. The noise W ε induces an approximation to the mild formulation of Eq. (2), namely

$$\displaystyle \begin{aligned} u_{\varepsilon}(t,x)=p_t* u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y)u_{\varepsilon}(s,y) \, W_{\varepsilon}(ds,dy) , \end{aligned} $$
(26)

where the integral is understood (as in Sect. 3.1) in the Itô sense. We will start with a formula for the moments of u ε.
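Before stating the moment formula, it may help to visualize the covariance kernel f ε. The following short numerical sketch (with the arbitrary sample values H = 0.35 and ε = 0.01, and with the normalizing constant ignored) inverts the Fourier transform by a plain Riemann sum and confirms the observation above: f ε is real and positive-definite, yet not a pointwise positive function.

```python
import numpy as np

def f_eps(x, eps=0.01, H=0.35, xi_max=60.0, n=200001):
    # Inverse Fourier transform of e^{-eps*xi^2} |xi|^{1-2H} via a plain
    # Riemann sum; the integrand is even, so a cosine transform over
    # [0, xi_max] suffices (overall constants are ignored).
    xi = np.linspace(0.0, xi_max, n)
    g = np.exp(-eps * xi**2) * xi**(1 - 2 * H) * np.cos(xi * x)
    return np.sum(g) * (xi[1] - xi[0])

# f_eps is positive at the origin but takes negative values away from it:
vals = [f_eps(x) for x in np.linspace(0.0, 2.0, 41)]
assert vals[0] > 0        # strictly positive at the origin
assert min(vals) < 0      # negative somewhere away from the origin
```

The negativity for large |x| reflects the ε → 0 limit, where the kernel behaves like a constant multiple of |x| 2H−2 with a sign dictated by \(\cos (\pi (1+s)/2)<0\) for s = 1 − 2H ∈ (0, 1∕2).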

Proposition 4.1

Let W ε be the noise defined by (24), and assume \(\frac {1}{4}<H<\frac {1}{2}\) . Assume u 0 is such that \(\int _{\mathbb {R}}(1+|\xi |{ }^{\frac {1}{2}-H})|\mathcal {F}u_0(\xi )|d\xi < \infty \) . Then

  1. (i)

    Equation (26) admits a unique solution.

  2. (ii)

    For any integer n ≥ 2 and \((t,x)\in [0,T]\times \mathbb {R}\) , we have

    $$\displaystyle \begin{aligned} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] = { {\mathbf{E}}_B \left[\prod_{j=1}^n u_0(x+B_{\kappa t}^j) \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right],} \end{aligned} $$
    (27)

    with

    $$\displaystyle \begin{aligned} V_{t,x}^{\varepsilon,j,k} = \int_0^t f_{\varepsilon}(B_{ \kappa r}^j-B_{\kappa r}^k)dr = \int_0^t \int_{\mathbb{R}}e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)} \, d\xi dr . \end{aligned} $$
    (28)

    In formula (28), {B j;j = 1, …, n} is a family of n independent standard Brownian motions which are also independent of W and E B denotes the expected value with respect to the randomness in B only.

  3. (iii)

    The quantity \(\mathbf {E} [u^n_{\varepsilon }(t,x)]\) is uniformly bounded in ε. More generally, for any a > 0 we have

    $$\displaystyle \begin{aligned} \sup_{\varepsilon>0} {\mathbf{E}}_B \left[ \exp \left( a \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right] \equiv c_{a}<\infty . \end{aligned}$$

Proof

The proof of item (i) is almost identical to the proof of Theorem 3.4, and is omitted for the sake of conciseness. Moreover, in the proof of (ii) and (iii), we may take u 0(x) ≡ 1 for simplicity.

In order to check item (ii), set

$$\displaystyle \begin{aligned} A_{t,x}^{\varepsilon}(r,y)= \rho_{\varepsilon}(B_{\kappa (t-r)}^x-y), \quad \text{and}\quad \alpha^{\varepsilon}_{t,x}=\|A^{\varepsilon}_{t,x}\|{}^2_{\mathfrak H}. \end{aligned} $$
(29)

Then one can prove, similarly to Proposition 5.2 in [8], that u ε admits a Feynman-Kac representation of the form

$$\displaystyle \begin{aligned} u_{\varepsilon}(t,x)={\mathbf{E}}_B \left[ \exp \left( W ( A_{t,x}^{\varepsilon})-\frac{1}{2}\alpha^{\varepsilon}_{t,x}\right) \right]\,. \end{aligned} $$
(30)

Now fix an integer n ≥ 2. According to (30) we have

$$\displaystyle \begin{aligned} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]={\mathbf{E}}_W \left[\prod_{j=1}^n {\mathbf{E}}_B\left[ \exp \left( W(A^{\varepsilon, B^j}_{t,x})- \frac{1}{2}\alpha_{t,x}^{\varepsilon,B^j}\right) \right] \right]\,, \end{aligned}$$

where for any j = 1, …, n, \(A_{t,x}^{\varepsilon ,B^j}\) and \(\alpha _{t,x}^{\varepsilon ,B^j}\) are evaluations of (29) using the Brownian motion B j. Therefore, since \(W(A^{\varepsilon , B^j}_{t,x})\) is a Gaussian random variable conditionally on B, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]&\displaystyle =&\displaystyle {\mathbf{E}}_B \left[ \exp \left(\frac{1}{2}\|\sum_{j=1}^n A_{t,x}^{\varepsilon,B^j}\|{}^2_{\mathfrak H} -\frac{1}{2}\sum_{j=1}^n \alpha_{t,x}^{\varepsilon,B^j}\right)\right] \notag\\ &\displaystyle =&\displaystyle {\mathbf{E}}_B \left[ \exp \left(\frac{1}{2}\|\sum_{j=1}^n A_{t,x}^{\varepsilon,B^j}\|{}^2_{\mathfrak H} -\frac{1}{2}\sum_{j=1}^n \| A_{t,x}^{\varepsilon,B^j}\|{}^2_{\mathfrak H}\right)\right] \notag\\ &\displaystyle =&\displaystyle {\mathbf{E}}_B \left[ \exp \left(\sum_{1\leq i < j \leq n}\langle A_{t,x}^{\varepsilon,B^i}, A_{t,x}^{\varepsilon,B^j}\rangle _{\mathfrak H}\right)\right]\,. \end{array} \end{aligned} $$

The evaluation of \(\langle A_{t,x}^{\varepsilon ,B^i}, A_{t,x}^{\varepsilon ,B^j}\rangle _{\mathfrak H}\) easily yields our claim (27), the last details being left to the patient reader.

Let us now prove item (iii), namely

$$\displaystyle \begin{aligned} \sup_{\varepsilon > 0} \sup_{t \in [0,T], x \in \mathbb{R}} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] < \infty\,. \end{aligned} $$
(31)

To this aim, notice first from the expression (27) that \(\mathbf {E} \left [ u^n_{\varepsilon }(t,x)\right ]\) does not depend on \(x\in \mathbb {R}\) (since u 0(x) ≡ 1) so that the \(\sup _{t \in [0,T], x \in \mathbb {R}}\) in (31) can be reduced to a \(\sup \) in t only. Next, still resorting to formula (27), it is readily seen that it suffices to show that for two independent Brownian motions B and \(\tilde {B}\), we have

$$\displaystyle \begin{aligned} \sup_{\varepsilon > 0, t\in [0,T]} {\mathbf{E}}_{B} \left[ \exp \left (c \, F_t^{\varepsilon} \right)\right] <\infty, \quad \text{with}\quad F_t^{\varepsilon} \equiv \int_0^t \int_{\mathbb{R}} e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} e^{i \xi (B_{\kappa r}-\tilde{B}_{\kappa r})}d\xi dr, \end{aligned} $$
(32)

for any positive constant c. In order to prove (32), we expand the exponential and write:

$$\displaystyle \begin{aligned} {\mathbf{E}}_{B} \left[ \exp (c \, F_t^{\varepsilon})\right] =\sum_{l=0}^{\infty}\frac{{\mathbf{E}}_{B} \left[ (c \, F_t^{\varepsilon})^l\right]}{l!}\,. \end{aligned} $$
(33)

Next, we have

$$\displaystyle \begin{aligned} {\mathbf{E}}_{B} \left[\left( F_t^{\varepsilon}\right)^l\right]&= {\mathbf{E}}_{B} \left[ \int_{[0,t]^l} \int_{\mathbb{R}^l} \prod_{j=1}^l e^{-i \xi_j (B_{\kappa r_j}-\tilde{B}_{\kappa r_j})-\varepsilon |\xi_j|{}^2} |\xi_j|{}^{1-2H} d\xi dr \right] \\& \leq \int_{[0,t]^l} \int_{\mathbb{R}^l} \prod_{j=1}^{l} e^{-\kappa (t-r_{\sigma(j)})|\xi_j+\dots+\xi_1|{}^2} \, |\xi_j|{}^{1-2H} \, d\xi dr\,, \end{aligned} $$

where σ is the permutation on {1, 2, …, l} such that t ≥ r σ(l) ≥⋯ ≥ r σ(1). We have thus gone back to an expression which is very similar to (21). We now proceed as in the proof of Theorem 3.4 to show that (31) holds true from Eq. (33). □
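To make the uniform bound (31)–(32) concrete, one can estimate \({\mathbf {E}}_{B} [ \exp (c \, F_t^{\varepsilon})]\) by a crude Monte Carlo simulation. The sketch below uses hypothetical parameter values, replaces c 1,H by a generic constant c, ignores normalizing constants, and discretizes \(F_t^{\varepsilon }\) as a Riemann sum of \(f_{\varepsilon }(B_{\kappa r}-\tilde B_{\kappa r})\); it merely checks that the resulting estimate is finite.

```python
import numpy as np

rng = np.random.default_rng(0)
H, eps, kappa, t, c = 0.35, 0.05, 1.0, 1.0, 0.5   # hypothetical sample values
n_paths, n_steps = 64, 32
dt = t / n_steps

# Spectral weights for f_eps(x) = int_R e^{-eps xi^2} |xi|^{1-2H} cos(xi x) dxi
xi = np.linspace(1e-4, 20.0, 1000)
w = np.exp(-eps * xi**2) * xi**(1 - 2 * H)
dxi = xi[1] - xi[0]

def f_eps(d):
    # Pointwise evaluation of f_eps on an array of spatial increments d
    return 2.0 * (np.cos(np.multiply.outer(d, xi)) * w).sum(axis=-1) * dxi

# Two independent Brownian motions, time-changed by kappa
dB = rng.normal(scale=np.sqrt(kappa * dt), size=(2, n_paths, n_steps))
B = np.cumsum(dB, axis=-1)

F = f_eps(B[0] - B[1]).sum(axis=-1) * dt    # Riemann sum for F_t^eps
mc = np.exp(c * F).mean()                   # Monte Carlo estimate of E_B[exp(c F)]
assert np.isfinite(mc) and mc > 0.0
```

Since \(f_{\varepsilon }\) is bounded by \(f_{\varepsilon }(0)\) for each fixed ε > 0, the exponent stays bounded along every path, in line with item (iii).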

Starting from Proposition 4.1, let us take limits in order to get the moment formula for the solution u to Eq. (2).

Theorem 4.2

Assume \(\frac {1}{4}<H<\frac {1}{2}\) and consider n ≥ 1, j, k ∈{1, …, n} with jk. For \((t,x)\in [0,T]\times \mathbb {R}\) , denote by \(V_{t,x}^{j,k}\) the limit in L 2( Ω) as ε → 0 of

$$\displaystyle \begin{aligned} V_{t,x}^{\varepsilon,j,k} = \int_0^t \int_{\mathbb{R}}e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} e^{i\xi (B_{ \kappa r}^j-B_{\kappa r}^k)}d\xi dr. \end{aligned}$$

Then \(\mathbf {E} \left [ u^n_{\varepsilon }(t,x)\right ]\) converges as ε → 0 to E[u n(t, x)], which is given by

$$\displaystyle \begin{aligned} \mathbf{E}[u^n(t,x)] ={ {\mathbf{E}}_{B}\left[ \prod_{j=1}^n u_0(B^j_{\kappa t}+x)\exp \left( c_{1,H} \sum_{1\leq j \neq k \leq n} V_{t,x}^{j,k} \right)\right]\, .} \end{aligned} $$
(34)

We note that in a recent paper [12], the moment formula for a general covariance function is obtained. We nevertheless present the proof here for the sake of completeness.

Proof

As in Proposition 4.1, we will prove the theorem for u 0 ≡ 1 for simplicity. For any p ≥ 1 and 1 ≤ j < k ≤ n, we can easily prove that \(V_{t,x}^{\varepsilon ,j,k}\) converges in L p( Ω) to \(V_{t,x}^{j,k}\) defined by

$$\displaystyle \begin{aligned} V_{t,x}^{j,k} = \int_0^t \int_{\mathbb{R}} |\xi|{}^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}d\xi dr. \end{aligned} $$
(35)

Indeed, this is due to the fact that \(e^{-\varepsilon |\xi |{ }^2} |\xi |{ }^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}\) converges to \(|\xi |{ }^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}\) in the dξ ⊗ dr ⊗ d P sense, plus standard uniform integrability arguments. Now, taking into account relation (27), Proposition 4.1 (iii), the fact that \(V_{t,x}^{\varepsilon ,j,k}\) converges to \(V_{t,x}^{j,k}\) in L 2( Ω) as ε → 0, and the inequality |e x − e y|≤ (e x + e y)|x − y|, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle {\mathbf{E}}_B\left|\exp \left(c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k} \right)-\exp \left(c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k} \right)\right|\\ &\displaystyle \leq&\displaystyle 2\sup_{\varepsilon >0}\left({\mathbf{E}}_B\left|\exp \left(2c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k} \right)+\exp \left(2c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k} \right)\right|{}^2\right)^{\frac{1}{2}} \\ &\displaystyle &\displaystyle \quad \times \left({\mathbf{E}}_B \left|c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k}-c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k}\right|{}^2\right)^{\frac{1}{2}}\,, \end{array} \end{aligned} $$

which implies

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \lim_{\varepsilon\to 0} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] &\displaystyle =&\displaystyle \lim_{\varepsilon\to 0} {\mathbf{E}}_B \left[ \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right] \notag \\ &\displaystyle =&\displaystyle {\mathbf{E}}_B \left[ \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{j,k}\right)\right]. \end{array} \end{aligned} $$
(36)

To end the proof, let us now identify the right hand side of (36) with E[u n(t, x)], where u is the solution to Eq. (2). For ε, ε′ > 0 we write

$$\displaystyle \begin{aligned} \mathbf{E} \left[ u_{\varepsilon}(t,x) \, u_{\varepsilon'}(t,x) \right]= {\mathbf{E}}_B \left[ \exp \left(\ \langle A^{\varepsilon,B^1}_{t,x} , A^{\varepsilon', B^2}_{t,x} \rangle_{\mathfrak H}\right)\right]\, , \end{aligned}$$

where we recall that \(A^{\varepsilon ,B}_{t,x}\) is defined by relation (29). As in the proof of (36), we can show that \(\mathbf {E} \left [ u_{\varepsilon }(t,x) \, u_{\varepsilon '}(t,x) \right ]\) converges as ε, ε′ tend to zero. Hence u ε(t, x) converges in L 2( Ω) to some limit v(t, x). For any positive integer k notice the identity

$$\displaystyle \begin{aligned} \mathbf{E} |u_{\varepsilon}(t,x) - u_{\varepsilon'}(t,x) |{}^{2k} =\sum_{j=0}^{2k} \frac{(-1)^j(2k)!}{j!(2k-j)!} \mathbf{E} \left[ u_{\varepsilon}(t,x)^{2k-j} u_{\varepsilon'}(t,x) ^j\right]\,.{} \end{aligned} $$
(37)

We can find the limit of \(\mathbf {E} \left [ u_{\varepsilon }(t,x)^{2k-j} u_{\varepsilon '}(t,x) ^j\right ]\) and then show that (37) converges to 0 as ε and ε′ tend to 0. It follows that u ε(t, x) converges to v(t, x) in L p( Ω) for all p ≥ 1. Moreover, E[v n(t, x)] is equal to the right hand side of (36). Finally, for any smooth random variable F which is a linear combination of W(1 [a,b](s)φ(x)), where φ is a \(C^{\infty }\) function with compact support, using the duality relation (6), we have

$$\displaystyle \begin{aligned} { \mathbf{E} \left[ F u_{\varepsilon}(t,x)\right] =\mathbf{E} \left[ F\right]+\mathbf{E} \left[ \langle Y^{\varepsilon} ,DF\rangle _{\mathfrak H}\right],} \end{aligned} $$
(38)

where

$$\displaystyle \begin{aligned} Y^{\varepsilon}({s,z})= \left( \int_{\mathbb{R}} p_{t-s}(x-y) \, \rho_{\varepsilon}(y-z) u_{\varepsilon} (s,y)\, dy \right) \mathbf{ 1}_{[0,t]}(s) . \end{aligned}$$

Letting ε tend to zero in Eq. (38), after some easy calculation we get

$$\displaystyle \begin{aligned} \mathbf{E} [F v(t,x)]= \mathbf{E}[ F] +\mathbf{E} \left[ \langle DF, v p_{t-\cdot}(x-\cdot)\rangle_{\mathfrak H}\right]\,. \end{aligned}$$

This equation is valid for any \(F \in \mathbb {D}^{1,2}\) by approximation. So the above equation implies that the process v is the solution of Eq. (2), and by the uniqueness of the solution we have v = u. □

4.2 Intermittency Estimates

In this subsection we prove some upper and lower bounds on the moments of the solution which entail the intermittency phenomenon.

Theorem 4.3

Let \(\frac {1}{4}<H<\frac {1}{2}\) , and consider the solution u to Eq.(2). For simplicity we assume that the initial condition is u 0(x) ≡ 1. Let n ≥ 2 be an integer, \(x\in \mathbb {R}\) and t ≥ 0. Then there exist some positive constants c 1, c 2, c 3 independent of n, t and κ with 0 < c 1 < c 2 < ∞ satisfying

$$\displaystyle \begin{aligned} \exp (c_{1} n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}}t) \leq \mathbf{E}\left[ u^n(t,x) \right] \leq c_{3} \exp \big(c_{2} n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}} t\big)\,. \end{aligned} $$
(39)

Remark 4.4

It is interesting to point out from the above inequalities that when κ → 0, the moments of u go to infinity. This is because the equation \(\frac {\partial u}{\partial t}= u\,\dot W\) has no classical solution, due to the singularity of the noise \(\dot W\) in the spatial variable x.

Proof of Theorem 4.3

We divide this proof into upper and lower bound estimates.

Step 1: Upper bound. Recall from Eq. (18) that for \((t,x)\in \mathbb {R}_{+}\times \mathbb {R}\), u(t, x) can be written as: \(u(t,x)=\sum _{m=0}^{\infty }I_m(f_m(\cdot ,t,x))\). Moreover, as a consequence of the hypercontractivity property on a fixed chaos we have (see [16, p. 62])

$$\displaystyle \begin{aligned} \|I_m(f_m(\cdot,t,x))\|{}_{L^{n}(\Omega)}\leq (n-1)^{\frac{m}{2}}\|I_m(f_m(\cdot,t,x))\|{}_{L^{2}(\Omega)} \,, \end{aligned}$$

and substituting the above right hand side by the bound (23), we end up with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|I_m(f_m(\cdot,t,x))\|{}_{L^{n}(\Omega)} \leq n^{\frac{m}{2}}\|I_m(f_m(\cdot,t,x))\|{}_{L^{2}(\Omega)} \leq \frac{c^{\frac{m}{2}}n^{\frac{m}{2}}t^{\frac{mH}{2}}\kappa^{\frac{Hm-m}{2}}} { \big(\Gamma(mH+1)\big)^{\frac{1}{2}} } \,. \end{array} \end{aligned} $$

Therefore, owing to the asymptotic bound on the Mittag-Leffler function, \(\sum _{n\ge 0}x^{n}/\Gamma (\alpha n+1) \le c_1 \exp (c_2 x^{1/\alpha })\) (see [14], Formula (1.8.10)), we get:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|u(t,x)\|{}_{L^{n}(\Omega)} \leq \sum_{m=0}^{\infty} \|I_m(f_m(\cdot,t,x))\|{}_{L^{n}(\Omega)} \leq \sum_{m=0}^{\infty}\frac{c^{\frac{m}{2}}n^{\frac{m}{2}}t^{\frac{mH}{2}}\kappa^{\frac{Hm-m}{2}}}{\big(\Gamma(m H+1)\big)^{\frac{1}{2}}}\leq c_{1}\exp {\big(c_{2} t n^{\frac{1}{H}} \kappa^{\frac{H-1}{H}}\big)}\,, \end{array} \end{aligned} $$

from which the upper bound in our theorem is easily deduced.
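The Mittag-Leffler bound invoked above can be sanity-checked numerically. In the sketch below, α plays the role of H, and the constants c 1 = 4, c 2 = 1 are illustrative choices (not claimed to be sharp); the series is truncated at 400 terms, which is far into its decaying regime for the sample values of x used.

```python
import math

def mittag_leffler(x, alpha, terms=400):
    # Truncated series E_alpha(x) = sum_n x^n / Gamma(alpha*n + 1)
    return sum(x**n / math.gamma(alpha * n + 1) for n in range(terms))

alpha = 0.35     # plays the role of H
for x in (0.5, 1.0, 2.0, 3.0):
    # Check E_alpha(x) <= c1 * exp(c2 * x^{1/alpha}) with c1 = 4, c2 = 1
    assert mittag_leffler(x, alpha) <= 4.0 * math.exp(x**(1 / alpha))
```

This matches the known asymptotics \(E_{\alpha }(x)\sim \frac {1}{\alpha }\exp (x^{1/\alpha })\) as x →∞, so the ratio to \(\exp (x^{1/\alpha })\) stabilizes near 1∕α ≈ 2.86, safely below the illustrative constant 4.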

Step 2: Lower bound for u ε. For the lower bound, we start from the moment formula (27) for the approximate solution, and write

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \ \ \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] \\ &\displaystyle &\displaystyle = {\mathbf{E}}_{B} \left[ \exp \left(c_{1,H}\left[ \int_0^t \int_{\mathbb{R}} e^{-\varepsilon |\xi|{}^2} \left| \sum_{j=1}^n e^{-i B_{\kappa r}^j \xi}\right|{}^2 |\xi|{}^{1-2H} d\xi dr -nt \int_{\mathbb{R}} e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} d\xi\right] \right)\right].\hspace{-6pt} \end{array} \end{aligned} $$

In order to estimate the expression above, notice first that the obvious change of variable λ = ε 1∕2 ξ yields \(\int _{\mathbb {R}} e^{-\varepsilon |\xi |{ }^2} |\xi |{ }^{1-2H} d\xi =C \varepsilon ^{-(1-H)}\) for some constant C. Now for an additional arbitrary parameter η > 0, consider the set

$$\displaystyle \begin{aligned} A_\eta=\left\{\omega; \, \sup_{1\leq j\leq n}\sup_{0\leq r \leq t}|B_{\kappa r}^{j}(\omega)|\leq \frac{\pi}{3\eta}\right\}. \end{aligned}$$

Observe that classical small ball inequalities for Brownian motion (see (1.3) in [15]) yield \(\mathbf {P}(A_{\eta })\geq c_{1} e^{-c_{2} \eta ^2 n \kappa t}\) for η large enough. In addition, if we assume that A η is realized and |ξ|≤ η, then \(|B_{\kappa r}^{j}\xi |\leq \pi /3\) for every j, so that \(\cos (B_{\kappa r}^{j}\xi )\geq \frac {1}{2}\) and the following deterministic bound holds true: \(| \sum _{j=1}^n e^{-i B_{\kappa r}^j \xi }| \ge \frac {n}{2}\). Gathering those considerations, we thus get

$$\displaystyle \begin{aligned} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] &\geq \exp \left( c_1 n^2 \int_0^t \int_0^{\eta} e^{-\varepsilon |\xi|{}^2} |\xi|{}^{1-2H} d\xi dr - c_2 nt \varepsilon^{H-1} \right) \mathbf{P}\left( A_\eta \right) \\ &\geq C \exp \left( c_1 n^2 t \varepsilon^{-(1-H)} \int_0^{\varepsilon^{1/2}\eta} e^{- |\xi|{}^2} |\xi|{}^{1-2H} d\xi - c_2 nt \varepsilon^{-(1-H)} - c_{3} n \kappa t \eta^{2} \right). \end{aligned} $$

We now choose the parameter η such that κη 2 = ε −(1−H), which means in particular that η →∞ as ε → 0. It is then easily seen that \(\int _0^{\varepsilon ^{1/2}\eta } e^{- |\xi |{ }^2} |\xi |{ }^{1-2H} d\xi \) is of order ε H(1−H) in this regime, and some elementary algebraic manipulations entail

$$\displaystyle \begin{aligned} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] \geq C \exp \left( c_1 n^2 t \kappa^{H-1}\varepsilon^{-(1-H)^2} -c_2 nt\varepsilon^{-(1-H)}\right) \geq C \exp \left(c_{3} t \kappa^{1-\frac{1}{H}}n^{1+\frac{1}{H}}\right), \end{aligned}$$

where the last inequality is obtained by choosing \(\varepsilon ^{-(1-H)}=c \, \kappa ^{\frac {H-1}{H}}n^{\frac {1}{H}}\) in order to optimize the second expression. We have thus reached the desired lower bound in (39) for the approximation u ε in the regime \(\varepsilon =c \, \kappa ^{\frac {1}{H}}n^{-\frac {1}{H(1-H)}}\).
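The scaling identity \(\int _{\mathbb {R}} e^{-\varepsilon |\xi |{ }^2} |\xi |{ }^{1-2H} d\xi =C \varepsilon ^{-(1-H)}\) used in Step 2 can be verified numerically; the change of variable u = εξ 2 in fact gives C =  Γ(1 − H). A quick Riemann-sum check, with the arbitrary sample value H = 0.3:

```python
import math

H = 0.3   # arbitrary sample value in (1/4, 1/2)

def I(eps, xi_max=60.0, n=600_000):
    # Riemann sum for int_R e^{-eps xi^2} |xi|^{1-2H} dxi (even integrand)
    h = xi_max / n
    return 2.0 * sum(math.exp(-eps * (k * h)**2) * (k * h)**(1 - 2 * H)
                     for k in range(1, n + 1)) * h

for eps in (0.02, 0.08):
    exact = eps**(-(1 - H)) * math.gamma(1 - H)   # C = Gamma(1 - H)
    assert abs(I(eps) / exact - 1.0) < 1e-3
```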

Step 3: Lower bound for u. To complete the proof, we need to show that for all sufficiently small ε, \(\mathbf {E} \left [ u^n_{\varepsilon }(t,x)\right ]\leq \mathbf {E}[u^n(t,x)]\). We thus start from Eq. (27) and use the series expansion of the exponential function as in (33). We get

$$\displaystyle \begin{aligned} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] = \sum_{m=0}^{\infty} \frac{(2c_{1,H})^{m}}{m!}\, {\mathbf{E}}_B \left[ \left( \sum_{1\leq j < k \leq n} V_{t,x}^{\varepsilon,j,k}\right)^{m} \right], \end{aligned} $$
(40)

where we recall that \(V_{t,x}^{\varepsilon ,j,k}\) is defined by (28). Furthermore, expanding the mth power above, we have

$$\displaystyle \begin{aligned} {\mathbf{E}}_B \left[ \left( \sum_{1\leq j < k \leq n} V_{t,x}^{\varepsilon,j,k}\right)^{m} \right] = \sum_{\alpha \in K_{n,m}} \int_{[0,t]^m} \int_{\mathbb{R}^m} \prod_{l=1}^{m} e^{-\varepsilon |\xi_l|{}^2} |\xi_l|{}^{1-2H} \, {\mathbf{E}}_B\left[ e^{iB^{\alpha}(\xi)} \right] d\xi \, dr, \end{aligned}$$
where K n,m is a set of multi-indices defined by

$$\displaystyle \begin{aligned} K_{n,m}= \left\{ {\alpha}=(j_{1},\ldots,j_{m},k_{1},\ldots,k_{m}) \in \{1,\ldots,n\}^{2m} ; \, j_{l}<k_{l} \text{ for all } l=1,\ldots,m \right\}, \end{aligned}$$

and B α(ξ) is a shorthand for the linear combination \(\sum _{l=1}^m \xi _{l}(B_{\kappa r_{l}}^{j_{l}}-B_{\kappa r_{l}}^{k_{l}})\). The important point here is that \({\mathbf {E}}_{B}\, e^{iB^{\alpha }(\xi )}\) is positive for any α ∈ K n,m: indeed, B α(ξ) is a centered Gaussian random variable, so that \({\mathbf {E}}_{B}\, e^{iB^{\alpha }(\xi )} = e^{-\operatorname{Var}(B^{\alpha}(\xi))/2}>0\). Bounding \(e^{-\varepsilon |\xi _l|{ }^2}\) by 1, we thus get the following inequality, valid for all m ≥ 1

$$\displaystyle \begin{aligned} {\mathbf{E}}_B \left[ \left( \sum_{1\leq j < k \leq n} V_{t,x}^{\varepsilon,j,k}\right)^{m} \right] \leq {\mathbf{E}}_B \left[ \left( \sum_{1\leq j < k \leq n} V_{t,x}^{j,k}\right)^{m} \right], \end{aligned}$$
where \(V_{t,x}^{j,k}\) is defined by (35). Plugging this inequality back into (40) and recalling expression (34) for E[u n(t, x)], we easily deduce that \(\mathbf {E} [u^n_{\varepsilon }(t,x)] \le \mathbf {E}[u^n(t,x)]\), which finishes the proof. □