1 Introduction

The fractional derivative of a function was introduced by Niels Henrik Abel in 1823 [1], in connection with his solution of the tautochrone (isochrone) problem in mechanics.

The Mittag-Leffler function \(E_{\alpha }(z)\) was introduced by Gösta Magnus Mittag-Leffler in 1903 [20]. It was later discovered that this function is connected to the fractional derivative introduced by Abel, and it appears in solutions of fractional-order problems.

Fractional derivatives turn out to be useful in many situations, e.g. in the study of waves, including ocean waves around an oil platform in the North Sea, and ultrasound in the body. In particular, the fractional heat equation may be used to describe anomalous heat diffusion, and it is related to power-law attenuation. These and many other applications of fractional derivatives can be found, for example, in the book by S. Holm [12] and in numerous other publications.

In this paper we study the following fractional stochastic heat equation

$$\begin{aligned} \frac{\partial ^{\alpha }}{\partial t^{\alpha }}Y(t,x)=\lambda \varDelta Y(t,x)+\sigma W(t,x);\; (t,x)\in (0,\infty )\times \mathbb {R}^{d}, \end{aligned}$$
(1.1)

where \(d\in \mathbb {N}=\{1,2,...\}\), \(\frac{\partial ^{\alpha }}{\partial t^{\alpha }}\) is the Caputo derivative of order \(\alpha \in (0,2)\), and \(\lambda >0\) and \(\sigma \in \mathbb {R}\) are given constants,

$$\begin{aligned} \varDelta Y =\sum _{j=1}^{d}\frac{\partial ^{2}Y}{\partial x_{j}^{2}}(t,x) \end{aligned}$$
(1.2)

is the Laplacian operator and

$$\begin{aligned} W(t,x)=W(t,x,\omega )=\frac{\partial }{\partial t}\frac{\partial ^{d}B(t,x)}{\partial x_{1}...\partial x_{d}} \end{aligned}$$
(1.3)

is the time-space white noise, where

$$\begin{aligned} B(t,x)=B(t,x,\omega ); t\ge 0, x \in \mathbb {R}^d, \omega \in \varOmega \end{aligned}$$

is the time-space Brownian sheet with probability law \(\mathbb {P}\). The boundary conditions are:

$$\begin{aligned} Y(0,x)&=\delta _0(x)\text { (the point mass at } 0), \end{aligned}$$
(1.4)
$$\begin{aligned} \lim _{x \rightarrow \pm \infty }Y(t,x)&=0. \end{aligned}$$
(1.5)

In the classical case, when \(\alpha =1\), this equation models the normal diffusion of heat in a random or noisy medium, the noise being represented by the time-space white noise \(W(t,x)\).

- When \(\alpha >1\) the equation models superdiffusion or enhanced diffusion, where the particles spread faster than in regular diffusion. This occurs for example in some biological systems.

- When \(\alpha <1\) the equation models subdiffusion, in which travel times of the particles are longer than in the standard case. Such a situation may occur in transport systems.

For more information about super- and subdiffusions, see Cherstvy et al. [9].

We consider the equation (1.1) in the sense of distributions, and in Theorem 2 we find an explicit expression for the \(\mathcal {S}'\)-valued solution \(Y(t,x)\), where \(\mathcal {S}'\) is the space of tempered distributions.

Following the terminology of Y. Hu [11], we say that the solution is mild if \(Y(t,x) \in L^2(\mathbb {P})\) for all \(t,x\). It is well known that in the classical case \(\alpha = 1\), the solution is mild if and only if the space dimension \(d=1\); see e.g. Y. Hu [11].

We show that if \(\alpha \in (1,2)\) the solution is mild if \(d=1\) or \(d=2\).

We then show that if \(\alpha < 1\), the solution is not mild for any space dimension d. This phenomenon is in line with results on regularization by noise; see Butkovsky et al. [6].

There are many papers dealing with various forms of stochastic fractional differential equations. Some papers which are related to ours are:

  • In the paper by Kochubei et al. [15] the fractional heat equation corresponding to random time change in Brownian motion is studied.

  • The papers by Bock et al. [4, 5] consider stochastic equations driven by grey Brownian motion.

  • The paper by Liu et al. [17] proves existence and uniqueness of solutions of general time-fractional linear evolution equations in the Gelfand triple setting.

  • The paper by Yalçin et al. [25] studies the time-regularity of the paths of solutions to stochastic partial differential equations driven by additive infinite-dimensional fractional Brownian noise.

  • The paper by Binh et al. [3] studies the spatial and temporal Hölder continuity of the mild random-field solution of a space-time fractional stochastic heat equation driven by colored noise.

  • The paper closest to ours is Chen et al. [8], where a comprehensive discussion is given of general fractional stochastic heat equations with multiplicative noise and with fractional derivatives in both time and space. In that paper the authors prove existence, uniqueness and regularity results for the solution, and they give sufficient conditions on the coefficients and the space dimension d for the solution to be a random field.

Our paper, however, deals with additive noise and a more special class of fractional heat equations. As in [8], we find explicit solution formulae in the sense of distributions and give conditions under which the solution is a random field in \(L^2(\mathbb {P})\).

We refer to Holm [12], Ibe [13], Kilbas et al. [14], Machado et al. [18, 19] and Samko et al. [22] for more information about fractional calculus operators and their applications.

2 Preliminaries

2.1 The space of tempered distributions

For the convenience of the reader we recall some of the basic properties of the Schwartz space \(\mathcal {S}\) of rapidly decreasing smooth functions and its dual, the space \(\mathcal {S}'\) of tempered distributions.

Let n be a given natural number. Let \(\mathcal {S}=\mathcal {S}(\mathbb {R}^n)\) be the space of rapidly decreasing smooth real functions f on \(\mathbb {R}^n\) equipped with the family of seminorms:

$$\begin{aligned} \Vert f \Vert _{k,\alpha }:= \sup _{y \in \mathbb {R}^n}\big \{ (1+|y|^k) \vert \partial ^\alpha f(y)\vert \big \}< \infty , \end{aligned}$$

where \(k = 0,1,...\), \(\alpha =(\alpha _1,...,\alpha _n)\) is a multi-index with \( \alpha _j= 0,1,...\) \((j=1,...,n)\) and

$$\begin{aligned} \partial ^\alpha f:= \frac{\partial ^{|\alpha |}}{\partial y_1^{\alpha _1}\cdots \partial y_n^{\alpha _n}}f \end{aligned}$$

for \(|\alpha |=\alpha _1+... +\alpha _n\).

Then \(\mathcal {S}=\mathcal {S}(\mathbb {R}^n)\) is a Fréchet space.

Let \(\mathcal {S}^{\prime }=\mathcal {S}^{\prime }(\mathbb {R}^{n})\) be its dual, called the space of tempered distributions. Let \(\mathcal {B}\) denote the family of all Borel subsets of \(\mathcal {S}^{\prime }(\mathbb {R}^{n})\) equipped with the weak* topology. If \(\varPhi \in \mathcal {S}^{\prime }\) and \(f \in \mathcal { S}\) we let

$$\begin{aligned} \varPhi (f) \text { or } \langle \varPhi ,f \rangle \end{aligned}$$
(2.1)

denote the action of \(\varPhi \) on f.

Example 1

  • (Evaluations) For \(y \in \mathbb {R}\) define the function \(\delta _y\) on \(\mathcal {S}(\mathbb {R})\) by \(\langle \delta _y,\phi \rangle =\phi (y)\). Then \(\delta _y\) is a tempered distribution.

  • (Derivatives) Fix \(y \in \mathbb {R}\) and consider the map D defined for \(\phi \in \mathcal {S}(\mathbb {R})\) by \(D[\phi ]=\phi ^{\prime }(y)\). Then D is a tempered distribution.

  • (Distributional derivative) Let T be a tempered distribution, i.e. \(T \in \mathcal {S}^{'}(\mathbb {R}) \). We define the distributional derivative \(T^{'}\) of T by

    $$\begin{aligned} T^{'}[\phi ]=-T[\phi ^{'}]; \quad \phi \in \mathcal {S}. \end{aligned}$$

    Then \(T^{'}\) is again a tempered distribution.

In the following we will apply this to the case when \(n=1+d\) and \(y=(t,x) \in \mathbb {R}\times \mathbb {R}^d\).

2.2 The Mittag-Leffler functions

Definition 1

The Mittag-Leffler function of two parameters \(\alpha ,\; \beta \) is denoted by \(E_{\alpha ,\beta }(z)\) and defined by:

$$\begin{aligned} E_{\alpha ,\beta }(z)=\sum _{k=0}^{\infty }\frac{z^{k}}{\varGamma (\alpha k+\beta )}, \end{aligned}$$
(2.2)

where \(z,\; \alpha ,\; \beta \in \mathbb {C},\; Re(\alpha )>0\; \text {and}\; Re(\beta )>0,\) and \(\varGamma \) is the Gamma function.

For \(\beta =1\) we obtain the Mittag-Leffler function of one parameter \(\alpha \) denoted by \(E_{\alpha }(z)\) and defined as:

$$\begin{aligned} E_{\alpha }(z)=\sum _{k=0}^{\infty }\frac{z^{k}}{\varGamma (\alpha k+1)}, \end{aligned}$$
(2.3)

where \(z,\; \alpha \in \mathbb {C},\; Re(\alpha )>0.\)

Remark 1

Note that \(E_{\alpha }(z)= E_{\alpha ,1}(z)\) and that

$$\begin{aligned} E_{1}(z)=\sum _{k=0}^{\infty }\frac{z^{k}}{\varGamma ( k+1)}=\sum _{k=0}^{\infty }\frac{z^{k}}{k!} =e^{z}. \end{aligned}$$
(2.4)
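To make these definitions concrete, the series (2.2)–(2.3) can be evaluated numerically. The following Python sketch (our illustration, not part of the development) builds each term from the log-gamma function to avoid overflow, and checks (2.4) as well as the classical identities \(E_{2}(-z^{2})=\cos z\) and \(E_{1/2}(-z)=e^{z^{2}}{\text {erfc}}(z)\):

```python
import math

def mittag_leffler(alpha, beta, z, kmax=400):
    """Truncated series (2.2) for real z: sum_k z^k / Gamma(alpha*k + beta).

    Each term is assembled via lgamma, so large k cannot overflow; the loop
    stops once the terms are negligible.
    """
    if z == 0.0:
        return 1.0 / math.gamma(beta)
    total, sign = 0.0, 1.0          # sign tracks sign(z)^k
    for k in range(kmax):
        term = math.exp(k * math.log(abs(z)) - math.lgamma(alpha * k + beta))
        total += sign * term
        if z < 0.0:
            sign = -sign
        if k > 0 and term < 1e-18 * (1.0 + abs(total)):
            break
    return total

def E(alpha, z):
    """One-parameter Mittag-Leffler function (2.3)."""
    return mittag_leffler(alpha, 1.0, z)
```

For instance, `E(1.0, 1.0)` returns \(e\) to machine precision, in line with Remark 1. (For large negative arguments the alternating series suffers cancellation, and dedicated algorithms are then preferable.)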

2.3 The (Abel-)Caputo fractional derivative

In this section we present the definitions and some properties of the Caputo derivatives.

Definition 2

The (Abel-)Caputo fractional derivative of order \(\alpha > 0\) of a function f such that \(f(x)=0\) when \(x<0\) is denoted by \(D^{\alpha } f (x)\) or \(\frac{d^{\alpha }}{dx^{\alpha }} f(x)\) and defined by

$$\begin{aligned} D^{\alpha }f(x):&= {\left\{ \begin{array}{ll} \frac{1}{\varGamma (n-\alpha )}\int _0^x \frac{f^{(n)}(u)du}{(x-u)^{\alpha +1 -n}}; \quad n-1< \alpha < n\\ \frac{d^n}{dx^n}f(x); \quad \alpha =n. \end{array}\right. } \end{aligned}$$
(2.5)

Here n is the smallest integer greater than or equal to \(\alpha \).

If f is not smooth these derivatives are interpreted in the sense of distributions.
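To illustrate Definition 2 numerically (our own sketch, not from the paper), take \(0<\alpha <1\), so that \(n=1\). For \(f(x)=x\) we have \(f'\equiv 1\), and (2.5) gives \(D^{\alpha }f(x)=\frac{1}{\varGamma (1-\alpha )}\int _0^x (x-u)^{-\alpha }\,du=\frac{x^{1-\alpha }}{\varGamma (2-\alpha )}\), which a midpoint-rule quadrature reproduces:

```python
import math

def caputo(df, x, alpha, n_steps=200_000):
    """Caputo derivative (2.5) for 0 < alpha < 1 (so n = 1), given f'.

    Midpoint rule for int_0^x f'(u) (x-u)^(-alpha) du; the midpoint nodes
    stay away from the integrable singularity at u = x.
    """
    h = x / n_steps
    acc = 0.0
    for j in range(n_steps):
        u = (j + 0.5) * h
        acc += df(u) * (x - u) ** (-alpha)
    return h * acc / math.gamma(1.0 - alpha)

# f(x) = x, so f' = 1; exact value: x^(1-alpha) / Gamma(2-alpha)
alpha, x = 0.5, 1.0
numeric = caputo(lambda u: 1.0, x, alpha)
exact = x ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

Convergence near \(u=x\) is slow (the quadrature error behaves like \(h^{1-\alpha }\)), hence the fairly fine grid.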

2.3.1 Laplace transform of Caputo derivatives

Some of the properties of the Laplace transform that we will need are:

$$\begin{aligned}&L[ \frac{\partial ^{\alpha }}{\partial t^{\alpha }}f(t)](s)=s^{\alpha }(L f)(s)-s^{\alpha -1}f(0). \end{aligned}$$
(2.6)
$$\begin{aligned}&\quad L[E_{\alpha }(bx^{\alpha })](s) = \frac{s^{\alpha -1}}{s^{\alpha }-b}. \end{aligned}$$
(2.7)
$$\begin{aligned}&L[x^{\alpha -1}E_{\alpha ,\alpha }(-b x^{\alpha })](s)=\frac{1}{s^{\alpha }+b}. \end{aligned}$$
(2.8)
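Identities (2.7) and (2.8) can be checked numerically; the following sketch is our own illustration, truncating the Laplace integrals where \(e^{-st}\) is negligible and substituting \(t=u^{2}\) to smooth the fractional powers of t. We take \(\alpha =1/2\), \(s=2\), with \(b=-1\) in (2.7) and \(b=1\) in (2.8):

```python
import math

def ml(alpha, beta, z, kmax=400):
    """Truncated Mittag-Leffler series (2.2) for real z, via lgamma."""
    if z == 0.0:
        return 1.0 / math.gamma(beta)
    total, sign = 0.0, 1.0
    for k in range(kmax):
        term = math.exp(k * math.log(abs(z)) - math.lgamma(alpha * k + beta))
        total += sign * term
        if z < 0.0:
            sign = -sign
        if k > 0 and term < 1e-18 * (1.0 + abs(total)):
            break
    return total

def midpoint(f, a, b, n=4_000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (j + 0.5) * h) for j in range(n))

alpha, s = 0.5, 2.0

# (2.7) with b = -1; substituting t = u^2 gives dt = 2u du:
lhs1 = midpoint(lambda u: 2.0 * u * math.exp(-s * u * u) * ml(alpha, 1.0, -u),
                0.0, 3.5)
rhs1 = s ** (alpha - 1.0) / (s ** alpha + 1.0)

# (2.8) with b = 1; the same substitution removes the t^(alpha-1) singularity:
lhs2 = midpoint(lambda u: 2.0 * math.exp(-s * u * u) * ml(alpha, alpha, -u),
                0.0, 3.5)
rhs2 = 1.0 / (s ** alpha + 1.0)
```

Both quadratures agree with the closed forms to roughly five decimal places.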

2.4 Time-space white noise

Let n be a fixed natural number. Later we will set \(n= 1 + d\). Define \(\varOmega ={{\mathcal {S}}}'({\mathbb { R}}^n)\), equipped with the weak-star topology. This space will serve as the basis of our probability space, which we now explain:

As events we will use the family \(\mathcal {F}=\mathcal{B}({{\mathcal {S}}}'({\mathbb { R}}^n))\) of Borel subsets of \({{\mathcal {S}}}'({\mathbb { R}}^n)\), and our probability measure \(\mathbb {P}\) is defined by the following result:

Theorem 1

(The Bochner–Minlos theorem) There exists a unique probability measure \(\mathbb {P}\) on \({{\mathcal {B}}}({{\mathcal {S}}}'({\mathbb { R}}^n))\) with the following property:

$$\begin{aligned} \mathbb {E}[e^{i\langle \cdot ,\phi \rangle }]:=\int \limits _{{\mathcal {S}}'}e^{i\langle \omega , \phi \rangle }d\mathbb {P}(\omega )=e^{-\tfrac{1}{2} \Vert \phi \Vert ^2};\quad i=\sqrt{-1} \end{aligned}$$

for all \(\phi \in {{\mathcal {S}}}({\mathbb { R}}^n)\), where \(\Vert \phi \Vert ^2=\Vert \phi \Vert ^2_{L^2({\mathbb { R}}^n)},\quad \langle \omega ,\phi \rangle = \omega (\phi )\) is the action of \(\omega \in {{\mathcal {S}}}'({\mathbb { R}}^n)\) on \(\phi \in {{\mathcal {S}}}({\mathbb { R}}^n)\) and \(\mathbb {E}=\mathbb {E}_{\mathbb {P}}\) denotes the expectation with respect to \(\mathbb {P}\).

We will call the triplet \(({{\mathcal {S}}}'({\mathbb { R}}^n),\mathcal{B}({{\mathcal {S}}}'({\mathbb { R}}^n)),\mathbb {P})\) the white noise probability space, and \(\mathbb {P}\) is called the white noise probability measure.

The measure \(\mathbb {P}\) is also often called the (normalised) Gaussian measure on \({{\mathcal {S}}}'({\mathbb { R}}^n)\). It follows from the definition of \(\mathbb {P}\) that \(\mathbb {E}[\langle \omega ,\phi \rangle ]=0\) and

$$\begin{aligned} \mathbb {E}[\langle \omega ,\phi \rangle ^2 ]=\Vert \phi \Vert ^2 (\text {the Ito isometry}). \end{aligned}$$

Using the Ito isometry it is not difficult to prove that if \(\phi \in L^2({\mathbb { R}}^n)\) and we choose \(\phi _k\in {{\mathcal {S}}}({\mathbb { R}}^n)\) such that \(\phi _k\rightarrow \phi \) in \(L^2({\mathbb { R}}^n)\), then

$$\begin{aligned} \langle \omega ,\phi \rangle :=\lim \limits _{k\rightarrow \infty }\langle \omega ,\phi _k\rangle \quad \text {exists in}\quad L^2(\mathbb {P}) \end{aligned}$$

and is independent of the choice of \(\{\phi _k\}\). In particular, if we define

$$\begin{aligned} \widetilde{B}(x):=\widetilde{B}(x_1,\cdots ,x_n,\omega )=\langle \omega ,\chi _ {[0,x_1]\times \cdots \times [0,x_n]}\rangle ; \quad x=(x_1,\cdots ,x_n)\in {\mathbb { R}}^n, \end{aligned}$$

where \([0,x_i]\) is interpreted as \([x_i,0]\) if \(x_i<0\), then \(\widetilde{B}(x,\omega )\) has an x-continuous version \(B(x,\omega )\), which becomes an n-parameter Brownian motion, in the following sense:

By an n-parameter Brownian motion we mean a family \(\{B(x,\cdot )\}_{x\in {\mathbb { R}}^n}\) of random variables on a probability space \((\varOmega ,{{\mathcal {F}}},\mathbb {P})\) such that

  • \(B(0,\cdot )=0\quad \text {almost surely with respect to } \mathbb {P},\)

  • \(\{B(x,\omega )\}\) is a continuous and Gaussian stochastic process

  • For all \(x=(x_1,\cdots ,x_n)\), \(y=(y_1,\cdots ,y_n)\in {\mathbb { R}}_+^n\), \(B(x,\cdot ),\,B(y,\cdot )\) have the covariance \(\prod _{i=1}^n x_i\wedge y_i\). For general \(x,y\in {\mathbb { R}}^n\) the covariance is \(\prod _{i=1}^n\int _{\mathbb { R}}\theta _{x_i}(s)\theta _{y_i}(s)ds,\) where \(\theta _x(t_1,\dots ,t_n)=\theta _{x_1}(t_1)\cdots \theta _{x_n}(t_n)\), with

    $$\begin{aligned} \theta _{x_j}(s)= \left\{ \begin{array}{ll} 1\quad &{}\text { if } 0<s\le x_j\\ -1\quad &{}\text { if } x_j<s\le 0\\ 0 \quad &{}\text { otherwise.} \end{array}\right. \end{aligned}$$

It can be proved that the process \(\widetilde{B}(x,\omega )\) defined above has a modification \(B(x,\omega )\) which satisfies all these properties. This process \(B(x,\omega )\) then becomes an n-parameter Brownian motion.
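The one-dimensional factors \(\int _{\mathbb {R}}\theta _{x_i}(s)\theta _{y_i}(s)\,ds\) in the covariance above can be checked directly. The following sketch (our own illustration) approximates the integral on a grid and compares it with \(x_i\wedge y_i\) for positive arguments:

```python
def theta(a, s):
    """theta_a(s) as defined above: +1 on (0, a], -1 on (a, 0], else 0."""
    if 0.0 < s <= a:
        return 1.0
    if a < s <= 0.0:
        return -1.0
    return 0.0

def overlap(a, b, lo=-5.0, hi=5.0, n=50_000):
    """Midpoint approximation of int_R theta_a(s) theta_b(s) ds."""
    h = (hi - lo) / n
    return h * sum(theta(a, lo + (j + 0.5) * h) * theta(b, lo + (j + 0.5) * h)
                   for j in range(n))
```

For \(a,b>0\) this returns \(a\wedge b\) (up to grid resolution); for arguments of opposite sign it returns 0, and for \(a,b<0\) it returns \(|a|\wedge |b|\), in agreement with the covariance formula for general \(x,y\in {\mathbb { R}}^n\).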

We remark that for \(n=1\) we get the classical (1-parameter) Brownian motion B(t) if we restrict ourselves to \(t\ge 0\). For \(n \ge 2\) we get what is often called the Brownian sheet.

With this definition of Brownian motion it is natural to define the n-parameter Wiener–Itô integral of \(\phi \in L^2({\mathbb { R}}^n)\) by

$$\begin{aligned} \int \limits _{{\mathbb { R}}^n}\phi (x)dB(x,\omega ):=\langle \omega ,\phi \rangle ;\quad \omega \in {{\mathcal {S}}}'({\mathbb { R}}^n). \end{aligned}$$

We see that by using the Bochner–Minlos theorem we have obtained an easy construction of n-parameter Brownian motion that works for any parameter dimension n. Moreover, we get a representation of the space \(\varOmega \) as the dual \(\mathcal {S}'(\mathbb {R}^n)\) of the Fréchet space \(\mathcal {S}(\mathbb {R}^n)\). This is an advantage in many situations, for example in the construction of the Hida-Malliavin derivative, which can be regarded as a stochastic gradient on \(\varOmega \). See e.g. [10] and the references therein.

In the following we put \(n=1+d\) and let

$$\begin{aligned} B(t,x)=B(t,x,\omega ); t \ge 0, x \in \mathbb {R}^d, \omega \in \varOmega \end{aligned}$$

denote the (1-dimensional) time-space Brownian motion (also called the Brownian sheet) with probability law \(\mathbb {P}\). Since this process is \((t,x)\)-continuous a.s., we can for a.a. \(\omega \in \varOmega \) define its derivatives with respect to t and x in the sense of distributions. Thus we define the time-space white noise \(W(t,x)=W(t,x,\omega )\) by

$$\begin{aligned} W(t,x)=\frac{\partial }{\partial t}\frac{\partial ^{d}B(t,x)}{\partial x_{1}...\partial x_{d}}. \end{aligned}$$
(2.9)

In particular, in the one-parameter case, with \(x_1=t\), we get the familiar identity

$$\begin{aligned} W(t)={{d}\over {dt}}B(t)\hbox { in } \mathcal {S}^{'}. \end{aligned}$$

The process (2.9) can also be interpreted as an element of the Hida space \((\mathcal {S})^*\) of stochastic distributions, and in that setting it has been proved (see Lindstrøm, Øksendal, Ubøe [16] and Benth [2]) that the Ito-Skorohod integral with respect to \(B(dt,dx)\) can be expressed as

$$\begin{aligned} \int _0^T \int _{\mathbb {R}^d} f(t,x,\omega ) B(dt, dx)=\int _0^T \int _{\mathbb {R}^d} f(t,x,\omega ) \diamond W(t, x) dt dx, \end{aligned}$$
(2.10)

where \(\diamond \) denotes the Wick product.

In particular, if \(f(t, x,\omega )=f(t, x)\) is deterministic, this gives

$$\begin{aligned} \int _0^T \int _{\mathbb {R}^d} f(t, x) B(dt, dx)=\int _0^T \int _{\mathbb {R}^d} f(t,x) W(t,x) dt dx. \end{aligned}$$
(2.11)

This is the interpretation we are using in this paper.

3 The solution of the fractional stochastic heat equation

Theorem 2

The unique solution \(Y(t,x) \in \mathcal {S}'\) of the fractional stochastic heat equation (1.1) - (1.5) is given by

$$\begin{aligned} Y(t,x)&=I_1 + I_2, \end{aligned}$$
(3.1)

where

$$\begin{aligned} I_1=(2\pi )^{-d} \int _{\mathbb {R}^d} e^{ixy} E_{\alpha }(- \lambda t^{\alpha } |y|^2) dy =(2\pi )^{-d} \int _{\mathbb {R}^d} e^{ixy}\sum _{k=0}^{\infty } \frac{(- \lambda t^{\alpha } |y|^2)^k}{\varGamma (\alpha k +1)}dy, \end{aligned}$$
(3.2)

and

$$\begin{aligned}&I_2= \sigma (2\pi )^{-d} \int _{0}^{t}(t-r)^{\alpha -1}\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y} E_{\alpha ,\alpha }(-\lambda (t-r)^{\alpha }|y|^2) dy\right) B(dr,dz)\nonumber \\&=\sigma (2\pi )^{-d} \int _{0}^{t}(t-r)^{\alpha -1}\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|y|^2)^{k}}{\varGamma (\alpha k+\alpha )}dy\right) B(dr,dz) , \end{aligned}$$
(3.3)

where \(|y|^2=\sum _{j=1}^d y_j^2.\)

Proof

a) First assume that \(Y(t,x)\) is a solution of (1.1). We apply the Laplace transform L to both sides of (1.1) and obtain (see (2.6)):

$$\begin{aligned} s^{\alpha }\widetilde{Y}(s,x)-s^{\alpha -1}Y(0,x)=\lambda \widetilde{\varDelta Y}(s,x)+\sigma \widetilde{W}(s,x). \end{aligned}$$
(3.4)

Applying the Fourier transform F, defined by

$$\begin{aligned} Fg(y)=\int _{\mathbb {R}^d}e^{-ixy}g(x)dx=:\widehat{g}(y);\; g\in L^{1}(\mathbb {R}^d), \end{aligned}$$
(3.5)

we get, since \(\widehat{Y}(0,y)=1\),

$$\begin{aligned} s^{\alpha }\widehat{\widetilde{Y}}(s,y)-s^{\alpha -1}=-\lambda \sum _{j=1}^{d}y_{j}^{2}\widehat{\widetilde{Y}}(s,y)+\sigma \widehat{{\widetilde{W}}}(s,y), \end{aligned}$$
(3.6)

or,

$$\begin{aligned} \left( s^{\alpha }+\lambda |y|^2\right) \widehat{\widetilde{Y}}(s,y)= s^{\alpha -1}+\sigma \widehat{\widetilde{W}}(s,y). \end{aligned}$$
(3.7)

Hence

$$\begin{aligned} \widehat{\widetilde{Y}}(s,y)=\frac{s^{\alpha -1}}{s^{\alpha } + \lambda |y|^2} + \frac{\sigma \widehat{\widetilde{W}}(s,y)}{s^{\alpha }+\lambda |y|^2}. \end{aligned}$$
(3.8)

Since the Laplace transform and the Fourier transform commute, this can be written

$$\begin{aligned} \widetilde{\widehat{Y}}(s,y)=\frac{s^{\alpha -1}}{s^{\alpha } + \lambda |y|^2} + \frac{\sigma \widetilde{\widehat{W}}(s,y)}{s^{\alpha }+\lambda |y|^2}. \end{aligned}$$
(3.9)

Applying the inverse Laplace operator \(L^{-1}\) to this equation we get

$$\begin{aligned} \widehat{Y}(t,y)&=L^{-1} \Big (\frac{s^{\alpha -1}}{s^{\alpha } + \lambda |y|^2}\Big )(t,y) + L^{-1}\Big (\frac{\sigma \widetilde{\widehat{W}}(s,y)}{s^{\alpha }+\lambda |y|^2}\Big )(t,y)\nonumber \\&=E_{\alpha ,1}(-\lambda |y|^2 t^{\alpha }) + L^{-1}\Big (\frac{\sigma \widetilde{\widehat{W}}(s,y)}{s^{\alpha }+\lambda |y|^2}\Big )(t,y), \end{aligned}$$
(3.10)

where we recall that

$$\begin{aligned} E_{\alpha ,\beta }(z)=\sum _{k=0}^{\infty } \frac{z^{k}}{\varGamma (\alpha k + \beta )} \end{aligned}$$
(3.11)

is the Mittag-Leffler function.

It remains to find \(L^{-1}\left( \frac{\sigma \widehat{\widetilde{W}}(s,y)}{s^{\alpha }+\lambda |y|^2}\right) \): Recall that the convolution \(f*g\) of two functions \(f,g: [0,\infty ) \rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} (f *g)(t)=\int _0^t f(t-r)g(r) dr; \quad t \ge 0. \end{aligned}$$
(3.12)

The convolution rule for Laplace transform states that

$$\begin{aligned} L\left( \int _{0}^{t}f(t-r)g(r)dr\right) (s)=Lf(s)Lg(s), \end{aligned}$$

or

$$\begin{aligned} \int _{0}^{t}f(t-w)g(w)dw=L^{-1}\left( Lf(s)Lg(s)\right) (t). \end{aligned}$$
(3.13)

By (2.8) we have

$$\begin{aligned} L^{-1}\left( \frac{1}{s^{\alpha }+\lambda |y|^2} \right) (t)&=t^{\alpha -1}E_{\alpha ,\alpha }(-\lambda t^{\alpha }|y|^2)\nonumber \\&=\sum _{k=0}^{\infty }\frac{t^{\alpha -1}(-\lambda t^{\alpha }|y|^2)^{k}}{\varGamma (\alpha k+\alpha )}\nonumber \\&=\sum _{k=0}^{\infty }\frac{(-\lambda |y|^2)^{k}t^{\alpha (k+1)-1}}{\varGamma (\alpha (k+1))}\nonumber \\&=\sum _{k=0}^{\infty }\frac{(-\lambda t^{\alpha }|y|^2)^{k}t^{\alpha -1}}{\varGamma (\alpha (k+1))} \nonumber \\&=: \varLambda (t,y). \end{aligned}$$
(3.14)

In other words,

$$\begin{aligned} \frac{\sigma }{s^{\alpha }+\lambda |y|^2}=\sigma L \varLambda (s,y). \end{aligned}$$
(3.15)

Combining with (3.13) we get

$$\begin{aligned} L^{-1}\left( \frac{\sigma }{s^{\alpha }+\lambda |y|^2} \widehat{\widetilde{W}} (s,y)\right) (t)&=L^{-1}\left( L\left( \sigma \varLambda (s,y)\right) \widetilde{\widehat{W}}(s,y)\right) (t)\end{aligned}$$
(3.16)
$$\begin{aligned}&=\sigma \int _{0}^{t}\varLambda (t-r,y)\widehat{W}(r,y)dr. \end{aligned}$$
(3.17)

Substituting this into (3.10) we get

$$\begin{aligned} \widehat{Y}(t,y)=E_{\alpha ,1}\left( -\lambda t^{\alpha }|y|^2\right) +\sigma \int _{0}^{t}\varLambda (t-r,y)\widehat{W}(r,y)dr. \end{aligned}$$
(3.18)

Taking inverse Fourier transform we end up with

$$\begin{aligned} Y(t,x)=F^{-1}\left( E_{\alpha ,1}\left( -\lambda t^{\alpha }|y|^2 \right) \right) (x)+\sigma F^{-1}\left( \int _{0}^{t}\varLambda (t-r,y)\widehat{W}(r,y)dr\right) (x). \end{aligned}$$
(3.19)

Now we use that

$$\begin{aligned} F\left( \int _{\mathbb {R}^d}f(x-z)g(z)dz\right) (y)=Ff(y)Fg(y), \end{aligned}$$

or

$$\begin{aligned} \int _{\mathbb {R}^d}f(x-z)g(z)dz=F^{-1}\Big ( Ff(y) Fg(y)\Big )(x). \end{aligned}$$
(3.20)

This gives

$$\begin{aligned}&F^{-1}\left( \int _{0}^{t}\varLambda (t-r,y)\widehat{W}(r,y)dr\right) (x)\\&=\int _{0}^{t}F^{-1}\left( \varLambda (t-r,y)\widehat{W}(r,y)\right) (x)dr\\&=\int _{0}^{t}F^{-1}\Big ( F\left( F^{-1}\varLambda (t-r,y)\right) (y)FW(r,x)(y)\Big ) (x)dr\\&=\int _{0}^{t}\int _{\mathbb {R}^{d}}\left( F^{-1}\varLambda (t-r,y)(x-z)\right) W(r,z)dzdr\\&=\int _{0}^{t}\int _{\mathbb {R}^{d}}\left( (2\pi )^{-d}\int _{\mathbb {R}^{d}}e^{i(x-z)y}\varLambda (t-r,y)dy\right) W(r,z)dzdr\\&=(2\pi )^{-d}\int _{0}^{t}\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\varLambda (t-r,y)dy\right) B(dr,dz). \end{aligned}$$

Combining this with (3.19), (3.11) and (3.14) we get

$$\begin{aligned} Y(t,x)&= F^{-1}(\sum _{k=0}^{\infty } \frac{(- \lambda t^{\alpha } |y|^2)^k}{\varGamma (\alpha k +1)})\\&+\sigma (2\pi )^{-d} \int _{0}^{t}\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\varLambda (t-r,y)dy\right) B(dr,dz)\\&=(2\pi )^{-d} \int _{\mathbb {R}^d} e^{ixy}\sum _{k=0}^{\infty } \frac{(- \lambda t^{\alpha } |y|^2)^k}{\varGamma (\alpha k +1)}dy \\&+ \sigma (2\pi )^{-d} \int _{0}^{t}(t-r)^{\alpha -1}\\&\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|y|^2)^{k}}{\varGamma (\alpha (k+1))}dy\right) B(dr,dz). \end{aligned}$$

This proves uniqueness and also that the unique solution (if it exists) is given by the above formula.

b) Next, define \(Y(t,x)\) by the above formula. Then we can prove that \(Y(t,x)\) satisfies (1.1) by reversing the argument above. We skip the details. \(\square \)

3.1 The classical case (\(\alpha \) = 1)

It is interesting to compare the above result with the classical case \(\alpha =1\): If \(\alpha =1\), we get \(Y(t,x)=I_{1}+I_{2}\), where

$$\begin{aligned} I_{1}=(2\pi )^{-d}\int _{\mathbb {R}^{d}}e^{ixy}\sum _{k=0}^{\infty }\frac{\left( -\lambda t |y|^{2}\right) ^{k}}{k!}dy \end{aligned}$$

and

$$\begin{aligned} I_{2}=\sigma (2\pi )^{-d}\int _{0}^{t}\int _{\mathbb {R}^{d}} \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{ \left( -\lambda (t-r)|y|^{2}\right) ^{k}}{k!}dyB(dr,dz), \end{aligned}$$

where we have used that \(\varGamma (k+1)=k!\).

By the Taylor expansion of the exponential function, we get

$$\begin{aligned} I_{1}&=(2\pi )^{-d}\int _{\mathbb {R}^{d}}e^{ixy}e^{-\lambda t |y|^{2}} dy\\&=(2\pi )^{-d}\left( \frac{\pi }{\lambda t}\right) ^{\frac{d}{2}}e^{-\frac{|x|^{2}}{4\lambda t}}\\&=(4 \pi \lambda t) ^{-\frac{d}{2}} e^{-\frac{|x|^{2}}{4\lambda t}}, \end{aligned}$$

where we used the general formula

$$\begin{aligned} \int _{\mathbb {R}^{d}}e^{-\left( a |y|^{2}+2by\right) }dy=\left( \frac{\pi }{a} \right) ^{\frac{d}{2}} e^{\frac{b^{2}}{a}};\;a>0;\;b\in \mathbb {C}^d. \end{aligned}$$
(3.21)

Similarly,

$$\begin{aligned} I_{2}&=\sigma (2\pi )^{-d}\int _{0}^{t}\int _{\mathbb {R}^{d}} \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{\left( -\lambda (t-r)|y|^{2} \right) ^{k} }{k!}dyB(dr,dz)\\&=\sigma (2\pi )^{-d}\int _{0}^{t}\int _{\mathbb {R}^{d}}\left( \frac{\pi }{\lambda (t-r)}\right) ^{\frac{d}{2}}e^{-\frac{|x-z|^{2}}{4\lambda (t-r) }}B(dr,dz)\\&=\sigma (4 \pi \lambda )^{-\tfrac{d}{2}}\int _0^t \int _{\mathbb {R}^d} (t-r)^{-\tfrac{d}{2}} e^{-\frac{|x-z|^2}{4 \lambda (t-r)}} B(dr,dz). \end{aligned}$$

Summarising the above, we get, for \(\alpha =1\),

$$\begin{aligned} Y(t,x)&=(4\pi \lambda t)^{-\frac{d}{2}}e^{-\frac{|x|^{2}}{4\lambda t}}\nonumber \\&+\sigma (4\pi \lambda )^{-\frac{d}{2}}\int _{0}^{t}\int _{\mathbb {R}^{d}}(t-r)^{-\frac{d}{2}}e^{-\frac{|x-z|^{2}}{ 4\lambda (t-r)}}B(dr,dz).\;\; \end{aligned}$$
(3.22)

This is in agreement with a well-known classical result. See e.g. Section 4.1 in Y. Hu [11].
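As a quick numerical sanity check (our own illustration, not from [11]), the deterministic term of (3.22) in dimension \(d=1\) can be recovered by evaluating the Fourier inversion integral for \(I_{1}\) directly:

```python
import math

def I1_numeric(t, x, lam, L=20.0, n=40_000):
    """Midpoint evaluation of (2 pi)^(-1) int_{-L}^{L} e^{ixy} e^{-lam t y^2} dy
    in d = 1; by symmetry only the real part cos(xy) contributes."""
    h = 2.0 * L / n
    acc = 0.0
    for j in range(n):
        y = -L + (j + 0.5) * h
        acc += math.cos(x * y) * math.exp(-lam * t * y * y)
    return acc * h / (2.0 * math.pi)

# Compare with the heat kernel (4 pi lam t)^(-1/2) exp(-x^2 / (4 lam t))
t, x, lam = 1.0, 1.0, 1.0
numeric = I1_numeric(t, x, lam)
exact = (4.0 * math.pi * lam * t) ** -0.5 * math.exp(-x * x / (4.0 * lam * t))
```

The truncation at \(L=20\) is harmless here since the Gaussian factor is negligible beyond that range.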

4 When is Y(tx) a mild solution?

It was pointed out already in 1984 by John Walsh [24] that (classical) SPDEs driven by time-space white noise \(W(t,x); (t,x) \in [0,\infty ) \times \mathbb {R}^d\) may have only distribution-valued solutions if \(d \ge 2\). Indeed, the solution \(Y(t,x)\) that we found in the previous section is in general distribution valued. But in some cases the solution can be represented as an element of \(L^2(\mathbb {P})\). Following Y. Hu [11] we make the following definition:

Definition 3

The solution Y(tx) is called mild if \(Y(t,x) \in L^2(\mathbb {P})\) for all \(t>0, x \in \mathbb {R}^d\).

The second main issue of this paper is the following:

Problem 1

For what values of \(\alpha \in (0,2)\) and what dimensions \(d=1,2,...\) is \(Y(t,x)\) mild?

A partial answer is given in the following:

Theorem 3

Let \(Y(t,x)\) be the solution of the \(\alpha \)-fractional stochastic heat equation.

Then the following holds:

  a) If \(\alpha = 1\), then \(Y(t,x)\) is mild if and only if \(d=1\).

  b) If \(\alpha > 1\), then \(Y(t,x)\) is mild if \(d=1\) or \(d=2\).

  c) If \(\alpha < 1\), then \(Y(t,x)\) is not mild for any d.

Proof

Recall that \(Y(t,x)= I_1 +I_2\), with

$$\begin{aligned}&I_1=(2\pi )^{-d} \int _{\mathbb {R}^d} e^{ixy}\sum _{k=0}^{\infty } \frac{(- \lambda t^{\alpha } |y|^2)^k}{\varGamma (\alpha k +1)}dy, \end{aligned}$$
(4.1)
$$\begin{aligned}&I_2= \sigma (2\pi )^{-d} \int _{0}^{t}(t-r)^{\alpha -1}\int _{\mathbb {R}^{d}}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|y|^2)^{k}}{\varGamma (\alpha (k+1))}dy\right) B(dr,dz) . \end{aligned}$$
(4.2)

a) The case \(\alpha =1\):

This case is well-known, but for the sake of completeness we prove this by our method: By (3.22) and the Ito isometry we get

$$\begin{aligned} {\mathbb { E}}[Y^2(t,x)]= J_1 + J_2, \end{aligned}$$
(4.3)

where

$$\begin{aligned} J_1=I_1^2=(4\pi \lambda t)^{-d} e^{- \frac{|x|^{2}}{2 \lambda t}} \end{aligned}$$
(4.4)

and, by using (3.21),

$$\begin{aligned} J_2&= \sigma ^2 (4\pi \lambda )^{-d} \int _0^t (t-r)^{-d}(2\pi \lambda (t-r))^{\frac{d}{2}} dr\nonumber \\&= \sigma ^2 2^{-d} (2\pi \lambda )^{-\frac{d}{2}} \int _0^t (t-r)^{-\frac{d}{2}} dr, \end{aligned}$$
(4.5)

which is finite if and only if \(d=1\).

b) The case \(\alpha > 1\): By the Ito isometry we have \(\mathbb {E}\left[ Y^{2}\left( t,x\right) \right] =J_{1}+J_{2},\) where

$$\begin{aligned} J_{1}&=(2\pi )^{-2d}\left( \int _{\mathbb {R}^{d}}e^{ixy}\sum _{k=0}^{\infty }\frac{\left( -\lambda t^{\alpha }|y| ^{2}\right) ^{k} }{\varGamma (\alpha k+1)}dy\right) ^{2}\nonumber \\&=(2\pi )^{-2d}\left( \int _{\mathbb {R}^{d}}e^{ixy}E_{\alpha } (-\lambda t^{\alpha }|y|^2)dy\right) ^{2} \end{aligned}$$
(4.6)

and

$$\begin{aligned}&J_{2}=\sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}\int _{\mathbb {R}^{d}}(t-r)^{2\alpha -2}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{\left( -\lambda (t-r)^{\alpha }|y|^{2} \right) ^{k}}{\varGamma (\alpha k+\alpha )}dy \right) ^{2}dzdr\nonumber \\&=\sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}\int _{\mathbb {R}^{d}}(t-r)^{2\alpha -2}\left( \int _{\mathbb {R}^{d}}e^{i(x-z)y}E_{\alpha ,\alpha }(-\lambda (t-r)^{\alpha } |y|^2)dy \right) ^{2}dzdr. \end{aligned}$$
(4.7)

By Abel’s test and Lemma 2 (Appendix) and (3.21) we get

$$\begin{aligned} J_{1}&=(2\pi )^{-2d}\Big (\int _{\mathbb {R}^{d}}\left( \sum _{k=0}^{\infty }\frac{e^{ixy}\left( -\lambda t^{\alpha }|y |^{2}\right) ^{k}}{\varGamma (k+1)}\frac{\varGamma (k+1)}{\varGamma (\alpha k+1)} \right) dy\Big )^2\\&\le C_{1}\Big (\int _{\mathbb {R}^{d}}e^{ixy}\sum _{k=0}^{\infty }\frac{\left( -\lambda t^{\alpha }|y|^{2}\right) ^{k}}{ \varGamma (k+1)}dy\Big )^2\\&=C_{1}\Big (\int _{\mathbb {R}^{d}}e^{ixy}e^{-\lambda t^{\alpha }|y|^{2}}dy\Big )^2\\&=C_{1}\left( \frac{\pi }{\lambda t^{\alpha }}\right) ^{d} e^{- \frac{|x|^{2}}{2\lambda t^{\alpha }}} < \infty \text { for all }t>0,x \in \mathbb {R}^d \text { and for all } d. \end{aligned}$$

By the Plancherel theorem, Lemma 3 (Appendix) and (3.21) we get

$$\begin{aligned} J_{2}&=\sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}(t-r)^{2\alpha -2} \int _{\mathbb {R}^{d}}\left( \sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|x-z|^{2} )^{k}}{\varGamma (\alpha k+\alpha )}\right) ^{2}dzdr \\&=\sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}(t-r)^{2\alpha -2}\int _{\mathbb {R}^{d}}\left( \sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|x-z|^{2} )^{k}}{\varGamma (k+1)}\frac{\varGamma (k+1)}{\varGamma (\alpha k+\alpha )} \right) ^{2}dzdr\\&\le C_{2}\int _{0}^{t}(t-r)^{2\alpha -2}\int _{\mathbb {R}^{d}}\left( \sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\alpha }|x-z|^{2} )^{k}}{\varGamma (k+1)}\right) ^{2}dzdr \\&=C_{2}\int _{0}^{t}(t-r)^{2\alpha -2}\int _{\mathbb {R}^{d}}\left( e^{-\lambda (t-r)^{\alpha }|x-z|^{2}}\right) ^{2}dzdr\\&=C_{2}\int _{0}^{t}(t-r)^{2\alpha -2}\int _{\mathbb {R}^{d}}e^{-2\lambda (t-r)^{\alpha }|x-z|^{2}}dzdr \\&=C_{2}\int _{0}^{t}(t-r)^{2\alpha -2}\left( \frac{\pi }{2\lambda (t-r)^{\alpha }}\right) ^{\frac{d}{2}}dr\\&=C_{3}\int _{0}^{t}(t-r)^{2\alpha -2-\frac{\alpha d}{2}}dr. \end{aligned}$$

This is finite if and only if \(2\alpha -2-\frac{\alpha d}{2}>-1\), i.e. \(d<4-\frac{2}{\alpha }.\)

If \(\alpha =1+\epsilon \), then \(4-\frac{2}{\alpha }=2+\frac{2\epsilon }{1+\epsilon }>2\) for all \(\epsilon >0.\) Therefore \(J_2 < \infty \) for \(d=1\) or \(d=2\), as claimed.
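The dimension count in case b) can be tabulated directly: the exponent in the last integral is \(2\alpha -2-\frac{\alpha d}{2}\), and integrability at \(r=t\) requires it to exceed \(-1\). A small sketch (our own illustration):

```python
def j2_time_integrable(alpha, d):
    """Finiteness of int_0^t (t-r)^(2 alpha - 2 - alpha d / 2) dr,
    i.e. exponent > -1, equivalently d < 4 - 2/alpha."""
    return 2.0 * alpha - 2.0 - alpha * d / 2.0 > -1.0

def mild_dimensions(alpha, dmax=10):
    """Space dimensions d <= dmax passing the exponent test of case b)."""
    return [d for d in range(1, dmax + 1) if j2_time_integrable(alpha, d)]
```

For every \(\alpha \in (1,2)\) the threshold \(4-\frac{2}{\alpha }\) lies in \((2,3)\), so exactly \(d=1,2\) pass, while \(\alpha =1\) leaves only \(d=1\), consistent with case a). (For \(\alpha <1\) this test alone is not decisive: case c) below shows that the spatial integral already diverges.)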

c) The case \(\alpha < 1:\)

By (4.2) we see that

$$\begin{aligned} J_2&= \sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}(t-r)^{2\alpha -2} \int _{\mathbb {R}^{d}}\big (E_{\alpha ,\alpha }( -\lambda (t-r)^{\alpha } |x-z|^2)\big )^{2}dzdr\\&= \sigma ^{2}(2\pi )^{-2d}\int _{0}^{t}(t-r)^{2\alpha -2} \int _{\mathbb {R}^{d}}\big (E_{\alpha ,\alpha }( -\lambda (t-r)^{\alpha } |y|^2)\big )^{2}dydr. \end{aligned}$$

Choose \(\beta \) such that \(0 < \alpha \le \beta \le 1\).

A result of Pollard [21], as extended by Schneider [23], states that the map

$$\begin{aligned} x\mapsto h(x):=E_{\alpha ,\beta }(-x);\; x>0 \end{aligned}$$
(4.8)

is completely monotone, i.e.,

$$\begin{aligned} (-1)^{n}\frac{d^{n}}{dx^{n}}h(x)\ge 0; \nonumber \\ \text{ for } \text{ all } n=0,1,2,...;\;x>0. \end{aligned}$$
(4.9)

Therefore by Bernstein’s theorem there exists a positive, \(\sigma \)-finite measure \(\mu \) on \(\mathbb {R}^{+}\) such that

$$\begin{aligned} E_{\alpha ,\beta }(-x)=\int _{0}^{\infty }e^{-xs}\mu (ds). \end{aligned}$$
(4.10)

In fact, it is known that \(\mu \) is absolutely continuous with respect to Lebesgue measure and

$$\begin{aligned} t^{\beta -1}E_{\alpha ,\beta }(-t^{\alpha })=\int _{0}^{\infty }e^{-st}K_{\alpha ,\beta }(s)ds, \end{aligned}$$
(4.11)

where

$$\begin{aligned} K_{\alpha ,\beta }(s)=\frac{s^{\alpha -\beta }\left[ \sin ((\beta -\alpha )\pi )+s^{\alpha }\sin (\beta \pi )\right] }{\pi \left[ s^{2\alpha }+2s^{\alpha }\cos (\alpha \pi )+1\right] }. \end{aligned}$$
(4.12)

See Capelas de Oliveira et al. [7], Section 2.3.

Putting \(t^{\alpha }=x\) this can be written

$$\begin{aligned} E_{\alpha ,\beta }(-x)=x^{\frac{1-\beta }{\alpha }}\int _{0}^{\infty }e^{-s x^{\frac{1}{\alpha }}}K_{\alpha ,\beta }(s)ds;\;x>0. \end{aligned}$$
(4.13)

This gives

$$\begin{aligned} E_{\alpha ,\beta }(-\rho |y|^2)=\rho ^{\frac{1-\beta }{\alpha }}|y|^{\frac{2(1-\beta )}{\alpha }}\int _{0}^{\infty }e^{-s \rho ^{\frac{1}{\alpha }}|y|^{\frac{2}{\alpha }}}K_{\alpha ,\beta }(s)ds. \end{aligned}$$
(4.14)

It follows that

$$\begin{aligned} \big (E_{\alpha ,\beta }(-\rho |y|^2)\big )^2&\sim \big (\rho ^{\frac{1-\beta }{\alpha }}|y|^{\frac{2(1-\beta )}{\alpha }} \rho ^{\frac{-1}{\alpha }}|y|^{\frac{-2}{\alpha }}\big )^2\nonumber \\&= \rho ^{-\frac{2\beta }{\alpha }} |y|^{-\frac{4\beta }{\alpha }}. \end{aligned}$$
(4.15)

Hence, by using polar coordinates we see that

$$\begin{aligned} \int _{\mathbb {R}^d} \big (E_{\alpha ,\beta }(-\rho |y|^2)\big )^2dy \sim \int _0^{\infty } R^{-\frac{4\beta }{\alpha }} R^{d-1} dR =\infty , \end{aligned}$$
(4.16)

for all d.

Therefore \(J_2 = \infty \) for all d. \(\square \)
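The integral representation (4.11)-(4.13) used in the proof can be checked numerically. The following sketch (ours, not part of the paper, using only the Python standard library) compares the defining series of \(E_{\alpha ,\beta }\) with (4.13) for \(\alpha =\beta =\frac{1}{2}\), where the kernel (4.12) reduces to \(K(s)=\sqrt{s}/(\pi (1+s))\):

```python
import math

# Sanity check (ours): compare the power series of E_{alpha,beta} with the
# integral representation (4.13), for alpha = beta = 1/2.

def ml_series(alpha, beta, z, terms=80):
    """Mittag-Leffler E_{alpha,beta}(z) by its defining power series."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def K(alpha, beta, s):
    """The kernel K_{alpha,beta}(s) of (4.12), for s > 0."""
    num = s**(alpha - beta) * (math.sin((beta - alpha) * math.pi)
                               + s**alpha * math.sin(beta * math.pi))
    den = math.pi * (s**(2 * alpha) + 2 * s**alpha * math.cos(alpha * math.pi) + 1)
    return num / den

def ml_integral(alpha, beta, x, n=200000, cutoff=60.0):
    """E_{alpha,beta}(-x) via (4.13), by a simple Riemann sum on [0, cutoff]."""
    h = cutoff / n
    c = x**(1.0 / alpha)
    total = 0.0
    for i in range(1, n):  # integrand vanishes at s = 0 and is negligible at cutoff
        s = i * h
        total += math.exp(-s * c) * K(alpha, beta, s)
    return x**((1.0 - beta) / alpha) * h * total

assert abs(ml_series(0.5, 0.5, -1.0) - ml_integral(0.5, 0.5, 1.0)) < 1e-3
```

The cutoff and step size are crude but sufficient here, since the integrand decays like \(e^{-s}\).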

Remark 2

  • See Y. Hu [11], Proposition 4.1 for a generalisation of the above result in the case \(\alpha =1\).

  • In the cases \(\alpha > 1, d \ge 3\) we do not know whether the solution \(Y(t,x)\) is mild or not. This is a topic for future research.
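For reference, the dichotomy proved above, together with the open case noted in the remark, can be summarised in a small helper (an illustration of ours, not part of the paper):

```python
# Summary (ours) of Theorem 3 and Remark 2: when is the solution of (1.1) mild?

def mildness(alpha: float, d: int):
    """Return True (mild), False (not mild) or None (open problem)."""
    if not (0.0 < alpha < 2.0) or d < 1:
        raise ValueError("need alpha in (0,2) and d >= 1")
    if alpha < 1.0:
        return False        # case c): J_2 = infinity for every d
    if alpha > 1.0 and d <= 2:
        return True         # case b): d < 4 - 2/alpha holds for d = 1, 2
    return None             # alpha > 1 with d >= 3, and alpha = 1: not settled here

assert mildness(1.5, 2) is True
assert mildness(0.5, 7) is False
assert mildness(1.5, 3) is None
```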

5 Applications

5.1 Example 1

Consider the fractional heat equation (1.1) with \(\alpha <1\). In this case our equation models subdiffusion, in which the travel times of the particles are longer than in the standard case. Such a situation may occur in transport systems. For \(\alpha =\frac{1}{2}\) and \(d=2\) we get

$$\begin{aligned} \frac{\partial ^{\frac{1}{2}}}{\partial t^{\frac{1}{2}}}Y(t,x)=\lambda \varDelta Y(t,x)+\sigma W(t,x);\; (t,x)\in (0,\infty )\times \mathbb {R}^{2}. \end{aligned}$$
(5.1)

The solution is given by:

$$\begin{aligned} Y(t,x)&=I_1 + I_2, \end{aligned}$$
(5.2)

where

$$\begin{aligned} I_1=(2\pi )^{-2} \int _{\mathbb {R}^2} e^{ixy} E_{\frac{1}{2}}(- \lambda t^{\frac{1}{2}} |y|^2) dy =(2\pi )^{-2} \int _{\mathbb {R}^2} e^{ixy} e^{\lambda ^{2}t|y|^{4}}\,\text{ erfc }(\lambda t^{\frac{1}{2}}|y|^{2})dy, \end{aligned}$$
(5.3)

(using the identity \(E_{\frac{1}{2}}(-z)=e^{z^{2}}\text{ erfc }(z)\), where \(\text{ erfc }(z)=1-\text{ erf }(z)\) and \(\text{ erf }(z)=\frac{2}{\sqrt{\pi }}\int _{0}^{z} \exp (-t^{2})dt\)) and

$$\begin{aligned} I_2= \sigma (2\pi )^{-2} \int _{0}^{t}(t-r)^{\frac{1}{2} -1}\int _{\mathbb {R}^{2}}\left( \int _{\mathbb {R}^{2}}e^{i(x-z)y} E_{\frac{1}{2},\frac{1}{2}}(-\lambda (t-r)^{\frac{1}{2}}|y|^2) dy\right) B(dr,dz). \end{aligned}$$
(5.4)

By Theorem 3 this solution is not mild.
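The reduction of \(E_{\frac{1}{2}}\) to the complementary error function in (5.3) rests on the classical identity \(E_{\frac{1}{2}}(z)=e^{z^{2}}\,\mathrm{erfc}(-z)\). A quick numerical check (ours, standard library only):

```python
import math

# Verify E_{1/2}(-x) = exp(x^2) * erfc(x) for x >= 0, comparing the
# Mittag-Leffler power series against math.erfc.

def ml_half(z, terms=60):
    """E_{1/2}(z) = sum_k z^k / Gamma(k/2 + 1)."""
    return sum(z**k / math.gamma(0.5 * k + 1.0) for k in range(terms))

for x in (0.0, 0.3, 1.0, 2.0):
    assert abs(ml_half(-x) - math.exp(x * x) * math.erfc(x)) < 1e-9
```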

5.2 Example 2

Next, let us consider the heat equation for \(\alpha =\frac{3}{2}\). In this case the equation models superdiffusion or enhanced diffusion, where the particles spread faster than in regular diffusion. This occurs, for example, in some biological systems. Now the equation takes the form

$$\begin{aligned} \frac{\partial ^{\frac{3}{2}}}{\partial t^{\frac{3}{2}}}Y(t,x)=\lambda \varDelta Y(t,x)+\sigma W(t,x);\; (t,x)\in (0,\infty )\times \mathbb {R}^{2}. \end{aligned}$$
(5.5)

By Theorem 2 the solution is

$$\begin{aligned} Y(t,x)&=I_1 + I_2, \end{aligned}$$
(5.6)

where

$$\begin{aligned} I_1=(2\pi )^{-2} \int _{\mathbb {R}^2} e^{ixy} E_{\frac{3}{2}}(- \lambda t^{\frac{3}{2}} |y|^2) dy =(2\pi )^{-2} \int _{\mathbb {R}^2} e^{ixy}\sum _{k=0}^{\infty } \frac{(- \lambda t^{\frac{3}{2}} |y|^2)^k}{\varGamma (\frac{3}{2} k +1)}dy, \end{aligned}$$
(5.7)

and

$$\begin{aligned}&I_2= \sigma (2\pi )^{-2} \int _{0}^{t}(t-r)^{\frac{3}{2} -1}\int _{\mathbb {R}^{2}}\left( \int _{\mathbb {R}^{2}}e^{i(x-z)y} E_{\frac{3}{2},\frac{3}{2}}(-\lambda (t-r)^{\frac{3}{2}}|y|^2) dy\right) B(dr,dz)\nonumber \\&=\sigma (2\pi )^{-2} \int _{0}^{t}(t-r)^{\frac{1}{2} }\int _{\mathbb {R}^{2}}\left( \int _{\mathbb {R}^{2}}e^{i(x-z)y}\sum _{k=0}^{\infty }\frac{(-\lambda (t-r)^{\frac{3}{2}}|y|^2)^{k}}{\varGamma (\frac{3}{2} k+\frac{3}{2})}dy\right) B(dr,dz). \end{aligned}$$
(5.8)

By Theorem 3 this solution is mild.

6 Conclusions

We study the time-fractional stochastic heat equation driven by time-space white noise, interpreted in the sense of distributions. The time derivative is taken in the sense of Caputo, with parameter \(\alpha \in (0,2)\). We find an explicit expression for the solution in general, and use this to prove that

  • if \(\alpha > 1\), the solution is mild if the space dimension d is either 1 or 2;

  • if \(\alpha < 1\), the solution is not mild for any d.

7 Appendix

Lemma 1

(Abel’s test)

Suppose \(\sum _{n=1}^{\infty }b_{n}\) is convergent and put \(M= \sup \limits _{N} |B_{N}|\), where \(B_{N}=\sum _{k=1}^{N}b_{k}\). Let \(\left\{ \rho _{n}\right\} \) be a bounded monotone sequence, and put \(R=\sup \limits _{n}|\rho _{n}|\). Then \(\sum _{n=1}^{\infty } b_{n}\rho _{n}\) is convergent, and \(|\sum _{n=1}^{\infty } b_{n}\rho _{n}|\le MR+R|\sum _{n=1}^{\infty } b_{n}|\).

Proof

By summation by parts we have, with \(B_{N}=\sum _{k=1}^{N}b_{k};\;N=1,2,...,\) and \(B_{0}=0\),

$$\begin{aligned} \sum _{k=1}^{N}b_{k}\rho _{k}&=\sum _{k=1}^{N}\rho _{k}(B_{k}-B_{k-1})\end{aligned}$$
(7.1)
$$\begin{aligned}&=\sum _{k=1}^{N-1}B_{k}(\rho _{k}-\rho _{k+1})+\rho _{N}B_{N}. \end{aligned}$$
(7.2)

Note that

$$\begin{aligned} \Bigg |\sum _{k=1}^{N-1}B_{k}(\rho _{k}-\rho _{k+1})\Bigg |&\le M\Bigg |\sum _{k=1}^{N-1}(\rho _{k}-\rho _{k+1})\Bigg |=M|\rho _{1}-\rho _{N}|\end{aligned}$$
(7.3)
$$\begin{aligned}&\le M R. \end{aligned}$$
(7.4)

Hence

$$\begin{aligned} \left| \sum _{k=1}^{N}b_{k}\rho _{k}\right| \le MR+R|B_{N}|. \end{aligned}$$
(7.5)

\(\square \)
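The bound of Lemma 1 can be illustrated numerically, e.g. with \(b_n=(-1)^{n+1}/n\) (so \(\sum b_n = \log 2\)) and the decreasing sequence \(\rho _n = 1+1/n\). The check below (ours) takes \(M=\sup _N|B_N|\), the quantity the summation-by-parts estimate controls:

```python
# Numerical illustration (ours) of the bound in Lemma 1.
N = 100000
b = [(-1) ** (n + 1) / n for n in range(1, N + 1)]       # b_n = (-1)^{n+1}/n
rho = [1.0 + 1.0 / n for n in range(1, N + 1)]           # bounded, decreasing

# Partial sums B_N and the constants M, R of the lemma.
B, partial = [], 0.0
for bn in b:
    partial += bn
    B.append(partial)
M = max(abs(x) for x in B)
R = max(abs(x) for x in rho)

lhs = abs(sum(bn * rn for bn, rn in zip(b, rho)))
rhs = M * R + R * abs(B[-1])                             # MR + R|sum b_n|
assert lhs <= rhs
```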

Lemma 2

Suppose \(\alpha > 1\). Define

$$\begin{aligned} \rho _k=\frac{\varGamma (k+1)}{\varGamma (\alpha k +1)}; k=1,2, ... \end{aligned}$$
(7.6)

Then \(\{\rho _k\}_{k}\) is a decreasing sequence.

Proof

This follows from

$$\begin{aligned} \frac{\rho _{k+1}}{\rho _k}=(k+1)\frac{\varGamma (\alpha k +1)}{\varGamma (\alpha k+\alpha +1)}<\frac{k+1}{\alpha k+1}\le 1, \end{aligned}$$
(7.7)

where we used that \(\varGamma \) is increasing on \([2,\infty )\), so that \(\varGamma (\alpha k+\alpha +1)>\varGamma (\alpha k+2)=(\alpha k+1)\varGamma (\alpha k+1)\) when \(\alpha > 1.\) \(\square \)

Lemma 3

Suppose \(\alpha > 1\). Define

$$\begin{aligned} r_k= \frac{\varGamma (k+1)}{\varGamma (\alpha k+\alpha )}; \quad k=1, 2, ... \end{aligned}$$
(7.8)

Then \(\{r_k\}_{k}\) is a decreasing sequence.

Proof

Consider

$$\begin{aligned} \frac{r_{k+1}}{r_k}=\frac{k+1}{\alpha k + 2 \alpha -1} \cdot \frac{\varGamma (\alpha k + \alpha )}{\varGamma (\alpha k+2\alpha -1)} < 1, \end{aligned}$$

since \(\alpha >1\) gives \(k+1<\alpha k+2\alpha -1\) and \(\varGamma (\alpha k+\alpha )\le \varGamma (\alpha k+2\alpha -1)\) (both arguments lie in \([2,\infty )\), where \(\varGamma \) is increasing). \(\square \)