1 Introduction and main results

1.1 Background material

Recently, there has been increasing interest in fractional calculus, because time fractional operators are proving to be very useful for modelling purposes. For example, while the classical heat equation \(\partial _tu_t(x)=\Delta u_t(x)\) is used for modelling heat diffusion in homogeneous media, the fractional heat equation \(\partial ^\beta _tu_t(x)=\Delta u_t(x)\) is used to describe heat propagation in inhomogeneous media. As opposed to the classical heat equation, this equation is known to exhibit subdiffusive behaviour and is related to anomalous diffusions, or diffusions in non-homogeneous media with random fractal structures; see, for instance, [17]. The main aim of this paper is to study a class of stochastic fractional heat equations. In particular, it will become clear how this subdiffusive feature affects other properties of the solution.

Stochastic partial differential equations (SPDEs) have been studied in mathematics and in many disciplines that include statistical mechanics, theoretical physics, theoretical neuroscience, the theory of complex chemical reactions, fluid dynamics, hydrology, and mathematical finance; see, for example, Khoshnevisan [14] for an extensive list of references. The area of SPDEs is interesting to mathematicians because it contains a lot of hard open problems. So far, most of the work on stochastic heat equations has dealt with the usual time derivative, that is, \(\beta =1\). It is only recently that Mijena and Nane introduced time fractional SPDEs in [18]. These time fractional stochastic heat type equations are attractive models for phenomena with random effects and thermal memory. In another paper [19], they proved exponential growth of solutions of time fractional SPDEs–intermittency–under the assumption that the initial function is bounded from below. A related class of time-fractional SPDEs was studied by Karczewska [13], Chen et al. [5], and Baeumer et al. [1], who proved regularity of the solutions to time-fractional parabolic type SPDEs driven by cylindrical Brownian motion in Banach spaces in the sense of [6]. For a comparison of the two approaches to SPDEs, see the paper by Dalang and Quer-Sardanyons [7].

A physical explanation of time fractional SPDEs is given in [5]. The time-fractional SPDEs studied in this paper may arise naturally by considering the heat equation in a material with thermal memory.

Before we describe our equations with more care, we provide some heuristics. Consider the following fractional equation,

$$\begin{aligned} \begin{aligned} \partial ^\beta _tu_t(x)&=-\nu (-\Delta )^{\alpha /2} u_t(x)\\ \end{aligned} \end{aligned}$$

with \(\beta \in (0,1)\), where \(\partial ^\beta _t\) denotes the Caputo fractional derivative, which first appeared in [3] and is defined by

$$\begin{aligned} \partial ^\beta _t u_t(x)=\frac{1}{\Gamma (1-\beta )}\int _0^t \partial _r u_r(x)\frac{\mathrm{d}r}{(t-r)^\beta } . \end{aligned}$$
(1.1)
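As a quick numerical sanity check of definition (1.1) (an illustration only, not part of the paper's argument), the Caputo derivative of \(u(t)=t^2\) has the closed form \(2t^{2-\beta }/\Gamma (3-\beta )\). The sketch below evaluates (1.1) by midpoint quadrature after the substitution \(w=(t-r)^{1-\beta }\), which removes the integrable singularity at \(r=t\).

```python
import math

def caputo(u_prime, t, beta, n=20000):
    """Caputo derivative (1.1) via the substitution w = (t - r)^(1 - beta),
    which makes the integrand smooth on [0, t^(1 - beta)]."""
    W = t ** (1.0 - beta)
    h = W / n
    s = 0.0
    for k in range(n):
        w = (k + 0.5) * h                    # midpoint in the w variable
        r = t - w ** (1.0 / (1.0 - beta))    # invert the substitution
        s += u_prime(r)
    s *= h / (1.0 - beta)
    return s / math.gamma(1.0 - beta)

beta, t = 0.6, 2.0
num = caputo(lambda r: 2.0 * r, t, beta)               # u(t) = t^2, u'(r) = 2r
exact = 2.0 * t ** (2.0 - beta) / math.gamma(3.0 - beta)
print(num, exact)                                      # the two values agree closely
```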

If \(u_0(x)\) denotes the initial condition to the above equation, then the solution can be written as

$$\begin{aligned} u_t(x)=\int _{{\mathbb {R}}^d}G_{t}(x-y)u_0(y)\mathrm{d}y. \end{aligned}$$

\(G_t(x)\) is the time-fractional heat kernel, which we will analyse a bit more later. Let us now look at

$$\begin{aligned} \partial ^\beta _tu_t(x)=-\nu (-\Delta )^{\alpha /2}u_t(x)+f(t,x), \end{aligned}$$
(1.2)

with the same initial condition \(u_0(x)\), where \(f(t,x)\) is some nice function. We will make use of the time fractional Duhamel's principle [20,21,22] to get the correct version of (1.3). Using the fractional Duhamel principle, the solution to (1.2) is given by

$$\begin{aligned} u_t(x)=\int _{{\mathbb {R}}^d}G_{t}(x-y)u_0(y)\mathrm{d}y+\int _0^t\int _{{\mathbb {R}}^d}G_{t-r}(x-y)\partial ^{1-\beta }_r f(r,y)\mathrm{d}y\mathrm{d}r. \end{aligned}$$

We will remove the fractional derivative appearing in the second term of the above display. Define the Riesz fractional integral operator by

$$\begin{aligned} I^{\gamma }_tf(t):=\frac{1}{\Gamma (\gamma )} \int _0^t(t-\tau )^{\gamma -1}f(\tau )\mathrm{d}\tau . \end{aligned}$$

For every \(\beta \in (0,1)\) and \(g\in L^\infty ({{\mathbb {R}}}_+)\) or \(g\in C({{\mathbb {R}}}_+)\), we have

$$\begin{aligned} \partial _t^\beta I^\beta _t g(t)=g(t). \end{aligned}$$
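One can illustrate this identity numerically on a power function (a sketch, not part of the argument): for \(g(\tau )=\tau \), the Riesz integral has the closed form \(I^\beta _t g=t^{1+\beta }/\Gamma (2+\beta )\), and the Caputo power rule \(\partial ^\beta _t t^q=\Gamma (q+1)t^{q-\beta }/\Gamma (q+1-\beta )\) applied to \(t^{1+\beta }/\Gamma (2+\beta )\) returns \(g\).

```python
import math

def riesz_integral(g, t, gamma_, n=20000):
    """I^gamma_t g = (1/Gamma(gamma)) int_0^t (t - tau)^(gamma - 1) g(tau) dtau,
    computed after the substitution w = (t - tau)^gamma, which tames the
    endpoint singularity of the kernel."""
    W = t ** gamma_
    h = W / n
    s = 0.0
    for k in range(n):
        w = (k + 0.5) * h
        tau = t - w ** (1.0 / gamma_)
        s += g(tau)
    return s * h / (gamma_ * math.gamma(gamma_))

beta, t = 0.7, 1.5
num = riesz_integral(lambda tau: tau, t, beta)          # g(tau) = tau
closed = t ** (1.0 + beta) / math.gamma(2.0 + beta)     # known closed form
print(num, closed)
```

Applying the Caputo power rule to the closed form gives back \(\Gamma (2)\,t/\Gamma (2)=t\), confirming \(\partial ^\beta _t I^\beta _t g=g\) for this example.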

If we consider the time fractional PDE with a force given by \(f(t,x)=I^{1-\beta }_tg(t,x)\), then by Duhamel's principle the mild solution to (1.2) is given by

$$\begin{aligned} u_t(x)=\int _{{{\mathbb {R}}^d}} G_t(x-y)u_0(y)\mathrm{d}y+\int _0^t\int _{{{\mathbb {R}}^d}} G_{t-r}(x-y) g(r,y)\mathrm{d}y\mathrm{d}r. \end{aligned}$$

The reader can consult [5] for more information. The first equation we will study in this paper is the following:

$$\begin{aligned} \begin{aligned} \partial ^\beta _t u_t(x)&=-\nu (-\Delta )^{\alpha /2} u_t(x)+I^{1-\beta }_t[\lambda \sigma (u_t(x))\mathop {W}\limits ^{\cdot }(t,x)],\, x\in {{\mathbb {R}}}^d, \end{aligned} \end{aligned}$$
(1.3)

where the initial datum \(u_0\) is a non-random measurable function, \({\dot{W}}(t,x)\) is a space-time white noise with \(x\in {{\mathbb {R}}}^d\), and \(\sigma :{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is a globally Lipschitz function. \(\lambda \) is a positive parameter called the “level of noise”. We will make sense of the above equation using an idea in Walsh [23]. In light of the above discussion, a solution \(u_t\) to the above equation will in fact be a solution to the following integral equation:

$$\begin{aligned} u_t(x)=({\mathcal {G}}u_0)_t(x)+\lambda \int _0^t\int _{{{\mathbb {R}}}^d}G_{t-s}(x-y)\sigma (u_s(y))W(\mathrm{d}y\,\mathrm{d}s), \end{aligned}$$
(1.4)

where

$$\begin{aligned} ({\mathcal {G}}u_0)_t(x):=\int _{{\mathbb {R}}^d}G_t(x-y)u_0(y)\mathrm{d}y. \end{aligned}$$

We now fix the parameters \(\alpha \) and \(\beta \). We will restrict to \(\beta \in (0,\,1)\). The dimension d is related to \(\alpha \) and \(\beta \) via

$$\begin{aligned} d<(2\wedge \beta ^{-1})\alpha . \end{aligned}$$

Note that when \(\beta =1\), the equation reduces to the well known stochastic heat equation, and the above restricts the problem to one spatial dimension. This is the so called curse of dimensionality explored in [9]. We will require the following notion of “random-field” solution. The condition \(d< 2\alpha \) is needed when computing the \(L^2\)-norm of the heat kernel, while \(d<\beta ^{-1}\alpha \) ensures an integrability condition required for the existence and uniqueness of the solution.
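The dimension restriction is easy to tabulate. The sketch below (illustration only) lists the admissible spatial dimensions \(d<(2\wedge \beta ^{-1})\alpha \) for a couple of parameter choices discussed later in Remark 1.5.

```python
def admissible_dims(alpha, beta, dmax=10):
    """Spatial dimensions d in {1, ..., dmax} satisfying d < (2 ∧ 1/beta) * alpha."""
    bound = min(2.0, 1.0 / beta) * alpha
    return [d for d in range(1, dmax + 1) if d < bound]

print(admissible_dims(2.0, 1.0))   # classical case beta = 1: only d = 1
print(admissible_dims(2.0, 0.4))   # alpha = 2, beta < 1/2: d = 1, 2, 3
```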

Definition 1.1

A random field \(\{u_t(x), \ t\ge 0, x\in {{\mathbb {R}}^d}\}\) is called a mild solution of (1.3) if

  1.

    \(u_t(x)\) is jointly measurable in \(t\ge 0\) and \(x\in {{\mathbb {R}}^d}\);

  2.

    \(\forall (t,x)\in [0,\infty )\times {{\mathbb {R}}^d}, \int _0^t\int _{{{\mathbb {R}}}^d}G_{t-s}(x-y)\sigma (u_s(y))W(\mathrm{d}y\,\mathrm{d}s)\) is well-defined in \(L^2(\Omega )\); by the Walsh-Dalang isometry this is the same as requiring

    $$\begin{aligned} \sup _{x\in {{\mathbb {R}}^d}}\sup _{t>0}{{\mathbb {E}}}|u_t(x)|^2<\infty . \end{aligned}$$
  3.

    The following holds in \(L^2(\Omega )\),

    $$\begin{aligned} u_t(x)=({\mathcal {G}}u_0)_t(x)+\lambda \int _0^t\int _{{{\mathbb {R}}}^d}G_{t-s}(x-y)\sigma (u_s(y))W(\mathrm{d}y\,\mathrm{d}s). \end{aligned}$$

Next, we introduce the second class of equations, driven by space colored noise.

$$\begin{aligned} \partial ^\beta _t u_t(x)=-\nu (-\Delta )^{\alpha /2} u_t(x)+I_t^{1-\beta }[\lambda \sigma (u_t(x)){\dot{F}}(t,\,x)],\, x\in {{\mathbb {R}}}^d. \end{aligned}$$
(1.5)

The only difference with (1.3) is that the noise term is now colored in space. All the other conditions are the same. We now briefly describe the noise.

\({\dot{F}}\) denotes the Gaussian colored noise satisfying the following property,

$$\begin{aligned} {{\mathbb {E}}}[{\dot{F}}(t,x){\dot{F}}(s,y)]=\delta _0(t-s)f(x,y). \end{aligned}$$

This can be interpreted more formally as

$$\begin{aligned} Cov \bigg (\int \phi \mathrm{d}F, \int \psi \mathrm{d}F\bigg )=\int _0^\infty \mathrm{d}s\int _{{{\mathbb {R}}^d}}\mathrm{d}x\int _{{{\mathbb {R}}^d}}\mathrm{d}y\phi _s(x)\psi _s(y)f(x-y), \end{aligned}$$
(1.6)

where we use the notation \(\int \phi \mathrm{d}F\) to denote the Wiener integral of \(\phi \) with respect to F, and the right-most integral converges absolutely.

We will assume that the spatial correlation of the noise term is given by the following function for \(\gamma <d\),

$$\begin{aligned} f(x,y):=\frac{1}{|x-y|^\gamma }. \end{aligned}$$

Following Walsh [23], we define the mild solution of (1.5) as the predictable solution to the following integral equation

$$\begin{aligned} \begin{aligned} u_t(x)&=({\mathcal {G}}u_0)_t(x)+\lambda \int _{{\mathbb {R}}^d}\int _0^t G_{t-s}(x-y)\sigma (u_s(y))F(\mathrm{d}s \mathrm{d}y). \end{aligned} \end{aligned}$$
(1.7)

As before, we will look at the random field solution, which is defined by (1.7). We will also assume the following:

$$\begin{aligned} \gamma <\alpha \wedge d. \end{aligned}$$

The condition \(\gamma <d\) follows from an integrability requirement on the correlation function, while \(\gamma <\alpha \) comes from an integrability condition needed for the existence and uniqueness of the solution.

We now briefly give an outline of the paper. We state the main results in the next subsection. In Sect. 2, we give some preliminary results and prove a number of interesting properties of the heat kernel of time fractional heat type partial differential equations that are essential to the proofs of our main results. The proofs of the results for the space-time white noise equation are given in Sect. 3. In Sect. 4, we prove the main results about the space colored noise equation, together with the continuity of the solution to the time fractional SPDE with space colored noise. Throughout the paper, we use the letter C or c, with or without subscripts, to denote a constant whose value is not important and may vary from place to place. If \(x\in {{\mathbb {R}}}^d\), then |x| will denote the Euclidean norm of x, while for \(A\subset {{\mathbb {R}}}^d\), |A| will denote the Lebesgue measure of A.

1.2 Statement of main results

Before stating our main results precisely, we describe some of the conditions we need. The first condition is required for the existence-uniqueness result as well as the upper bound on the second moment of the solution.

Assumption 1.2

  • We assume that the initial condition is a non-random bounded non-negative function \(u_0:{{\mathbb {R}}}^d\rightarrow {{\mathbb {R}}}\).

  • We assume that \(\sigma :{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is a globally Lipschitz function satisfying \(\sigma (x)\le L_\sigma |x|\) with \(L_\sigma \) being a positive number.

The following condition is needed for the lower bound on the second moment.

Assumption 1.3

  • We will assume that the initial function \(u_0\) is positive on a set of positive measure.

  • The function \(\sigma \) satisfies \(\sigma (x)\ge l_\sigma |x|\) with \(l_\sigma \) being a positive number.

Mijena and Nane [18, Theorem 2] have essentially proved the next theorem. We give a new proof of this theorem in this paper.

Theorem 1.4

Suppose that \(d<(2\wedge \beta ^{-1})\alpha \). Then under Assumption 1.2, there exists a unique random-field solution to (1.3) satisfying

$$\begin{aligned} \sup _{x\in {{\mathbb {R}}}^d}{{\mathbb {E}}}|u_t(x)|^2\le c_1e^{c_2\lambda ^{\frac{2\alpha }{\alpha -d\beta }}t}\quad \text {for all}\quad t>0. \end{aligned}$$

Here \(c_1\) and \(c_2\) are positive constants.

Remark 1.5

This theorem says that the second moment grows at most exponentially. While this has been known [18], the novelty here is that we give a precise rate with respect to the parameter \( \lambda \). Theorem 1.4 implies that a random field solution exists when \(d<(2\wedge \beta ^{-1})\alpha \). It follows that, in contrast to the parabolic stochastic heat type equations (the case \(\beta =1\)), time fractional SPDEs with space-time white noise can admit a random field solution in spatial dimension greater than 1 in some cases. For instance, in the case \(\alpha =2, \beta <1/2\), a random field solution exists when \(d=1,2,3\), whereas when \(\beta =1\) a random field solution exists only in spatial dimension \(d=1\).

The next theorem shows that under some additional condition, the second moment will have exponential growth. This greatly extends results of [4, 8, 10], and [11].

Theorem 1.6

Suppose that the conditions of Theorem 1.4 are in force. Then under Assumption 1.3, there exists a \(T>0\), such that

$$\begin{aligned} \inf _{x\in B(0,\,t^{\beta /\alpha })}{{\mathbb {E}}}|u_t(x)|^2\ge c_3e^{c_4\lambda ^{\frac{2\alpha }{\alpha -d\beta }}t}\quad \text {for all}\quad t>T. \end{aligned}$$

Here \(c_3\) and \(c_4\) are positive constants.

The lower bound in the previous theorem is completely new. Most results of this kind have been derived from the renewal theoretic ideas developed in [10] and [11]. The methods used in this article are completely different. In particular, we make use of a localisation argument together with heat kernel estimates for the time fractional diffusion equation.

Remark 1.7

The two theorems above imply that, under some conditions, there exist some positive constants \(a_5\) and \(a_6\) such that,

$$\begin{aligned} a_5\lambda ^{2\alpha /(\alpha -\beta d)}\le \liminf _{t\rightarrow \infty }\frac{1}{t}\log {{\mathbb {E}}}|u_t(x)|^2\le \limsup _{t\rightarrow \infty }\frac{1}{t}\log {{\mathbb {E}}}|u_t(x)|^2\le a_6\lambda ^{2\alpha /(\alpha -\beta d)} , \end{aligned}$$

for any fixed \(x\in {{\mathbb {R}}^d}\).

The exponential growth of the second moment of the solution has been proved in [19] under the assumption that the initial function is bounded from below. This exponential growth property was proved in [10] for \(\beta =1\) and \(d=1\), again when the initial function is bounded from below. When \(\beta =1\) and the initial function satisfies Assumption 1.3, this was established in [8]. Chen [4] has established intermittency of the solution of (1.3) when \(d=1, \alpha =2\), and \(\beta \in (0,1)\) or \(\beta \in (1,2)\), with measure-valued initial data.

We will need the following definition which we borrow from [15]. Set

$$\begin{aligned} {\mathcal {E}}_t(\lambda ):=\sqrt{\int _{{{\mathbb {R}}}^d}{{\mathbb {E}}}|u_t(x)|^2\,\mathrm{d}x}. \end{aligned}$$

and define the nonlinear excitation index by

$$\begin{aligned} e(t):=\lim _{\lambda \rightarrow \infty }\frac{\log \log {\mathcal {E}}_t(\lambda )}{\log \lambda }. \end{aligned}$$

The next theorem gives the rate of growth of the second moment with respect to the parameter \(\lambda \), which extends results in [8]. We note that for large enough time t, this follows from the theorem above; for small t, however, we need to work a bit harder.

Theorem 1.8

Fix \(t>0\) and \(x\in {{\mathbb {R}}}^d\); we then have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \frac{\log \log {{\mathbb {E}}}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -d\beta }. \end{aligned}$$

Moreover, if the energy of the solution exists, then the excitation index, e(t) is also equal to \(\frac{2\alpha }{\alpha -d\beta }\).

Note that for the energy of the solution to exist, we need some assumption on the initial condition. One can always impose boundedness with compact support.

The following theorem is essentially Theorem 2 in [18]. We only state it to compare the Hölder exponent with the excitation index. This shows that the relationship mentioned in [8], namely \(\eta \le 1/e(t)\), holds for this equation as well, showcasing a link between noise excitability and continuity of the solution.

Theorem 1.9

[18] Let \(\eta < (\alpha -\beta d)/2\alpha \). Then for every \(x\in {{\mathbb {R}}^d}\), the solution \(\{u_t(x), t>0\}\) to (1.3) has Hölder continuous trajectories with exponent \(\eta \).

All the above results were about the white noise driven equation. Our first result on the space colored noise case reads as follows.

Theorem 1.10

Under Assumption 1.2, there exists a unique random field solution \(u_t\) of (1.5) whose second moment satisfies

$$\begin{aligned} \sup _{x\in {{\mathbb {R}}}^d}{{\mathbb {E}}}|u_t(x)|^2\le c_5 \exp (c_6\lambda ^{2\alpha /(\alpha -\gamma \beta )}t)\quad \text {for all}\quad t>0. \end{aligned}$$

Here the constants \(c_5, c_6\) are positive numbers. If we impose the further requirement that Assumption 1.3 holds, then there exists a \(T>0\) such that

$$\begin{aligned} \inf _{x\in B(0,\,t^{\beta /\alpha })}{{\mathbb {E}}}|u_t(x)|^2\ge c_7 \exp (c_8\lambda ^{2\alpha /(\alpha -\gamma \beta )}t)\quad \text {for all}\quad t>T, \end{aligned}$$

where T and the constants \(c_7, c_8\) are positive numbers.

Remark 1.11

Theorem 1.10 implies that there exist some positive constants \(c_9\) and \(c_{10}\) such that

$$\begin{aligned} c_9\lambda ^{2\alpha /(\alpha -\beta \gamma )}\le \liminf _{t\rightarrow \infty }\frac{1}{t}\log {{\mathbb {E}}}|u_t(x)|^2\le \limsup _{t\rightarrow \infty }\frac{1}{t}\log {{\mathbb {E}}}|u_t(x)|^2\le c_{10}\lambda ^{2\alpha /(\alpha -\beta \gamma )} , \end{aligned}$$

for any fixed \(x\in {{\mathbb {R}}^d}\).

Theorem 1.12

Fix \(t>0\) and \(x\in {{\mathbb {R}}}^d\); we then have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \frac{\log \log {{\mathbb {E}}}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -\gamma \beta }. \end{aligned}$$

Moreover, if the energy of the solution exists, then the excitation index, e(t) is also equal to \(\frac{2\alpha }{\alpha -\gamma \beta }\).

We now give a relationship between the excitation index of (1.5) and its continuity properties.

Theorem 1.13

Let \(\eta < (\alpha -\beta \gamma )/2\alpha \). Then for every \(x\in {{\mathbb {R}}^d}\), the solution \(\{u_t(x), t>0\}\) to (1.5) has Hölder continuous trajectories with exponent \(\eta \).

A key difference from the methods used in [8] is that here we develop some new important tools. For example, we need to analyse the heat kernel and prove some relevant estimates. In [8], this step was relatively straightforward, but here the lack of the semigroup property forces us to work much harder. To address this, we rely heavily on subordination. This insight, absent in [4], allows us to vastly generalise the results of that paper. Another key tool is showing that, with time, \(({\mathcal {G}}u_0)_t(x)\) decays at most like the inverse of a polynomial. This also requires techniques based on subordination. We also point out that a significant difference from earlier work is that our analysis is based on restricting the spatial variable to a dynamic ball. This enables us to prove the exponential growth of the second moment with the right rate with respect to \(\lambda \). Finding this precise rate for stochastic partial differential equations is quite a new problem, and this paper shows how the rate depends on the fractional nature of the operator.

2 Preliminaries

As mentioned in the introduction, the behaviour of the heat kernel \(G_t(x)\) will play an important role. This section is mainly devoted to estimates involving this quantity. We start by giving a stochastic representation of this kernel. Let \(X_t\) denote a symmetric \(\alpha \)-stable process with density function \(p(t,\,x)\), characterized through its Fourier transform

$$\begin{aligned} \widehat{p(t,\,\xi )}=e^{-t\nu |\xi |^\alpha }. \end{aligned}$$
(2.1)

Let \(D=\{D_r,\,r\ge 0\}\) denote a \(\beta \)-stable subordinator and \(E_t\) be its first passage time. It is known that the density of the time changed process \(X_{E_t}\) is given by \(G_t(x)\). By conditioning, we have

$$\begin{aligned} G_t(x)=\int _{0}^\infty p(s,\,x) f_{E_t}(s)\mathrm{d}s, \end{aligned}$$
(2.2)

where

$$\begin{aligned} f_{E_t}(x)=t\beta ^{-1}x^{-1-1/\beta }g_\beta \left( tx^{-1/\beta }\right) , \end{aligned}$$
(2.3)

where \(g_\beta (\cdot )\) is the density function of \(D_1\) and is infinitely differentiable on the entire real line, with \(g_\beta (u)=0\) for \(u\le 0\). Moreover,

$$\begin{aligned} g_\beta (u)\sim K(\beta /u)^{(1-\beta /2)/(1-\beta )}\exp \left\{ -|1-\beta |(u/\beta )^{\beta /(\beta -1)}\right\} \quad \text{ as }\,\, u\rightarrow 0+, \end{aligned}$$
(2.4)

and

$$\begin{aligned} g_\beta (u)\sim \frac{\beta }{\Gamma (1-\beta )}u^{-\beta -1} \quad \text{ as }\,\, u\rightarrow \infty . \end{aligned}$$
(2.5)
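For \(\beta =1/2\), the stable density has the well known closed form \(g_{1/2}(u)=u^{-3/2}e^{-1/(4u)}/(2\sqrt{\pi })\) (a Lévy distribution), which lets one check (2.5) numerically: the ratio of \(g_{1/2}(u)\) to the asymptote \(\beta u^{-\beta -1}/\Gamma (1-\beta )\) is exactly \(e^{-1/(4u)}\), which tends to 1. The sketch below (illustration only) confirms this.

```python
import math

def g_half(u):
    """Closed-form density of D_1 for the 1/2-stable subordinator."""
    return u ** -1.5 * math.exp(-1.0 / (4.0 * u)) / (2.0 * math.sqrt(math.pi))

beta = 0.5
# tail asymptotics (2.5): g_beta(u) ~ beta / Gamma(1 - beta) * u^(-beta - 1)
for u in (10.0, 100.0, 1000.0):
    ratio = g_half(u) / (beta / math.gamma(1.0 - beta) * u ** (-beta - 1.0))
    print(u, ratio)   # the ratio approaches 1 as u grows
```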

While the above expressions will be very important, we will also need the Fourier transform of \(G_t(x)\).

$$\begin{aligned} {G_t}^*(\xi )=E_\beta \left( -\nu |\xi |^\alpha t^{\beta }\right) , \end{aligned}$$

where the Mittag-Leffler function

$$\begin{aligned} E_\beta (x) = \sum _{k=0}^\infty \frac{x^k}{\Gamma (1+\beta k)} \end{aligned}$$
(2.6)

satisfies the following inequality,

$$\begin{aligned} \frac{1}{1 + \Gamma (1-\beta )x}\le E_{\beta }(-x)\le \frac{1}{1+\Gamma (1+\beta )^{-1}x} \ \ \ \text {for}\ x>0. \end{aligned}$$
(2.7)
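The two-sided bound (2.7) is easy to verify numerically (a sketch, illustrative only) by summing the series (2.6) for moderate arguments and, for \(\beta =1/2\), comparing with the known identity \(E_{1/2}(-x)=e^{x^2}\,\mathrm{erfc}(x)\).

```python
import math

def mittag_leffler_neg(beta, x, terms=200):
    """E_beta(-x) from the series (2.6); adequate for moderate x,
    since cancellation in the alternating series grows with x."""
    return sum((-x) ** k / math.gamma(1.0 + beta * k) for k in range(terms))

beta = 0.5
for x in (0.5, 1.0, 2.0):
    lo = 1.0 / (1.0 + math.gamma(1.0 - beta) * x)        # lower bound in (2.7)
    hi = 1.0 / (1.0 + x / math.gamma(1.0 + beta))        # upper bound in (2.7)
    print(lo, mittag_leffler_neg(beta, x), hi)           # lo < E_beta(-x) < hi
```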

Even though, we will be mainly using the representation given by (2.2), we also have another explicit description of the heat kernel.

Using the convention \(\sim \) to denote the Laplace transform (in t) and \(*\) the Fourier transform (in x), we get

$$\begin{aligned} {\tilde{G}}^*(\lambda ,\xi ) = \frac{\lambda ^{\beta -1}}{\lambda ^{\beta } +\nu |\xi |^\alpha }. \end{aligned}$$
(2.8)

Inverting the Laplace transform yields

$$\begin{aligned} G^*_t(\xi ) = E_\beta \left( -\nu |\xi |^\alpha t^\beta \right) . \end{aligned}$$
(2.9)

In order to invert the Fourier transform when \(d=1\), we will make use of the integral [12, eq. 12.9]

$$\begin{aligned} \int _0^\infty \cos (ks)E_{\beta ,\alpha }(-as^\mu )\mathrm{d}s = \frac{\pi }{k}H_{3,3}^{2,1}\bigg [\frac{k^\mu }{a}\bigg |^{(1,1), (\alpha ,\beta ), (1,\mu /2)}_{(1,\mu ),(1,1),(1,\mu /2)}\bigg ], \end{aligned}$$

where \({\mathcal {R}}(\alpha )>0\), \(\beta >0\), \(k>0\), \(a>0\), and \(H_{p,q}^{m,n}\) is the H-function given in [16, Definition 1.9.1, p. 55], together with the formula

$$\begin{aligned} \frac{1}{2\pi }\int _{-\infty }^\infty e^{-i\xi x}f(\xi )\mathrm{d}\xi = \frac{1}{\pi }\int _0^\infty f(\xi )\cos (\xi x)\mathrm{d}\xi . \end{aligned}$$

This gives the heat kernel as

$$\begin{aligned} G_t(x) = \frac{1}{|x|} H_{3,3}^{2,1}\bigg [\frac{|x|^\alpha }{\nu t^\beta }\bigg |^{(1,1), (1,\beta ), (1,\alpha /2)}_{(1,\alpha ),(1,1),(1,\alpha /2)}\bigg ]. \end{aligned}$$
(2.10)

Note that for \(\alpha = 2\) using reduction formula for the H-function we have

$$\begin{aligned} G_t(x) = \frac{1}{|x|}H^{1,0}_{1,1}\bigg [\frac{|x|^2}{\nu t^\beta }\bigg |^{(1,\beta )}_{(1,2)}\bigg ]. \end{aligned}$$
(2.11)

Note that for \(\beta = 1\) it reduces to the Gaussian density

$$\begin{aligned} G_t(x) = \frac{1}{(4\nu \pi t)^{1/2}}\exp \left( -\frac{|x|^2}{4\nu t}\right) . \end{aligned}$$
(2.12)

We will need the following properties of the heat kernel of the stable process.

  • $$\begin{aligned} p(t,\,x)=t^{-d/\alpha }p(1,\,t^{-1/\alpha }x). \end{aligned}$$
  • $$\begin{aligned} p(st,\,x)=s^{-d/\alpha }p(t,\,s^{-1/\alpha }x). \end{aligned}$$
  • \(p(t,\,x)\ge p(t,\,y)\) whenever \(|x|\le |y|\).

  • For t large enough so that \(p(t,\,0)\le 1\) and \(\tau \ge 2\), we have

    $$\begin{aligned} p\left( t,\,\frac{1}{\tau }(x-y)\right) \ge p(t,\,x)p(t,\,y). \end{aligned}$$

All these properties, except the last one, are straightforward; they follow from scaling. We therefore provide a quick proof of the last inequality. Suppose that t is large enough so that \(p(t,\,0)\le 1\). Since \(\tau \ge 2\), we have \(\frac{|x-y|}{\tau }\le \frac{2|x|}{\tau }\vee \frac{2|y|}{\tau }\le |x|\vee |y|.\) Therefore, by the monotonicity property of the heat kernel and the fact that time is large enough, we have

$$\begin{aligned} p\left( t,\,\frac{1}{\tau }(x-y)\right)\ge & {} p(t,\,|x|\vee |y|)\\\ge & {} p(t,\,|x|)\wedge p(t,\,|y|)\\\ge & {} p(t,\,|x|)p(t,\,|y|). \end{aligned}$$
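These properties can be checked concretely (a sketch, illustrative only) on the Cauchy kernel, i.e. the symmetric 1-stable density in \(d=1\) with \(\nu =1\), \(p(t,x)=t/(\pi (t^2+x^2))\), for which \(p(t,0)=1/(\pi t)\le 1\) as soon as \(t\ge 1/\pi \).

```python
import math

def cauchy(t, x):
    # transition density of the symmetric 1-stable (Cauchy) process, d = 1
    return t / (math.pi * (t * t + x * x))

t, tau = 2.0, 2.0
assert cauchy(t, 0.0) <= 1.0                  # t is "large enough"
# scaling: p(t, x) = t^(-d/alpha) p(1, t^(-1/alpha) x) with alpha = d = 1
for x in (-3.0, 0.5, 7.0):
    assert abs(cauchy(t, x) - cauchy(1.0, x / t) / t) < 1e-12
# product inequality: p(t, (x - y)/tau) >= p(t, x) p(t, y) for tau >= 2
for x in (-5.0, 0.0, 1.0, 4.0):
    for y in (-2.0, 0.0, 3.0):
        assert cauchy(t, (x - y) / tau) >= cauchy(t, x) * cauchy(t, y)
print("all kernel properties hold")
```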

We will need the lower bound described in the following lemma. The upper bound is given for the sake of completeness and is true under the additional assumption that \(\alpha >d\), a condition which we will not need in this paper.

Lemma 2.1

  (a)

    There exists a positive constant \(c_1\) such that for all \(x\in {{\mathbb {R}}}^d\)

    $$\begin{aligned} G_t(x)\ge c_1 \bigg (t^{-\beta d/\alpha }\wedge \frac{t^\beta }{|x|^{d+\alpha }}\bigg ). \end{aligned}$$
  (b)

    If we further suppose that \(\alpha >d\), then there exists a positive constant \(c_2\) such that for all \(x\in {{\mathbb {R}}}^d\)

    $$\begin{aligned} G_t(x)\le c_2 \bigg (t^{-\beta d/\alpha }\wedge \frac{t^\beta }{|x|^{d+\alpha }}\bigg ). \end{aligned}$$

Proof

It is well known that the transition density \(p(t,\,x)\) of a strictly stable process satisfies

$$\begin{aligned} c_1\bigg (t^{-d/\alpha }\wedge \frac{t}{|x|^{d+\alpha }}\bigg )\le p(t,\,x)\le c_2\bigg (t^{-d/\alpha }\wedge \frac{t}{|x|^{d+\alpha }}\bigg ), \end{aligned}$$
(2.13)

where \(c_1\) and \(c_2\) are positive constants. We have

$$\begin{aligned} G_t(x)=\int _{0}^\infty p(s,\,x) f_{E_t}(s)\mathrm{d}s, \end{aligned}$$

which after using (2.3) and an appropriate substitution gives the following

$$\begin{aligned} G_t(x)=\int _{0}^\infty p((t/u)^\beta ,\,x) g_\beta (u)\mathrm{d}u. \end{aligned}$$

Suppose that \(|x|\le t^{\beta /\alpha }\); then \(t/|x|^{\alpha /\beta }\ge 1\). For \(u\le t/|x|^{\alpha /\beta }\), we have \(|x|\le (t/u)^{\beta /\alpha }\), so the first term in the minimum of (2.13) is the smaller one, and we can write

$$\begin{aligned} \begin{aligned} \int _0^\infty p((t/u)^\beta ,\,x)g_\beta (u)\mathrm{d}u&\ge c_5\int _0^{t/|x|^{\alpha /\beta }} (t/u)^{-\beta d/\alpha }g_\beta (u)\mathrm{d}u \\&\ge c_6\int _0^{1} (t/u)^{-\beta d/\alpha }g_\beta (u)\mathrm{d}u\\&=c_7t^{-\beta d/\alpha } \int _0^{1} u^{\beta d/\alpha }g_\beta (u)du. \end{aligned} \end{aligned}$$
(2.14)

Since the integral appearing on the right hand side of the above display is finite, we have \(G_t(x)\ge c_8t^{-\beta d/\alpha }\) whenever \(|x|\le t^{\beta /\alpha }\). We now look at the case \(|x|\ge t^{\beta /\alpha }\):

$$\begin{aligned} \begin{aligned} \int _0^\infty p((t/u)^\beta ,\,x)g_\beta (u)\mathrm{d}u&\ge \int _{t/|x|^{\alpha /\beta }}^\infty c_9\frac{(t/u)^{\beta }}{|x|^{d+\alpha }}g_\beta (u)\mathrm{d}u\\&\ge c_{10}\frac{t^\beta }{|x|^{d+\alpha }}\int _1^{\infty } (u)^{-\beta }g_\beta (u)\mathrm{d}u\\&\ge \frac{c_{11}t^\beta }{|x|^{d+\alpha }}, \end{aligned} \end{aligned}$$
(2.15)

where we have used the fact that \(\int _1^{\infty } u^{-\beta }g_\beta (u)\mathrm{d}u\) is a positive finite constant in the last line.

For the upper bound, we use the fact that \(p((t/u)^\beta ,\,x)\le c_1 \frac{u^{\beta d/\alpha }}{t^{\beta d/\alpha }}\) to obtain

$$\begin{aligned} \begin{aligned} G_t(x)&\le c_1 \int _{0}^\infty \frac{u^{\beta d/\alpha }}{t^{\beta d/\alpha }} g_\beta (u)\mathrm{d}u\\&=\frac{c_1}{t^{\beta d/\alpha }}\int _{0}^\infty u^{\beta d/\alpha } g_\beta (u)\mathrm{d}u. \end{aligned} \end{aligned}$$

The integral on the right hand side is finite only if \(\alpha >d\); this follows from the fact that for large u, \(g_\beta (u)\) behaves like \(u^{-\beta -1}\). So we have \(G_t(x)\le \frac{c_2}{t^{\beta d/\alpha }}\). Similarly, we can use \(p((t/u)^\beta ,\,x)\le \frac{c_3 t^{\beta }}{u^\beta |x|^{d+\alpha }}\) to write

$$\begin{aligned} \begin{aligned} G_t(x)&\le c_3 \int _{0}^\infty \frac{t^{\beta }}{u^\beta |x|^{d+\alpha }}g_\beta (u)\mathrm{d}u\\&=\frac{c_3t^{\beta }}{|x|^{d+\alpha }}\int _{0}^\infty u^{-\beta } g_\beta (u)\mathrm{d}u. \end{aligned} \end{aligned}$$

Since the integral appearing in the above display is finite, we have \(G_t(x)\le \frac{c_4t^{\beta }}{|x|^{d+\alpha }}\). We therefore have

$$\begin{aligned} G_t(x)\le c_5\bigg (t^{-d\beta /\alpha }\wedge \frac{t^\beta }{|x|^{d+\alpha }}\bigg ). \end{aligned}$$

\(\square \)

Remark 2.2

When \(\alpha \le d\), the function \(G_t(x)\) is not well defined everywhere. But using its representation in terms of H-functions, one can show that \(x=0\) is the only point where it is undefined. We won't use the pointwise upper bound, and the lower bound is trivially true when \(x=0\).

The \(L^2\)-norm of the heat kernel can be calculated as follows. This lemma is crucial in showing the existence of solutions to equation (1.3).

Lemma 2.3

Suppose that \(d < 2\alpha \), then

$$\begin{aligned} \int _{{{{\mathbb {R}}}^d}}G^2_t(x)\mathrm{d}x =C^*t^{-\beta d/\alpha }, \end{aligned}$$
(2.16)

where the constant \(C^{*}\) is given by

$$\begin{aligned} C^*= \frac{(\nu )^{-d/\alpha }2\pi ^{d/2}}{\alpha \Gamma \left( \frac{d}{2}\right) }\frac{1}{(2\pi )^d}\int _0^\infty z^{d/\alpha -1} (E_\beta (-z))^2 \mathrm{d}z. \end{aligned}$$

Proof

Using the Plancherel theorem and (2.9), we have

$$\begin{aligned} \int _{{{\mathbb {R}}}^d}|G_t(x)|^2 \mathrm{d}x= & {} \frac{1}{(2\pi )^d}\int _{{{\mathbb {R}}}^d}|\hat{G_t}(\xi )|^2 \mathrm{d}\xi = \frac{1}{(2\pi )^d}\int _{{{\mathbb {R}}}^d}\left| E_\beta \left( -\nu |\xi |^\alpha t^\beta \right) \right| ^2 \mathrm{d}\xi \nonumber \\= & {} \frac{2\pi ^{d/2}}{\Gamma \left( \frac{d}{2}\right) }\frac{1}{(2\pi )^d}\int _0^\infty r^{d-1} \left( E_\beta \left( -\nu r^\alpha t^\beta \right) \right) ^2 \mathrm{d}r. \end{aligned}$$
(2.17)
$$\begin{aligned}= & {} \frac{(\nu t^\beta )^{-d/\alpha }2\pi ^{d/2}}{\alpha \Gamma \left( \frac{d}{2}\right) }\frac{1}{(2\pi )^d}\int _0^\infty z^{d/\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z. \end{aligned}$$
(2.18)

To finish the proof, we need to show that the integral on the right hand side of the above display is finite. We use equation (2.7) to get

$$\begin{aligned} \int _{0}^\infty \frac{z^{d/\alpha -1} }{(1+\Gamma (1-\beta )z)^2} \mathrm{d}z\le & {} \int _0^\infty z^{d/\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z\nonumber \\\le & {} \int _{0}^\infty \frac{z^{d/\alpha -1} }{(1+\Gamma (1+\beta )^{-1}z)^2}\mathrm{d}z. \end{aligned}$$
(2.19)

Hence \(\int _0^\infty z^{d/\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z<\infty \) if and only if \(d<2\alpha \). \(\square \)

Recall the Fourier transform of the heat kernel

$$\begin{aligned} G^*_t(\xi ) = E_\beta \left( -\nu |\xi |^\alpha t^\beta \right) . \end{aligned}$$
(2.20)

We will use this to prove the following.

Lemma 2.4

For \(\gamma < 2\alpha ,\)

$$\begin{aligned} \int _{{{{\mathbb {R}}}^d}}[\hat{G_t}(\xi )]^2\frac{1}{|\xi |^{d-\gamma }} \mathrm{d}\xi =C^*_1 t^{-\beta \gamma /\alpha }, \end{aligned}$$
(2.21)

where \(C^*_1= \frac{(\nu )^{-\gamma /\alpha }2\pi ^{d/2}}{\alpha \Gamma \left( \frac{d}{2}\right) }\frac{1}{(2\pi )^d}\int _0^\infty z^{\gamma /\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z.\)

Proof

We have

$$\begin{aligned} \int _{{{{\mathbb {R}}}^d}}[G^*_t(\xi )]^2\frac{1}{|\xi |^{d-\gamma }} \mathrm{d}\xi= & {} \int _{{{\mathbb {R}}^d}}\left| E_\beta \left( -\nu |\xi |^\alpha t^\beta \right) \right| ^2 \frac{1}{|\xi |^{d-\gamma }} \mathrm{d}\xi \nonumber \\= & {} \frac{2\pi ^{d/2}}{\Gamma \left( \frac{d}{2}\right) }\int _0^\infty r^{d-1} \left( E_\beta \left( -\nu r^\alpha t^\beta \right) \right) ^2 \frac{1}{r^{d-\gamma }}\mathrm{d}r.\nonumber \\= & {} \frac{(\nu t^\beta )^{-\gamma /\alpha }2\pi ^{d/2}}{\alpha \Gamma \left( \frac{d}{2}\right) }\frac{1}{(2\pi )^d}\int _0^\infty z^{\gamma /\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z. \end{aligned}$$
(2.22)

We used integration in polar coordinates for a radially symmetric function in the computation above. Now, using equation (2.7), we get

$$\begin{aligned} \int _{0}^\infty \frac{z^{\gamma /\alpha -1} }{(1+\Gamma (1-\beta )z)^2} \mathrm{d}z\le & {} \int _0^\infty z^{\gamma /\alpha -1} \left( E_\beta (-z)\right) ^2 \mathrm{d}z\nonumber \\\le & {} \int _{0}^\infty \frac{z^{\gamma /\alpha -1} }{\left( 1+\Gamma (1+\beta )^{-1}z\right) ^2}\mathrm{d}z. \end{aligned}$$
(2.23)

Hence \(\int _0^\infty z^{\gamma /\alpha -1} (E_\beta (-z))^2 \mathrm{d}z<\infty \) if and only if \(\gamma <2\alpha \). In this case the upper bound in equation (2.23) is

$$\begin{aligned} \int _0^\infty \frac{z^{\gamma /\alpha -1} }{(1+\Gamma (1+\beta )^{-1}z)^2} \mathrm{d}z = \frac{\text {B}(\gamma /\alpha , 2-\gamma /\alpha )}{\Gamma (1+\beta )^{-\gamma /\alpha }}, \end{aligned}$$

where \(\text {B}(\gamma /\alpha , 2-\gamma /\alpha )\) is the Beta function. \(\square \)
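As a quick numerical sanity check of Lemma 2.4 (an illustration only, not part of the proof), one can replace \(E_\beta (-z)\) by the surrogate \(1/(1+cz)\) suggested by the two-sided bound (2.7), and verify both the Beta-function identity above and the \(t^{-\beta \gamma /\alpha }\) scaling of the radial integral. The parameter values \(\alpha =3/2\), \(\beta =1/2\), \(\gamma =3/4\), \(\nu =1\) below are arbitrary admissible choices.

```python
import math

# Illustration only (not part of the proof): by (2.7), E_beta(-z) is comparable
# to 1/(1 + c z). Using that surrogate kernel, we check (i) the Beta-function
# identity behind the upper bound in (2.23) and (ii) the t^{-beta*gamma/alpha}
# scaling in (2.21). The parameter values below are arbitrary choices.

def midpoint_0inf(f, n=200_000):
    """Midpoint rule for int_0^infty f(z) dz via the substitution z = u/(1-u)."""
    s = 0.0
    for k in range(n):
        u = (k + 0.5) / n
        z = u / (1.0 - u)
        s += f(z) / (1.0 - u) ** 2
    return s / n

alpha, beta, gamma, nu = 1.5, 0.5, 0.75, 1.0
c = 1.0 / math.gamma(1.0 + beta)        # the constant in the upper bound of (2.7)
g = lambda z: 1.0 / (1.0 + c * z)       # surrogate for E_beta(-z)

# (i) int_0^infty z^{a-1}/(1+cz)^2 dz = c^{-a} * B(a, 2-a) for 0 < a < 2
a = gamma / alpha
lhs = midpoint_0inf(lambda z: z ** (a - 1.0) * g(z) ** 2)
rhs = c ** (-a) * math.gamma(a) * math.gamma(2.0 - a) / math.gamma(2.0)
assert abs(lhs / rhs - 1.0) < 5e-3

# (ii) the radial integral in (2.21) scales like t^{-beta*gamma/alpha}
def I(t):
    return midpoint_0inf(lambda r: r ** (gamma - 1.0) * g(nu * r ** alpha * t ** beta) ** 2)

I1, I4 = I(1.0), I(4.0)
assert abs(I1 / I4 / 4.0 ** (beta * gamma / alpha) - 1.0) < 5e-3
```

The scaling in (ii) is exact for the surrogate kernel for the same change-of-variable reason as in the proof, so the check is really a test of the quadrature.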

Remark 2.5

For \(\gamma <2\alpha \),

$$\begin{aligned} \frac{\text {B}(\gamma /\alpha , 2-\gamma /\alpha )}{\Gamma (1-\beta )^{\gamma /\alpha }} \le \int _{0}^\infty z^{\gamma /\alpha -1}\left( E_\beta (-z)\right) ^2\mathrm{d}z \le \frac{\text {B}(\gamma /\alpha , 2-\gamma /\alpha )}{\Gamma (1+\beta )^{-\gamma /\alpha }}. \end{aligned}$$

We have the following estimate, which will be useful for establishing the temporal continuity of the solution of (1.5).

Proposition 2.6

Let \(\gamma <\min \{2, \beta ^{-1}\}\alpha \) and \(h\in (0,1)\). Then we have

$$\begin{aligned} \int _0^t\int _{{{\mathbb {R}}^d}}\left| {\hat{G}}_{t-s+h}(\xi )-{\hat{G}}_{t-s}(\xi )\right| ^2\frac{1}{|\xi |^{d-\gamma }}\ \mathrm{d}\xi \mathrm{d}s\le c_1 h^{1-\beta \gamma /\alpha }. \end{aligned}$$

Proof

By the computation in Lemma 2.4, we have

$$\begin{aligned}&\int _{{{\mathbb {R}}}^d}|{\hat{G}}_{t-s+h}(\xi )- {\hat{G}}_{t-s}(\xi )|^2\frac{1}{|\xi |^{d-\gamma }} \mathrm{d}\xi \\&\quad =\int _{{{\mathbb {R}}}^d} ({\hat{G}}_{t-s+h}(\xi ))^2 \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi + \int _{{{\mathbb {R}}}^d} ({\hat{G}}_{t-s}(\xi ))^2 \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \\&\qquad -2\int _{{{\mathbb {R}}}^d} {\hat{G}}_{t-s+h}(\xi ){\hat{G}}_{t-s}(\xi ) \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \\&\quad = C^*_1 (t- s+h)^{-\beta \gamma /\alpha } + C^*_1 (t-s)^{-\beta \gamma /\alpha } -2\int _{{{\mathbb {R}}}^d} {\hat{G}}_{t-s+h}(\xi ){\hat{G}}_{t-s}(\xi ) \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi . \end{aligned}$$

Using integration in polar coordinates in \({{\mathbb {R}}^d}\), and the fact that \(z\mapsto E_\beta (-z)\) is decreasing (since it is completely monotonic, i.e. \((-1)^n\frac{\mathrm{d}^n}{\mathrm{d}z^n}E_\beta (-z)\ge 0\) for all \(z>0\), \(n=0,1,2,\ldots \)), we get

$$\begin{aligned}&2\int _{{{\mathbb {R}}}^d} {\hat{G}}_{t-s+h}(\xi ){\hat{G}}_{t-s}(\xi ) \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \\&\quad =2\int _{{{\mathbb {R}}}^d}E_\beta \left( -\nu |\xi |^\alpha (t-s+h)^\beta \right) E_\beta \left( -\nu |\xi |^\alpha (t-s)^\beta \right) \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \\&\quad \ge 2\int _{{{\mathbb {R}}}^d}E_\beta \left( -\nu |\xi |^\alpha (t-s+h)^\beta \right) E_\beta \left( -\nu |\xi |^\alpha (t-s+h)^\beta \right) \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \\&\quad = 2C^*_1(t-s+h)^{-\beta \gamma /\alpha }. \end{aligned}$$

Now integrating both sides with respect to s from 0 to t, we get

$$\begin{aligned}&\int _0^t\int _{{{\mathbb {R}}}^d}|{\hat{G}}_{t-s+h}(\xi )- {\hat{G}}_{t-s}(\xi )|^2\frac{1}{|\xi |^{d-\gamma }} \mathrm{d}\xi \,\mathrm{d}s \nonumber \\&\quad \le \frac{-C^*_1 h^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha } + \frac{C^*_1 (t+h)^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha } + \frac{C^*_1 t^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }\nonumber \\&\qquad +\frac{ 2C^*_1 h^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }-\frac{ 2C^*_1(t+h)^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }\nonumber \\&\quad =\frac{C^*_1 h^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }-\frac{C^*_1 (t+h)^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }+\frac{C^*_1 t^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }\nonumber \\&\quad \le \frac{C^*_1 h^{1-\beta \gamma /\alpha }}{1-\beta \gamma /\alpha }, \end{aligned}$$
(2.24)

where the last inequality follows since \((t+h)^{1-\beta \gamma /\alpha }\ge t^{1-\beta \gamma /\alpha }\). \(\square \)

Lemma 2.7

Suppose that \(\gamma <\alpha \). Then there exists a constant \(c_1\) such that for all \(x,\,y\in {{\mathbb {R}}}^d\), we have

$$\begin{aligned} \int _{{{\mathbb {R}}}^d}\int _{{{\mathbb {R}}}^d}G_t(x-w)G_t(y-z)f(z,\,w)\mathrm{d}w \mathrm{d}z\le \frac{c_1}{t^{\gamma \beta /\alpha }}. \end{aligned}$$

Proof

We start by recording an estimate for the stable heat kernel. By the semigroup property and the fact that \(f(z,\,w)=|z-w|^{-\gamma }\), we can write

$$\begin{aligned} \begin{aligned} \int _{{{\mathbb {R}}}^d}\int _{{{\mathbb {R}}}^d}&p(t,\,x-w)p(t',\,y-z)f(z,\,w)\mathrm{d}w \mathrm{d}z\\&=\int _{{{\mathbb {R}}}^d}p(t+t',\,x-y+w)|w|^{-\gamma }\,\mathrm{d}w\\&\le \frac{c_2}{(t+t')^{\gamma /\alpha }}. \end{aligned} \end{aligned}$$

We use subordination again to write

$$\begin{aligned}&\int _{{{\mathbb {R}}}^d}\int _{{{\mathbb {R}}}^d}G_t(x-w)G_t(y-z)f(z,\,w)\mathrm{d}w \mathrm{d}z\\&\quad =\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d} \int _0^\infty \int _0^\infty p(s,\,x-w)p(s',\,y-z)f_{E_t}(s)f_{E_t}(s')\mathrm{d}s\mathrm{d}s'f(z,\,w)\mathrm{d}w \mathrm{d}z\\&\quad =\int _0^\infty \int _0^\infty \int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d} p(s,\,x-w)p(s',\,y-z)f(z,\,w)\mathrm{d}w \mathrm{d}zf_{E_t}(s)f_{E_t}(s')\mathrm{d}s\mathrm{d}s'\\&\quad \le \int _0^\infty \int _0^\infty \frac{c_2}{(s+s')^{\gamma /\alpha }}f_{E_t}(s)f_{E_t}(s')\mathrm{d}s\mathrm{d}s'\\&\quad \le \int _0^\infty \int _0^\infty \frac{c_2}{s^{\gamma /\alpha }}f_{E_t}(s)f_{E_t}(s')\mathrm{d}s\mathrm{d}s'. \end{aligned}$$

Recalling that \(f_{E_t}\) is the probability density of the inverse subordinator \(E_t\), we can use a change of variable to see that the right hand side of the above display is bounded by

$$\begin{aligned} \frac{c_3}{t^{\gamma \beta /\alpha }}\int _0^\infty u^{\gamma \beta /\alpha }g_\beta (u)\,\mathrm{d}u. \end{aligned}$$

Since the above integral is finite, the result is proved. \(\square \)
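For the Gaussian case \(\alpha =2\) in \(d=1\) (chosen only because the kernel is then explicit; the lemma treats general stable kernels), the estimate behind the first display of the proof can be checked numerically, as the following sketch illustrates with \(\gamma =1/2\).

```python
import math

# Gaussian sanity check (alpha = 2, d = 1, gamma = 1/2 are assumptions made only
# so that everything is explicit): the convolution of the heat kernel with the
# Riesz kernel satisfies int_R p(t,w) |w|^{-gamma} dw = C * t^{-gamma/2},
# the bound behind the first display, with p(t,x) = exp(-x^2/(4t))/sqrt(4 pi t).

gamma = 0.5

def riesz_convolution(t, V=12.0, n=20_000):
    """int_R p(t,w)|w|^{-1/2} dw; the substitution w = v^2 removes the
    integrable singularity at w = 0, leaving 4 * int_0^V p(t, v^2) dv."""
    s = 0.0
    for k in range(n):
        v = (k + 0.5) * V / n
        s += math.exp(-v ** 4 / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)
    return 4.0 * s * V / n

def exact(t):
    # closed form: (4t)^{-gamma/2} * Gamma((1-gamma)/2) / sqrt(pi)
    return (4.0 * t) ** (-gamma / 2.0) * math.gamma((1.0 - gamma) / 2.0) / math.sqrt(math.pi)

for t in (1.0, 4.0, 16.0):
    assert abs(riesz_convolution(t) / exact(t) - 1.0) < 1e-3

# the t^{-gamma/2} scaling: quadrupling t multiplies the integral by 4^{-1/4}
assert abs(riesz_convolution(4.0) / riesz_convolution(1.0) - 4.0 ** (-gamma / 2.0)) < 1e-3
```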

The next result gives the behaviour of the non-random term in the mild formulation of the solution. For notational convenience, we set

$$\begin{aligned} ({\mathcal {G}}u)_t(x):=\int _{{{\mathbb {R}}}^d}G_t(x-y)u_0(y)\,\mathrm{d}y. \end{aligned}$$

The proof will strongly rely on the representation given by (2.2) and we will also need

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_t(x):=\int _{{{\mathbb {R}}}^d}p(t,\,x-y)u_0(y)\,\mathrm{d}y, \end{aligned}$$

where \(p(t,\,x)\) is the heat kernel of the stable process. We will need the fact that for t large enough, we have \(({\tilde{{\mathcal {G}}}} u)_t(x)\ge c_1t^{-d/\alpha }\) for \(x\in B(0,\,t^{1/\alpha })\). We will prove this fact and a bit more in the following. The proof heavily relies on the properties of \(p(t,\,x)\) which we stated earlier in this section.

Lemma 2.8

There exists a \(t_0>0\) large enough such that for all \(t>0\)

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{t+t_0}(x)\ge c_1t^{-d/\alpha },\quad \text {whenever}\quad x\in B(0,\,t^{1/\alpha }), \end{aligned}$$

where \(c_1\) is a positive constant. More generally, there exists a positive constant \(\kappa >0\) such that for \(s\le t\) and \(t\ge t_0\), we have

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{s+t_0}(x)\ge c_2t^{-\kappa },\quad \text {whenever}\quad x\in B(0,\,t^{1/\alpha }). \end{aligned}$$

Here \(c_2\) is some positive constant.

Proof

We begin with the following observation about the heat kernel. Choose \(t_0\) large enough so that \(p(t_0,\,0)\le 1\). We therefore have

$$\begin{aligned} \begin{aligned} p(t_0,\,x-y)&=p(t_0,\,2(x-y)/2)\\&\ge p(t_0,\,2x)p(t_0,\,2y)\\&=\frac{1}{2^d}p(t_0/2^\alpha ,\,x)p(t_0,\,2y). \end{aligned} \end{aligned}$$

This immediately gives

$$\begin{aligned} \begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{t_0}(x)&=\int _{{{\mathbb {R}}}^d}p(t_0,\,x-y)u_0(y)\,\mathrm{d}y\\&\ge c_1p(t_0/2^\alpha ,\,x)\int _{{{\mathbb {R}}}^d}p(t_0,\,2y)u_0(y)\,\mathrm{d}y. \end{aligned} \end{aligned}$$

We now use the semigroup property to obtain

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{t+t_0}(x)= & {} \int _{{{\mathbb {R}}}^d}p(t+t_0,\,x-y)u_0(y)\,\mathrm{d}y\nonumber \\= & {} \int _{{{\mathbb {R}}}^d}p(t,\,x-y )({\tilde{{\mathcal {G}}}} u)_{t_0}(y)\,\mathrm{d}y\nonumber \\\ge & {} c_2p(t+t_0/2,\,x), \end{aligned}$$
(2.25)

This inequality shows that for any fixed \(x\), \(({\tilde{{\mathcal {G}}}} u)_{t+t_0}(x)\) decays as \(t\) goes to infinity. It also shows that

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{t+t_0}(x)\ge c_3t^{-d/\alpha },\quad \text {whenever}\quad |x|\le t^{1/\alpha }. \end{aligned}$$

This follows from the fact that \(p(t+t_0/2,\,x)\ge c_4t^{-d/\alpha }\) if \(|x|\le t^{1/\alpha }\). The more general statement of the lemma needs a bit more work.

$$\begin{aligned} \begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{s+t_0}(x)&\ge c_2p(s+t_0/2,\,x)\\&\ge c_3\left( \frac{t_0}{2s+t_0}\right) ^{d/\alpha }p(t_0,\,x)\\&\ge c_3\left( \frac{t_0}{2s+t_0}\right) ^{d/\alpha }p\left( t_0,\,t^{1/\alpha }\right) . \end{aligned} \end{aligned}$$

Since we are interested in the case when \(s\le t\) and \(t\ge t_0\), the right hand side can be bounded from below as follows:

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u)_{s+t_0}(x)\ge c_4\left( \frac{t_0}{2t+t_0}\right) ^{d/\alpha }\frac{t_0}{t^{d/\alpha +1}}. \end{aligned}$$

The second inequality in the statement of the lemma follows from the above. \(\square \)
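The pointwise kernel inequality \(p(t_0,\,x-y)\ge p(t_0,\,2x)p(t_0,\,2y)\) at the start of the proof can be sanity-checked numerically. The sketch below uses the Gaussian kernel (\(\alpha =2\), \(d=1\), an assumption made for checkability) as a stand-in for the stable kernel; in that case the inequality follows from log-concavity of \(p\) together with \(p(t_0,\,\cdot )\le 1\).

```python
import math, random

# Gaussian stand-in (alpha = 2, d = 1; an assumption for checkability) for the
# kernel inequality used in Lemma 2.8: if p(t0, 0) <= 1 then
#   p(t0, (a+b)/2) >= p(t0, a) * p(t0, b),
# since log-concavity gives p((a+b)/2) >= sqrt(p(a)p(b)) >= p(a)*p(b).

def p(t, x):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

t0 = 1.0
assert p(t0, 0.0) <= 1.0

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-50.0, 50.0), random.uniform(-50.0, 50.0)
    assert p(t0, 0.5 * (a + b)) >= p(t0, a) * p(t0, b)

# with a = 2x, b = -2y and the symmetry p(t, -z) = p(t, z), this is exactly
# p(t0, x - y) >= p(t0, 2x) * p(t0, 2y) from the proof
x, y = 1.3, -0.7
assert p(t0, x - y) >= p(t0, 2.0 * x) * p(t0, 2.0 * y)
```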

Lemma 2.9

There exists a \(t_0>0\) and a constant \(c_1\) such that for all \(t>t_0\) and all \(x\in B(0,\,t^{\beta /\alpha })\), we have

$$\begin{aligned} ({\mathcal {G}}u)_{s+t}(x)\ge \frac{c_1}{t^{\beta \kappa }}\quad \text {for all}\quad s\le t. \end{aligned}$$

Proof

We start off by writing

$$\begin{aligned} \begin{aligned} ({\mathcal {G}}u)_t(x)&=\int _{{{\mathbb {R}}}^d}G_t(x-y)u_0(y)\,\mathrm{d}y\\&=\int _{{{\mathbb {R}}}^d}\int _0^\infty p(s,\,x-y)f_{E_t}(s)\,\mathrm{d}s\,u_0(y)\mathrm{d}y\\&=\int _0^\infty ({\tilde{{\mathcal {G}}}} u)_s(x)f_{E_t}(s)\,\mathrm{d}s. \end{aligned} \end{aligned}$$

After the usual change of variable, we have

$$\begin{aligned} ({\mathcal {G}}u)_t(x)=\int _0^\infty ({\tilde{{\mathcal {G}}}} u)_{(t/u)^\beta }(x)g_\beta (u)\,\mathrm{d}u, \end{aligned}$$

which immediately gives

$$\begin{aligned} \begin{aligned} ({\mathcal {G}}u)_t(x)&\ge \int _0^1({\tilde{{\mathcal {G}}}} u)_{(t/u)^\beta }(x)g_\beta (u)\,\mathrm{d}u. \end{aligned} \end{aligned}$$

The above holds for any time t. In particular, we have

$$\begin{aligned} \begin{aligned} ({\mathcal {G}}u)_{t+s}(x)&\ge \int _0^1({\tilde{{\mathcal {G}}}} u)_{((t+s)/u)^\beta }(x)g_\beta (u)\,\mathrm{d}u. \end{aligned} \end{aligned}$$

We now note that \(x\in B(0,\,t^{\beta /\alpha })\), so we have \(x\in B(0,\,t^{\beta /\alpha }/u)\), and hence for t large enough and \(s\le t\), we have \(({\tilde{{\mathcal {G}}}} u)_{((s+t_0)/u)^\beta }(x)\ge \left( \frac{u}{t}\right) ^{\beta \kappa }\) by the previous lemma. Combining the above estimates, we have the result.\(\square \)
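The change of variable used above involves the density \(g_\beta \) of the standard \(\beta \)-stable subordinator. For \(\beta =1/2\) this density is explicit (a Lévy distribution), which permits a small numerical check, as an illustration only, that \(g_{1/2}\) is a probability density and that the fractional moment appearing in the proof of Lemma 2.7 is finite; the exponent \(p=1/4\) below is an arbitrary admissible stand-in for \(\gamma \beta /\alpha \).

```python
import math

# For beta = 1/2 the subordinator density is the Levy distribution
#   g_{1/2}(u) = u^{-3/2} exp(-1/(4u)) / (2 sqrt(pi)),
# and int_0^infty u^p g_{1/2}(u) du = 4^{-p} Gamma(1/2 - p)/Gamma(1/2) for
# p < 1/2 (via the identity 1/(4U) ~ Gamma(1/2, 1)). We verify both facts.

def g_half(u):
    return u ** -1.5 * math.exp(-1.0 / (4.0 * u)) / (2.0 * math.sqrt(math.pi))

def integral_0inf(f, n=200_000):
    # int_0^infty f(u) du via u = (v/(1-v))^2, midpoint rule on (0,1);
    # this substitution tames both the origin and the heavy u^{-3/2} tail
    s = 0.0
    for k in range(n):
        v = (k + 0.5) / n
        u = (v / (1.0 - v)) ** 2
        s += f(u) * 2.0 * v / (1.0 - v) ** 3
    return s / n

assert abs(integral_0inf(g_half) - 1.0) < 2e-3        # a probability density

p = 0.25                                  # plays the role of gamma*beta/alpha
moment = integral_0inf(lambda u: u ** p * g_half(u))
exact = 4.0 ** -p * math.gamma(0.5 - p) / math.gamma(0.5)
assert abs(moment / exact - 1.0) < 2e-3
```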

Remark 2.10

The above is enough for the lower bound given in Theorem 1.6 and the lower bound described in Theorem 1.10. But we need an analogous result for the noise excitability result, which holds for all \(t>0\). Fix \({\tilde{t}}>0\) such that \(p(t,\,0)\le 1\) whenever \(t\ge {\tilde{t}}\). For any fixed \(t>0\), we choose k large enough so that \(2^kt>{\tilde{t}}\). Set \(t^*:=2^kt\) and \(s=2^{-k}\).

$$\begin{aligned} \begin{aligned} p(t,\,x-y)&=p(st^*,\,x-y)\\&=s^{-d/\alpha }p(t^*,\,s^{-1/\alpha }(x-y))\\&=s^{-d/\alpha }p\left( t^*,\,\frac{s^{-1/\alpha }}{2}(2x-2y)\right) .\\ \end{aligned} \end{aligned}$$

Since \(t^*=2^kt>{\tilde{t}}\), we have \(p(t^*,\,0)\le 1\), and hence

$$\begin{aligned} \begin{aligned} p(t,\,x-y)&\ge s^{-d/\alpha }p\left( t^*,\,2s^{-1/\alpha }x\right) p\left( t^*,\,2s^{-1/\alpha }y\right) \\&=2^{dk/\alpha }p\left( 2^kt,\,2^{1+k/\alpha }x\right) p\left( 2^kt,\,2^{1+k/\alpha }y\right) . \end{aligned} \end{aligned}$$

Note that the above holds for any time t. We therefore have

$$\begin{aligned} \begin{aligned} ({\tilde{{\mathcal {G}}}} u_0)_{t_0+s}(x)&=\int _{{{\mathbb {R}}}^d}p(t_0+s,\,x-y)u_0(y)\,\mathrm{d}y\\&\ge 2^{dk/\alpha }p\left( 2^k(t_0+s),\,2^{1+k/\alpha }x\right) \int _{{{\mathbb {R}}}^d}p\left( 2^k(t_0+s),\,2^{1+k/\alpha }y\right) u_0(y)\,\mathrm{d}y. \end{aligned} \end{aligned}$$

We have that \(t_0+s\ge t_0\). Therefore,

$$\begin{aligned} p\left( 2^k(t_0+s),\,2^{1+k/\alpha }x\right) \ge \left( \frac{t_0}{s+t_0}\right) ^{d/\alpha }p\left( 2^kt_0,\,2^{1+k/\alpha }x\right) \end{aligned}$$

We thus have

$$\begin{aligned} \begin{aligned} ({\tilde{{\mathcal {G}}}} u_0)_{t_0+s}(x)&\ge 2^{dk/\alpha }\left( \frac{t_0}{s+t_0}\right) ^{2d/\alpha }p\left( 2^kt_0,\,2^{1+k/\alpha }x\right) \int _{{{\mathbb {R}}}^d}p\left( 2^kt_0,\,2^{1+k/\alpha }y\right) u_0(y)\,\mathrm{d}y. \end{aligned} \end{aligned}$$

So now since \(|x|\le t^{1/\alpha }\), we have

$$\begin{aligned} ({\tilde{{\mathcal {G}}}} u_0)_{t_0+s}(x)\ge c_1\left( \frac{1}{t_0+s}\right) ^{2d/\alpha }, \end{aligned}$$

where the constant \(c_1\) is dependent on \(t_0\). We can now use similar ideas as in the proof of the previous result to conclude that if \(x\in B(0,\,t^{\beta /\alpha })\), we have

$$\begin{aligned} ({\mathcal {G}}u_0)_{t_0+s}(x)\ge c_2\left( \frac{1}{t_0+s}\right) ^{2\beta d/\alpha }. \end{aligned}$$

Since we have \(s\le t\), we have essentially found a lower bound for \(({\mathcal {G}}u_0)_{t_0+s}(x)\); a bound which depends only on t. This holds for any \(t_0>0\) and any \(t>0\).

We end this section with a few results from [8]. These will be useful for the proofs of our main results.

Lemma 2.11

(Lemma 2.3 in [8]) Let \(0<\rho <1\). Then there exists a positive constant \(c_1\) such that for all \(b\ge (e/\rho )^\rho \),

$$\begin{aligned} \sum _{j=0}^\infty \bigg (\frac{b}{j^\rho }\bigg )^j\ge \exp \bigg (c_1b^{1/\rho }\bigg ). \end{aligned}$$
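Lemma 2.11 can be illustrated numerically (with \(\rho =1/2\), an arbitrary choice): a Laplace-type argument locates the largest summand near \(j^*=b^2/e\), of size about \(\exp (b^2/2e)\), so the logarithm of the sum is indeed at least of order \(b^{1/\rho }\).

```python
import math

# Numerical illustration of Lemma 2.11 for rho = 1/2 (an arbitrary choice).
# The largest summand of S(b) = sum_j (b/j^rho)^j sits near j* = b^2/e and has
# size about exp(b^2/(2e)), so log S(b) >= b^{1/rho}/(2e); we check exactly that.

rho = 0.5

def S(b, jmax=5_000):
    total = 1.0                       # j = 0 term, with the convention (b/0^rho)^0 = 1
    for j in range(1, jmax + 1):
        total += (b / j ** rho) ** j  # underflows harmlessly to 0.0 for large j
    return total

for b in (5.0, 10.0):
    assert b >= (math.e / rho) ** rho            # hypothesis of the lemma
    assert math.log(S(b)) >= b ** (1.0 / rho) / (2.0 * math.e)
```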

Proposition 2.12

(Proposition 2.5 in [8]) Let \(\rho >0\) and suppose f(t) is a locally integrable function satisfying

$$\begin{aligned} f(t)\le c_1+\kappa \int _0^t(t-s)^{\rho -1} f(s)\mathrm{d}s \ \ \mathrm {for all} \ \ t>0, \end{aligned}$$

where \(c_1\) is some positive number. Then, we have

$$\begin{aligned} f(t)\le c_2\exp \left( c_3(\Gamma (\rho ))^{1/\rho }\kappa ^{1/\rho } t\right) \ \ \mathrm {for all} \ \ t>0, \end{aligned}$$

for some positive constants \(c_2\) and \(c_3\).
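The growth rate asserted in Proposition 2.12 can also be observed numerically (an illustration, not the proof): solving the corresponding Volterra equation at equality, for the arbitrary choices \(\rho =1/2\) and \(c_1=\kappa =1\), by a simple product-integration scheme exhibits the exponential growth.

```python
import math

# Solve f(t) = c1 + kappa * int_0^t (t-s)^{rho-1} f(s) ds numerically for
# rho = 1/2 and c1 = kappa = 1 (arbitrary choices) by product integration:
# f is frozen at the left endpoint of each cell and the weakly singular kernel
# is integrated exactly over the cell. The solution should grow exponentially.

rho, c1, kappa = 0.5, 1.0, 1.0
T, n = 4.0, 800
dt = T / n
f = [c1]
for m in range(1, n + 1):
    tm = m * dt
    acc = 0.0
    for j in range(m):
        # exact integral of (tm - s)^{rho - 1} over the cell [j*dt, (j+1)*dt]
        w = ((tm - j * dt) ** rho - (tm - (j + 1) * dt) ** rho) / rho
        acc += w * f[j]
    f.append(c1 + kappa * acc)

assert all(f[m + 1] >= f[m] for m in range(n))        # f is nondecreasing
# log f grows roughly linearly in t, i.e. f grows exponentially: the increment
# of log f over [T/2, T] is comparable to twice the increment over [T/4, T/2]
r1 = math.log(f[n // 2]) - math.log(f[n // 4])
r2 = math.log(f[n]) - math.log(f[n // 2])
assert r2 >= 1.5 * r1
```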

We also give the following converse.

Proposition 2.13

(Proposition 2.6 in [8]) Let \(\rho >0\) and suppose f(t) is a nonnegative, locally integrable function satisfying

$$\begin{aligned} f(t)\ge c_1+\kappa \int _0^t(t-s)^{\rho -1} f(s)\mathrm{d}s \ \ \mathrm {for all} \ \ t>0, \end{aligned}$$

where \(c_1\) is some positive number. Then, we have

$$\begin{aligned} f(t)\ge c_2\exp \left( c_3(\Gamma (\rho ))^{1/\rho }\kappa ^{1/\rho } t\right) \ \ \mathrm {for all} \ \ t>0, \end{aligned}$$

for some positive constants \(c_2\) and \(c_3\).

3 Proofs for the white noise case

3.1 Proofs of Theorem 1.4

Proof

We first show the existence of a unique solution. This follows from a standard Picard iteration argument; see [23]. We briefly spell out the main ideas here; for more details, see [18]. Set

$$\begin{aligned} u_t^{(0)}(x):=({\mathcal {G}}u_0)_t(x) \end{aligned}$$

and

$$\begin{aligned} u_t^{(n+1)}(x):=({\mathcal {G}}u_0)_t(x)+\lambda \int _0^t\int _{{{\mathbb {R}}}^d}G_{t-s}(x-y)\sigma \left( u^{(n)}_s(y)\right) W(\mathrm{d}y\,\mathrm{d}s)\quad \text {for}\quad n\ge 0. \end{aligned}$$

Define \(D_n(t\,,x):={{\mathbb {E}}}|u^{(n+1)}_t(x)-u^{(n)}_t(x)|^2\) and \(H_n(t):=\sup _{x\in {{\mathbb {R}}}^d}D_n(t\,,x)\). We will prove the result for \(t\in [0,\,T]\), where T is some fixed number. We now use this notation together with Walsh’s isometry and the assumption on \(\sigma \) to write

$$\begin{aligned} \begin{aligned} D_n(t,\,x)&=\lambda ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y){{\mathbb {E}}}\left| \sigma \left( u^{(n)}_s(y)\right) -\sigma \left( u^{(n-1)}_s(y)\right) \right| ^2\mathrm{d}y\,\mathrm{d}s\\&\le \lambda ^2L_\sigma ^2\int _0^tH_{n-1}(s)\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y)\,\mathrm{d}y\,\mathrm{d}s\\&\le \lambda ^2L_\sigma ^2\int _0^t\frac{H_{n-1}(s)}{(t-s)^{d\beta /\alpha }}\,\mathrm{d}s. \end{aligned} \end{aligned}$$

We therefore have

$$\begin{aligned} H_{n}(t)\le \lambda ^2L_\sigma ^2\int _0^t\frac{H_{n-1}(s)}{(t-s)^{d\beta /\alpha }}\,\mathrm{d}s. \end{aligned}$$

We now note that the integral appearing on the right hand side of the above display is finite when \(d<\alpha /\beta \). Hence, by Lemma 3.3 in Walsh [23], the series \(\sum _{n=0}^\infty H^{\frac{1}{2}}_n(t)\) converges uniformly on \([0,\,T].\) Therefore, the sequence \(\{u_n\}\) converges in \(L^2\) and uniformly on \([0,\,T]\times {{\mathbb {R}}}^d\) and the limit satisfies (1.4). We can prove uniqueness in a similar way. We now turn to the proof of the exponential bound. From Walsh’s isometry, we have

$$\begin{aligned} {{\mathbb {E}}}|u_t(x)|^2=|({\mathcal {G}}u_0)_t(x)|^2+\lambda ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y){{\mathbb {E}}}|\sigma (u_s(y))|^2\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

Since we are assuming that the initial condition is bounded, we have that \(|({\mathcal {G}}u_0)_t(x)|^2\le c_1\) and the second term is bounded by

$$\begin{aligned} \begin{aligned}&\lambda ^2L_\sigma ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y){{\mathbb {E}}}|u_s(y)|^2\mathrm{d}y\,\mathrm{d}s\\&\quad \le c_1\lambda ^2L_\sigma ^2\int _0^t\frac{1}{(t-s)^{d\beta /\alpha }}\sup _{y\in {{\mathbb {R}}}^d}{{\mathbb {E}}}|u_s(y)|^2\,\mathrm{d}s. \end{aligned} \end{aligned}$$

We therefore have

$$\begin{aligned} \sup _{x\in {{\mathbb {R}}}^d}{{\mathbb {E}}}|u_t(x)|^2\le c_1+c_2\lambda ^2L_\sigma ^2\int _0^t\frac{1}{(t-s)^{d\beta /\alpha }}\sup _{y\in {{\mathbb {R}}}^d}{{\mathbb {E}}}|u_s(y)|^2\,\mathrm{d}s. \end{aligned}$$

The renewal inequality in Proposition 2.12 with \(\rho =(\alpha -d\beta )/\alpha \) proves the result. \(\square \)
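The convergence mechanism in the Picard scheme above can be made concrete in a toy computation (an illustration; the constants \(C\) and \(q\) below are arbitrary choices): iterating \(H_n(t)=C\int _0^t(t-s)^{-q}H_{n-1}(s)\,\mathrm{d}s\) with \(q<1\) playing the role of \(d\beta /\alpha \) gives \(H_n(t)=(C\Gamma (1-q))^nt^{n(1-q)}/\Gamma (n(1-q)+1)\), whose square roots are summable; this is exactly what Walsh's Lemma 3.3 exploits.

```python
import math

# Toy illustration of the Picard mechanism: iterate
#   H_n(t) = C * int_0^t (t - s)^{-q} H_{n-1}(s) ds,  H_0 = 1,  0 < q < 1,
# (q playing the role of d*beta/alpha) and compare with the closed form
#   H_n(t) = (C*Gamma(1-q))^n * t^{n(1-q)} / Gamma(n(1-q) + 1),
# whose square roots sum, so the Picard iterates form a Cauchy sequence.

q, C, T, n_grid = 0.5, 2.0, 1.0, 600
dt = T / n_grid
ts = [m * dt for m in range(n_grid + 1)]

def apply_op(H):
    out = [0.0]
    for m in range(1, n_grid + 1):
        acc = 0.0
        for j in range(m):
            # exact integral of (t_m - s)^{-q} over the cell [t_j, t_{j+1}]
            w = ((ts[m] - ts[j]) ** (1 - q) - (ts[m] - ts[j + 1]) ** (1 - q)) / (1 - q)
            acc += w * H[j]
        out.append(C * acc)
    return out

def closed_form(n, t):
    a = n * (1 - q)
    return (C * math.gamma(1 - q)) ** n * t ** a / math.gamma(a + 1)

H = [1.0] * (n_grid + 1)
for n in (1, 2, 3, 4):
    H = apply_op(H)
    assert abs(H[-1] / closed_form(n, T) - 1.0) < 0.05

# the factorial-type denominator eventually wins, so sum_n sqrt(H_n(T)) converges
assert closed_form(200, T) < 1e-30
```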

3.2 Proof of Theorem 1.6

The proof of Theorem 1.6 will rely on the following observation. From Walsh's isometry, we have

$$\begin{aligned} {{\mathbb {E}}}|u_t(x)|^2=|({\mathcal {G}}u_0)_t(x)|^2+\lambda ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y){{\mathbb {E}}}|\sigma (u_s(y))|^2\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

For any fixed \(t_0>0\), we use a change of variable and the fact that all the terms are non-negative to obtain

$$\begin{aligned} {{\mathbb {E}}}|u_{t+t_0}(x)|^2\ge |({\mathcal {G}}u_0)_{t+t_0}(x)|^2+\lambda ^2l_\sigma ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y){{\mathbb {E}}}|u_{s+t_0}(y)|^2\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

Using the above relation again, we obtain

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}|u_{t+t_0}(x)|^2\ge |({\mathcal {G}}u_0)_{t+t_0}(x)|^2\\&\quad +\lambda ^2l_\sigma ^2\int _0^t\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y)|({\mathcal {G}}u_0)_{s+t_0}(x)|^2\mathrm{d}y\,\mathrm{d}s\\&\quad +\lambda ^4l_\sigma ^4\int _0^t\int _{{{\mathbb {R}}}^d}\int _0^s\int _{{{\mathbb {R}}}^d}G^2_{t-s}(x-y)G^2_{s-s_1}(y-z){{\mathbb {E}}}|u_{s_1+t_0}(z)|^2\mathrm{d}z\,\mathrm{d}s_1\mathrm{d}y\,\mathrm{d}s. \end{aligned} \end{aligned}$$

Using the same procedure recursively, we obtain

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}|u_{t+t_0}(x)|^2\\&\quad \ge |({\mathcal {G}}u_0)_{t+t_0}(x)|^2\\&\qquad +\sum _{k=1}^\infty \lambda ^{2k}l_\sigma ^{2k}\int _0^t\int _{{{\mathbb {R}}}^d}\int _0^{s_1}\int _{{{\mathbb {R}}}^d}\dots \int _0^{s_{k-1}}\int _{{{\mathbb {R}}}^d} |({\mathcal {G}}u_0)_{t_0+s_k}(z_k)|^2\\&\qquad \prod _{i=1}^{k}G^2_{s_{i-1}-s_i}(z_{i-1},z_i)\,\mathrm{d}z_{k+1-i}\,\mathrm{d}s_{k+1-i}, \end{aligned} \end{aligned}$$
(3.1)

where we have used the convention that \(s_0:=t\) and \(z_0:=x\). Let \(x\in B(0,\,t^{\beta /\alpha })\) and \(0\le s\le t\) and set

$$\begin{aligned} ({\mathcal {G}}u_0)_{t_0+s}(x)\ge g_t. \end{aligned}$$
(3.2)

The existence of such a function \(g_t\) is guaranteed by Lemma 2.9 and Remark 2.10. We can now use the above representation to prove the following result.

Proposition 3.1

Fix \(t_0>0\). Then, for \(t\ge 0\),

$$\begin{aligned} {{\mathbb {E}}}\left| u_{t+t_0}(x)\right| ^2\ge g_t^2\sum _{k=0}^\infty \left( \lambda ^2l_\sigma ^2c_1 \right) ^k\left( \frac{t}{k}\right) ^{k(\alpha -\beta d)/\alpha }\quad \text {for}\quad x\in B(0, t^{\beta /\alpha }), \end{aligned}$$

where \(c_1\) is a positive constant.

Proof

Our starting point is (3.1). Recall the notation introduced above,

$$\begin{aligned} ({\mathcal {G}}u_0)_{t_0+s_k}(z_k)\ge g_t, \end{aligned}$$

whenever \(z_k\in B(0, t^{\beta /\alpha })\) and \(0\le s_k\le t\). The infinite sum on the right of (3.1) is thus bounded below by

$$\begin{aligned} g_t^2\sum _{k=1}^\infty \lambda ^{2k}l_\sigma ^{2k}\int _0^t\int _{{{\mathbb {R}}}^d}\int _0^{s_1}\int _{{{\mathbb {R}}}^d}\dots \int _0^{s_{k-1}}\int _{B(0,\,t^{\beta /\alpha })}\prod _{i=1}^{k}G^2_{s_{i-1}-s_i}(z_{i-1},z_i)\,\mathrm{d}z_{k+1-i}\,\mathrm{d}s_{k+1-i}. \end{aligned}$$

We now reduce the temporal domain of integration and make an appropriate change of variable to find a lower bound for the above display:

$$\begin{aligned} g_t^2\sum _{k=1}^\infty \lambda ^{2k}l_\sigma ^{2k}\int _0^{t/k}\int _{{{\mathbb {R}}}^d}\int _0^{t/k}\int _{{{\mathbb {R}}}^d}\dots \int _0^{t/k}\int _{B(0,\,t^{\beta /\alpha })}\prod _{i=1}^{k}G^2_{s_i}(z_{i-1},z_i)\,\mathrm{d}z_{k+1-i}\,\mathrm{d}s_{k+1-i}. \end{aligned}$$

We will reduce the domain of the function

$$\begin{aligned} \prod _{i=1}^{k}G^2_{s_i}(z_{i-1},z_i), \end{aligned}$$

by choosing the points \(z_i\) appropriately so that they are "not too far away". We choose \(z_1\in B(0,\,t^{\beta /\alpha })\) such that \(|z_1-z_0|\le s_1^{\beta /\alpha }\). In general, for \(i=1,\ldots ,k\), we choose \(z_i\in B(z_{i-1},\,s_i^{\beta /\alpha })\cap B(0,\,t^{\beta /\alpha })\). An immediate consequence of this restriction is that

$$\begin{aligned} \prod _{i=1}^{k}G^2_{s_i}(z_{i-1},z_i)\ge \prod _{i=1}^{k}\frac{c_1}{s_i^{2d\beta /\alpha }}. \end{aligned}$$

Since the volume of the set \(B(z_{i-1},\,s_i^{\beta /\alpha })\cap B(0,\,t^{\beta /\alpha })\) is bounded below by \(c_2s_i^{d\beta /\alpha }\), we have

$$\begin{aligned} \begin{aligned}&\int _0^{t/k}\int _{{{\mathbb {R}}}^d}\int _0^{t/k}\int _{{{\mathbb {R}}}^d}\dots \int _0^{t/k}\int _{B(0,\,t^{\beta /\alpha })}\prod _{i=1}^{k}G^2_{s_i}(z_{i-1},z_i)\,\mathrm{d}z_{k+1-i}\,\mathrm{d}s_{k+1-i}\\&\quad \ge \int _0^{t/k}\cdots \int _0^{t/k} \frac{c_3^k}{\prod _{i=1}^{k}s_i^{d\beta /\alpha }} \mathrm{d}s_{1}\cdots \mathrm{d}s_{k}\\&\quad =c^k_3\left( \frac{t}{k}\right) ^{(\alpha -d\beta )k/\alpha }. \end{aligned} \end{aligned}$$

Putting all the estimates together we have

$$\begin{aligned} \begin{aligned} {{\mathbb {E}}}|&u_{t+t_0}(x)|^2\ge g_t^2\sum _{k=0}^\infty \lambda ^{2k}l_\sigma ^{2k}c_4^k\left( \frac{t}{k}\right) ^{(\alpha -d\beta )k/\alpha }. \end{aligned} \end{aligned}$$

\(\square \)

Proof of Theorem 1.6

We make the important observation that \(g_t\) decays no faster than polynomially. After a simple substitution and the use of Lemma 2.11, the theorem is proved. \(\square \)

Remark 3.2

It should be noted that we do not need the full statement of Proposition 3.1. All that we need is the statement when time is large.

3.3 Proof of Theorem 1.8

Proof

From the upper bound in Theorem 1.4, we have that for any \(x\in {{\mathbb {R}}^d}\)

$$\begin{aligned} {{\mathbb {E}}}|u_t(x)|^2\le c_1e^{c_2\lambda ^{\frac{2\alpha }{\alpha -d\beta }}t}\quad \text {for all}\quad t>0, \end{aligned}$$

from which we have

$$\begin{aligned} \limsup _{\lambda \rightarrow \infty } \frac{\log \log {{\mathbb {E}}}|u_t(x)|^2}{\log \lambda }\le \frac{2\alpha }{\alpha -d\beta }. \end{aligned}$$

Next, we will establish a lower bound. Fix \(x\in {{\mathbb {R}}^d}\). For any \(t>0\), we can always find a time \(t_0\) such that \(t=t-t_0+t_0\) and \(t-t_0>0\). If t is already large enough so that \(x\in B(0,t^{\beta /\alpha })\), then by Proposition 3.1 and Lemma 2.11 we get

$$\begin{aligned} \liminf _{\lambda \rightarrow \infty } \frac{\log \log {{\mathbb {E}}}|u_t(x)|^2}{\log \lambda }\ge \frac{2\alpha }{\alpha -d\beta }. \end{aligned}$$

Now if \(x\notin B(0,t^{\beta /\alpha })\), we can choose a \(\kappa >0\) so that \(x\in B(0,(\kappa t)^{\beta /\alpha })\). Then we can use the ideas in Proposition 3.1 to end up with

$$\begin{aligned} {{\mathbb {E}}}|u_{t+t_0}(x)|^2\ge g^2_{\kappa t}\sum _{k=0}^\infty \left( \lambda ^2l_\sigma ^2c_1 \right) ^k\left( \frac{t}{k}\right) ^{k(\alpha -\beta d)/\alpha }, \end{aligned}$$

and the result follows from this using Lemma 2.11. \(\square \)

4 Proofs for the colored noise case

4.1 Proof of upper bound in Theorem 1.10

Proof

The proof of existence and uniqueness is standard. For more information, see [23]. We set

$$\begin{aligned} u^{(0)}(t,\,x):=({\mathcal {G}}u_0)_t(x), \end{aligned}$$

and

$$\begin{aligned} u^{(n+1)}(t,\,x):=({\mathcal {G}}u_0)_t(x)+\lambda \int _0^t\int _{{{\mathbb {R}}}^d}G_{t-s}(x-y)\sigma \left( u^{(n)}(s,\,y)\right) F(\mathrm{d}y\,\mathrm{d}s), \quad n\ge 0. \end{aligned}$$

Define \(D_n(t\,,x):={{\mathbb {E}}}|u^{(n+1)}(t,\,x)-u^{(n)}(t,\,x)|^2, H_n(t):=\sup _{x\in {{\mathbb {R}}}^d}D_n(t\,,x)\) and \(\Sigma (t,y,n)=\left| \sigma (u^{(n)}(t,\,y))-\sigma (u^{(n-1)}(t,\,y))\right| \). We will prove the result for \(t\in [0,\,T]\) where T is some fixed number. We now use this notation together with the covariance formula (1.6) and the assumption on \(\sigma \) to write

$$\begin{aligned} \begin{aligned}&D_n(t,\,x) \\&\quad =\lambda ^2\int _0^t\int _{{{\mathbb {R}}}^d}\int _{{\mathbb {R}}^d}G_{t-s}(x-y)G_{t-s}(x-z){{\mathbb {E}}}[\Sigma (s,y,n) \Sigma (s,z,n)] f(y,z)\mathrm{d}y d z \mathrm{d}s. \end{aligned} \end{aligned}$$

Now we estimate the expectation on the right hand side using the Cauchy-Schwarz inequality:

$$\begin{aligned} \begin{aligned} {{\mathbb {E}}}[\Sigma (s,y,n) \Sigma (s,z,n)]&\le L_\sigma ^2{{\mathbb {E}}}\left| u^{(n)}(s,\,y)-u^{(n-1)}(s,\,y)\right| \left| u^{(n)}(s,\,z)-u^{(n-1)}(s,\,z)\right| \\&\quad \le L_\sigma ^2\bigg ({{\mathbb {E}}}\left| u^{(n)}(s,\,y)-u^{(n-1)}(s,\,y)\right| ^2\bigg )^{1/2}\\&\qquad \times \bigg ({{\mathbb {E}}}\left| u^{(n)}(s,\,z)-u^{(n-1)}(s,\,z)\right| ^2\bigg )^{1/2}\\&\quad \le L_\sigma ^2 \bigg ( D_{n-1}(s,y) D_{n-1}(s,z)\bigg )^{1/2}\\&\quad \le L_\sigma ^2 H_{n-1}(s). \end{aligned} \end{aligned}$$

Hence, using Lemma 2.7, we have for \(\gamma <\alpha \)

$$\begin{aligned} \begin{aligned}&D_n(t,\,x) \\&\quad \le \lambda ^2L_\sigma ^2\int _0^t H_{n-1}(s)\int _{{{\mathbb {R}}}^d}\int _{{\mathbb {R}}^d}G_{t-s}(x-y)G_{t-s}(x-z) f(y,z)\mathrm{d}y \mathrm{d}z \,\mathrm{d}s\\&\quad \le c_1 \lambda ^2L_\sigma ^2\int _0^t \frac{H_{n-1}(s)}{(t-s)^{\gamma \beta /\alpha }}\,\mathrm{d}s. \end{aligned} \end{aligned}$$

We therefore have

$$\begin{aligned} H_{n}(t)\le c_1 \lambda ^2L_\sigma ^2\int _0^t \frac{H_{n-1}(s)}{(t-s)^{\gamma \beta /\alpha }}\,\mathrm{d}s. \end{aligned}$$

We now note that the integral appearing on the right hand side of the above display is finite when \(\gamma <\alpha /\beta \). Hence, by Lemma 3.3 in Walsh [23], the series \(\sum _{n=0}^\infty H^{\frac{1}{2}}_n(t)\) converges uniformly on \([0,\,T].\) Therefore, the sequence \(\{u_n\}\) converges in \(L^2\) and uniformly on \([0,\,T]\times {{\mathbb {R}}}^d\) and the limit satisfies (1.7). We can prove uniqueness in a similar way.

We now turn to the proof of the exponential bound. Set

$$\begin{aligned} A(t):=\sup _{x\in {{\mathbb {R}}^d}}{{\mathbb {E}}}|u_t(x)|^2. \end{aligned}$$

We claim that there exist constants \(c_4, c_5\) such that for all \(t>0\), we have

$$\begin{aligned} A(t)\le c_4+ c_5(\lambda L_\sigma )^2\int _0^t \frac{A(s)}{(t-s)^{\beta \gamma /\alpha }}\,\mathrm{d}s. \end{aligned}$$

The renewal inequality in Proposition 2.12 with \(\rho =(\alpha -\gamma \beta )/\alpha \) then proves the exponential upper bound. To prove this claim, we start with the mild formulation given by (1.7), then take the second moment to obtain the following

$$\begin{aligned} {{\mathbb {E}}}|u_t(x)|^2= & {} |({\mathcal {G}}u)_t(x)|^2\nonumber \\&+\lambda ^2 \int _0^t\int _{{{\mathbb {R}}^d}\times {{\mathbb {R}}^d}}G_{t-s}(x,y)G_{t-s}(x,z)f(y,z){{\mathbb {E}}}[\sigma (u_s(y))\sigma (u_s(z))]\mathrm{d}y\mathrm{d}z\mathrm{d}s\nonumber \\= & {} I_1+I_2. \end{aligned}$$
(4.1)

Since \(u_0\) is bounded, we have \(I_1\le c_4\). Next we use the assumption on \(\sigma \) together with Hölder’s inequality to see that

$$\begin{aligned} \begin{aligned} {{\mathbb {E}}}[\sigma (u_s(y))\sigma (u_s(z))]&\le L_\sigma ^2 {{\mathbb {E}}}[u_s(y)u_s(z)]\\&\le L_\sigma ^2\left[ {{\mathbb {E}}}\left| u_s(y)\right| ^2\right] ^{1/2} \left[ {{\mathbb {E}}}\left| u_s(z)\right| ^2\right] ^{1/2}\\&\le L_\sigma ^2\sup _{x\in {{\mathbb {R}}^d}}{{\mathbb {E}}}|u_s(x)|^2. \end{aligned} \end{aligned}$$
(4.2)

Therefore, using Lemma 2.7, the second term \(I_2\) is bounded as follows.

$$\begin{aligned} I_2\le c_5 (\lambda L_\sigma )^2\int _0^t\frac{A(s)}{(t-s)^{\beta \gamma /\alpha }}\, \mathrm{d}s. \end{aligned}$$

Combining the above estimates, we obtain the required result in the claim. \(\square \)

4.2 Proof of lower bound in Theorem 1.10

The proof of the lower bound hinges on the following recursive argument.

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}|u_t(x)|^2\\&\quad =|({\mathcal {G}}u)_t(x)|^2+\lambda ^2 \int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad G(t-s_1,\,x,\,z_1)G\left( t-s_1,\,x,\,z_1'\right) {{\mathbb {E}}}\left[ \sigma \left( u_{s_1}(z_1)\right) \sigma \left( u_{s_1}(z_1')\right) \right] f\left( z_1,z_1'\right) \mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1. \end{aligned} \end{aligned}$$

We now use the assumption that \(\sigma (x)\ge l_\sigma |x|\) for all x to reduce the above to

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\left| u_t(x)\right| ^2\\&\quad \ge \left| ({\mathcal {G}}u)_t(x)\right| ^2+\lambda ^2 l_\sigma ^2\int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad G\left( t-s_1,\,x,\,z_1\right) G\left( t-s_1,\,x,\,z_1'\right) {{\mathbb {E}}}\left| u_{s_1}(z_1)u_{s_1}\left( z_1'\right) \right| f\left( z_1,z_1'\right) \mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1. \end{aligned} \end{aligned}$$

We now replace the t above by \(t+{\tilde{t}}\) and use a substitution to reduce the above to

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}|u_{t+{\tilde{t}}}(x)|^2\\&\quad \ge \left| ({\mathcal {G}}u)_{t+{\tilde{t}}}(x)\right| ^2+\lambda ^2 l_\sigma ^2\int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad G\left( t-s_1,\,x,\,z_1\right) G\left( t-s_1,\,x,\,z_1'\right) {{\mathbb {E}}}\left| u_{{\tilde{t}}+s_1}(z_1)u_{{\tilde{t}}+s_1}(z_1')\right| f\left( z_1,z_1'\right) \mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1. \end{aligned} \end{aligned}$$

We also have

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}|u_{{\tilde{t}}+s_1}(z_1)u_{{\tilde{t}}+s_1}(z_1')|\ge |({\mathcal {G}}u)_{{\tilde{t}}+s_1}(z_1)({\mathcal {G}}u)_{{\tilde{t}}+s_1}(z_1')|+\lambda ^2l_\sigma ^2\int _0^{s_1}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d} \\&\quad G\left( s_1-s_2,\,z_1,\,z_2\right) G\left( s_1-s_2,\,z_1',\,z_2'\right) {{\mathbb {E}}}\left| u_{{\tilde{t}}+s_2}(z_2)u_{{\tilde{t}}+s_2}\left( z_2'\right) \right| f\left( z_2, z_2'\right) \mathrm{d}z_2\mathrm{d}z_2'\mathrm{d}s_2. \end{aligned} \end{aligned}$$

The above two inequalities thus give us

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\left| u_{t+{\tilde{t}}}(x)\right| ^2\\&\quad \ge \left| ({\mathcal {G}}u)_{t+{\tilde{t}}}(x)\right| ^2+\lambda ^2 l_\sigma ^2\int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad \times G\left( t-s_1,\,x,\,z_1\right) G\left( t-s_1,\,x,\,z_1'\right) {{\mathbb {E}}}\left| u_{{\tilde{t}}+s_1}(z_1)u_{{\tilde{t}}+s_1}\left( z_1'\right) \right| f\left( z_1,z_1'\right) \mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1\\&\quad \ge \left| ({\mathcal {G}}u)_{{\tilde{t}}+t}(x)\right| ^2+\lambda ^2l_\sigma ^2\int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad \times G\left( t-s_1,\,x,\,z_1\right) G\left( t-s_1,\,x,\,z_1'\right) f\left( z_1,z_1'\right) ({\mathcal {G}}u)_{{\tilde{t}}+s_1}(z_1)({\mathcal {G}}u)_{{\tilde{t}}+s_1}(z_1')\mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1\\&\qquad +(\lambda l_\sigma )^4\int _0^{t}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}G(t-s_1,\,x,\,z_1)G(t-s_1,\,x,\,z_1')f(z_1,z_1')\int _0^{s_1}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\\&\qquad \times G(s_1-s_2,\,z_1,\,z_2)G\left( s_1-s_2,\,z_1',\,z_2'\right) {{\mathbb {E}}}\left| u_{{\tilde{t}}+s_2}(z_2)u_{{\tilde{t}}+s_2}(z_2')\right| f\left( z_2, z_2'\right) \mathrm{d}z_2\mathrm{d}z_2'\mathrm{d}s_2\mathrm{d}z_1\mathrm{d}z_1'\mathrm{d}s_1. \end{aligned} \end{aligned}$$
(4.3)

We set \(z_0=z_0':=x\) and \(s_0:=t\) and continue the recursion as above to obtain

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\left| u_{{\tilde{t}}+t}(x)\right| ^2\\&\quad \ge \left| ({\mathcal {G}}u)_{{\tilde{t}}+t}(x)\right| ^2\\&\qquad + \sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\int _0^{s_1}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\cdots \int _0^{s_{k-1}}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d} \left| ({\mathcal {G}}u)_{{\tilde{t}}+s_k}(z_k)({\mathcal {G}}u)_{{\tilde{t}}+s_k}(z_k')\right| \\&\qquad \prod _{i=1}^kG\left( s_{i-1}-s_{i}, z_{i-1},\,z_i\right) G\left( s_{i-1}-s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i. \end{aligned} \end{aligned}$$
(4.4)


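The recursion that produces (4.4) has a simple scalar caricature: if a nonnegative function satisfies \(M(t)\ge A+b\int _0^t M(s)\,\mathrm{d}s\) with \(A,b>0\), then feeding the inequality back into itself \(k\) times generates the partial sums of \(A\sum _k (bt)^k/k!\), hence exponential growth in the limit. The sketch below (illustrative only, with made-up values \(A=b=1\); it is not the SPDE recursion itself) iterates this scalar inequality numerically:

```python
import math

def iterate_lower_bound(A, b, t, iterations, n=2000):
    """Iterate the map M -> A + b * int_0^t M(s) ds on a grid, starting from M = A.

    Each pass adds one more nonnegative term of the series A * sum (b t)^k / k!,
    so the returned values increase toward A * exp(b * t).
    """
    h = t / n
    grid = [A] * (n + 1)
    for _ in range(iterations):
        new = [A]
        acc = 0.0  # running trapezoid-rule integral of the current lower bound
        for i in range(1, n + 1):
            acc += 0.5 * (grid[i - 1] + grid[i]) * h
            new.append(A + b * acc)
        grid = new
    return grid[-1]  # lower bound for M(t)
```

Successive iterates increase monotonically, mirroring how each pass through the recursion behind (4.4) contributes one more nonnegative term to the series.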
Proposition 4.1

There exists a \(t_0>0\) such that for \(t>t_0\),

$$\begin{aligned} {{\mathbb {E}}}|u_{t+t_0}(x)|^2\ge g_t^2\sum _{k=0}^\infty \left( \lambda ^2l_\sigma ^2c_1 \right) ^k\left( \frac{t}{k}\right) ^{k(\alpha -\gamma \beta )/\alpha }\quad \text {whenever}\ x \in B(0, t^{\beta /\alpha }), \end{aligned}$$

where \(c_1\) is a positive constant.

Proof

We will look at the following term, which comes from the recursive relation described above:

$$\begin{aligned} \begin{aligned}&\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\int _0^{s_1}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\cdots \int _0^{s_{k-1}}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d} \left| ({\mathcal {G}}u)_{{\tilde{t}}+s_k}(z_k)({\mathcal {G}}u)_{{\tilde{t}}+s_k}\left( z_k'\right) \right| \\&\quad \prod _{i=1}^kG\left( s_{i-1}-s_{i}, z_{i-1},\,z_i\right) G\left( s_{i-1}-s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i. \end{aligned} \end{aligned}$$

We can bound the above term from below by

$$\begin{aligned} \begin{aligned}&g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\int _0^{s_1}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\cdots \int _0^{s_{k-1}}\int _{B\left( 0,\,t^{\beta /\alpha }\right) \times B\left( 0,\,t^{\beta /\alpha }\right) } \\&\quad \prod _{i=1}^kG\left( s_{i-1}-s_{i}, z_{i-1},\,z_i\right) G\left( s_{i-1}-s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i. \end{aligned} \end{aligned}$$

We now make a substitution and reduce the temporal region of integration to write

$$\begin{aligned} \begin{aligned}&g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^{t/k}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\int _0^{t/k}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\cdots \int _0^{t/k}\int _{B\left( 0,\,t^{\beta /\alpha }\right) \times B\left( 0,\,t^{\beta /\alpha }\right) } \\&\quad \prod _{i=1}^kG\left( s_{i}, z_{i-1},\,z_i\right) G\left( s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i. \end{aligned} \end{aligned}$$

We will further reduce the domain of integration so that the function

$$\begin{aligned} \prod _{i=1}^kG\left( s_{i}, z_{i-1},\,z_i\right) G\left( s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) , \end{aligned}$$

has the required lower bound. For \(i=1,\ldots , k\), we take

$$\begin{aligned} z_i\in B\left( x,\,s_1^{\beta /\alpha }/2\right) \cap B\left( z_{i-1},\,s_i^{\beta /\alpha }\right) \end{aligned}$$

and

$$\begin{aligned} z'_i\in B\left( x,\,s_1^{\beta /\alpha }/2\right) \cap B\left( z'_{i-1},\,s_i^{\beta /\alpha }\right) . \end{aligned}$$

We therefore have \(|z_i-z'_i|\le s_1^{\beta /\alpha }, |z_i-z_{i-1}|\le s_i^{\beta /\alpha }\) and \(\left| z'_i-z'_{i-1}\right| \le s_i^{\beta /\alpha }\). We use the lower bound on the heat kernel to find that

$$\begin{aligned} \begin{aligned}&\prod _{i=1}^k G\left( s_{i}, z_{i-1},\,z_i\right) G\left( s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \\&\quad \ge \frac{c^k}{s_1^{k\gamma \beta /\alpha }}\prod _{i=1}^k\frac{1}{s_i^{2\beta d/\alpha }}, \end{aligned} \end{aligned}$$

for some \(c>0\). We set \({\mathcal {A}}_i:=B\left( x,\,s_1^{\beta /\alpha }/2\right) \cap B\left( z_{i-1},\,s_i^{\beta /\alpha }\right) \) and \({\mathcal {A}}_i':=B\left( x,\,s_1^{\beta /\alpha }/2\right) \cap B\left( z'_{i-1},\,s_i^{\beta /\alpha }\right) \). We further choose \(s_i\) so that \(s_i^{\beta /\alpha }\le \frac{s_1^{\beta /\alpha }}{2}\), and note that \(|{\mathcal {A}}_i|\ge c_1s_i^{d\beta /\alpha }\) and \(|{\mathcal {A}}'_i|\ge c_1s_i^{d\beta /\alpha }\). We therefore have

$$\begin{aligned} \begin{aligned}&g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^{t/k}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\int _0^{t/k}\int _{{{\mathbb {R}}}^d\times {{\mathbb {R}}}^d}\cdots \int _0^{t/k}\int _{B(0,\,t^{\beta /\alpha })\times B(0,\,t^{\beta /\alpha })} \\&\qquad \prod _{i=1}^kG(s_{i}, z_{i-1},\,z_i)G\left( s_{i}, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i\\&\quad \ge g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^{t/k}\int _{{\mathcal {A}}_1\times {\mathcal {A}}'_1}\int _0^{s_1/2^{\alpha /\beta }}\int _{{\mathcal {A}}_2\times {\mathcal {A}}'_2}\cdots \int _0^{s_1/2^{\alpha /\beta }}\int _{{\mathcal {A}}_k\times {\mathcal {A}}'_k}\\&\qquad \times \frac{c^k}{s_1^{k\gamma \beta /\alpha }}\prod _{i=1}^k\frac{1}{s_i^{2\beta d/\alpha }}\mathrm{d}z_i\mathrm{d}z_i'\mathrm{d}s_i\\&\quad \ge g_t^2\sum _{k=1}^\infty (\lambda l_\sigma c_2)^{2k} \int _0^{t/k}\frac{1}{s_1^{k\gamma \beta /\alpha }}s_1^{k-1}\mathrm{d}s_1\\&\quad \ge g_t^2\sum _{k=1}^\infty (\lambda l_\sigma c_3)^{2k} \left( \frac{t}{k}\right) ^{k(1-\gamma \beta /\alpha )}. \end{aligned} \end{aligned}$$
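The chain of inequalities above ends with the one-variable integral \(\int _0^{t/k}s_1^{k-1-k\gamma \beta /\alpha }\,\mathrm{d}s_1\). As a quick sanity check on that computation, the following snippet compares a midpoint-rule evaluation with the closed form \((t/k)^{k(1-\gamma \beta /\alpha )}/\big (k(1-\gamma \beta /\alpha )\big )\), using made-up sample values \(\alpha =2\), \(\beta =0.9\), \(\gamma =1\), \(t=4\), \(k=3\):

```python
# Sample parameters (assumed for illustration; any choice with gamma*beta < alpha works).
alpha, beta, gamma = 2.0, 0.9, 1.0
rho = gamma * beta / alpha            # = 0.45, so 1 - rho > 0
t, k = 4.0, 3

exponent = (k - 1) - k * rho          # integrand is s1 ** exponent
upper = t / k

# closed form: int_0^{t/k} s^(k-1-k*rho) ds = (t/k)^(k*(1-rho)) / (k*(1-rho))
closed_form = upper ** (k * (1 - rho)) / (k * (1 - rho))

# midpoint rule on a fine grid
n = 100_000
h = upper / n
numeric = sum(((i + 0.5) * h) ** exponent for i in range(n)) * h

assert abs(numeric - closed_form) / closed_form < 1e-3
```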

We now take \(t\) large enough and use Lemma 2.11 to complete the proof of the proposition. \(\square \)
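To see the growth promised by Proposition 4.1 numerically, one can evaluate partial sums of the series \(\sum _{k\ge 0}(\lambda ^2l_\sigma ^2c_1)^k(t/k)^{k(\alpha -\gamma \beta )/\alpha }\) (the factor \(g_t^2\) is omitted, and the \(k=0\) term is taken to be \(1\)). The parameter values below are illustrative assumptions, not values from the paper:

```python
def lower_bound_series(t, lam=1.0, l_sigma=1.0, c1=1.0,
                       alpha=2.0, beta=0.9, gamma=1.0, kmax=200):
    """Partial sum of the series in Proposition 4.1 (g_t^2 factor omitted).

    Term k is (lam^2 * l_sigma^2 * c1)^k * (t/k)^(k*(alpha - gamma*beta)/alpha);
    the k = 0 term is taken to be 1.
    """
    rho = (alpha - gamma * beta) / alpha   # positive since alpha > gamma*beta
    total = 1.0
    for k in range(1, kmax + 1):
        total += (lam ** 2 * l_sigma ** 2 * c1) ** k * (t / k) ** (k * rho)
    return total
```

With these unit constants, every term is increasing in \(t\), and the largest term (near \(k\approx t/e\)) is of size \(\exp (\rho t/e)\), so the lower bound grows at least exponentially in \(t\), which is the intermittency phenomenon.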

4.3 Proof of Theorem 1.12

The proof of this theorem is exactly the same as that of Theorem 1.8 and is therefore omitted. \(\square \)

4.4 Proof of Theorem 1.13

Proof

We will make use of Kolmogorov’s continuity theorem. We therefore consider the increment \({{\mathbb {E}}}|u_{t+h}(x)-u_{t}(x)|^p\) for \(h\in (0,1)\) and \(p\ge 2\). We have

$$\begin{aligned} \begin{aligned} u_{t+h}(x)-u_{t}(x)&=\int _{{\mathbb {R}}^d}[G_{t+h}(x-y)-G_{t}(x-y)]u_0(y)\mathrm{d}y\\&\quad +\lambda \int _0^t\int _{{\mathbb {R}}^d}[G_{t+h-s}(x-y)- G_{t-s}(x-y)]\sigma (u_s(y))F(\mathrm{d}s\, \mathrm{d}y)\\&\quad +\lambda \int _t^{t+h}\int _{{\mathbb {R}}^d}G_{t+h-s}(x-y) \sigma (u_s(y))F(\mathrm{d}s\, \mathrm{d}y). \end{aligned} \end{aligned}$$
(4.5)

The first term, \(\int _{{\mathbb {R}}^d}G_{t}(x-y)u_0(y)\mathrm{d}y\), is smooth in \(t>0\). This essentially follows from the fact that, under the assumption on the initial condition, we can interchange the integral and the derivatives. We will therefore look at higher moments of the remaining two terms. Recall that we can use ideas similar to those in the proof of Theorem 1.10 to show that \(\sup _{x\in {{\mathbb {R}}^d}}{{\mathbb {E}}}|u_t(x)|^p\) is finite for all \(t>0\). We use Burkholder’s inequality together with Proposition 2.6 to write

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\left| \int _0^t\int _{{\mathbb {R}}^d}\left[ G_{t+h-s}(x-y)- G_{t-s}(x-y)\right] \sigma (u_s(y))F(\mathrm{d}s\mathrm{d}y)\right| ^p \\&\quad \le c_1\left| \int _0^t\int _{{{\mathbb {R}}^d}}\left| G^*_{t-s+h}(\xi )-G^*_{t-s}(\xi )\right| ^2\frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \mathrm{d}s \right| ^{p/2}\\&\quad \le c_1 h^{\frac{p(1-\beta \gamma /\alpha )}{2}}. \end{aligned} \end{aligned}$$
(4.6)

Similarly we have

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\left| \int _t^{t+h}\int _{{\mathbb {R}}^d}G_{t+h-s}(x-y) \sigma (u_s(y))F(\mathrm{d}s\mathrm{d}y)\right| ^p \\&\quad \le c_2\left| \int _t^{t+h}\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d}G_{t+h-s}(x-y) G_{t+h-s}(x-z)f(y,z)\mathrm{d}s\mathrm{d}y\mathrm{d}z\right| ^{p/2}\\&\quad \le c_2\left| \int _t^{t+h}\int _{{\mathbb {R}}^d}\left[ E_\beta \left( -\nu |\xi |^\alpha ( t+h-s)^\beta \right) \right] ^2 \frac{1}{|\xi |^{d-\gamma }}\mathrm{d}\xi \mathrm{d}s\right| ^{p/2}\\&\quad \le c_2 \left( \frac{C_1^*h}{1-\beta \gamma /\alpha }\right) ^{\frac{p(1-\beta \gamma /\alpha )}{2}}. \end{aligned} \end{aligned}$$
(4.7)

Combining the above estimates, we see that

$$\begin{aligned} {{\mathbb {E}}}|u_{t+h}(x)-u_{t}(x)|^p\le C h^{\frac{p(1-\beta \gamma /\alpha )}{2}}. \end{aligned}$$
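This moment estimate feeds directly into Kolmogorov’s continuity theorem: it yields a modification that is time-Hölder continuous of every order strictly below \((1-\beta \gamma /\alpha )/2-1/p\), and hence, letting \(p\rightarrow \infty \), of every order below \((1-\beta \gamma /\alpha )/2\). The following small sketch of this exponent bookkeeping uses a function name and sample values of our own choosing, not taken from the paper:

```python
def holder_exponent(alpha, beta, gamma, p):
    """Upper bound on admissible time-Hölder orders from the moment estimate
    E|u_{t+h}(x) - u_t(x)|^p <= C * h^(p*(1 - beta*gamma/alpha)/2).

    Kolmogorov's continuity theorem (with h-exponent written as 1 + eta,
    eta = p*(1 - beta*gamma/alpha)/2 - 1) gives Hölder continuity of every
    order strictly below eta/p = (1 - beta*gamma/alpha)/2 - 1/p, which is
    meaningful once this is positive, i.e. p > 2/(1 - beta*gamma/alpha).
    """
    if not beta * gamma < alpha:
        raise ValueError("need beta*gamma < alpha for a positive exponent")
    return (1 - beta * gamma / alpha) / 2 - 1 / p
```

For example, with the assumed values \(\alpha =2\), \(\beta =0.9\), \(\gamma =1\), the bound increases with \(p\) toward the limiting order \((1-0.45)/2=0.275\).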

Now an application of Kolmogorov’s continuity theorem as in [2] completes the proof. \(\square \)