1 Introduction and Main Results

Consider the following stochastic heat equation on the interval (0, 1) with Dirichlet boundary condition:

$$\begin{aligned} \left| \begin{aligned}&\partial _t u_t(x)=\frac{1}{2}\partial _{xx}u_t(x)+\lambda u_t(x)\dot{w}(t,\,x)\quad \text {for}\quad 0<x<1\quad \text {and}\quad t>0\\&u_t(0)=u_t(1)=0 \quad \text {for}\quad t>0. \end{aligned} \right. \end{aligned}$$

Here \(\dot{w}\) denotes white noise, \(\lambda \) is a positive parameter and \(u_0(x)\) is the initial condition. Set

$$\begin{aligned} \mathcal {E}_t(\lambda ):=\sqrt{\int _{0}^1\mathrm {E}|u_t(x)|^2 \mathrm{d} x}. \end{aligned}$$

The study of \(\mathcal {E}_t(\lambda )\) for large \(\lambda \) was initiated in [13, 14]. In Ref. [9], it was shown that \(\mathcal {E}_t(\lambda )\) grows like \(\text {const}\times \exp (\lambda ^4)\) as \(\lambda \) tends to infinity. The main aim of this paper is to extend similar results to a much wider class of stochastic equations. Existence and uniqueness of solutions to these equations is a direct consequence of the methods of [6, 18]; we provide a proof in the appendix. We first look at equations driven by white noise. Fix \(R>0\) and consider the following:

$$\begin{aligned} \left| \begin{aligned}&\partial _t u_t(x)=\mathcal {L}u_t(x)+\lambda \sigma (u_t(x))\dot{w}(t,\,x),\\&u_t(x)=0, \quad \text {for all}\quad x\in B(0,\,R)^c, \end{aligned} \right. \end{aligned}$$
(1.1)

where \(\dot{w}\) denotes white noise on \((0,\infty )\times B(0,\,R)\). Here and throughout this paper, we will make the following assumptions on the function \(\sigma \) and the initial condition \(u_0\).

Assumption 1.1

The function \(\sigma :\mathbf {R}^d \rightarrow \mathbf {R}\) is a Lipschitz continuous function with

$$\begin{aligned} l_\sigma |x|\le \sigma (x)\le L_\sigma |x| \quad \text {for all}\quad x\in \mathbf {R}^d, \end{aligned}$$

where \(l_\sigma \) and \(L_\sigma \) are some positive constants.

The above assumptions on \(\sigma \) are quite natural and have been used in various works; see [10, 11]. The lower bound is essentially a growth condition which is needed for our results. These inequalities also imply that \(\sigma (0)=0\). This is needed for non-negativity of solutions to stochastic heat equations. Even though we do not need non-negativity of the solution in this paper, the upper bound makes our computations easier to follow.

Assumption 1.2

The initial function \(u_0\) is a non-negative, non-random bounded function which is strictly positive in a set of positive measure in \(B(0\,,R)\). More precisely, we will assume that if \(0<\epsilon \ll R\), then

$$\begin{aligned} \int _{B(0,\,R-\epsilon )}u_0(y)\mathrm{d} y \end{aligned}$$

is strictly positive. Throughout this paper, whenever we fix \(\epsilon >0\), we will always assume that it is much less than R so that the above is satisfied.

\(\mathcal {L}\) is the generator of a symmetric \(\alpha \)-stable process killed upon exiting \(B(0,\,R)\), so that (1.1) can be thought of as the Dirichlet problem for the fractional Laplacian of order \(\alpha \).

Following Walsh [18], we say that u is a mild solution to (1.1) if it satisfies the following evolution equation,

$$\begin{aligned} u_t(x)= (\mathcal {G}_\mathrm{D}u)_t(x)+ \lambda \int _{B(0,\,R)}\int _0^t p_\mathrm{D}(t-s,\,x,\,y)\sigma (u_s(y))w(\mathrm{d} s\,\mathrm{d} y), \end{aligned}$$
(1.2)

where

$$\begin{aligned} (\mathcal {G}_\mathrm{D} u)_t(x):=\int _{B(0,\,R)} u_0(y)p_\mathrm{D}(t,\,x,\,y)\,\mathrm{d} y. \end{aligned}$$

Here \(p_\mathrm{D}(t,\,x,\,y)\) denotes the fractional Dirichlet heat kernel. It is also well known that this unique mild solution satisfies the following integrability condition

$$\begin{aligned} \sup _{x\in B(0,\,R)}\sup _{t\in [0,\,T]} \mathrm {E}|u_t(x)|^k<\infty \quad \text {for all}\quad T>0 \quad \text {and}\quad k\in [2,\,\infty ), \end{aligned}$$
(1.3)

which imposes the restriction that \(d=1\) and \(1<\alpha <2\); these conditions will be in force whenever we deal with (1.1).
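The reason for this restriction can be seen from the following heuristic computation, which uses the Walsh isometry together with the bound (2.1) below and \(\int _{B(0,\,R)}p_\mathrm{D}(s,\,x,\,y)\,\mathrm{d} y\le 1\):

$$\begin{aligned} \int _0^t\int _{B(0,\,R)}p^2_\mathrm{D}(s,\,x,\,y)\,\mathrm{d} y\,\mathrm{d} s\le c\int _0^t s^{-d/\alpha }\,\mathrm{d} s<\infty \quad \text {if and only if}\quad d<\alpha , \end{aligned}$$

and since \(\alpha <2\), this forces \(d=1\) and \(1<\alpha <2\).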

Here is our first result.

Theorem 1.3

Fix \(\epsilon >0\) and let \(x\in B(0,\,R-\epsilon )\). Then for any \(t>0\),

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -1}, \end{aligned}$$

where \(u_t\) is the unique solution to (1.1).

Set

$$\begin{aligned} \mathcal {E}_t(\lambda ):=\sqrt{\int _{B(0,\,R)}\mathrm {E}|u_t(x)|^2 \mathrm{d} x}. \end{aligned}$$
(1.4)

We have the following definition.

Definition 1.4

The excitation index of u at time \(t>0\) is given by

$$\begin{aligned} e(t):=\lim _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {E}_t(\lambda )}{\log \lambda } \end{aligned}$$

We then have the following corollary.

Corollary 1.5

The excitation index of the solution to (1.1) is \(\frac{2\alpha }{\alpha -1}\).

It can be seen that when \(\alpha = 2\), the exponent \(2\alpha /(\alpha -1)\) equals 4, which is the result in [9]. Our second main result concerns coloured noise driven equations. Consider

$$\begin{aligned} \left| \begin{aligned}&\partial _t u_t(x)=\mathcal {L}u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x),\\&u_t(x)=0, \quad \text {for all}\quad x\in B(0,\,R)^c. \end{aligned} \right. \end{aligned}$$
(1.5)

This equation is exactly the same as (1.1) except for the noise which is now given by \(\dot{F}\) and can be described as follows.

$$\begin{aligned} \mathrm {E}[\dot{F}(t,\,x)\dot{F}(s,\,y)]=\delta _{0}(t-s)f(x,y), \end{aligned}$$

where f is given by the so-called Riesz kernel:

$$\begin{aligned} f(x,\,y):=\frac{1}{|x-y|^\beta }\quad \text {for all}\quad x,y\in \mathbf {R}^d. \end{aligned}$$

Here \(\beta \) is some positive parameter satisfying \(\beta <d\). Other than the noise term, we will work under exactly the same conditions as those for Eq. (1.1). The mild solution will thus satisfy the following integral equation.

$$\begin{aligned} u_t(x)= (\mathcal {G}_\mathrm{D}u)_t(x)+ \lambda \int _{B(0,\,R)}\int _0^t p_\mathrm{D}(t-s,\,x,\,y)\sigma (u_s(y))F(\mathrm{d} s\,\mathrm{d} y). \end{aligned}$$
(1.6)

Existence–uniqueness considerations force us to impose \(\beta <\alpha \wedge d\); see for instance [8] or the appendix of the current paper. We point out that the stochastic integral in the above display is well defined for an even larger class of coloured noises; this is thanks to [7, 18]. The same applies to existence and uniqueness: one can prove well-posedness of equations driven by more general noises. For other papers studying coloured noise driven equations on bounded domains, see [16, 17].
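The appearance of the condition \(\beta <\alpha \wedge d\) can be explained heuristically as follows: \(\beta <d\) makes the Riesz kernel locally integrable, while, in view of Lemma 4.1 below,

$$\begin{aligned} \int _0^t\iint _{B(0,\,R)\times B(0,\,R)}p_\mathrm{D}(s,\,x,\,w)p_\mathrm{D}(s,\,x,\,z)f(w,\,z)\,\mathrm{d} w\,\mathrm{d} z\,\mathrm{d} s\le c\int _0^t s^{-\beta /\alpha }\,\mathrm{d} s, \end{aligned}$$

which is finite precisely when \(\beta <\alpha \). Our first result concerning (1.5) is the following.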

Theorem 1.6

Fix \(\epsilon >0\) and let \(x\in B(0,\,R-\epsilon )\). Then for any fixed \(t>0\),

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -\beta }, \end{aligned}$$

where \(u_t\) is the unique solution to (1.5).

Corollary 1.7

The excitation index of the solution to (1.5) is \(\frac{2\alpha }{\alpha -\beta }\).

It is clear that our results are significant extensions of those in [9, 13]. The techniques are also considerably harder and require some new, highly non-trivial ideas, which we now mention.

  • We need to compare the heat kernel estimates for the killed stable process with those of the “unkilled” one. To do that, we will need sharp estimates of the Dirichlet heat kernel.

  • We will also need to study some renewal-type inequalities, and by doing so, we come across the Mittag-Leffler function whose asymptotic properties become crucial.

  • While the above two ideas are enough for the proof of Theorem 1.3, we will also need to significantly modify the localisation techniques of [13] to complete the proof of Theorem 1.6.

Our method seems suited for the study of a much wider class of equations. To illustrate this, we devote a section to various extensions.

Here is a plan of the article. In Sect. 2, we collect some information about the heat kernel and the renewal-type inequalities. In Sect. 3, we prove the main results concerning (1.1). Section 4 contains analogous proofs for (1.5). In Sect. 5, we extend our study to a much wider class of equations.

Finally, throughout this paper, the letter c with or without subscripts will denote constants whose exact values are not important to us and can vary from line to line.

2 Preliminaries

Let \(X_t\) denote the symmetric \(\alpha \)-stable process on \(\mathbf {R}^d\) with \(p(t,\,x,\,y)\) being its transition density. It is well known that

$$\begin{aligned} c_1\left( t^{-d/\alpha }\wedge \frac{t}{|x-y|^{d+\alpha }}\right) \le p(t,\,x,\,y)\le c_2\left( t^{-d/\alpha }\wedge \frac{t}{|x-y|^{d+\alpha }}\right) , \end{aligned}$$

where \(c_1\) and \(c_2\) are positive constants. We define the first exit time of \(X_t\) from the ball \(B(0,\,R)\) by

$$\begin{aligned} \tau _{B(0,\,R)}:=\inf \{t>0, X_t\notin B(0,\,R) \}. \end{aligned}$$

We then have the following representation for \(p_\mathrm{D}(t,\,x,\,y)\)

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)=p(t,\,x,\,y)-\mathrm {E}^x[p(t-\tau _{B(0,\,R)}, X_{\tau _{B(0,\,R)}}, y);\tau _{B(0,\,R)}<t]. \end{aligned}$$

From the above, it is immediate that

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\le p(t,\,x,\,y) \quad \text {for all}\quad x,\,y\in \mathbf {R}^d. \end{aligned}$$

This in turn implies that

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\le \frac{c_1}{t^{d/\alpha }}\quad \text {for all}\quad x,\,y\in \mathbf {R}^d. \end{aligned}$$
(2.1)

We now provide a partial converse to the above inequality. Not surprisingly, this inequality will hold for small times only.

Proposition 2.1

Fix \(\epsilon >0\). Then there exist a \(t_0>0\) and a constant \(c_1\) such that for all \(x,\,y\in B(0,\,R-\epsilon )\),

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\ge c_1 p(t,\,x,\,y), \end{aligned}$$

whenever \(t\le t_0\). If we further impose that \(|x-y|\le t^{1/\alpha }\), we obtain the following

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\ge \frac{c_2}{t^{d/\alpha }}, \end{aligned}$$
(2.2)

where \(c_2\) is some positive constant.

Proof

Set \(\delta _{B(0,\,R)}(x):=\text {dist}(x, B(0,\,R)^c)\). It is known that

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\ge c_1\left( 1\wedge \frac{\delta ^{\alpha /2}_{B(0,\,R)}(x)}{t^{1/2}}\right) \left( 1\wedge \frac{\delta ^{\alpha /2}_{B(0,\,R)}(y)}{t^{1/2}}\right) p(t,\,x,\,y), \end{aligned}$$

for some constant \(c_1\). See for instance [2] and references therein. Since \(x\in B(0,\,R-\epsilon )\), we have \(\delta _{B(0,\,R)}(x)\ge \epsilon \). Now choosing \(t_0=\epsilon ^\alpha \), we have \(\delta ^{\alpha /2}_{B(0,\,R)}(x)\ge t^{1/2}\) for all \(t\le t_0\). Similarly, we have \(\delta ^{\alpha /2}_{B(0,\,R)}(y)\ge t^{1/2}\) which together with the above display yield

$$\begin{aligned} p_\mathrm{D}(t,\,x,\,y)\ge c_2p(t,\,x,\,y)\quad \text {for all} \quad x,y\in B(0,\,R-\epsilon ), \end{aligned}$$

whenever \(t\le t_0\). We now use the fact that

$$\begin{aligned} p(t,\,x,\,y)\ge c_3\left( \frac{t}{|x-y|^{d+\alpha }}\wedge t^{-d/\alpha } \right) . \end{aligned}$$

to end up with (2.2).\(\square \)

We now make a simple remark which will be important in the sequel.

Remark 2.2

Recall that for any \(\tilde{t}>0\) and \(x\in B(0,\,R)\),

$$\begin{aligned} (\mathcal {G}_\mathrm{D}u)_{s+\tilde{t}}(x):=\int _{B(0,\,R)}p_\mathrm{D}(s+\tilde{t},\,x,\,y)u_0(y)\mathrm{d} y. \end{aligned}$$

Fix \(\epsilon >0\) and note that for \(x\in B(0,\,R-\epsilon )\),

$$\begin{aligned} (\mathcal {G}_\mathrm{D}u)_{s+\tilde{t}}(x)\ge \inf _{x,\,y\in B(0,\,R-\epsilon )}p_\mathrm{D}(s+\tilde{t},\,x,\,y) \int _{B(0,\,R-\epsilon )}u_0(y)\mathrm{d} y. \end{aligned}$$

Let \(t>0\) and choose \(\epsilon \) small enough if necessary. Then, for any \(0\le s\le t\), the right-hand side is strictly positive. For “small times,” that is, for \(t+\tilde{t}\le t_0\), we can use the argument of the above result to write \(p_\mathrm{D}(s+\tilde{t},\,x,\,y)\ge c_1 p(s+\tilde{t},\,x,\,y)\), while for “large times” we have \(p_\mathrm{D}(s+\tilde{t},\,x,\,y)\ge c_2\mathrm{e}^{-\mu _1(s+\tilde{t})}\) for some positive constant \(\mu _1\); this follows from general spectral theory and can be found in [2] and references therein. For \(x\in B(0,\,R-\epsilon )\) and \(0\le s\le t\), we have therefore found a strictly positive lower bound on \((\mathcal {G}_\mathrm{D}u)_{s+\tilde{t}}(x)\). We denote this bound by \(g_t\) to indicate its possible dependence on \(t\). In a sense, this fact is analogous to the well-known “infinite propagation of heat” for the Laplacian.
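One admissible (though by no means optimal) choice of \(g_t\) is

$$\begin{aligned} g_t:=\left( \int _{B(0,\,R-\epsilon )}u_0(y)\,\mathrm{d} y\right) \min \left\{ c_1\inf _{\tilde{t}\le r\le t+\tilde{t}}\,\inf _{x,\,y\in B(0,\,R-\epsilon )}p(r,\,x,\,y),\; c_2\mathrm{e}^{-\mu _1(t+\tilde{t})}\right\} , \end{aligned}$$

which is strictly positive by Assumption 1.2 and the two-sided bound on \(p\) recalled at the beginning of this section.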

We now give a definition of the Mittag-Leffler function which is denoted by \(E_\beta \) where \(\beta \) is some positive parameter. Define

$$\begin{aligned} E_\beta (t):=\sum _{n=0}^\infty \frac{t^{n}}{\Gamma (n\beta +1)}\quad \text {for}\quad t>0. \end{aligned}$$

This function is well studied and crops up in a variety of settings, including the study of fractional equations [15]. In our context, we encounter it in the study of the renewal inequalities mentioned in the introduction. Even though a lot is known about this function, we will need the following simple fact whose statement is motivated by the use we make of it later. We will need the upper and lower bounds separately.

Proposition 2.3

For any fixed \(t>0\), we have

$$\begin{aligned} \limsup _{\theta \rightarrow \infty }\frac{\log \log E_\beta (\theta t)}{\log \theta }\le \frac{1}{\beta }, \end{aligned}$$

and

$$\begin{aligned} \liminf _{\theta \rightarrow \infty }\frac{\log \log E_\beta (\theta t)}{\log \theta }\ge \frac{1}{\beta }. \end{aligned}$$

In other words, we have

$$\begin{aligned} \lim _{\theta \rightarrow \infty }\frac{\log \log E_\beta (\theta t)}{\log \theta }=\frac{1}{\beta }. \end{aligned}$$

Proof

By using Laplace transform techniques, one can show that for large z,

$$\begin{aligned} \left| E_\beta (z) - \frac{1}{\beta } \mathrm{e}^{z^{1/\beta }} \right| = o \left( \mathrm{e}^{z^{1/\beta }}\right) . \end{aligned}$$

See for instance [12] and references therein for more details. Thus, for any positive constant \(\epsilon > 0\) there exists a \(Z > 0\) such that for all \(z > Z\)

$$\begin{aligned} \left| E_\beta (z) - \frac{1}{\beta } \mathrm{e}^{z^{1/\beta }} \right| \le \epsilon \mathrm{e}^{z^{1/\beta }}. \end{aligned}$$

Choosing \(\epsilon < 1/\beta \), it is easy to see that

$$\begin{aligned} \log \left( \log \left( \frac{1}{\beta } - \epsilon \right) + z^{1/\beta }\right) \le \log \log E_\beta (z) \le \log \left( \log \left( \frac{1}{\beta } + \epsilon \right) + z^{1/\beta }\right) . \end{aligned}$$

Letting \(z= \theta t\), the above inequalities yield the assertions of the proposition.\(\square \)

What follows is a consequence of Lemma 14.1 of [14]. But for the sake of completeness, we give a quick proof based on the asymptotic behaviour of the Mittag-Leffler function which we used in the above proof. Fix \(\rho >0\) and consider the following:

$$\begin{aligned} S(t):=\sum _{k=1}^\infty \left( \frac{t}{k^\rho }\right) ^{k}\quad \text {for}\quad t>0. \end{aligned}$$
(2.3)

Lemma 2.4

For any fixed \(t>0\), we have

$$\begin{aligned} \liminf _{\theta \rightarrow \infty }\frac{\log \log S(\theta t)}{\log \theta }\ge \frac{1}{\rho }. \end{aligned}$$

Proof

From the asymptotic property of the gamma function, there exists an \(N>0\) such that for \(k\ge N\), we have \(\Gamma (k\rho +1)\ge \left( \frac{\rho k}{e}\right) ^{\rho k}\). We thus have

$$\begin{aligned} \begin{aligned} S(t)&\ge \sum _{k=N}^\infty \left[ \left( \frac{\rho }{e}\right) ^\rho t\right] ^{k}\frac{1}{\Gamma (k\rho +1)}\\&=E_\rho \left[ \left( \frac{\rho }{e}\right) ^\rho t\right] -\sum _{k<N}\left[ \left( \frac{\rho }{e}\right) ^\rho t\right] ^{k}\frac{1}{\Gamma (k\rho +1)}. \end{aligned} \end{aligned}$$

Since the subtracted sum contains only finitely many terms, each of polynomial growth, an application of Proposition 2.3 proves the result.\(\square \)

We now present the renewal inequalities.

Proposition 2.5

Let \(T<\infty \) and \(\beta >0\). Suppose that f(t) is a non-negative locally integrable function satisfying

$$\begin{aligned} f(t)\le c_1+\kappa \int _0^t(t-s)^{\beta -1}f(s)\mathrm{d} s\quad \mathrm {for\, all}\quad 0\le t\le T, \end{aligned}$$
(2.4)

where \(c_1\) is some positive number. Then for any \(t\in (0,T]\), we have the following

$$\begin{aligned} \limsup _{\kappa \rightarrow \infty }\frac{\log \log f(t)}{\log \kappa }\le \frac{1}{\beta }. \end{aligned}$$

Proof

We begin by setting \((\mathcal {A}\psi )(t):=\kappa \int _0^t(t-s)^{\beta -1}\psi (s)\mathrm{d} s\), where \(\psi \) is any locally integrable function, and for any integer \(k>1\) we set \((\mathcal {A}^k\psi )(t):=\kappa \int _0^t(t-s)^{\beta -1}(\mathcal {A}^{k-1}\psi )(s)\mathrm{d} s\). We further set \(1(s):=1\) for all \(0\le s\le T\). With these notations, (2.4) can be succinctly written as \(f(t)\le c_1+(\mathcal {A}f)(t)\), which upon iterating becomes

$$\begin{aligned} f(t)\le c_1\sum _{k=0}^{n-1}(\mathcal {A}^k1)(t)+(\mathcal {A}^n f)(t). \end{aligned}$$
(2.5)

Some further computations show that

$$\begin{aligned} (\mathcal {A}^n f)(t)=\frac{(\kappa \Gamma (\beta ))^n}{\Gamma (n\beta )}\int _0^t(t-s)^{n\beta -1}f(s)\,\mathrm{d} s \end{aligned}$$

and therefore we also have

$$\begin{aligned} (\mathcal {A}^n 1)(t)=\frac{(\kappa \Gamma (\beta ))^nt^{n\beta }}{\Gamma (n\beta +1)}. \end{aligned}$$
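Indeed, the formula for \((\mathcal {A}^n1)(t)\) follows by induction: applying \(\mathcal {A}\) once more and using the Beta integral,

$$\begin{aligned} (\mathcal {A}^{n+1}1)(t)=\frac{(\kappa \Gamma (\beta ))^n\kappa }{\Gamma (n\beta +1)}\int _0^t(t-s)^{\beta -1}s^{n\beta }\,\mathrm{d} s=\frac{(\kappa \Gamma (\beta ))^{n+1}t^{(n+1)\beta }}{\Gamma ((n+1)\beta +1)}. \end{aligned}$$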

As \(n\rightarrow \infty \), we have \((\mathcal {A}^n f)(t)\rightarrow 0\); indeed, for \(n\beta \ge 1\), \((\mathcal {A}^n f)(t)\le \frac{(\kappa \Gamma (\beta ))^nt^{n\beta -1}}{\Gamma (n\beta )}\int _0^tf(s)\,\mathrm{d} s\), which tends to zero since the gamma function grows faster than any geometric sequence. We thus end up with

$$\begin{aligned} \begin{aligned} f(t)&\le c_1\sum _{k=0}^\infty (\mathcal {A}^k1)(t)\\&=c_1\sum _{k=0}^\infty \frac{(\kappa \Gamma (\beta ))^kt^{k\beta }}{\Gamma (k\beta +1)}\\&=c_1E_\beta \left( \kappa \Gamma (\beta ) t^\beta \right) . \end{aligned} \end{aligned}$$

Keeping in mind that we are interested in the behaviour as \(\kappa \) tends to infinity while t is fixed, we can apply Proposition 2.3 to obtain the result.\(\square \)

We have the “converse” of the above result.

Proposition 2.6

Let \(T<\infty \) and \(\beta >0\). Suppose that f(t) is a non-negative locally integrable function satisfying

$$\begin{aligned} f(t)\ge c_2+\kappa \int _0^t(t-s)^{\beta -1}f(s)\mathrm{d} s\quad \mathrm {for\, all}\quad 0\le t\le T, \end{aligned}$$
(2.6)

where \(c_2\) is some positive number. Then for any \(t\in (0,T]\), we have the following

$$\begin{aligned} \liminf _{\kappa \rightarrow \infty }\frac{\log \log f(t)}{\log \kappa }\ge \frac{1}{\beta }. \end{aligned}$$

Proof

With the notations introduced in the proof of Proposition 2.5, (2.6) yields

$$\begin{aligned} f(t)\ge c_2\sum _{k=0}^{n-1}(\mathcal {A}^k1)(t)+(\mathcal {A}^n f)(t). \end{aligned}$$
(2.7)

Now arguments similar to those in the proof of Proposition 2.5 prove the result; we leave it to the reader to fill in the details.\(\square \)

The above inequalities are well studied; see for instance [12]. But the novelty here is that, as opposed to what is usually done, instead of t, we take \(\kappa \) to be large.

3 Proofs of Theorem 1.3 and Corollary 1.5

We will begin with the proof of Theorem 1.3. We will prove it in two steps. Set

$$\begin{aligned} \mathcal {S}_t(\lambda ):=\sup _{x\in B(0,\,R)}\mathrm {E}|u_t(x)|^2. \end{aligned}$$
(3.1)

We then have the following proposition.

Proposition 3.1

Fix \(t>0\). Then

$$\begin{aligned} \limsup _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {S}_t(\lambda )}{\log \lambda }\le \frac{2\alpha }{\alpha -1}. \end{aligned}$$

Proof

We start off with the representation (1.2) and take the second moment to obtain

$$\begin{aligned} \begin{aligned} \mathrm {E}|u_t(x)|^2&=|(\mathcal {G}_\mathrm{D}u)_t(x)|^2+\lambda ^2\int _0^t\int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm {E}|\sigma (u_s(y))|^2\mathrm{d} y\mathrm{d} s\\&=I_1+I_2. \end{aligned} \end{aligned}$$
(3.2)

Clearly, for any fixed \(t>0\), \(I_1\le c_1\) where \(c_1\) is a constant depending on t. We now focus our attention on \(I_2\). The Lipschitz property of \(\sigma \) together with the Markov property of killed stable processes yield the following

$$\begin{aligned} \begin{aligned} I_2&\le (\lambda L_\sigma )^2\int _0^t\int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm {E}|u_s(y)|^2\mathrm{d} y\mathrm{d} s\\&\le (\lambda L_\sigma )^2\int _0^t\mathcal {S}_s(\lambda )\int _{B(0,\,R)}p^2_D(t-s,\,x,\,y)\mathrm{d}y\mathrm{d} s\\&\le (\lambda L_\sigma )^2\int _0^t\mathcal {S}_s(\lambda ) p_\mathrm{D}(2(t-s),\,x,\,x)\mathrm{d} s\\&\le c_2\lambda ^2\int _0^t\frac{\mathcal {S}_s(\lambda )}{(t-s)^{1/\alpha }}\mathrm{d} s. \end{aligned} \end{aligned}$$
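Here, the third inequality uses the symmetry of \(p_\mathrm{D}\) together with the Chapman–Kolmogorov equation, and the fourth uses (2.1) with \(d=1\):

$$\begin{aligned} \int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\,\mathrm{d} y=\int _{B(0,\,R)}p_\mathrm{D}(t-s,\,x,\,y)p_\mathrm{D}(t-s,\,y,\,x)\,\mathrm{d} y=p_\mathrm{D}(2(t-s),\,x,\,x)\le \frac{c}{(t-s)^{1/\alpha }}. \end{aligned}$$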

Putting these estimates together, we have

$$\begin{aligned} \mathcal {S}_t(\lambda )\le c_1+c_2\lambda ^2\int _0^t\frac{\mathcal {S}_s(\lambda )}{(t-s)^{1/\alpha }}\mathrm{d} s. \end{aligned}$$

Now an application of Proposition 2.5 proves the result.\(\square \)
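For the reader's convenience, we record the exponent count behind this application: the last display is (2.4) with \(\kappa :=c_2\lambda ^2\) and \(\beta :=(\alpha -1)/\alpha \), so that, since \(\log \kappa =2\log \lambda +\log c_2\),

$$\begin{aligned} \limsup _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {S}_t(\lambda )}{\log \lambda }=2\limsup _{\kappa \rightarrow \infty }\frac{\log \log \mathcal {S}_t(\lambda )}{\log \kappa }\le \frac{2\alpha }{\alpha -1}. \end{aligned}$$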

For any fixed \(\epsilon >0\), set

$$\begin{aligned} \mathcal {I}_{\epsilon , t}(\lambda ):=\inf _{x\in B(0,\,R-\epsilon )}\mathrm {E}|u_t(x)|^2. \end{aligned}$$

Proposition 3.2

For any fixed \(\epsilon >0\), there exists a \(t_0>0\) such that for all \(0<t\le t_0\),

$$\begin{aligned} \liminf _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {I}_{\epsilon , t}(\lambda )}{\log \lambda }\ge \frac{2\alpha }{\alpha -1}. \end{aligned}$$

Proof

As in the proof of the previous proposition, we start off with (3.2) and seek a lower bound on each of the terms. We fix \(\epsilon >0\) and choose \(t_0\) as in Proposition 2.1. Using Remark 2.2, for \(0<t\le t_0\) there is a constant \(\tilde{g}_{t_0}>0\) such that \(\inf _{x\in B(0,\,R-\epsilon )}(\mathcal {G}_\mathrm{D}u)_t(x)\ge \tilde{g}_{t_0}\). Hence, \(I_1\ge \tilde{g}_{t_0}^2\). We now turn our attention to \(I_2\).

$$\begin{aligned} \begin{aligned} I_2&\ge (\lambda l_\sigma )^2\int _0^t\int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm {E}|u_s(y)|^2\mathrm{d} y\mathrm{d} s\\&\ge (\lambda l_\sigma )^2\int _0^t\mathcal {I}_{\epsilon , s}(\lambda )\int _{B(0,\,R-\epsilon )}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm{d} y\mathrm{d} s\\ \end{aligned} \end{aligned}$$

Set \(A:=\{y\in B(0,\,R-\epsilon );|x-y|\le (t-s)^{1/\alpha }\}\). Since \(t-s\le t_0\), we have \(|A|\ge c_1(t-s)^{1/\alpha }\). Now using Proposition 2.1, we have

$$\begin{aligned} \begin{aligned} \int _{B(0,\,R-\epsilon )}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm{d} y&\ge c_2\int _{A}\frac{1}{(t-s)^{2/\alpha }}\mathrm{d} y\\&=c_3\frac{1}{(t-s)^{1/\alpha }}. \end{aligned} \end{aligned}$$

We thus have

$$\begin{aligned} I_2\ge c_4\lambda ^2\int _0^t\frac{\mathcal {I}_{\epsilon , s}(\lambda )}{(t-s)^{1/\alpha }}\mathrm{d} s. \end{aligned}$$

Combining the above estimates, we have

$$\begin{aligned} \mathcal {I}_{\epsilon , t}(\lambda )\ge \tilde{g}_{t_0}^2+c_4\lambda ^2\int _0^t\frac{\mathcal {I}_{\epsilon , s}(\lambda )}{(t-s)^{1/\alpha }}\mathrm{d} s. \end{aligned}$$

We now apply Proposition 2.6 to obtain the result.\(\square \)

Proof of Theorem 1.3

The proof of the result when \(t\le t_0\) follows easily from the above two propositions. To prove the theorem for all \(t>0\), we only need to prove the above proposition for all \(t>0\). For any fixed \(T,\,t>0\), a change of variable gives

$$\begin{aligned} \begin{aligned} \mathrm {E}|&u_{T+t}(x)|^2\\&=|(\mathcal {G}_\mathrm{D}u)_{T+t}(x)|^2+\lambda ^2\int _0^{T+t}\int _{B(0,\,R)}p^2_\mathrm{D}(T+t-s,\,x,\,y)\mathrm {E}|\sigma (u_s(y))|^2\mathrm{d} y\mathrm{d} s\\&=|(\mathcal {G}_\mathrm{D}u)_{T+t}(x)|^2+\lambda ^2\int _0^{T}\int _{B(0,\,R)}p^2_\mathrm{D}(T+t-s,\,x,\,y)\mathrm {E}|\sigma (u_s(y))|^2\mathrm{d} y\mathrm{d} s\\&\quad +\,\lambda ^2\int _0^{t}\int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm {E}|\sigma (u_{T+s}(y))|^2\mathrm{d} y\mathrm{d} s.\\ \end{aligned} \end{aligned}$$

This gives us

$$\begin{aligned} \mathrm {E}|u_{T+t}(x)|^2\ge |(\mathcal {G}_\mathrm{D}u)_{T+t}(x)|^2+ \lambda ^2l_\sigma ^2\int _0^{t}\int _{B(0,\,R)}p^2_\mathrm{D}(t-s,\,x,\,y)\mathrm {E}|u_{T+s}(y)|^2\mathrm{d} y\mathrm{d} s. \end{aligned}$$

Since by Remark 2.2, \(|(\mathcal {G}_\mathrm{D}u)_{T+t}(x)|^2\) is strictly positive, we can use the proof of the above proposition with an obvious modification to conclude that

$$\begin{aligned} \liminf _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_{T+t}(x)|^2}{\log \lambda }\ge \frac{2\alpha }{\alpha -1}, \end{aligned}$$

for \(x\in B(0,\,R-\epsilon )\) and small \(t\). Since \(T>0\) is arbitrary, this completes the proof. \(\square \)

Proof of Corollary 1.5

Note that

$$\begin{aligned} \int _{-R}^R\mathrm {E}|u_t(x)|^2\mathrm{d} x\,\le \,2R\sup _{x\in [-R,\,R]}\mathrm {E}|u_t(x)|^2 \end{aligned}$$

and

$$\begin{aligned} \int _{-R}^R\mathrm {E}|u_t(x)|^2\mathrm{d} x\,\ge \, 2(R-\epsilon )\inf _{x\in [-(R-\epsilon ),\,R-\epsilon ]}\mathrm {E}|u_t(x)|^2. \end{aligned}$$

We now apply Theorem 1.3 and use the definition of \(\mathcal {E}_t(\lambda )\) to obtain the result.\(\square \)
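For the reader's convenience, we spell out the last step: by (1.4) and the above two inequalities,

$$\begin{aligned} \sqrt{2(R-\epsilon )\,\mathcal {I}_{\epsilon ,t}(\lambda )}\le \mathcal {E}_t(\lambda )\le \sqrt{2R\,\mathcal {S}_t(\lambda )}, \end{aligned}$$

and neither the square root nor the multiplicative constants affect the double-logarithmic limit, so the excitation index equals \(2\alpha /(\alpha -1)\).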

4 Proofs of Theorem 1.6 and Corollary 1.7

Recall that

$$\begin{aligned} \mathcal {S}_t(\lambda ):=\sup _{x\in B(0,\,R)}\mathrm {E}|u_t(x)|^2, \end{aligned}$$

where here and throughout the rest of this section, \(u_t\) will denote the solution to (1.5). The following lemma will be crucial later. In what follows, f denotes the spatial correlation of the noise \(\dot{F}\).

Lemma 4.1

For all \(x,y\in B(0,\,R)\),

$$\begin{aligned} \iint _{B(0,\,R)\times B(0,\,R)}p_\mathrm{D}(t,\,x,\,w)p_\mathrm{D}(t,\,y,\,z)f(w,z)\mathrm{d} w \mathrm{d} z\le \frac{c_1}{t^{\beta /\alpha }}, \end{aligned}$$
(4.1)

for some positive constant \(c_1\).

Proof

We begin by noting that, since \(f(w,\,z)=f(w-z,\,0)\),

$$\begin{aligned} \begin{aligned} \iint _{B(0,\,R)\times \, B(0,\,R)}&p_\mathrm{D}(t,\,x,\,w)p_\mathrm{D}(t,\,y,\,z)f(w-z,0)\mathrm{d} w \mathrm{d} z\\&\le \int _{\mathbf {R}^d}\int _{\mathbf {R}^d} p(t,\,x,\,w)p(t,\,y,\,z)f(w-z,0)\mathrm{d} w \mathrm{d} z\\&\le \int _{\mathbf {R}^d} p(2t,\,w,\,x-y) f(w,0) \mathrm{d} w. \end{aligned} \end{aligned}$$

Now the scaling property of the heat kernel and an appropriate change of variable prove the result.\(\square \)
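For completeness, the final step runs as follows. Writing \(f(w,\,0)=|w|^{-\beta }\), substituting \(w=(2t)^{1/\alpha }v\) and using the scaling relation \(p(2t,\,(2t)^{1/\alpha }v,\,z)=(2t)^{-d/\alpha }p(1,\,v,\,(2t)^{-1/\alpha }z)\), we get

$$\begin{aligned} \int _{\mathbf {R}^d} p(2t,\,w,\,x-y)f(w,\,0)\,\mathrm{d} w=(2t)^{-\beta /\alpha }\int _{\mathbf {R}^d}p\left( 1,\,v,\,(2t)^{-1/\alpha }(x-y)\right) |v|^{-\beta }\,\mathrm{d} v\le \frac{c_1}{t^{\beta /\alpha }}, \end{aligned}$$

since the last integral is bounded uniformly in \(x-y\) because \(\beta <d\).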

Proposition 4.2

Fix \(t>0\). Then

$$\begin{aligned} \limsup _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {S}_t(\lambda )}{\log \lambda }\le \frac{2\alpha }{\alpha -\beta }. \end{aligned}$$

Proof

We start with the mild formulation of the solution to (1.5), which after taking the second moment gives us

$$\begin{aligned} \begin{aligned} \mathrm {E}|u_t(x)|^2&=|(\mathcal {G}_\mathrm{D}u)_t(x)|^2\\&\quad +\,\lambda ^2\int _0^t\int _{B(0,\,R)\times \, B(0,\,R)}p_\mathrm{D}(t-s,x,y)p_\mathrm{D}(t-s,x,z)f(y,z)\\&\quad \times \,\mathrm {E}[\sigma (u_s(y))\sigma (u_s(z))]\mathrm{d} y\mathrm{d} z \mathrm{d} s\\&=I_1+I_2. \end{aligned} \end{aligned}$$

We obviously have \(I_1\le c_1\). Note that the Lipschitz assumption on \(\sigma \) together with Hölder’s inequality give

$$\begin{aligned} \begin{aligned} \mathrm {E}[\sigma (u_s(y))\sigma (u_s(z))]&\le L_\sigma ^2 \left[ \mathrm {E}|u_s(y)|^2\right] ^{1/2}\left[ \mathrm {E}|u_s(z)|^2\right] ^{1/2}\\&\le L_\sigma ^2\mathcal {S}_s(\lambda ). \end{aligned} \end{aligned}$$

We can use the above inequality and Lemma 4.1 to bound \(I_2\) as follows.

$$\begin{aligned} \begin{aligned} I_2\le (\lambda L_\sigma )^2\int _0^t\frac{\mathcal {S}_s(\lambda )}{(t-s)^{\beta /\alpha }}\mathrm{d} s. \end{aligned} \end{aligned}$$

Combining the above estimates, we obtain

$$\begin{aligned} \mathcal {S}_t(\lambda )\le c_1+c_2\lambda ^2\int _0^t\frac{\mathcal {S}_s(\lambda )}{(t-s)^{\beta /\alpha }}\mathrm{d} s, \end{aligned}$$

which immediately yields the result upon an application of Proposition 2.5, now with exponent \((\alpha -\beta )/\alpha \), so that the limit is \(2\alpha /(\alpha -\beta )\).\(\square \)

We now turn to a lower bound on the second moment of the solution, which is inspired by the localisation arguments of [13].

Proposition 4.3

Fix \(\epsilon >0\) and \(\tilde{t}>0\). Then for all \(x\in B(0,\,R-2\epsilon )\) and \(0\le t\le t_0\),

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{t+\tilde{t}}(x)\right| ^2\,\ge \,g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma c_1)^{2k}\left( \frac{t}{k} \right) ^{k(\alpha -\beta )/\alpha }, \end{aligned} \end{aligned}$$

where \(c_1\) is some positive constant depending on \(\alpha \) and \(\beta \).

Proof

Fix \(\epsilon >0\) and for convenience, set \(B:=B(0,\,R)\) and \(B_\epsilon :=B(0,\,R-\epsilon )\). We will also use the following notation; \(B^2:=B\times B\) and \(B^2_\epsilon :=B_\epsilon \times B_\epsilon \).

After taking the second moment, the mild formulation of the solution gives us

$$\begin{aligned} \begin{aligned} \mathrm {E}|u_t(x)|^2&=|(\mathcal {G}_\mathrm{D}u)_t(x)|^2+\lambda ^2 \int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)\\&\quad \times \, p_\mathrm{D}\left( t-s_1,\,x,\,z_1'\right) \mathrm {E}\left[ \sigma (u_{s_1}(z_1))\sigma \left( u_{s_1}(z_1')\right) f\left( z_1,z_1'\right) \right] \mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$

We now use the assumption that \(\sigma (x)\ge l_\sigma |x|\) for all x to reduce the above to

$$\begin{aligned} \begin{aligned} \mathrm {E}|u_t(x)|^2&\ge |(\mathcal {G}_\mathrm{D}u)_t(x)|^2+\lambda ^2 l_\sigma ^2\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)\\&\quad \times \, p_\mathrm{D}(t-s_1,\,x,\,z_1')\mathrm {E}|u_{s_1}(z_1)u_{s_1}(z_1')|f(z_1,z_1')\mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$

We now replace \(t\) by \(t+\tilde{t}\) in the above and use a substitution to reduce the above to

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{t+\tilde{t}}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{t+\tilde{t}}(x)|^2\\&\quad +\,\lambda ^2 l_\sigma ^2\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)p_\mathrm{D}(t-s_1,\,x,\,z_1')\\&\quad \times \, \mathrm {E}|u_{\tilde{t}+s_1}(z_1)u_{\tilde{t}+s_1}(z_1')|f(z_1,z_1')\mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$

We also have

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+s_1}(z_1)u_{\tilde{t}+s_1}(z_1')\right|&\ge |(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_1}(z_1)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_1}(z_1')|\\&\quad +\,\lambda ^2l_\sigma ^2\int _0^{s_1}\int _{B^2}p_\mathrm{D}(s_1-s_2,\,z_1,\,z_2)p_\mathrm{D}(s_1-s_2,\,z_1',\,z_2')\\&\quad \times \,\mathrm {E}|u_{\tilde{t}+s_2}(z_2)u_{\tilde{t}+s_2}(z_2')|f(z_2, z_2')\mathrm{d} z_2\mathrm{d} z_2'\mathrm{d} s_2. \end{aligned} \end{aligned}$$

The above two inequalities thus give us

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{t+\tilde{t}}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{t+\tilde{t}}(x)|^2\\&\quad +\,\lambda ^2 l_\sigma ^2\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)p_\mathrm{D}\left( t-s_1,\,x,\,z_1'\right) \\&\quad \times \,\mathrm {E}\left| u_{\tilde{t}+s_1}(z_1)u_{\tilde{t}+s_1}(z_1')\right| f\left( z_1,z_1'\right) \mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1\\&\ge \left| (\mathcal {G}_\mathrm{D}u)_{\tilde{t}+t}(x)\right| ^2\\&\quad +\,\lambda ^2l_\sigma ^2\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)p_\mathrm{D}\left( t-s_1,\,x,\,z_1'\right) \\&\quad \times \,f\left( z_1,z_1'\right) (\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_1}(z_1)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_1}(z_1')\mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1\\&\quad +\,(\lambda l_\sigma )^4\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)p_\mathrm{D}\left( t-s_1,\,x,\,z_1'\right) f\left( z_1,z_1'\right) \\&\quad \times \, \int _0^{s_1}\int _{B^2}p_\mathrm{D}(s_1-s_2,\,z_1,\,z_2)p_\mathrm{D}\left( s_1-s_2,\,z_1',\,z_2'\right) \\&\quad \times \,\mathrm {E}\left| u_{\tilde{t}+s_2}(z_2)u_{\tilde{t}+s_2}(z_2')\right| f\left( z_2, z_2'\right) \mathrm{d} z_2\mathrm{d} z_2'\mathrm{d} s_2\mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$
(4.2)

We set \(z_0=z_0':=x\) and \(s_0:=t\) and continue the recursion as above to obtain

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge \left| (\mathcal {G}_\mathrm{D}u)_{\tilde{t}+t}(x)\right| ^2\\&\quad +\, \sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{B^2}\int _0^{s_1}\int _{B^2}\cdots \int _0^{s_{k-1}}\int _{B^2} |(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}(z_k)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}(z_k')|\\&\quad \times \, \prod _{i=1}^kp_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}(s_{i-1}-s_{i},\, z'_{i-1},\,z'_i)f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$
(4.3)

Therefore,

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+t}(x)|^2\\&\quad +\,\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{B_\epsilon ^2}\int _0^{s_1}\int _{B_\epsilon ^2}\cdots \int _0^{s_{k-1}}\int _{B_\epsilon ^2} \left| (\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}(z_k)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}\left( z_k'\right) \right| \\&\quad \times \, \prod _{i=1}^kp_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}(s_{i-1}-s_{i},\, z'_{i-1},\,z'_i)f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

Using the fact that for \(z_k,\,z_k'\in B_\epsilon \),

$$\begin{aligned} \begin{aligned} (\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}(z_k)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s_k}\left( z_k'\right)&\ge \inf _{x,y\in B_\epsilon }\inf _{0\le s\le t}(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s}(x)(\mathcal {G}_\mathrm{D}u)_{\tilde{t}+s}(y)\\&\ge g_t^2, \end{aligned} \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{B_\epsilon ^2}\int _0^{s_1}\int _{B_\epsilon ^2}\cdots \int _0^{s_{k-1}}\int _{B_\epsilon ^2}\\&\quad \times \, \prod _{i=1}^k p_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}(s_{i-1}-s_{i},\, z'_{i-1},\,z'_i)f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

We reduce the temporal region of integration as follows.

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _{t-t/k}^t\int _{B_\epsilon ^2}\int _{s_1-t/k}^{s_1}\int _{B_\epsilon ^2}\cdots \int _{s_{k-1}-t/k}^{s_{k-1}}\int _{B_\epsilon ^2}\\&\quad \times \,\prod _{i=1}^k p_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}(s_{i-1}-s_{i},\, z'_{i-1},\,z'_i)f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

Now we make a change of the temporal variables, \(s_{i-1}-s_{i} \rightarrow s_{i}\), so that for all integers \(i \in [1,k] \), we have

$$\begin{aligned} \begin{aligned}&\int _{s_{i-1}-t/k}^{s_{i-1}} p_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}\left( s_{i-1}-s_{i},\, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d} s_i \\&\quad = \int _0^{t/k} p_\mathrm{D}(s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}\left( s_{i},\, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d} s_i. \end{aligned} \end{aligned}$$

We thus have

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^{t/k}\int _{B_\epsilon ^2}\int _0^{t/k}\int _{B_\epsilon ^2}\cdots \int _0^{t/k}\int _{B_\epsilon ^2} \\&\quad \times \,\prod _{i=1}^kp_\mathrm{D}(s_i, z_{i-1},\,z_i)p_\mathrm{D}\left( s_i, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

We now focus our attention on the multiple integral appearing in the above inequality. We will further restrict its spatial domain of integration so that we have the required lower bound on each component of the following product,

$$\begin{aligned} \prod _{i=1}^kp_\mathrm{D}\left( s_i, z_{i-1},\,z_i\right) p_\mathrm{D}\left( s_i, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) . \end{aligned}$$
(4.4)

Recall that \(x\in B(0,\,R-2\epsilon )\). For each \(i=1,\ldots , k\), choose \(z_i\) and \(z_i'\) satisfying

$$\begin{aligned} z_i\in B\left( z_0, s_1^{1/\alpha }/2\right) \cap B\left( z_{i-1}, s_i^{1/\alpha }\right) \end{aligned}$$

and

$$\begin{aligned} z_i'\in B\left( z'_0, s_1^{1/\alpha }/2\right) \cap B\left( z'_{i-1}, s_i^{1/\alpha }\right) , \end{aligned}$$

so that we have \(|z_i-z_{i-1}|\le s_i^{1/\alpha }\) and \(|z'_i-z'_{i-1}|\le s_i^{1/\alpha }\), together with \(|z_i-z_i'|\le s_1^{1/\alpha }\). Now using Proposition 2.1, we can conclude that \(p_\mathrm{D}(s_i, z_{i-1},\,z_i)\ge s_i^{-d/\alpha }\) and \(p_\mathrm{D}(s_i, z_{i-1}',\,z_i')\ge s_i^{-d/\alpha }\), up to multiplicative constants which we suppress here and absorb below. Moreover, \(|z_i-z_i'|\le s_1^{1/\alpha }\) gives us \(f(z_i, z_i')\ge s_1^{-\beta /\alpha }\). In other words, we are looking at the points \(\{s_i,\,z_i,\,z_i' \}_{i=0}^k\) such that the following holds

$$\begin{aligned} \prod _{i=1}^kp_\mathrm{D}(s_i, z_{i-1},\,z_i)p_\mathrm{D}\left( s_i, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \ge \prod _{i=1}^k\frac{1}{s_i^{2d/\alpha }s_1^{\beta /\alpha }}. \end{aligned}$$

For notational convenience, we set \(\mathcal {A}_i:=\{z_i\in B(x,\,s_1^{1/\alpha }/2)\cap B(z_{i-1}, s_i^{1/\alpha })\}\) and \(\mathcal {A}_i':=\{z_i'\in B(x,\,s_1^{1/\alpha }/2)\cap B(z'_{i-1}, s_i^{1/\alpha })\}\) which lead us to

$$\begin{aligned} \begin{aligned}&\int _0^{t/k}\int _{\mathcal {A}_1}\int _{\mathcal {A}'_1}\int _0^{t/k}\int _{\mathcal {A}_2}\int _{\mathcal {A}'_2}\cdots \int _0^{t/k}\int _{\mathcal {A}_k}\int _{\mathcal {A}'_k}\prod _{i=1}^kp_\mathrm{D}(s_i, z_{i-1},\,z_i)\\&\quad \quad \times \, p_\mathrm{D}\left( s_i, z'_{i-1},\,z'_i\right) f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i\\&\quad \ge \, \int _0^{t/k}\int _{\mathcal {A}_1}\int _{\mathcal {A}'_1}\int _0^{t/k}\int _{\mathcal {A}_2}\int _{\mathcal {A}'_2}\cdots \int _0^{t/k}\int _{\mathcal {A}_k}\int _{\mathcal {A}'_k} \prod _{i=1}^k\frac{1}{s_i^{2d/\alpha }s_1^{\beta /\alpha }}\mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

We now use the lower bounds on the measures of the \(\mathcal {A}_i\)'s and \(\mathcal {A}'_i\)'s to estimate the above integrals. We note that for \(s_i\le s_1/2\), the measure of \(\mathcal {A}_i\) and of \(\mathcal {A}'_i\) is bounded below by \(c_1s_i^{d/\alpha }\). After some computations and using the fact that \(s_i\le s_1/2\), we see that the above integral is bounded below by

$$\begin{aligned} \begin{aligned} c_2^{2k}\int _0^{t/k}\frac{1}{s_1^{k\beta /\alpha }}\left[ \prod _{i=2}^k \int _0^{s_1/2}\mathrm{d} s_i\right] \mathrm{d} s_1=\frac{\alpha c_3^{2k}}{k(\alpha -\beta )}\left( \frac{t}{k} \right) ^{k(\alpha -\beta )/\alpha }. \end{aligned} \end{aligned}$$
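For the reader's convenience, the computation behind the last display runs as follows: the measures of the \(\mathcal {A}_i\)'s and \(\mathcal {A}'_i\)'s cancel the factors \(s_i^{-2d/\alpha }\), so that

$$\begin{aligned} \int _0^{t/k}\frac{1}{s_1^{k\beta /\alpha }}\left[ \prod _{i=2}^k \int _0^{s_1/2}\mathrm{d} s_i\right] \mathrm{d} s_1=\frac{1}{2^{k-1}}\int _0^{t/k}s_1^{k-1-k\beta /\alpha }\,\mathrm{d} s_1=\frac{1}{2^{k-1}}\cdot \frac{\alpha }{k(\alpha -\beta )}\left( \frac{t}{k}\right) ^{k(\alpha -\beta )/\alpha }, \end{aligned}$$

and the factor \(2^{-(k-1)}\) is absorbed into \(c_3^{2k}\).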

Putting the above estimates together we obtain

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{\tilde{t}+t}(x)\right| ^2&\ge g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\frac{\alpha c_3^{2k}}{k(\alpha -\beta )}\left( \frac{t}{k} \right) ^{k(\alpha -\beta )/\alpha }\\&\ge g_t^2+g_t^2\sum _{k=1}^\infty (\lambda l_\sigma c_4)^{2k}\left( \frac{t}{k} \right) ^{k(\alpha -\beta )/\alpha }, \end{aligned} \end{aligned}$$

for some constant \(c_4\).\(\square \)

Recall that

$$\begin{aligned} \mathcal {I}_{\epsilon , t}(\lambda ):=\inf _{x\in B(0,\,R-\epsilon )}\mathrm {E}|u_t(x)|^2, \end{aligned}$$

where here \(u_t\) is the solution to (1.5). We now have

Proposition 4.4

Fix \(\epsilon >0\). Then for any fixed \(t>0\), we have

$$\begin{aligned} \liminf _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {I}_{\epsilon , t}(\lambda )}{\log \lambda }\ge \frac{2\alpha }{\alpha -\beta }. \end{aligned}$$

Proof

We begin by noting that any fixed \(t>0\) can be written as \(t=\tilde{t}+t'\), where \(\tilde{t}\) is strictly positive and \(t'\) is small as in the previous proposition. By Proposition 4.3, it therefore suffices to bound the sum appearing there, which we rewrite as

$$\begin{aligned} \begin{aligned} \sum _{k=1}^\infty (\lambda l_\sigma c_1)^{2k}&\left( \frac{t'}{k} \right) ^{k(\alpha -\beta )/\alpha }=\sum _{k=1}^\infty \left( \frac{(\lambda l_\sigma c_1)^2t'^{(\alpha -\beta )/\alpha }}{k^{(\alpha -\beta )/\alpha }} \right) ^{k}. \end{aligned} \end{aligned}$$

Lemma 2.4 with \(\rho :=(\alpha -\beta )/\alpha \) and \(\theta :=\lambda ^2\), together with the above result, then gives \(\liminf _{\lambda \rightarrow \infty }\log \log \mathcal {I}_{\epsilon , t}(\lambda )/\log \lambda \ge 2/\rho =2\alpha /(\alpha -\beta )\), which finishes the proof.\(\square \)

Proof of Theorem 1.6

The above two propositions prove the theorem for all \(t\le t_0\). We now extend the result to all \(t>0\). As in the proof of Theorem 1.3, we only need to extend the above proposition to any fixed \(t>0\). For any \(T,\,t>0\),

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{T+t}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{t+T}(x)|^2+\lambda ^2 l_\sigma ^2\int _0^{T+t}\int _{B^2}p_\mathrm{D}(T+t-s_1,\,x,\,z_1)\\&\quad \times \,p_\mathrm{D}\left( T+t-s_1,\,x,\,z_1'\right) \mathrm {E}\left| u_{s_1}(z_1)u_{s_1}\left( z_1'\right) \right| f\left( z_1,z_1'\right) \mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$

This leads to

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{T+t}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{t+T}(x)|^2+\lambda ^2 l_\sigma ^2\int _0^{t}\int _{B^2}p_\mathrm{D}(t-s_1,\,x,\,z_1)\\&\quad \times \, p_\mathrm{D}\left( t-s_1,\,x,\,z_1'\right) \mathrm {E}\left| u_{T+s_1}(z_1)u_{T+s_1}\left( z_1'\right) \right| f\left( z_1,z_1'\right) \mathrm{d} z_1\mathrm{d} z_1'\mathrm{d} s_1. \end{aligned} \end{aligned}$$

A similar argument to that used in the proof of Proposition 4.3 shows that

$$\begin{aligned} \begin{aligned} \mathrm {E}\left| u_{T+t}(x)\right| ^2&\ge |(\mathcal {G}_\mathrm{D}u)_{T+t}(x)|^2\\&\quad +\, \sum _{k=1}^\infty (\lambda l_\sigma )^{2k}\int _0^t\int _{B^2}\int _0^{s_1}\int _{B^2}\cdots \int _0^{s_{k-1}}\int _{B^2} \left| (\mathcal {G}_\mathrm{D}u)_{T+s_k}(z_k)(\mathcal {G}_\mathrm{D}u)_{T+s_k}\left( z_k'\right) \right| \\&\quad \times \,\prod _{i=1}^kp_\mathrm{D}(s_{i-1}-s_{i},\, z_{i-1},\,z_i)p_\mathrm{D}(s_{i-1}-s_{i},\, z'_{i-1},\,z'_i)f\left( z_i, z_i'\right) \mathrm{d} z_i\mathrm{d} z_i'\mathrm{d} s_i. \end{aligned} \end{aligned}$$

Similar ideas to those used in the rest of the proof of Proposition 4.3 together with the proof of the above proposition show that for all \(t\le t_0\), we have

$$\begin{aligned} \liminf _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_{T+t}(x)|^2}{\log \lambda }\ge \frac{2\alpha }{\alpha -\beta }, \end{aligned}$$

for all \(T>0\) and whenever \(x\in B(0,\,R-\epsilon )\). \(\square \)

Proof of Corollary 1.7

The proof is exactly the same as that of Corollary 1.5 and is omitted.\(\square \)

5 Some Extensions

We begin this section by showing that the methods developed in this paper can be used to study the stochastic wave equation as well. More precisely, we give an alternative proof of a very interesting result proved in [13]. Consider the following equation

$$\begin{aligned} \partial _{tt} u_t(x)=\partial _{xx} u_t(x)+\lambda \sigma (u_t(x))\dot{w}(t,\,x) \quad \text {for}\quad x\in \mathbf {R}\quad \text {and}\quad t>0, \end{aligned}$$
(5.1)

with initial condition \(u_0(x)=0\) and non-random initial velocity \(v_0\) satisfying \(v_0\in L^1(\mathbf {R})\cap L^2(\mathbf {R})\) and \(\Vert v_0\Vert _{L^2(\mathbf {R})}>0\). As before, \(\sigma \) satisfies the conditions mentioned in the introduction. We set \(\mathcal {E}_t(\lambda ):=\sqrt{\int _{-\infty }^\infty \mathrm {E}|u_t(x)|^2\,\mathrm dx}\) and restate the result of [13] as follows.

Theorem 5.1

Fix \(t>0\). We then have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {E}_t(\lambda )}{\log \lambda }=1 \end{aligned}$$

Proof

We again use the theory of Walsh [18] to make sense of (5.1) as the solution to the following integral equation

$$\begin{aligned} u_t(x)=\frac{1}{2}\int _{-t}^tv_0(x-y)\,\mathrm{d} y+\frac{1}{2}\lambda \int _0^t\int _{\mathbf {R}}1_{[0,t-s]}(|x-y|)\sigma (u_s(y))W(\mathrm{d} s\mathrm{d} y). \end{aligned}$$

We now use Walsh’s isometry to obtain

$$\begin{aligned} \begin{aligned} \mathrm {E}|u_t(x)|^2&=\frac{1}{4}\left| \int _{-t}^tv_0(x-y)\,\mathrm{d} y\right| ^2\\&\quad +\,\frac{1}{4}\lambda ^2\int _0^t\int _{\mathbf {R}}1_{[0,t-s]}(|x-y|)\mathrm {E}|\sigma (u_s(y))|^2\,\mathrm{d} y\,\mathrm{d} s. \end{aligned} \end{aligned}$$

Recall that from the assumption on the initial velocity, we have

$$\begin{aligned} \int _\mathbf {R}\left| \int _{-t}^tv_0(x-y)\,\mathrm{d} y\right| ^2\,\mathrm{d} x\le 4t^2\Vert v_0\Vert ^2_{L^2(\mathbf {R})}. \end{aligned}$$
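Indeed, by the Cauchy–Schwarz inequality,

$$\begin{aligned} \left| \int _{-t}^tv_0(x-y)\,\mathrm{d} y\right| ^2\le 2t\int _{-t}^t|v_0(x-y)|^2\,\mathrm{d} y, \end{aligned}$$

and integrating over \(x\in \mathbf {R}\) gives the stated bound.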

Integrating over \(x\in \mathbf {R}\), using \(\int _{\mathbf {R}}1_{[0,t-s]}(|x-y|)\,\mathrm{d} x=2(t-s)\) together with the above bound and the growth assumption on \(\sigma \), we obtain

$$\begin{aligned} \mathcal {E}^2_t(\lambda )\le t^2\Vert v_0\Vert ^2_{L^2(\mathbf {R})}+\frac{1}{2}\lambda ^2L^2_\sigma \int _0^t(t-s)\mathcal {E}^2_s(\lambda )\,\mathrm{d} s. \end{aligned}$$
(5.2)

Using similar ideas together with the lower bound on \(\sigma \), we obtain

$$\begin{aligned} \mathcal {E}^2_t(\lambda )\ge c_t+\frac{1}{2}\lambda ^2l^2_\sigma \int _0^t(t-s)\mathcal {E}^2_s(\lambda )\,\mathrm{d} s, \end{aligned}$$
(5.3)

where \(c_t:=\frac{1}{4}\int _{\mathbf {R}}\left| \int _{-t}^tv_0(x-y)\,\mathrm{d} y\right| ^2\mathrm{d} x\) is strictly positive for every \(t>0\) since \(\Vert v_0\Vert _{L^2(\mathbf {R})}>0\).

We now use Propositions 2.5 and 2.6 together with the above two inequalities to obtain the result.\(\square \)
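For the reader's convenience, we record the exponent count: (5.2) and (5.3) are of the form (2.4) and (2.6) with \(\kappa \) proportional to \(\lambda ^2\) and with exponent \(\beta =2\), so that Propositions 2.5 and 2.6 give

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathcal {E}^2_t(\lambda )}{\log \lambda }=2\cdot \frac{1}{2}=1, \end{aligned}$$

and since \(\log \mathcal {E}_t(\lambda )=\frac{1}{2}\log \mathcal {E}^2_t(\lambda )\), the same limit holds for \(\log \log \mathcal {E}_t(\lambda )/\log \lambda \).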

The method developed so far can be adapted to the study of a much wider class of stochastic heat equations, once we have the “right” heat kernel estimates. Indeed, (2.1) and (2.2) were two crucial elements of our method. So by considering operators whose heat kernels behave in a nice way, we can generate examples of stochastic heat equations for which we can apply our method. Recall that we are considering equations of the type,

$$\begin{aligned} \partial _t u_t(x)=\mathcal {L}u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x). \end{aligned}$$
(5.4)

In what follows, we will choose different operators \(\mathcal {L}\) while keeping all the other conditions as before. And again, the choice of these operators will make the boundary conditions clear. Some of the equations below appear to be new. We again do not prove existence–uniqueness results, as these are fairly standard once we have a grip on the heat kernel. See [6, 18].

Example 5.2

We choose \(\mathcal {L}\) to be the generator of a Brownian motion defined on the interval (0, 1) which is reflected at the point 1 and killed at the other end of the interval. So, we are in fact looking at

$$\begin{aligned} \left| \begin{array}{ll} \partial _t u_t(x)=\frac{1}{2}\partial _{xx}u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x)&{}\quad \text {for}\quad 0<x<1\quad \text {and}\quad t>0\\ u_t(0)=0,\quad \partial _{x}u_t(1)=0 &{}\quad \text {for}\quad t>0. \end{array} \right. \end{aligned}$$

It can be shown that for any \(\epsilon >0\), there exists a \(t_0>0\) such that for all \(x,\,y\in [\epsilon , 1)\) and \(t\le t_0\), the heat kernel of this Brownian motion satisfies

$$\begin{aligned} p(t,\,x,\,y)\asymp t^{-d/2}, \end{aligned}$$

whenever \(|x-y|\le t^{1/2}\). We use the method developed in this paper to conclude that

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_t(x)|^2}{\log \lambda }=\frac{4}{2-\beta }, \end{aligned}$$

whenever \(x\in [\epsilon ,1)\).

Example 5.3

Let \(X_t\) be the censored stable process as introduced in [1]; these processes have also been studied in [4]. Roughly speaking, the censored stable process in the ball \(B(0,\,R)\) is obtained by suppressing the jumps from \(B(0,\,R)\) to its complement \(B(0,\,R)^c\). The process is thus forced to stay inside \(B(0,\,R)\). We denote the generator of this process by \(-(-\Delta )^{\alpha /2}|_{B(0,\,R)}\) and consider the following equation

$$\begin{aligned} \partial _t u_t(x)=-(-\Delta )^{\alpha /2}|_{B(0,\,R)} u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x), \end{aligned}$$
(5.5)

In a sense, the above equation can be regarded as a fractional equation with Neumann boundary condition. In Ref. [4], it was shown that the transition density of \(X_t\), which we denote by \(\bar{p}(t,\,x,\,y)\), satisfies

$$\begin{aligned} \bar{p}(t,\,x,\,y)\asymp \left( 1\wedge \frac{\delta ^{\alpha /2}_{B(0,\,R)}(x)}{t^{1/2}}\right) \left( 1\wedge \frac{\delta ^{\alpha /2}_{B(0,\,R)}(y)}{t^{1/2}}\right) p(t,\,x,\,y), \end{aligned}$$

So we can proceed as in the proof of Theorem 1.6 to see that we have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -\beta }, \end{aligned}$$
(5.6)

where the conditions on \(\alpha \) and \(\beta \) are the same as those stated in Sect. 1.

Example 5.4

In this example, we choose \(\mathcal {L}\) to be the generator of the relativistic stable process killed upon exiting the ball \(B(0,\,R)\). We are therefore looking at the following equation

$$\begin{aligned} \left| \begin{aligned}&\partial _t u_t(x)= mu_t(x)-(m^{2/\alpha }-\Delta )^{\alpha /2}u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x),\\&u_t(x)=0, \quad \text {for all}\quad x\in B(0,\,R)^c. \end{aligned} \right. \end{aligned}$$

Here m is some fixed positive number. One can show that for any \(\epsilon >0\), there exists a \(t_0>0\), such that for all \(x,y\in B(0,\,R-\epsilon )\) and \(t\le t_0\), we have

$$\begin{aligned} p(t,\,x,\,y)\asymp t^{-d/\alpha }, \end{aligned}$$

whenever \(|x-y|\le t^{1/\alpha }\); see for instance [5]. The constants involved in the above inequality depend on \(m\). We therefore have the same conclusion as that of Theorem 1.6. In other words, we have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\frac{\log \log \mathrm {E}|u_t(x)|^2}{\log \lambda }=\frac{2\alpha }{\alpha -\beta }, \end{aligned}$$
(5.7)

whenever \(x\in B(0,R-\epsilon )\) and the conditions on \(\alpha \) and \(\beta \) are the same as those stated in Sect. 1.

Example 5.5

Let \(0<\bar{\alpha }\le \alpha \) with \(1<\alpha <2\) and consider the following

$$\begin{aligned} \left| \begin{aligned}&\partial _t u_t(x)= -(-\Delta )^{\alpha /2}u_t(x)-(-\Delta )^{\bar{\alpha }/2}u_t(x)+\lambda \sigma (u_t(x))\dot{F}(t,\,x),\\&u_t(x)=0, \quad \text {for all}\quad x\in B(0,\,R)^c. \end{aligned} \right. \end{aligned}$$

The Dirichlet heat kernel for the operator \(\mathcal {L}:=-(-\Delta )^{\alpha /2}-(-\Delta )^{\bar{\alpha }/2}\) has been studied in [3]. Since \(\bar{\alpha }\le \alpha \), it is known that for small times the behaviour of the heat kernel is dominated by the fractional Laplacian \(-(-\Delta )^{\alpha /2}\). More precisely, for any \(\epsilon >0\), there exists a \(t_0>0\), such that for all \(x,y\in B(0,\,R-\epsilon )\) and \(t\le t_0\), we have

$$\begin{aligned} p(t,\,x,\,y)\asymp t^{-d/\alpha }, \end{aligned}$$

whenever \(|x-y|\le t^{1/\alpha }\). Therefore, in this case also, we have (5.7).