1 Introduction and main results

A very influential paper by Fujita [12] studies the following equation

$$\begin{aligned} \partial _tu(t, x)&= \Delta u(t, x)+u(t,\,x)^{1+\eta }\quad x\in \mathbb {R}^d\nonumber \\ u(0, x)&=u_0(x). \end{aligned}$$
(1.1)

Let \(\eta _c=\frac{2}{d}\). It was shown in [12] that when \(0<\eta <\eta _c\), there is no nontrivial global solution, no matter how small the nonnegative initial condition \(u_0\) is. When \(\eta >\eta _c\), one can construct nontrivial global solutions provided \(u_0\) is small enough. The critical case \(\eta =\eta _c\) was shown to fall into the first category; see [13, 18]. These results have inspired many generalizations; see the survey papers [8, 20] and the book [23]. Equation (1.1) can be interpreted via the integral equation

$$\begin{aligned} u(t,\,x)=\int _{\mathbb {R}^d}p(t,\,x-y)u_0(y)\,\mathrm{d}y+\int _0^t\int _{\mathbb {R}^d}p(t-s,\,x-y)u(s,\,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s,\nonumber \\ \end{aligned}$$
(1.2)

where \(p(t,\,x)\) is the Gaussian heat kernel. This is the approach we adopt here.
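
A quick heuristic for the exponent \(\eta _c=2/d\) (a standard scaling argument, offered only as motivation; it is not the argument of [12]): for \(u_0\in L^1(\mathbb {R}^d)\) the linear part of (1.2) decays like \(t^{-d/2}\) in the supremum norm, so, viewing the nonlinearity as a perturbation of the dynamics \(\dot{u}=u^{1+\eta }\), one expects global solutions to be possible precisely when

$$\begin{aligned} \int _1^\infty \Big (\sup _{x\in \mathbb {R}^d}\int _{\mathbb {R}^d}p(s,\,x-y)u_0(y)\,\mathrm{d}y\Big )^{\eta }\,\mathrm{d}s\lesssim \int _1^\infty s^{-d\eta /2}\,\mathrm{d}s \end{aligned}$$

is finite, that is, when \(\eta >2/d\). The same heuristic applied to the kernel G of (1.3) suggests the exponent \(\alpha /\beta d\) studied below.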

Roughly speaking, our aim here is to look at similar questions for a class of equations which involve the fractional Laplacian as well as a fractional time derivative. Equations of this type have been receiving a lot of attention lately; see the recent works of Allen et al. [3, 4] and of Allen [1, 2], among others, on the purely analytic side, and the very recent work of Capitanelli and D’Ovidio [5] and references therein for the more probabilistic aspects. See [7, 22] for space–time fractional equations in bounded domains with Dirichlet boundary conditions. Consider the following generalization of (1.2),

$$\begin{aligned} V(t, x)= \int _{\mathbb {R}^d} G(t,\,x-y)V_0(y)\,\mathrm{d}y + \int _{\mathbb {R}^d}\int _0^t G(t-s,\,x-y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y.\nonumber \\ \end{aligned}$$
(1.3)

The first term in the above display now solves the space–time fractional heat equation

$$\begin{aligned} \partial ^\beta _tV(t, x)&= -(-\Delta )^{\alpha /2} V(t, x),\ \ x\in \mathbb {R}^d,\nonumber \\ V(0, x)&=V_0(x), \end{aligned}$$
(1.4)

where \(\alpha \in (0,\,2)\) and \(\beta \in (0,\,1)\). The fractional time derivative is the Caputo derivative defined by

$$\begin{aligned} \partial ^\beta _t V(t,x)=\frac{1}{\Gamma (1-\beta )}\int _0^t \frac{\partial V(r,x)}{\partial r}\frac{\mathrm{d}r}{(t-r)^\beta }. \end{aligned}$$
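
For orientation, here is a standard worked example (a routine computation, not needed elsewhere in the paper): for any \(\gamma >0\),

$$\begin{aligned} \partial ^\beta _t\, t^\gamma =\frac{1}{\Gamma (1-\beta )}\int _0^t \gamma r^{\gamma -1}(t-r)^{-\beta }\,\mathrm{d}r =\frac{\gamma \,\Gamma (\gamma )\Gamma (1-\beta )}{\Gamma (1-\beta )\Gamma (\gamma +1-\beta )}\,t^{\gamma -\beta } =\frac{\Gamma (\gamma +1)}{\Gamma (\gamma +1-\beta )}\,t^{\gamma -\beta }, \end{aligned}$$

while the Caputo derivative of a constant vanishes.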

The solution to (1.3) is referred to as the integral solution to the following equation

$$\begin{aligned} {\begin{aligned} \partial ^\beta _tV(t, x)&= -(-\Delta )^{\alpha /2} V(t, x)+I^{1-\beta }_t[ V(t, x)^{1+\eta }],\\ V(0, x)&=V_0(x). \end{aligned} } \end{aligned}$$
(1.5)

More precisely, the integral solution to (1.5) is a measurable, almost everywhere finite function V which satisfies (1.3) for almost every \((t,\,x)\in \mathbb {R}^+\times \mathbb {R}^d\). See page 78 of [23] for more information. The operator \(-(-\Delta )^{\alpha /2} \) denotes the fractional Laplacian, which is the generator of an \(\alpha \)-stable process. \(V_0\) will always be assumed to be a non-negative function; further assumptions on \(V_0\) will be imposed later. The operator \(I^{1-\beta }_t\) is defined by

$$\begin{aligned} I^{1-\beta }_{t}f(t):=\frac{1}{\Gamma (1-\beta )}\int _{0}^{t}(t-\tau )^{-\beta }f(\tau )\mathrm{d}\tau . \end{aligned}$$

Its presence is important in making the connection between (1.3) and (1.5); see [26] for the fractional Duhamel's principle. We note that the time-fractional equation (1.5) is a particular type of reaction–diffusion equation and is therefore useful from the point of view of applications. The presence of the term \(I_t^{1-\beta }\) in front of the non-linear term means that, in the absence of diffusion, the reaction behaves according to the classical dynamics \(\dot{V}=V^{1+\eta }\). The fractional time derivative, on the other hand, models subdiffusive behavior rather than the diffusive behavior one would have without it. For more information regarding fractional dynamics, see [22].
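
To see this reduction formally (a heuristic, ignoring regularity issues): the Caputo derivative can be written as \(\partial ^\beta _t=I^{1-\beta }_t\circ \partial _t\), so dropping the diffusion term in (1.5) gives

$$\begin{aligned} I^{1-\beta }_t[\partial _tV(t,\,x)]=\partial ^\beta _tV(t, x)=I^{1-\beta }_t[V(t,\,x)^{1+\eta }], \end{aligned}$$

and cancelling the operator \(I^{1-\beta }_t\) on both sides recovers \(\dot{V}=V^{1+\eta }\). The rigorous link between (1.3) and (1.5) is the fractional Duhamel principle of [26] mentioned above.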

Our main findings can be summarized as follows:

  • We show that \(\eta _c=\frac{\alpha }{\beta d}\). This is a direct generalization of the dichotomy first discovered in [12, 13, 18]. When \(\beta =1\) and \(\alpha =2\), (1.3) becomes (1.2), so our new exponent is consistent with that obtained in [12].

  • We also study (1.3) on a bounded domain with Dirichlet boundary conditions. For the usual heat equation, that is, with the usual time derivative and Laplacian, there is no such dichotomy: one can always produce global solutions no matter what \(\eta \) is; see [23]. In our case, we show that this is not true; for small \(\eta \), there is no global solution other than the trivial one.

We focus only on integral solutions to (1.5). The book [23] contains a list of other concepts of solution. There are also various meanings of non-existence or blow-up of solutions; we will focus mainly on point-wise non-existence. See [20] or [23], where this is explained in great detail. Our method relies on some new estimates on the heat kernel associated with (1.4), some of which were proved in [11] and later extended in [6]. We will make use of subordination to obtain new information about the heat kernel; see (2.8) of the current paper. A difficulty in establishing non-existence on the whole space is that the heat kernel does not satisfy the semigroup property, and we had to develop a new strategy to obtain our first result. Since we bypass the semigroup property, our method might even be new for the classical heat equation, that is, when \(\alpha =2\) and \(\beta =1\). Our first theorem reads as follows.

Theorem 1.1

Suppose that \(0<\eta \leqslant \alpha /\beta d\) and \(\Vert V_0\Vert _{L^1(\mathbb {R}^d)}<\infty \). If we further assume that \(V_0\) is strictly positive on a set of positive measure, then for any fixed \(x\in \mathbb {R}^d\), there exists a \(t_0>0\) such that the solution to (1.5) blows up at \(t_0\).

The above theorem generalizes Theorem 18.3 of [23], but the method is different. The presence of the time-fractional derivative means that when \(\alpha \leqslant d\), the heat kernel has a singularity at \(x=0\) for all \(t>0\). This partly motivated the proof of the next theorem.

Theorem 1.2

Suppose that \(\eta > \alpha /\beta d\) and let \(q_c=\beta d\eta /\alpha \). Then, for \(V_0\not \equiv 0\) such that \(\Vert V_0\Vert _{L^{q_c}({{\mathbb {R}}^d})}\) is small, the solutions to (1.5) exist globally in the sense that \(\Vert V(t,\,\cdot ) \Vert _{L^\infty (\mathbb {R}^d)}<\infty \) for all \(t>0\).

In fact, we will also show that for some \(p>1\), \(\Vert V(t,\,\cdot )\Vert _{L^p(\mathbb {R}^d)}\) decays polynomially. This is also an extension of previously known results. We will also show that the solution is jointly continuous whenever it exists. Even though regularity properties of the solution are not a priority here, our results in this direction seem to be new. When \(d<\alpha \), we have better estimates on the heat kernel, so we can establish the following stronger result. Since \(\alpha \in (0,\,2)\), this condition restricts the dimension to \(d=1\). The theorem below significantly extends Theorem 20.1 of [23].

Theorem 1.3

Let \(d<\alpha \) and \(\eta >\alpha /\beta d\). There is some \(\delta >0\) such that if \(V_0\) satisfies

$$\begin{aligned} 0\leqslant V_0(x)\leqslant \delta G(\gamma ,\,x)\quad \text {for all}\quad x\in \mathbb {R}^d, \end{aligned}$$

where \(\gamma \) is a positive constant, then

$$\begin{aligned} V(t,\,x)\lesssim G(t+\gamma ,\,x). \end{aligned}$$

Moreover, the solution is jointly continuous on \((0,\,\infty )\times \mathbb {R}^d\).

We have therefore shown that \(\eta _c=\frac{\alpha }{\beta d}\). This is consistent with the characterization of the critical exponent as the reciprocal of the quantity

$$\begin{aligned} \eta ^*:=\sup \left\{ a>0:\ \sup _{t\in (0,\,\infty ),\,x\in \mathbb {R}^d}t^a\int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y<\infty \right\} . \end{aligned}$$

Indeed, one can show that the supremum of \(\int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y\) behaves like \(t^{-\beta d/\alpha }\). This characterization also gives \(\eta _c=0\) when (1.1) is solved on a bounded domain with Dirichlet boundary condition; see page 108 of [23], where this is described in more detail. Our results should be compared with those in [17], where a different class of fractional equations was studied and Fujita exponents were obtained. However, the presence of the operator \(I^{1-\beta }\) in (1.5) means that their results do not cover ours.

Our next result shows that this is no longer true for the corresponding equation with a time-fractional derivative: on a bounded domain, small \(\eta \) still forces blow-up. Fix \(R>0\) and consider the following

$$\begin{aligned} \begin{aligned} \partial ^\beta _tV(t, x)&= -(-\Delta )^{\alpha /2} V(t,x)+I^{1-\beta }_t[ V(t, x)^{1+\eta }], \quad {t>0}\quad \text {and}\quad x\in B(0,\,R),\\ V(t, x)&= 0,\ \ x\in B(0,\,R)^c,\\ V(0, x)&=V_0(x), \ \ x\in B(0,\,R). \end{aligned} \end{aligned}$$
(1.6)

Here \(-(-\Delta )^{\alpha /2}\) denotes the generator of the \(\alpha \)-stable process killed upon exiting the ball \(B(0,\,R)\); that is, the infinitesimal generator of the semigroup corresponding to the symmetric stable process killed on the exterior of \(B(0,\,R)\). The same process can also be obtained by subordinating Brownian motion and then killing it on the exterior. Fractional powers of the Dirichlet Laplacian correspond to a different process, obtained by killing the Brownian motion upon reaching the boundary of the ball \(B(0,\,R)\) and then subordinating it.

We will again look at the integral formulation of the equation,

$$\begin{aligned} V(t, x)= & {} \int _{B(0,\,R)} G_D(t,\,x,\,y)V_0(y)\,\mathrm{d}y\nonumber \\&+ \int _{B(0,\,R)}\int _0^t G_D(t-s,\,x,\,y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y, \end{aligned}$$
(1.7)

where \(G_D(t,\,x,\,y)\) is now the Dirichlet heat kernel of the associated operator. Let \(\phi _1\) denote the first eigenfunction of the above Dirichlet fractional Laplacian and set

$$\begin{aligned} K_{V_0, \phi _1}:=\int _{B(0,\,R)}V_0(x)\phi _1(x)\,\mathrm{d}x. \end{aligned}$$

We are now ready to state the final theorem of this paper. This is a consequence of the spectral decomposition of the heat kernel in terms of Mittag-Leffler functions and the proof uses the eigenfunction method of [14]. The first part of this theorem is in sharp contrast with Theorem 19.2 of [23].

Theorem 1.4

Suppose that \(0<\eta <1/\beta -1\). Then there is no global solution to (1.6) whenever \(K_{V_0, \phi _1}>0\). For any \(\eta >0\), there is no global solution whenever \(K_{V_0, \phi _1}\) is positive and large enough.

At this point we do not investigate the dichotomy as we did for the equation on the whole space. One can perhaps argue that, since the solution to the Dirichlet problem is smaller than the one on the whole space, global solutions can be found when \(\eta \) is large enough.

Here is the plan of the article. Section 2 contains estimates needed for the proof of Theorem 1.1, which is given in Sect. 3. Section 4 is devoted to the proof of Theorem 1.2, while the proofs of Theorems 1.3 and 1.4 are given in Sects. 5 and 6, respectively. We use the notation \(f(t,x)\lesssim (\gtrsim )\, g(t,x)\) when there exists a constant C independent of \((t,\,x)\) such that \(f(t,x)\leqslant (\geqslant )\, C g(t,x)\) for all \((t, x)\in (0,\,\infty )\times \mathbb {R}^d\).

2 Some estimates

We begin this section with a brief description of the process associated with (1.4). However, we will not use this process directly; instead, we will use it to derive a suitable representation of its heat kernel. See [7, 21] for more information. Let \(X_t\) denote a symmetric \(\alpha \)-stable process associated with the fractional Laplacian, where \(\alpha \in (0,2)\). Its density function will be denoted by \(p(t,\,x)\) and is characterized through its Fourier transform,

$$\begin{aligned} {\widehat{p}(t,\,\xi )=\int _{{\mathbb {R}}^d}e^{-i\xi \cdot x}p(t,x)\mathrm{d}x}=e^{-t|\xi |^\alpha }. \end{aligned}$$

The following properties of \(p(t,\,x)\) will be needed in this paper:

  • $$\begin{aligned} p(st, x)=s^{-d/\alpha }p(t,s^{-1/\alpha }x). \end{aligned}$$
    (2.1)
  • $$\begin{aligned} \left| \frac{\partial p(t,\,x)}{\partial t}\right| \lesssim \frac{1}{t}p(t,\,x). \end{aligned}$$
    (2.2)
  • $$\begin{aligned} |\nabla p(t,\,x)|\lesssim \frac{1}{t^{1/\alpha }}p(t,\,x). \end{aligned}$$
    (2.3)
  • For all \(t>0\), \(x,y \in \mathbb {R}^d\) and \(\rho \in [0,\,1]\),

    $$\begin{aligned} {|p(t,\,y)-p(t,\,x)|\lesssim \frac{|x-y|^\rho }{t^{\rho /\alpha }}[p(t,\,x/2)+p(t,\,y/2)].} \end{aligned}$$
    (2.4)

We also have

$$\begin{aligned} c_1\bigg (t^{-d/\alpha }\wedge \frac{t}{|x|^{d+\alpha }}\bigg )\leqslant p(t,x)\leqslant c_2\bigg (t^{-d/\alpha }\wedge \frac{t}{|x|^{d+\alpha }}\bigg ), \end{aligned}$$
(2.5)

for some positive constants \(c_1\) and \(c_2\). The first identity (2.1) follows from scaling. The bounds on the derivatives are also standard and can be found, for instance, in [19, 25]. Inequalities (2.4) and (2.5) can be found in [10] and [19], respectively. The process associated with (1.4) is not Markov and the heat kernel \(G(t,\,x)\) does not satisfy the semigroup property. We describe this process next. Let \(D=\{D_r,\,r\geqslant 0\}\) be a \(\beta \)-stable subordinator with \(\beta \in (0,1)\), whose Laplace transform is given by \(\mathbb {E}(e^{-sD_t})=e^{-ts^\beta }\). Let \(E_t\) be its first passage time defined by

$$\begin{aligned} E_t=\inf \{\tau : D_\tau >t\}. \end{aligned}$$
(2.6)

\(E_t\) is also called the inverse subordinator. The process we will be interested in is the time-changed process \(X_{E_t}\), which is the process associated with the time-fractional heat equation (1.4). Its density \(G(t,\,x)\) is obtained by a simple conditioning as follows

$$\begin{aligned} G(t,\,x) = \int _{0}^\infty p(s,\,x) f_{E_t}(s)\mathrm{d}s, \end{aligned}$$
(2.7)

where

$$\begin{aligned} f_{E_t}(x)=t\beta ^{-1}x^{-1-1/\beta }g_\beta (tx^{-1/\beta }). \end{aligned}$$

The function \(g_\beta (\cdot )\) is the density function of \(D_1\); it is infinitely differentiable on the entire real line, with \(g_\beta (u)=0\) for \(u\leqslant 0\). After the change of variables \(u=ts^{-1/\beta }\), (2.7) turns into

$$\begin{aligned} G(t,\,x)=\int _0^\infty p\left( \left( \frac{t}{u}\right) ^\beta , x\right) g_\beta (u)\,\mathrm{d}u, \end{aligned}$$
(2.8)

which makes the following asymptotic properties particularly useful,

$$\begin{aligned} g_\beta (u)\sim & {} K(\beta /u)^{(1-\beta /2)/(1-\beta )}\exp \{-|1-\beta |(u/\beta )^{\beta /(\beta -1)}\}\quad \text{ as }\,\, u\rightarrow 0+,\nonumber \\&{\quad \text{ where }\, K>0,} \end{aligned}$$
(2.9)

and

$$\begin{aligned} g_\beta (u)\sim \frac{\beta }{\Gamma (1-\beta )}u^{-\beta -1} \quad \text{ as }\,\, u\rightarrow \infty . \end{aligned}$$
(2.10)

Using (2.8) together with (2.1), we obtain

  • $$\begin{aligned} G(st, x)=s^{-\beta d/\alpha }G(t,s^{-\beta /\alpha }x). \end{aligned}$$
    (2.11)
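
As a quick sanity check on the subordination formula (2.8) and the scaling relation (2.11) (not part of the paper's argument), here is a minimal numerical sketch for the special case \(\beta =1/2\), \(\alpha =2\), \(d=1\), where both densities have closed forms; the function names below are ours.

```python
# Numerical sketch of (2.8) for beta = 1/2, alpha = 2, d = 1:
#   p(s, x) is the Gaussian kernel with Fourier transform exp(-s |xi|^2),
#   g_{1/2}(u) = u^{-3/2} exp(-1/(4u)) / (2 sqrt(pi)) is the density of D_1
#   with Laplace transform E exp(-s D_1) = exp(-sqrt(s)).
import numpy as np
from scipy.integrate import quad

beta, alpha, d = 0.5, 2.0, 1

def p(s, x):
    return np.exp(-x**2 / (4.0 * s)) / np.sqrt(4.0 * np.pi * s)

def g_beta(u):
    return u**-1.5 * np.exp(-1.0 / (4.0 * u)) / (2.0 * np.sqrt(np.pi))

def G(t, x):
    # G(t, x) = \int_0^infty p((t/u)^beta, x) g_beta(u) du, formula (2.8)
    val, _ = quad(lambda u: p((t / u)**beta, x) * g_beta(u), 0.0, np.inf)
    return val

# Check the scaling (2.11): G(s t, x) = s^{-beta d/alpha} G(t, s^{-beta/alpha} x).
t, x, s = 1.3, 0.7, 2.5
print(G(s * t, x), s**(-beta * d / alpha) * G(t, s**(-beta / alpha) * x))
# The two printed numbers should agree up to quadrature error.
```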

As explained above, our method will be partly inspired by the following inequality which was proved in [11] and subsequently generalized in [6].

$$\begin{aligned} c_1\bigg (t^{-\beta d/\alpha }\wedge \frac{t^\beta }{|x|^{d+\alpha }}\bigg )\leqslant G(t,\,x)\leqslant c_2\bigg (t^{-\beta d/\alpha }\wedge \frac{t^\beta }{|x|^{d+\alpha }}\bigg ), \end{aligned}$$
(2.12)

where the upper bound is valid for \(\alpha > d\) only. In this case, we immediately have

$$\begin{aligned} p(t^\beta ,\,x)\lesssim G(t,\,x)\lesssim p(t^\beta ,\,x), \end{aligned}$$
(2.13)

which we will use to compensate for the lack of the semigroup property. When \(\alpha =d\) and \(|x|\leqslant t^{\beta /\alpha }\), we have

$$\begin{aligned} t^{-\beta }\log \left( \frac{2}{|x|t^{-\beta /\alpha }}\right) \lesssim G(t,\,x)\lesssim t^{-\beta }\log \left( \frac{2}{|x|t^{-\beta /\alpha }}\right) \end{aligned}$$

and when \(d>\alpha \),

$$\begin{aligned} \frac{t^{-\beta }}{|x|^{d-\alpha }}\lesssim G(t,\,x)\lesssim \frac{t^{-\beta }}{|x|^{d-\alpha }}. \end{aligned}$$

When \(|x|\geqslant t^{\beta /\alpha }\), \(G(t,\,x)\) satisfies the bounds given by (2.12) even when \(d\geqslant \alpha \); this was shown in [6]. See also Lemma 3.3 and Lemma 3.7 in [15] for point-wise and gradient estimates of the kernel \(G(t,\,x)\). Another recent paper with estimates on the kernel \(G(t,\,x)\) is that of Kim and Lim [16]. In the very interesting paper [9], the authors also noticed that when \(d>\alpha \), the heat kernel is better behaved. We have the following estimates on the derivatives of the heat kernel.

Proposition 2.1

For any \(t>0\) and \(x\in \mathbb {R}^d\), we have

  1. (a)
    $$\begin{aligned} \left| \frac{\partial G(t,\,x)}{\partial t}\right| \lesssim \frac{1}{t}G(t,\,x). \end{aligned}$$
    (2.14)
  2. (b)

    Fix \(T>0\) and let \(\rho \in [0,\,1]\) with \(\rho <\alpha \); then for \(t\in (0,\,T]\), we have

    $$\begin{aligned} \int _{\mathbb {R}^d}|G(t,\,x+h)-G(t,\,x)|f(t,\,x)\,\mathrm{d}x\lesssim \frac{|h|^{\rho }}{t^{\rho \beta /\alpha }}, \end{aligned}$$
    (2.15)

    where \(h\in \mathbb {R}^d\) and \(f(t,\,x)\) is a function satisfying \(\sup _{t\in [0,\,T]}\Vert f(t,\,\cdot )\Vert _{L^\infty (\mathbb {R}^d)}<\infty \).

Proof

The proof of the first part follows from

$$\begin{aligned} G(t,\,x)=\int _0^\infty p\left( \left( \frac{t}{u}\right) ^\beta , x\right) g_\beta (u)\,\mathrm{d}u, \end{aligned}$$

and (2.2), (2.3) and the asymptotic properties of \(g_\beta (u)\). For the second part, we use (2.4) to obtain

$$\begin{aligned} G(t,\,x+h)-G(t,\,x)&=\int _0^\infty \left[ p\left( \left( \frac{t}{u}\right) ^\beta , x+h\right) -p\left( \left( \frac{t}{u}\right) ^\beta , x\right) \right] g_\beta (u)\,\mathrm{d}u\\&\lesssim \frac{|h|^{\rho }}{t^{\rho \beta /\alpha }}\int _0^\infty u^{\rho \beta /\alpha }\left[ p\left( \left( \frac{t}{u}\right) ^\beta , \frac{x+h}{2}\right) \right. \\&\quad \left. +\,p\left( \left( \frac{t}{u}\right) ^\beta , \frac{x}{2}\right) \right] g_\beta (u)\,\mathrm{d}u. \end{aligned}$$

Hence, we have

$$\begin{aligned} \int _{\mathbb {R}^d}|G(t,\,x+h)-G(t,\,x)|f(t,\,x)\,\mathrm{d}x&\lesssim \frac{|h|^{\rho }}{t^{\rho \beta /\alpha }}\int _0^\infty u^{\rho \beta /\alpha }g_\beta (u)\,\mathrm{d}u \\&\lesssim \frac{|h|^{\rho }}{t^{\rho \beta /\alpha }}, \end{aligned}$$

where we have used the fact that for any \(s\geqslant 0\) and \(z\in \mathbb {R}^d\),

$$\begin{aligned} \int _{\mathbb {R}^d} p(s,\,x+y+z)&f(t,\,y)\,\mathrm{d}y\\&\leqslant \sup _{t\in [0,\,T]}\Vert f(t,\,\cdot )\Vert _{L^\infty (\mathbb {R}^d)}\int _{\mathbb {R}^d} p(s,\,x+y+z)\mathrm{d}y, \end{aligned}$$

to arrive at the first inequality. That the integral

$$\begin{aligned} \int _0^\infty u^{\rho \beta /\alpha }g_\beta (u)\,\mathrm{d}u \end{aligned}$$

is finite when \(\rho <\alpha \) can be seen from the behavior of \(g_\beta (u)\) as \(u\rightarrow \infty \): by (2.10), the integrand behaves like \(u^{\rho \beta /\alpha -\beta -1}\) for large u, which is integrable at infinity precisely when \(\rho <\alpha \).

\(\square \)

Set

$$\begin{aligned} \mathcal {G}f(t,\,x):=\int _{\mathbb {R}^d}G(t,\,x-y)f(y)\,\mathrm{d}y, \end{aligned}$$

and

$$\begin{aligned} \mathcal {A}f(t,\,x):=\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)f(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

We will need the following to argue that the solution is jointly continuous whenever it exists.

Proposition 2.2

  • Suppose that \(V_0\) is such that \(\sup _{t\in (0,\,T)} \Vert \mathcal {G}V_0(t,\,\cdot )\Vert _{L^\infty (\mathbb {R}^d)}<\infty \) for some \(T\leqslant \infty \). Then \(\mathcal {G}V_0(t,\,x)\) is jointly continuous on \((0,\,T)\times \mathbb {R}^d\).

  • Suppose that \(\sup _{t\in (0,T]}\Vert f(t,\,\cdot )\Vert _{L^\infty (\mathbb {R}^d)}<\infty \) for some \(T\leqslant \infty \). Then \(\mathcal {A}f(t,\,x)\) is jointly continuous on \((0,\,T)\times \mathbb {R}^d\).

Proof

The proof uses Proposition 2.1. We merely indicate how to start the proof of the more technical part. For \(h>0\), \(k\in \mathbb {R}^d\), we write

$$\begin{aligned} \mathcal {A}f(t+h,\,x+k)-\mathcal {A}f(t,\,x)&=\mathcal {A}f(t+h,\,x+k)-\mathcal {A}f(t,\,x+k)\\&\quad +\mathcal {A}f(t,\,x+k)-\mathcal {A}f(t,\,x)\\&:=I+II. \end{aligned}$$

For the first part, we have

$$\begin{aligned} I&=\int _0^{t+h}\int _{\mathbb {R}^d}G(t+h-s,\,x+k-y)f(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\quad -\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x+k-y)f(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&=\int _0^t\int _{\mathbb {R}^d}[G(t+h-s,\,x+k-y)-G(t-s,\,x+k-y)]f(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\quad +\int _t^{t+h}\int _{\mathbb {R}^d}G(t+h-s,\,x+k-y)f(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

We can now use the above Proposition to bound each term. We deal with the second part in a similar fashion.

\(\square \)

Lemma 2.3

Suppose that \(\Vert V_0\Vert _{L^1(\mathbb {R}^d)}<\infty \). If we further assume that \(V_0\) is strictly positive on a set of positive measure, then there exists a \(T>0\), such that for all \(t\geqslant T\),

$$\begin{aligned} \mathcal {G}V_0(t,\,x)\gtrsim \frac{1}{t^{\beta d/\alpha }}\quad \text {for all}\quad x\in B(0,\,t^{\beta /\alpha }). \end{aligned}$$

Proof

Let \(x\in B(0,\,t^{\beta /\alpha })\). We now use the lower bound on the heat kernel to write

$$\begin{aligned} \mathcal {G}V_0(t,\,x)&= \int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y\\&\geqslant \int _{B(0,\,t^{\beta /\alpha })}G(t,\,x-y)V_0(y)\,\mathrm{d}y\\&\gtrsim \frac{1}{t^{\beta d/\alpha }} \int _{B(0,\,t^{\beta /\alpha })}V_0(y)\,\mathrm{d}y. \end{aligned}$$

Choosing T so that \(\int _{B(0,\,T^{\beta /\alpha })}V_0(y)\,\mathrm{d}y\geqslant \frac{1}{4}\Vert V_0\Vert _{L^1(\mathbb {R}^d)}\), we obtain the desired inequality for \(t\geqslant T\). \(\square \)

3 Proof of Theorem 1.1

Proposition 3.1

Suppose that \(\eta <\frac{\alpha }{\beta d}\) and let \(M>0\). Then there exists a \(T_0>0\) such that for \(t\geqslant T_0\),

$$\begin{aligned} \inf _{x\in B(0,\,t^{\beta /\alpha })}V(t,\,x)\geqslant M. \end{aligned}$$

Proof

We begin with the integral solution,

$$\begin{aligned} V(t, x)&= \mathcal {G}V_0(t,\,x)+ \int _{\mathbb {R}^d}\int _0^t G(t-s,\,x-y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y. \end{aligned}$$

We look at the second term first. For \(x\in B(0,\,t^{\beta /\alpha })\), we have

$$\begin{aligned} \int _{\mathbb {R}^d}\int _0^t&G(t-s,\,x-y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y\\&\geqslant \int _0^t\inf _{y\in B(0,\,s^{\beta /\alpha })}V(s,\,y)^{1+\eta }\int _{B(0,\,s^{\beta /\alpha })}G(t-s,\,x-y)\mathrm{d}y\,\mathrm{d}s\\&\geqslant \int _0^{t/2}\inf _{y\in B(0,\,s^{\beta /\alpha })}V(s,\,y)^{1+\eta }\int _{B(0,\,s^{\beta /\alpha })}G(t-s,\,x-y)\mathrm{d}y\,\mathrm{d}s\\&\gtrsim \int _0^{t/2}\inf _{y\in B(0,\,s^{\beta /\alpha })}V(s,\,y)^{1+\eta }\frac{s^{\beta d/\alpha }}{t^{\beta d/\alpha }}\,\mathrm{d}s, \end{aligned}$$

where we have used the lower bounds given by (2.12). For the first term we use Lemma 2.3 to write

$$\begin{aligned} \inf _{x\in B(0,\,t^{\beta /\alpha })}\mathcal {G}V_0(t,\,x)\gtrsim \frac{1}{t^{\beta d/\alpha }}, \end{aligned}$$

whenever \(t\geqslant T\), where T is from Lemma 2.3. Combining these estimates, we obtain

$$\begin{aligned} \inf _{x\in B(0,\,t^{\beta /\alpha })}V(t,\,x)\gtrsim \frac{1}{t^{\beta d/\alpha }}+\int _0^{t/2}\inf _{y\in B(0,\,s^{\beta /\alpha })}V(s,\,y)^{1+\eta }\frac{s^{\beta d/\alpha }}{t^{\beta d/\alpha }}\,\mathrm{d}s. \end{aligned}$$

Set

$$\begin{aligned} F(t):=\inf _{x\in B(0,\,t^{\beta /\alpha })}t^{\beta d/\alpha }V(t,\,x). \end{aligned}$$

The above inequality then reduces to

$$\begin{aligned} F(t)\gtrsim 1+\int _0^{t/2}\frac{F(s)^{1+\eta }}{s^{\eta \beta d/\alpha }}\,\mathrm{d}s. \end{aligned}$$
(3.1)

Now using \(F(t)\geqslant 1\) and \(\eta <\frac{\alpha }{\beta d}\), we obtain

$$\begin{aligned} F(t)\gtrsim 1+\int _0^{t/2}\frac{1}{s^{\eta \beta d/\alpha }}\,\mathrm{d}s=1+C_1t^{1-\beta d \eta /\alpha }\geqslant C_1t^{1-\beta d \eta /\alpha }. \end{aligned}$$

Plugging \(F(t)\geqslant C_1t^{1-\beta d \eta /\alpha } \) into the inequality (3.1) we get

$$\begin{aligned} F(t)\geqslant C_2t^{2(1-\beta d \eta /\alpha )}. \end{aligned}$$

Similar computations (the underlying recursion is sketched at the end of this proof) imply that for any given fixed integer \(N>0\), there are strictly positive constants \(c_N\) and \(\tilde{c}_N\) such that

$$\begin{aligned} F(t)\gtrsim \tilde{c}_Nt^{c_N}. \end{aligned}$$

We therefore obtain \(\inf _{x\in B(0,\,t^{\beta /\alpha })}V(t,\,x)\gtrsim t^{c_N-\beta d/\alpha }\). Since \(\eta <\frac{\alpha }{\beta d}\), we can take N so that \({c_N-\beta d/\alpha }>0\). Hence, for any fixed \(M>0\), there exists a \(T_0>0\) such that \(\inf _{x\in B(0,\,t^{\beta /\alpha })}V(t,\,x)\geqslant M\) whenever \(t\geqslant T_0\).
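
For the reader's convenience, the recursion behind the constants \(c_N\) can be made explicit (a sketch, with multiplicative constants suppressed): if \(F(s)\gtrsim s^{c_N}\) for large s, then (3.1) gives, for t large,

$$\begin{aligned} F(t)\gtrsim \int _0^{t/2}s^{(1+\eta )c_N-\eta \beta d/\alpha }\,\mathrm{d}s\gtrsim t^{c_{N+1}},\qquad c_{N+1}:=(1+\eta )c_N+1-\frac{\eta \beta d}{\alpha },\quad c_0:=0. \end{aligned}$$

Since \(1-\eta \beta d/\alpha >0\), the increments \(c_{N+1}-c_N=\eta c_N+1-\eta \beta d/\alpha \) are bounded below by a positive constant, so \(c_N\rightarrow \infty \); in particular \(c_N>\beta d/\alpha \) for N large.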

\(\square \)

A consequence of the above is the following.

Proposition 3.2

Let \(\eta <\frac{\alpha }{\beta d}\). Then, for T large enough,

$$\begin{aligned} \int _0^T\int _{\mathbb {R}^d}V(s,\,y)^{1+\eta }G(T+t-s,\,x-y)\,\mathrm{d}s\,\mathrm{d}y\gtrsim T, \end{aligned}$$

whenever \(0<t<\frac{T}{3}\) and \(x\in B(0,\,T^{\beta /\alpha })\).

Proof

We use the previous proposition to write

$$\begin{aligned}&\int _0^T\int _{\mathbb {R}^d}V(s,\,y)^{1+\eta }G(T+t-s,\,x-y)\,\mathrm{d}y\,\mathrm{d}s\\&\quad \geqslant \int _{(T+t)/2}^{3(T+t)/4}\int _{B(0,\,s^{\beta /\alpha })}V(s,\,y)^{1+\eta }G(T+t-s,\,x-y)\,\mathrm{d}y\,\mathrm{d}s\\&\quad \geqslant M^{1+\eta } \int _{(T+t)/2}^{3(T+t)/4}\int _{B(0,\,s^{\beta /\alpha })}G(T+t-s,\,x-y)\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

Since \(t<\frac{T}{3}\), we have \(B(0,\,(T+t-s)^{\beta /\alpha })\subset B(0,\,s^{\beta /\alpha })\) and \(|x-y|\leqslant c_1(T+t-s)^{\beta /\alpha }\). We therefore have

$$\begin{aligned}&\int _{B(0,\,s^{\beta /\alpha })}G(T+t-s,\,x-y)\,\mathrm{d}y\\&\quad \geqslant \int _{B(0,\,(T+t-s)^{\beta /\alpha })}G(T+t-s,\,x-y)\,\mathrm{d}y\\&\quad \gtrsim 1, \end{aligned}$$

where we have used the lower bound given by (2.12) to obtain the last inequality. We combine these estimates above to obtain the result. \(\square \)

Remark 3.3

When \(\eta =\frac{\alpha }{\beta d}\), instead of (3.1), we obtain

$$\begin{aligned} F(t)\gtrsim 1+\int _1^{t/2}\frac{F(s)^{1+\eta }}{s}\,\mathrm{d}s. \end{aligned}$$

This immediately gives us \(F(t)\gtrsim \ln t\) for \(t>1\). Similar computations to those in the proof of Proposition 3.2 then give the following bound

$$\begin{aligned} \int _0^T\int _{\mathbb {R}^d}V(s,\,y)^{1+\eta }G(T+t-s,\,x-y)\,\mathrm{d}s\,\mathrm{d}y\gtrsim \frac{(\ln T)^{1+\eta }}{T^{\beta d/\alpha }}, \end{aligned}$$

under the same conditions as Proposition 3.2.

We are now ready to prove Theorem 1.1.

Proof of Theorem 1.1

Let \(T>0\), which we are going to fix later. We will assume that there exists a solution up to time T; if this is not the case, then there is nothing to prove. From the integral solution, we have

$$\begin{aligned} V(t+T, x)= & {} \int _{\mathbb {R}^d} G(t+T,\,x-y)V_0(y)\,\mathrm{d}y\\&+ \int _{\mathbb {R}^d}\int _0^{t+T} G(t+T-s,\,x-y)V(s, y)^{1+\eta } d s\,\mathrm{d}y. \end{aligned}$$

Using a simple change of variables and the fact that the first term of the above display is non-negative, we obtain

$$\begin{aligned} V(t+T, x)&\geqslant \int _{\mathbb {R}^d}\int _0^{T} G(t+T-s,\,x-y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y\\&\quad +\int _{\mathbb {R}^d}\int _0^{t} G(t-s,\,x-y)V(s+T, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y. \end{aligned}$$

We bound the first term of the above display, restricting our attention to the case \(\eta <\alpha /\beta d\). By Proposition 3.2, for \(x\in B(0,\,1)\), we have, upon taking T large enough,

$$\begin{aligned} \int _{\mathbb {R}^d}\int _0^{T} G(t+T-s,\,x-y)V(s, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y\gtrsim T. \end{aligned}$$

We now look at the second term.

$$\begin{aligned}&\int _{\mathbb {R}^d}\int _0^{t} G(t-s,\,x-y)V(s+T, y)^{1+\eta }\mathrm{d}s\,\mathrm{d}y\\&\quad \geqslant \int _0^{t} \inf _{y\in B(0,\,1)}V(s+T,\,y)^{1+\eta }\int _{B(0,1)}G(t-s,\,x-y)\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

Set \(A=\{y\in B(0,1):\ |x-y|\leqslant (t-s)^{\beta /\alpha }\}\) with \(t \leqslant (1/2)^{\alpha /\beta }\). We now use the fact that on A, we have \(G(t-s,\,x-y)\gtrsim (t-s)^{-\beta d/\alpha }\) to write

$$\begin{aligned} \begin{aligned}&\int _{B(0,1)}G(t-s,\,x-y)\mathrm{d}y\\&\quad \geqslant \int _{A}G(t-s,\,x-y)\mathrm{d}y\geqslant c, \end{aligned} \end{aligned}$$
(3.2)

for some constant c. Putting the above estimates together, we obtain

$$\begin{aligned} \inf _{x\in B(0,\,1)}V(t+T,\,x)\gtrsim T+\int _0^t \inf _{x\in B(0,\,1)}V(s+T,\,x)^{1+\eta }\,\mathrm{d}s. \end{aligned}$$
(3.3)

This implies that \(\inf _{x\in B(0,\,1)}V(t+T,\,x)\) blows up in finite time. By choosing T large enough, we can make sure that the blow-up time \(\tilde{t}\) is less than \((1/2)^{\alpha /\beta }\), as required. This finishes the proof for the case \(\eta <\alpha /\beta d\). When \(\eta =\frac{\alpha }{\beta d}\), we can use Remark 3.3 and computations very similar to those leading to (3.3) to obtain

$$\begin{aligned} \inf _{x\in B(0,\,T^{\beta /\alpha })}V(t+T,\,x)\gtrsim \frac{(\ln T)^{1+\eta }}{T^{\beta d/\alpha }}+\int _0^t \inf _{x\in B(0,\,T^{\beta /\alpha })}V(s+T,\,x)^{1+\eta }\,\mathrm{d}s, \end{aligned}$$
(3.4)

whenever \(0<t\leqslant T/3\). This shows that \(V(t+T,\,x)\) has a blow-up time of order \(\frac{T}{(\ln T)^{1+\eta }}\) which can be made to be strictly less than T/3 upon taking T large enough. The proof is now complete. \(\square \)
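
For the reader's convenience, here is the comparison argument behind the finite-time blow-up claimed after (3.3); we write c for the implicit constant, a label introduced only for this sketch. The function \(f(t):=\inf _{x\in B(0,\,1)}V(t+T,\,x)\) dominates the solution of

$$\begin{aligned} w'(t)=c\,w(t)^{1+\eta },\qquad w(0)=cT,\qquad \text {that is,}\qquad w(t)=\frac{cT}{\bigl (1-\eta c\,(cT)^{\eta }\,t\bigr )^{1/\eta }}, \end{aligned}$$

which blows up at time \(\bigl (\eta c\,(cT)^{\eta }\bigr )^{-1}\lesssim T^{-\eta }\); this is smaller than \((1/2)^{\alpha /\beta }\) once T is large enough.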

4 Proof of Theorem 1.2

The proof of the following result is a straightforward application of Young’s convolution inequality and the heat kernel estimates. The first part below can also be found in [15].

Lemma 4.1

For all \(t>0\), we have

  1. (a)
    $$\begin{aligned} \Vert \mathcal {G}V_0(t,\,\cdot ) \Vert _{L^r(\mathbb {R}^d)}\lesssim t^{-\frac{\beta d}{\alpha }(\frac{1}{p}-\frac{1}{r})}\Vert V_0\Vert _{L^p(\mathbb {R}^d)} \end{aligned}$$

    with \(p,\,r\in [1,\,\infty ]\) satisfying \(0\leqslant \frac{1}{p}-\frac{1}{r}<\frac{\alpha }{d}\)

  2. (b)

    For \(0\leqslant s\leqslant t\), we have

    $$\begin{aligned} \left\| \int _{\mathbb {R}^d}G(t-s,\,\cdot -y)f(s,y)^{1+\eta }\,\mathrm{d}y\right\| _{L^r(\mathbb {R}^d)} \lesssim (t-s)^{-\frac{\beta d}{\alpha }(\frac{1+\eta }{p}-\frac{1}{r})}\Vert f(s,\,\cdot )\Vert _{L^p(\mathbb {R}^d)}^{1+\eta }\nonumber \\ \end{aligned}$$
    (4.1)

    with \(\frac{p}{1+\eta }, r\in [1,\,\infty ]\) satisfying \(0\leqslant \frac{1+\eta }{p}-\frac{1}{r}<\frac{\alpha }{d}.\)

Proof

Young’s convolution inequality gives us

$$\begin{aligned} \Vert \mathcal {G}V_0(t,\,\cdot ) \Vert _{L^r(\mathbb {R}^d)}\leqslant \Vert G(t,\,\cdot )\Vert _{L^q(\mathbb {R}^d)}\Vert V_0\Vert _{L^p(\mathbb {R}^d)}, \end{aligned}$$

for any \(p, q, r\in [1,\infty ]\) satisfying \(1+\frac{1}{r}=\frac{1}{p}+\frac{1}{q}.\) The first part now follows by noting that from the scaling property and the heat kernel estimates,

$$\begin{aligned} \Vert G(t,\,\cdot )\Vert _{L^q(\mathbb {R}^d)}\lesssim t^{-\frac{\beta d}{\alpha }(1-\frac{1}{q})}, \end{aligned}$$

whenever \(1-\frac{1}{q}<\frac{\alpha }{d} \). Since \(1+\frac{1}{r}=\frac{1}{p}+\frac{1}{q}\) gives \(1-\frac{1}{q}=\frac{1}{p}-\frac{1}{r}\), this is exactly the claimed bound under the stated condition. For the second inequality, we use Young's inequality again and the above, but this time with parameters \(\frac{p}{1+\eta }, q, r\in [1,\,\infty ]\) satisfying \(1+\frac{1}{r}=\frac{1+\eta }{p}+\frac{1}{q}\). \(\square \)
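
For completeness, the norm bound used above follows from the scaling relation (2.11) together with the pointwise behavior of \(G(1,\,\cdot )\) recalled earlier (a routine computation):

$$\begin{aligned} \Vert G(t,\,\cdot )\Vert ^q_{L^q(\mathbb {R}^d)}=t^{-q\beta d/\alpha }\int _{\mathbb {R}^d}G(1,\,t^{-\beta /\alpha }x)^q\,\mathrm{d}x=t^{-\frac{\beta d}{\alpha }(q-1)}\Vert G(1,\,\cdot )\Vert ^q_{L^q(\mathbb {R}^d)}, \end{aligned}$$

and \(\Vert G(1,\,\cdot )\Vert _{L^q(\mathbb {R}^d)}\) is finite exactly when the local singularity \(|x|^{-(d-\alpha )}\) (present when \(d>\alpha \)) is q-integrable near the origin, that is, when \(1-\frac{1}{q}<\frac{\alpha }{d}\); the tail \(|x|^{-(d+\alpha )}\) is q-integrable for every \(q\geqslant 1\).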

For the next result, we will need the following notation. Set

$$\begin{aligned} \Vert V\Vert _{p, \theta }:= \sup _{t>0}t^{\theta }\Vert V(t,\,\cdot )\Vert _{L^p(\mathbb {R}^d)}. \end{aligned}$$
(4.2)

Corollary 4.2

Suppose that \(\eta >\frac{\alpha }{\beta d}\) and let \(p> \frac{\beta d \eta }{\alpha }\). Let

$$\begin{aligned} \theta :=\frac{\beta d}{\alpha }\left( \frac{\alpha }{\beta d \eta }-\frac{1}{p}\right) . \end{aligned}$$

Then, we have

  1. (a)
    $$\begin{aligned} \Vert \mathcal {G}f \Vert _{p, \theta }\lesssim \Vert f\Vert _{L^{q_c}(\mathbb {R}^d)}, \end{aligned}$$

    where \(q_c:=\frac{\beta d \eta }{\alpha }\) and \(\theta /\beta <1\).

  2. (b)
    $$\begin{aligned} \Vert \mathcal {A}f\Vert _{p,\,\theta }\lesssim \Vert f\Vert ^{1+\eta }_{p,\,\theta }, \end{aligned}$$

    with \(\frac{p}{1+\eta }\in [1,\,\infty ]\) and \(p>\frac{d\eta }{\alpha }\).

  3. (c)

    Suppose that f and g satisfy \(\Vert f\Vert _{p,\,\theta }<M\) and \(\Vert g\Vert _{p,\,\theta }<M\) for some \(M>0\). We then have

    $$\begin{aligned} \Vert \mathcal {A}f-\mathcal {A}g\Vert _{p,\,\theta }\lesssim M^\eta \Vert f-g\Vert _{p,\,\theta }, \end{aligned}$$

    whenever \((1+\eta )\theta <1\), \(\frac{p}{1+\eta }\in [1,\,\infty ]\) and \(p>\frac{d\eta }{\alpha }\).

Proof

The first part is a straightforward consequence of the first part of the above Lemma 4.1. For the second part, the same lemma gives us

$$\begin{aligned} \Vert \mathcal {A}f\Vert _{L^p(\mathbb {R}^d)}\lesssim t^{1-\frac{\beta d\eta }{\alpha p}}\Vert f\Vert ^{1+\eta }_{L^p(\mathbb {R}^d)}, \end{aligned}$$

from which we obtain the result after some computations. The final part is slightly more involved. For the second inequality below, we use Young’s inequality with parameters \(1+\frac{1}{p}=\frac{\eta +1}{p}+\frac{p-\eta }{p}\) along with the assumption that \(p>\frac{d\eta }{\alpha }\),

$$\begin{aligned}&\Vert \mathcal {A}f(t,\,\cdot )-\mathcal {A}g(t,\,\cdot )\Vert _{L^p(\mathbb {R}^d)}\\&\quad =\left\| \int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)[f(s,y)^{1+\eta }-g(s,y)^{1+\eta }]\,\mathrm{d}y\,\mathrm{d}s\right\| _{L^p(\mathbb {R}^d)}\\&\quad \lesssim \left\| \int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)|f(s,y)-g(s,y)||f(s,y)^\eta +g(s,y)^\eta |\,\mathrm{d}y\,\mathrm{d}s\right\| _{L^p(\mathbb {R}^d)}\\&\quad \lesssim \int _0^t(t-s)^{-\beta \eta d/\alpha p}\Vert |f(s,\cdot )-g(s,\cdot )||f(s,\cdot )^\eta +g(s,\cdot )^\eta |\Vert _{L^{\frac{p}{1+\eta }}(\mathbb {R}^d)}\,\mathrm{d}s\\&\quad \lesssim \int _0^t(t-s)^{-\beta \eta d/\alpha p}\Vert f(s,\cdot )-g(s,\cdot )\Vert _{L^p(\mathbb {R}^d)}[\Vert f(s,\cdot )\Vert ^\eta _{L^p(\mathbb {R}^d)}+\Vert g(s,\cdot )\Vert _{L^p(\mathbb {R}^d)}^\eta ] \,\mathrm{d}s\\&\quad \lesssim M^\eta \Vert f-g\Vert _{p,\theta } \int _0^t(t-s)^{-\beta \eta d/\alpha p}s^{-(1+\eta )\theta }\,\mathrm{d}s. \end{aligned}$$

Since \((1+\eta )\theta <1\) and \(p> \frac{\beta d \eta }{\alpha }\), the integral in the above makes sense. We now obtain the result after some computations. \(\square \)
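
For the reader, the "computations" referred to above amount to the following bookkeeping (a sketch; here we also use \((1+\eta )\theta <1\)). By the definition of \(\theta \) we have \(\eta \theta =1-\frac{\beta \eta d}{\alpha p}\), and the change of variables \(s=tr\) gives

$$\begin{aligned} t^{\theta }\int _0^t(t-s)^{-\frac{\beta \eta d}{\alpha p}}\,s^{-(1+\eta )\theta }\,\mathrm{d}s=t^{\,\theta +1-\frac{\beta \eta d}{\alpha p}-(1+\eta )\theta }\int _0^1(1-r)^{-\frac{\beta \eta d}{\alpha p}}\,r^{-(1+\eta )\theta }\,\mathrm{d}r, \end{aligned}$$

where the time exponent vanishes and the r-integral is finite because \(\frac{\beta \eta d}{\alpha p}<1\) and \((1+\eta )\theta <1\). Taking the supremum over \(t>0\) then yields the bounds in parts (b) and (c).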

Proposition 4.3

Let \(\eta >\alpha /\beta d\) and set \(q_c=\frac{\beta d\eta }{\alpha }\). Then for \(\Vert V_0\Vert _{L^{q_c}(\mathbb {R}^d)}\) small enough, there is a unique solution to (1.3) such that

$$\begin{aligned} \Vert V\Vert _{p,\,\theta }<\infty \quad \text {for some}\quad p>q_c, \end{aligned}$$

where the norm \(\Vert \cdot \Vert _{p,\,\theta }\) is defined by (4.2) and \(\theta \) is as in Corollary 4.2.

Proof

The proof is a usual fixed point argument as in say the proof of Theorem 15.2 of [23]. We assume that \(\Vert V_0\Vert _{L^{q_c}(\mathbb {R}^d)}<\frac{M}{2}\) for some \(M>0\). Let

$$\begin{aligned} {B_M:=\{V(t,\,\cdot )\in L^p(\mathbb {R}^d); \Vert V\Vert _{p,\theta }<M \},} \end{aligned}$$

and

$$\begin{aligned} I(V)(t,\,x):=\mathcal {G}V_0(t,\,x)+\mathcal {A}V(t,\,x). \end{aligned}$$

Then one can show that the map \(I: B_M\rightarrow B_M\) has a unique fixed point whenever M is small enough. We now sketch the main steps. From Corollary 4.2, we have

$$\begin{aligned} \Vert I(V)\Vert _{p,\theta }\lesssim \Vert V_0\Vert _{L^{q_c}({{\mathbb {R}}^d})}+CM^{1+\eta }, \end{aligned}$$

which upon choosing \(\Vert V_0\Vert _{L^{q_c}({{\mathbb {R}}^d})}\) and M small enough yields,

$$\begin{aligned} \Vert I(V)\Vert _{p,\theta }< M. \end{aligned}$$

Corollary  4.2 also yields

$$\begin{aligned} \Vert I(u)-I(v)\Vert _{p,\theta }\leqslant 1/2 \Vert u-v\Vert _{p,\theta }, \end{aligned}$$

for M small enough. By a simple contraction principle argument, we get that I(V) has a fixed point in \(B_M\).

\(\square \)

Proposition 4.4

Suppose that \(\Vert V_0\Vert _{L^\infty (\mathbb {R}^d)}<\infty \). Then there exists a \(T>0\) such that there is a unique solution to (1.3) satisfying

$$\begin{aligned} \Vert V(t,\,\cdot )\Vert _{L^\infty (\mathbb {R}^d)}\leqslant C_T\quad \text {for all}\quad t\in (0,\,T]. \end{aligned}$$

The constant \(C_T\) depends on T and \(\Vert V_0\Vert _{L^\infty (\mathbb {R}^d)}\).

Proof

We can use a fixed point argument as in the above proposition. We leave it to the reader to fill in the details. \(\square \)

Proof of Theorem 1.2

We choose \(V_0\in L^{q_c}(\mathbb {R}^d)\cap L^\infty (\mathbb {R}^d)\). In particular, we can take \(V_0\) to be compactly supported and bounded above by a small positive constant. Proposition 4.3 ensures that we have a global solution satisfying

$$\begin{aligned} \Vert V(t,\,\cdot ) \Vert _{L^p(\mathbb {R}^d)}\lesssim t^{-\theta }\quad \text {for all}\quad t>0, \end{aligned}$$

where \(p>q_c\) is such that \(\theta /\beta <1\) and \((1+\eta )\theta <1\). We now use the following interpolation inequality

$$\begin{aligned} \Vert V(t,\,\cdot )\Vert _{L^q(\mathbb {R}^d)}\leqslant \Vert V(t,\,\cdot )\Vert ^{p/q}_{L^p(\mathbb {R}^d)}\Vert V(t,\,\cdot )\Vert ^{1-p/q}_{L^\infty (\mathbb {R}^d)}\quad \text {for}\quad q\in [p,\,\infty ] \end{aligned}$$

and Proposition 4.4 to conclude that there exists a \(T>0\) such that

$$\begin{aligned} \Vert V(t,\,\cdot )\Vert _{L^q(\mathbb {R}^d)}\leqslant C_Tt^{-p\theta /q}\quad \text {for all}\quad t\in (0,\,T]. \end{aligned}$$
(4.3)

Now fix any \(T_0\) such that \(T_0>T\). We consider \(V(t+T,\,x)\) for \(t\in (0,\,T_0-T]\). For \(i=0,1,2,\ldots , k\), let \(p_{i+1}>p_i\geqslant p\) be such that \(\frac{1+\eta }{p_i}-\frac{1}{p_{i+1}}<\frac{\alpha }{d}\). We write

$$\begin{aligned} \mathcal {A}V(t+T,\,x)&=\int _0^{t+T}\int _{\mathbb {R}^d}G(t+T-s,\,x-y)V(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&= \int _0^{T}\int _{\mathbb {R}^d}G(t+T-s,\,x-y)V(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\quad +\int _T^{t+T}\int _{\mathbb {R}^d}G(t+T-s,\,x-y)V(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&:=I_1+I_2. \end{aligned}$$

Using (4.1) and (4.3), we obtain

$$\begin{aligned} \Vert \int _0^T \int _{\mathbb {R}^d}G(t+T-s,\,\cdot -y)&V(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\Vert _{L^{p_{i+1}}(\mathbb {R}^d)}\\&\lesssim \int _0^T(t+T-s)^{-\frac{\beta d}{\alpha }(\frac{1+\eta }{p_i}-\frac{1}{p_{i+1}})}\Vert V(s,\,\cdot )\Vert _{L^{p_i}(\mathbb {R}^d)}^{1+\eta }\,\mathrm{d}s\\&\lesssim \int _0^T(t+T-s)^{-\frac{\beta d}{\alpha }(\frac{1+\eta }{p_i}-\frac{1}{p_{i+1}})}s^{-p(1+\eta )\theta /p_i}\,\mathrm{d}s\\&\leqslant c_T, \end{aligned}$$

where \(c_T\) is a constant depending on T. After a change of variable we obtain,

$$\begin{aligned} I_2=\int _0^{t}\int _{\mathbb {R}^d}G(t-s,\,x-y)V(s+T,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

We use (4.1) again to write

$$\begin{aligned} \Vert \int _0^t\int _{\mathbb {R}^d}G(t-s,\,\cdot -y)&V(s+T,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\Vert _{L^{p_{i+1}}(\mathbb {R}^d)}\\&\lesssim \int _0^t(t-s)^{-\frac{\beta d}{\alpha }(\frac{1+\eta }{p_i}-\frac{1}{p_{i+1}})}\Vert V(s+T,\,\cdot )\Vert _{L^{p_i}(\mathbb {R}^d)}^{1+\eta }\,\mathrm{d}s\\&\lesssim \tilde{c}_{T_0, T}^{1+\eta }\int _0^t (t-s)^{-\frac{\beta d}{\alpha }(\frac{1+\eta }{p_i}-\frac{1}{p_{i+1}})}\,\mathrm{d}s, \end{aligned}$$

where we have assumed that \(\Vert V(s+T,\,\cdot )\Vert _{L^{p_i}(\mathbb {R}^d)}\leqslant \tilde{c}_{T_0, T}\). We now choose \(p_0=p\) and apply the above inequality recursively to show that after a finite number of iterations, \(\Vert V(t,\,\cdot )\Vert _{L^{\infty }(\mathbb {R}^d)}\) is bounded on \((0,\,T_0].\) Since \(T_0\) was arbitrary, this finishes the proof. \(\square \)

5 Proof of Theorem 1.3

Throughout this section, we will assume that \(d<\alpha .\) As seen above, the kernel \(G(t,\,x)\) does not satisfy the semigroup property. However, we can use (2.13) to obtain

$$\begin{aligned} \int _{\mathbb {R}^d}G(s,\,x-y)G(t,\,y-z)\,\mathrm{d}y&\lesssim \int _{\mathbb {R}^d}p(s^\beta ,\,x-y)p(t^\beta ,\,y-z)\,\mathrm{d}y\\&= p(t^\beta +s^\beta ,\,x-z) , \text {by the semigroup property of } p\\&\lesssim p((t+s)^\beta ,\,x-z)\\&\lesssim G(t+s,\,x-z). \end{aligned}$$

We have used the fact that for any \(x\in \mathbb {R}^d\) and \(s,\,t>0\), we have

$$\begin{aligned} p(t^\beta +s^\beta ,\,x)&\leqslant c_1\bigg ((s^\beta +t^\beta )^{-d/\alpha }\wedge \frac{s^\beta +t^\beta }{|x|^{d+\alpha }}\bigg )\\&\leqslant c_2\bigg ((s+t)^{-\beta d/\alpha }\wedge \frac{(s+t)^\beta }{|x|^{d+\alpha }}\bigg )\\&\lesssim p((t+s)^\beta ,\,x). \end{aligned}$$

In the middle step above we also used \((s+t)^\beta \leqslant s^\beta +t^\beta \leqslant 2(s+t)^\beta \) for \(\beta \in (0,\,1)\). A straightforward consequence of the above is the following proposition, where \(\gamma \) denotes a strictly positive constant; this will be assumed throughout the section.

Proposition 5.1

If \(V_0(x)\leqslant \delta G(\gamma ,\,x)\), for some constant \(\delta >0\), then

$$\begin{aligned} \int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y\lesssim \delta G(t+\gamma ,\,x),\quad \text {for all}\quad t>0\quad \text {and}\quad x\in \mathbb {R}^d. \end{aligned}$$

Proof

Using the above we obtain

$$\begin{aligned} \int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y&\leqslant \delta \int _{\mathbb {R}^d}G(t,\,x-y)G(\gamma ,\,y)\,\mathrm{d}y\\&\lesssim \delta G(t+\gamma ,\,x). \end{aligned}$$

\(\square \)

Proposition 5.2

Suppose \(\eta >\frac{\alpha }{\beta d}\), then for all \(t>0\) and \(x\in \mathbb {R}^d\),

$$\begin{aligned} \int _{\mathbb {R}^d}\int _0^tG(t-s,\,x-y)G(s+\gamma ,\,y)^{\eta +1}\,\mathrm{d}s\,\mathrm{d}y\lesssim G(t+\gamma ,\,x). \end{aligned}$$

Proof

We have

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)^{\eta +1}\,\mathrm{d}s\,\mathrm{d}y\\&\quad \lesssim \int _0^t\sup _{y\in \mathbb {R}^d}G(s+\gamma ,\,y)^\eta \int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)\,\mathrm{d}s\,\mathrm{d}y\\&\quad \lesssim G(t+\gamma ,\,x)\int _0^t \frac{1}{(s+\gamma )^{\eta \beta d/\alpha }}\,\mathrm{d}s. \end{aligned}$$

Since \(\gamma >0\) and \(\eta \beta d/\alpha >1\) (because \(\eta >\alpha /\beta d\)), the time integral is bounded uniformly in t, which finishes the proof. \(\square \)

Proposition 5.3

Suppose \(\eta >\frac{\alpha }{\beta d}\), then

$$\begin{aligned} {\sup _{t>0,\,x\in \mathbb {R}^d}\frac{(\mathcal {A}V)(t,\,x)}{G(t+\gamma ,\,x)}\lesssim \sup _{t>0,\,x\in \mathbb {R}^d}\left( \frac{V(t,\,x)}{G(t+\gamma ,\,x)}\right) ^{1+\eta }.} \end{aligned}$$

Proof

We have

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)V(s,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\quad \leqslant \int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)^{1+\eta }\left| \frac{V(s,y)}{G(s+\gamma ,\,y)}\right| ^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\quad \leqslant {\sup _{t>0,\,y\in \mathbb {R}^d}\left( \frac{V(t,y)}{G(t+\gamma ,\,y)}\right) ^{1+\eta }}\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

We now use Proposition 5.2 to complete the proof. \(\square \)

We need one final result before the proof of Theorem 1.3.

Proposition 5.4

Suppose that \(\eta >\frac{\alpha }{\beta d}\) and

$$\begin{aligned} {\sup _{t>0,\,x\in \mathbb {R}^d}\frac{V(t,\,x)}{G(t+\gamma ,\,x)}\leqslant M\quad \text {and}\quad \sup _{t>0,\,x\in \mathbb {R}^d}\frac{W(t,\,x)}{G(t+\gamma ,\,x)}\leqslant M,} \end{aligned}$$

for some \(M>0\), then we have

$$\begin{aligned} \sup _{t>0,\,x\in \mathbb {R}^d}\frac{|(\mathcal {A}V)(t,\,x)-(\mathcal {A}W)(t,\,x)|}{G(t+\gamma ,\,x)}\lesssim M^\eta \sup _{t>0,\,x\in \mathbb {R}^d}\left| \frac{V(t,\,x)-W(t,\,x)}{G(t+\gamma ,\,x)}\right| . \end{aligned}$$

Proof

We start off by writing

$$\begin{aligned}&\left| \mathcal {A}V(t,\,x)-\mathcal {A}W(t,x)\right| \\&\quad =\left| \int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)[V(s,y)^{1+\eta }-W(s,y)^{1+\eta }]\,\mathrm{d}y\,\mathrm{d}s \right| \\&\quad \lesssim \int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)[|V(s,y)-W(s,y)|][V(s,y)^\eta +W(s,y)^\eta ]\,\mathrm{d}y\,\mathrm{d}s\\&\quad \lesssim M^{\eta }\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)^{1+\eta }\frac{|V(s,y)-W(s,y)|}{G(s+\gamma ,\,y)}\,\mathrm{d}y\,\mathrm{d}s\\&\quad \lesssim M^\eta \sup _{t>0, x\in \mathbb {R}^d}\frac{|V(t,x)-W(t,x)|}{G(t+\gamma ,\,x)}\int _0^t\int _{\mathbb {R}^d}G(t-s,\,x-y)G(s+\gamma ,\,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s. \end{aligned}$$

An application of Proposition 5.2 yields the desired result. \(\square \)

We set

$$\begin{aligned} {\Vert V\Vert :=\sup _{t>0, x\in \mathbb {R}^d}\frac{V(t,\,x)}{G(t+\gamma ,\,x)}}. \end{aligned}$$
(5.1)

The proof of Theorem 1.3 involves a Picard iteration which we define as follows. For \(n\geqslant 0\),

$$\begin{aligned} V_{n+1}(t,\,x):=\int _{\mathbb {R}^d}G(t,\,x-y)V_0(y)\,\mathrm{d}y+ (\mathcal {A}V_n)(t,\,x). \end{aligned}$$
(5.2)

Proof of Theorem 1.3

We have all the ingredients to follow the proof of [12], which is just a standard fixed point argument. Indeed, one can use Propositions 5.1, 5.3 and 5.4 to show that the above Picard iteration scheme has a fixed point; a sketch of the key step is given below. The steps are in fact similar to those in the proof of Proposition 4.3, and we leave the remaining details to the reader. \(\square \)
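
Here is the sketch, with c and C denoting the implicit constants in Propositions 5.1 and 5.3 (labels introduced only for this sketch). If \(\Vert V_n\Vert \leqslant 2c\delta \) in the norm (5.1), then (5.2) gives

$$\begin{aligned} \Vert V_{n+1}\Vert \leqslant c\delta +C\Vert V_n\Vert ^{1+\eta }\leqslant c\delta +C(2c\delta )^{1+\eta }\leqslant 2c\delta \qquad \text {once}\qquad \delta \leqslant \bigl (2^{1+\eta }Cc^{\eta }\bigr )^{-1/\eta }, \end{aligned}$$

so the iterates stay in a ball of the norm (5.1); Proposition 5.4 then gives \(\Vert V_{n+1}-V_n\Vert \lesssim (2c\delta )^{\eta }\Vert V_n-V_{n-1}\Vert \), a contraction for \(\delta \) small, and the limit V satisfies \(V(t,\,x)\lesssim G(t+\gamma ,\,x)\) as claimed in Theorem 1.3.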

6 Proof of Theorem 1.4

The proof of this theorem relies on the following spectral decomposition of the Dirichlet heat kernel,

$$\begin{aligned} G_D(t,\,x,\,y)=\sum _{n=1}^\infty E_\beta (-\nu _nt^\beta )\phi _n(x)\phi _n(y). \end{aligned}$$
(6.1)

The \(\nu _n\) are the eigenvalues of the fractional Laplacian on the domain \(B(0,\,R)\), and the corresponding eigenfunctions \(\{\phi _n\}_{n\geqslant 1}\) form an orthonormal basis of \(L^2(B(0,\,R))\). Here \(E_\beta (t)=\sum _{k=0}^\infty t^{k}/\Gamma (1+\beta k)\) is the Mittag-Leffler function. See [7, 21] for more information about this. If \(\beta \) were one, the above representation would be in terms of the exponential function instead of the Mittag-Leffler function. The key observation is that we have the following polynomial decay [24, Theorem 4]:

$$\begin{aligned} \frac{1}{1+\Gamma (1-\beta )t}\leqslant E_\beta (-t)\leqslant \frac{1}{1+\Gamma (1+\beta )^{-1}t}\quad \text {for all}\quad t>0. \end{aligned}$$
(6.2)

We will only need the lower bound for the proof. The proof follows the same idea as that of Kaplan [14].
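
As a numerical illustration of (6.2) (not part of the argument; the code and names below are ours), one can compare a truncated series for \(E_\beta (-t)\) with the two-sided bound and, for \(\beta =1/2\), with the closed form \(E_{1/2}(-t)=e^{t^2}\mathrm {erfc}(t)\).

```python
# Truncated series E_beta(-t) = sum_{k>=0} (-t)^k / Gamma(1 + beta k),
# compared with the bounds in (6.2) and, for beta = 1/2, with erfcx(t) = e^{t^2} erfc(t).
import numpy as np
from scipy.special import gamma, erfcx

def mittag_leffler_neg(beta, t, n_terms=200):
    k = np.arange(n_terms)
    return np.sum((-t) ** k / gamma(1.0 + beta * k))

beta = 0.5
for t in (0.1, 1.0, 3.0):
    e = mittag_leffler_neg(beta, t)
    lower = 1.0 / (1.0 + gamma(1.0 - beta) * t)
    upper = 1.0 / (1.0 + t / gamma(1.0 + beta))
    print(t, lower, e, upper, erfcx(t))
    # Expect lower <= e <= upper and e close to erfcx(t).
```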

Proof of Theorem 1.4

Set

$$\begin{aligned} F(t):=\int _{B(0,\,R)}V(t,\,x)\phi _1(x)\,\mathrm{d}x. \end{aligned}$$

We now use the integral formulation of the equation given by (1.7) together with the representation (6.1) to write

$$\begin{aligned} F(t)&=E_\beta (-\nu _1t^\beta )\int _{B(0,\,R)}V_0(y)\phi _1(y)\,\mathrm{d}y\\&\quad +\int _0^tE_\beta (-\nu _1(t-s)^\beta )\int _{B(0,\,R)}\phi _1(y)V(s,\,y)^{1+\eta }\,\mathrm{d}y\,\mathrm{d}s\\&\gtrsim E_\beta (-\nu _1t^\beta )K_{V_0, \phi _1}+\int _0^tE_\beta (-\nu _1(t-s)^\beta )F(s)^{1+\eta }\mathrm{d}s\\&\gtrsim \frac{K_{V_0, \phi _1}}{t^\beta }+\int _0^t\frac{F(s)^{1+\eta }}{t^\beta }\mathrm{d}s, \end{aligned}$$

where we have also taken t to be large enough. We now let \(G(t):=t^\beta F(t)\) and consider the case \(\beta (1+\eta )<1\). Then the above inequality reduces to

$$\begin{aligned} G(t)\gtrsim K_{V_0, \phi _1}+\int _0^t\frac{G(s)^{1+\eta }}{s^{\beta (1+\eta )}}\,\mathrm{d}s. \end{aligned}$$

Hence G(t) is a super-solution to the following non-linear ordinary differential equation:

$$\begin{aligned} \frac{\tilde{G}'(s)}{\tilde{G}(s)^{1+\eta }}=\frac{1}{s^{\beta (1+\eta )}}\quad \text {with}\quad \tilde{G}(0)=K_{V_0, \phi _1}. \end{aligned}$$

Therefore, there exists a \(t_0\) such that \(G(t)=\infty \) for all \(t\geqslant t_0\), no matter how small the positive initial condition \(K_{V_0, \phi _1}\) is; see the explicit computation below.
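
Explicitly, separating variables in the comparison ODE (a routine computation, using \(\beta (1+\eta )<1\)) gives

$$\begin{aligned} \tilde{G}(t)=\Bigl (K_{V_0, \phi _1}^{-\eta }-\frac{\eta \,t^{1-\beta (1+\eta )}}{1-\beta (1+\eta )}\Bigr )^{-1/\eta }, \end{aligned}$$

which reaches infinity at the finite time \(t_0=\bigl (\tfrac{1-\beta (1+\eta )}{\eta }\,K_{V_0, \phi _1}^{-\eta }\bigr )^{1/(1-\beta (1+\eta ))}\), however small \(K_{V_0, \phi _1}>0\) is.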

When \(\beta (1+\eta )\geqslant 1\) and \(K_{V_0, \phi _1}>0\), we obtain for \(t>1\)

$$\begin{aligned} G(t)\gtrsim K_{V_0, \phi _1}+\int _1^t\frac{G(s)^{1+\eta }}{s^{\beta (1+\eta )}}\,\mathrm{d}s \end{aligned}$$

which can now be compared with

$$\begin{aligned} \frac{\tilde{G}'(s)}{\tilde{G}(s)^{1+\eta }}=\frac{1}{s^{\beta (1+\eta )}}\quad \text {with}\quad \tilde{G}(1)=K_{V_0, \phi _1}. \end{aligned}$$

Therefore there exists a \(t_1\) such that \(G(t)=\infty \) for all \(t\geqslant t_1\) provided that the initial condition \(K_{V_0, \phi _1}\) is large enough. This finishes the proof since \(\phi _1(x)\) is strictly positive. \(\square \)