1 Introduction

In this paper we determine the exact rate of acceleration for the spreading behaviour governed by the Fisher-KPP equation with nonlocal diffusion and free boundaries considered in [10, 16, 20], which has the form:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle u_t=d\Bigg [\int _{\mathbb {R}}J(x-y)u(t,y)dy-u\Bigg ]+f(u), &{} t>0,~x\in (g(t),h(t)),\\ \displaystyle u(t,g(t))=u(t,h(t))=0, &{}t>0,\\ \displaystyle h'(t)=\mu \int _{g(t)}^{h(t)}\int _{h(t)}^{+\infty } J(x-y)u(t,x)dydx, &{}t>0,\\ \displaystyle g'(t)=-\mu \int _{g(t)}^{h(t)}\int _{-\infty }^{g(t)} J(x-y)u(t,x)dydx, &{}t>0,\\ \displaystyle u(0,x)=u_0(x),~h(0)=-g(0)=h_0, &{} x\in [-h_0,h_0], \end{array}\right. } \end{aligned}$$
(1.1)

where \(x=g(t)\) and \(x=h(t)\) are the moving boundaries to be determined together with \(u(t,x)\), which is always assumed to be identically 0 for \(x\in \mathbb {R}\setminus [g(t), h(t)]\); d and \(\mu \) are given positive constants.

The initial function \(u_0(x)\) satisfies

$$\begin{aligned} u_0\!\in \!C([-h_0,h_0]),~u_0(-h_0)\!=\!u_0(h_0)=0 ~\text { and }~u_0(x)>0~\text {in}~(-h_0,h_0).\quad \end{aligned}$$
(1.2)

The basic assumptions on the kernel function \(J: {\mathbb {R}}\rightarrow {\mathbb {R}}\) are

$$\begin{aligned} \mathbf{(J)}: J\in C(\mathbb {R})\cap L^\infty (\mathbb {R}),\; J\ge 0,\; J(0)>0,~\int _{{\mathbb {R}}}J(x)dx=1, J \text { is even }.\qquad \end{aligned}$$

The nonlinear term f(u) is assumed to be a Fisher-KPP function, namely it satisfies

$$\begin{aligned} \mathbf{(f)}: \ \ {\left\{ \begin{array}{ll} f\in C^1,\; f>0=f(0) = f(1) \hbox { in } (0,1), f'(0)>0>f'(1),\\ f(u)/u \hbox { is nonincreasing in } u>0. \end{array}\right. } \end{aligned}$$

The nonlocal free boundary problem (1.1) may be viewed as a model describing the spreading of a new or invasive species with population density \(u(t,x)\), whose population range [g(t), h(t)] expands according to the free boundary conditions

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle h'(t)=\mu \int _{g(t)}^{h(t)}\int _{h(t)}^{+\infty } J(x-y)u(t,x)dydx,\\ \displaystyle g'(t)=-\mu \int _{g(t)}^{h(t)}\int _{-\infty }^{g(t)} J(x-y)u(t,x)dydx, \end{array}\right. } \end{aligned}$$

that is, the expanding rate of the range [g(t), h(t)] is proportional to the outward flux of the population across the boundary of the range. Such a free boundary condition was proposed independently in [10, 12]; [12] assumes \(f(u)\equiv 0\), and hence the long-time dynamics of the model there is completely different from the Fisher-KPP case studied in [10] and here.
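The expansion mechanism above can be illustrated numerically. The following minimal sketch is our own addition, not taken from [10, 12]: it discretises (1.1) by explicit Euler on a fixed grid, with the illustrative choices \(d=\mu =1\), \(f(u)=u(1-u)\) and the Laplace kernel \(J(x)=\frac{1}{2}e^{-|x|}\) (for which the inner integrals in the free boundary conditions have closed forms); all parameter values are assumptions made for the demonstration.

```python
import numpy as np

# Explicit-Euler sketch of (1.1) (illustrative parameters, not from the paper).
# Kernel J(x) = 0.5*exp(-|x|) (satisfies (J)), f(u) = u*(1-u), d = mu = 1.
d, mu, dt, dx = 1.0, 1.0, 0.01, 0.1
x = np.arange(-40, 40 + dx, dx)            # fixed computational grid
J = lambda z: 0.5 * np.exp(-np.abs(z))
Jmat = J(x[:, None] - x[None, :]) * dx     # quadrature weights for the convolution

h0 = 1.0
g, h = -h0, h0
u = np.where(np.abs(x) <= h0, np.cos(np.pi * x / (2 * h0)).clip(0), 0.0)

for _ in range(500):                       # integrate up to t = 5
    inside = (x > g) & (x < h)
    conv = Jmat @ u                        # \int J(x-y) u(t,y) dy
    u_new = u + dt * (d * (conv - u) + u * (1 - u))
    # closed-form inner integrals for the Laplace kernel:
    #   \int_h^infty J(x-y) dy = 0.5*exp(-(h-x)),  \int_{-infty}^g J(x-y) dy = 0.5*exp(-(x-g))
    h += dt * mu * np.sum(0.5 * np.exp(-(h - x[inside])) * u[inside]) * dx
    g -= dt * mu * np.sum(0.5 * np.exp(-(x[inside] - g)) * u[inside]) * dx
    u = np.where((x > g) & (x < h), u_new, 0.0).clip(0, 1)

print(g, h)   # the range [g(t), h(t)] has expanded in both directions
```

Since this kernel is thin-tailed (it satisfies (J2) below), the computed range expands at a roughly linear rate, consistent with the finite spreading speed discussed later in this section.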

Problem (1.1) is a “nonlocal diffusion” version of the following free boundary problem with “local diffusion”:

$$\begin{aligned} \left\{ \begin{aligned}&u_t-du_{xx}=f(u),{} & {} t>0,~g(t)<x<h(t),\\&u(t,g(t))=u(t,h(t))=0,{} & {} t>0,\\&g'(t)=-\mu u_x(t,g(t)),\; h'(t)=-\mu u_x(t,h(t)),{} & {} t>0,\\&g(0)=-h_0,\; h(0)=h_0,~u(0,x)=u_0(x),{} & {} -h_0\le x\le h_0, \end{aligned} \right. \end{aligned}$$
(1.3)

where \(u_0\) is a \(C^2\) function which is positive in \((-h_0, h_0)\) and vanishes at \(x=\pm h_0\). For a special Fisher-KPP type of f(u), (1.3) was first studied in [17] (see [18] for more general f), as a model for the spreading of a new or invasive species with population density \(u(t,x)\), whose population range [g(t), h(t)] expands through its boundaries \(x=g(t)\) and \(x=h(t)\) according to the Stefan conditions \(g'(t)=-\mu u_x(t, g(t)),\; h'(t)=-\mu u_x(t,h(t))\). A deduction of these conditions based on some ecological assumptions can be found in [8].

By [17, 18], problem (1.3) admits a unique solution \((u(t,x), g(t), h(t))\) defined for all \(t>0\), and its long-time dynamical behaviour is characterised by a “spreading-vanishing dichotomy”: Either (g(t), h(t)) is contained in a bounded set of \(\mathbb {R}\) for all \(t>0\) and \(u(t,x)\rightarrow 0\) uniformly as \(t\rightarrow \infty \) (called the vanishing case), or (g(t), h(t)) expands to \(\mathbb {R}\) and \(u(t,x)\) converges to 1 locally uniformly in \(x\in \mathbb {R}\) as \(t\rightarrow \infty \) (the spreading case). Moreover, when spreading occurs,

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{-g(t)}{t}=\lim _{t\rightarrow \infty }\frac{h(t)}{t}=k_0\in (0,\infty ), \end{aligned}$$

and \(k_0\) is uniquely determined by a semi-wave problem associated to (1.3) (see [8, 18]).

Problem (1.3) is closely related to the corresponding Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} U_t-dU_{xx}=f(U), &{}t>0,~x\in \mathbb {R},\\ U(0,x)=U_0(x), &{} x\in \mathbb {R}, \end{array}\right. } \quad \hbox { where } U_0(x):={\left\{ \begin{array}{ll} u_0(x),&{} x\in [-h_0,h_0],\\ 0,&{} x\in \mathbb {R}\setminus [-h_0,h_0]. \end{array}\right. } \end{aligned}$$
(1.4)

Indeed, it follows from [15] that the unique solution \((u, g, h)\) of (1.3) and the unique solution U of (1.4) are related in the following way: For any fixed \(T>0\), as \(\mu \rightarrow \infty \), \((g(t), h(t))\rightarrow \mathbb {R}\) and \(u(t,x)\rightarrow U(t,x)\) locally uniformly in \((t,x)\in (0, T]\times \mathbb {R}\). Thus (1.4) may be viewed as the limiting problem of (1.3) (as \(\mu \rightarrow \infty \)).

Problem (1.4) with \(U_0\) a nonnegative function having nonempty compact support has long been used to describe the spreading of a new or invasive species; see, for example, classical works of Fisher [24], Kolmogorov-Petrovski-Piscunov (KPP) [29] and Aronson-Weinberger [2].

In both (1.3) and (1.4), the dispersal of the species is described by the diffusion term \(d u_{xx}\), widely called a “local diffusion” operator, which is obtained from the assumption that individual members of the species move in space according to the rule of Brownian motion. One advantage of the nonlocal problem (1.1) over the local problem (1.3) is that the nonlocal diffusion term

$$\begin{aligned} d\left[ \int _{\mathbb {R}}J(x-y)u(t,y)dy-u(t,x)\right] \end{aligned}$$

in (1.1) is capable of including spatial dispersal strategies of the species beyond the random diffusion modelled by the term \(d u_{xx}\) in (1.3). Here \(J(x-y)\) may be interpreted as the probability that an individual of the species moves from location y to location x in a unit of time.

The long-time dynamical behaviour of (1.1), similar to that of (1.3), is determined by a “spreading-vanishing dichotomy” (see Theorem 1.2 in [10]): As \(t\rightarrow \infty \), either

  (i) Spreading: \(\lim _{t\rightarrow +\infty } (g(t), h(t))=\mathbb {R}\) and \(\lim _{t \rightarrow +\infty }u(t,x)=1\) locally uniformly in \({\mathbb {R}}\), or

  (ii) Vanishing: \(\lim _{t\rightarrow +\infty } (g(t), h(t))=(g_\infty , h_\infty )\) is a finite interval and \(\lim _{t \rightarrow +\infty }u(t,x)=0\) uniformly for \(x\in [g(t),h(t)]\).

Criteria for spreading and vanishing were also obtained in [10] (see Theorem 1.3 there). In particular, if the size of the initial population range \(2h_0\) is larger than a certain critical number determined by an associated eigenvalue problem, then spreading always happens.

A new phenomenon for the nonlocal Fisher-KPP model (1.1), in comparison with (1.3), is that when spreading is successful, “accelerated spreading” may happen; namely one may have

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{-g (t)}{t}=\lim _{t\rightarrow \infty }\frac{h(t)}{t}=\infty . \end{aligned}$$

It was shown in [16] that whether this new phenomenon happens is determined by the following threshold condition on the kernel function J:

$$\begin{aligned} \mathbf{(J1)}: \ \ \displaystyle \int _0^\infty x J(x)dx<+\infty . \end{aligned}$$

More precisely, we have

Theorem A

([16]). Suppose that (J) and (f) are satisfied, and spreading happens to the unique solution \((u, g, h)\) of (1.1). Then

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{h(t)}{t}=\lim _{t\rightarrow \infty } \frac{-g(t)}{t}={\left\{ \begin{array}{ll} c_0\in (0,\infty )&{} \hbox { if } \mathbf{(J1)} \ \hbox {holds},\\ \infty &{} \hbox { if } \mathbf{(J1)} \ \hbox {does not hold. }\end{array}\right. } \end{aligned}$$

As usual, when (J1) holds, we call \(c_0\) the spreading speed of (1.1), which is determined by the semi-wave solutions to (1.1); see [16] for details.

When (J1) is not satisfied, and hence accelerated spreading happens, the rate of growth of h(t) (and \(-g (t)\)) was investigated in [20] for kernel functions satisfying, for some \(\alpha >0\),

$$\begin{aligned} \displaystyle J(x) \sim |x|^{-\alpha } \ \ \ \hbox { for } |x|\gg 1; \end{aligned}$$
(1.5)

namely,

$$\begin{aligned} c_1\le J(x)|x|^{\alpha }\le c_2 \hbox { for some positive constants } c_1,\ c_2 \hbox { and all large } |x|. \end{aligned}$$

For such kernel functions, clearly condition \((\textbf{J})\) is satisfied only if \(\alpha >1\), and \(\mathbf {(J1)}\) is satisfied only if \(\alpha >2\). Thus accelerated spreading can happen exactly when \(\alpha \in (1,2]\). The following result was proved in [20]:
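The dichotomy in \(\alpha \) can be observed numerically. The following sketch is our own addition: for the model tail \(J(x)=|x|^{-\alpha }\) for \(|x|\ge 1\) (a choice made purely for illustration), it approximates the truncated integrals \(\int _1^M x^{-\alpha }dx\) and \(\int _1^M x\cdot x^{-\alpha }dx\) by the midpoint rule and watches whether they settle or keep growing as M increases, reflecting that (J) requires \(\alpha >1\) while (J1) requires \(\alpha >2\).

```python
# Illustrative check (ours): for a kernel with tail J(x) = |x|^{-alpha} for |x| >= 1,
# integrability (condition (J)) needs alpha > 1, while the first-moment condition
# (J1): \int_0^infty x J(x) dx < infty needs alpha > 2.

def tail(alpha, p, M, n=20000):
    # midpoint rule for \int_1^M x^p * x^(-alpha) dx
    h = (M - 1) / n
    return sum((1 + (i + 0.5) * h) ** (p - alpha) for i in range(n)) * h

for alpha in (1.5, 2.0, 2.5):
    mass = [round(tail(alpha, 0, M), 3) for M in (10, 100, 1000)]
    moment = [round(tail(alpha, 1, M), 3) for M in (10, 100, 1000)]
    print(alpha, mass, moment)
# alpha = 1.5: mass settles near 2, moment grows like 2*sqrt(M)  -> (J) holds, (J1) fails
# alpha = 2.0: mass settles near 1, moment grows like ln(M)      -> (J) holds, (J1) fails
# alpha = 2.5: both settle                                       -> (J) and (J1) hold
```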

Theorem B

([20]). In Theorem A, if additionally the kernel function satisfies (1.5) for some \(\alpha \in (1, 2]\), then for \(t\gg 1\),

$$\begin{aligned} -g(t),\; h(t)\sim {\left\{ \begin{array}{ll} t\ln t &{}\textrm{if }\, \alpha =2,\\ t^{1/(\alpha -1)} &{} \textrm{if }\, \alpha \in (1,2). \end{array}\right. } \end{aligned}$$

One naturally asks:

$$\begin{aligned} \hbox { If } \lim _{|x|\rightarrow \infty }J(x)|x|^\alpha \hbox { exists }, \hbox { does } {\left\{ \begin{array}{ll}\lim _{t\rightarrow \infty }\frac{h(t)}{t\ln t},&{} \hbox { when } \alpha =2,\\ \lim _{t\rightarrow \infty }\frac{h(t)}{ t^{1/(\alpha -1)}}, &{} \hbox { when } \alpha \in (1,2) \end{array}\right. } \hbox { also exist }? \end{aligned}$$

This question was left unanswered in [20]. Let us note that in the case of finite-speed propagation, the speed can be determined via a traveling wave problem, where the wave profile determines the long-time profile of the density function \(u(t,x)\), and this provides crucial information for the construction of suitable upper and lower solutions yielding the precise propagation speed. However, in the case of accelerated spreading, it is unknown whether the density function \(u(t,x)\) converges in some sense to a definite profile function, which makes the determination of the precise rate of acceleration particularly challenging.

The main purpose of this paper is to give a complete answer to the above question, although a precise asymptotic profile of \(u(t,x)\) is still lacking. Moreover, we will also treat a new case not considered in [20], namely

$$\begin{aligned} \lim _{|x|\rightarrow \infty }J(x)|x| (\ln |x|)^\beta =\lambda \in (0,\infty )\ \mathrm{for\ some }\ \beta \in (1,\infty ). \end{aligned}$$

By finding the right improvements on the form of the lower solutions used in [20], we are able to prove the following result:

Theorem 1.1

Let the assumptions in Theorem A be satisfied.

  (i)

    If

    $$\begin{aligned} \lim _{|x|\rightarrow \infty }J(x)|x|^\alpha =\lambda \in (0,\infty )\ \mathrm{for\ some }\ \alpha \in (1,2], \end{aligned}$$

    then

    $$\begin{aligned} {\left\{ \begin{array}{ll}\displaystyle \lim _{t\rightarrow \infty }\frac{h(t)}{t\ln t}=\lim _{t\rightarrow \infty }\frac{-g (t)}{t\ln t}=\mu \lambda ,&{} \hbox { when } \alpha =2,\\ \displaystyle \lim _{t\rightarrow \infty }\frac{h(t)}{ t^{1/(\alpha -1)}}= \lim _{t\rightarrow \infty }\frac{-g (t)}{ t^{1/(\alpha -1)}}=\frac{2^{2-\alpha }}{2-\alpha }\mu \lambda , &{} \hbox { when } \alpha \in (1,2). \end{array}\right. } \end{aligned}$$
  (ii)

    If

    $$\begin{aligned} \lim _{|x|\rightarrow \infty }J(x)|x| (\ln |x|)^\beta =\lambda \in (0,\infty )\ \mathrm{for\ some }\ \beta \in (1,\infty ), \end{aligned}$$

    then

    $$\begin{aligned} \displaystyle \lim _{t\rightarrow \infty }\frac{\ln h(t)}{t^{1/\beta }}= \lim _{t\rightarrow \infty }\frac{\ln [-g (t)]}{t^{1/\beta }}= \left( \frac{2\beta \mu \lambda }{\beta -1}\right) ^{1/\beta }, \end{aligned}$$

    namely,

    $$\begin{aligned} -g (t), h(t)=\exp \Big \{\Big [\Big (\frac{2\beta \mu \lambda }{\beta -1}\Big )^{1/\beta }+o(1)\Big ]t^{1/\beta }\Big \} \hbox { as } t\rightarrow \infty . \end{aligned}$$

Before ending this section, let us mention some further related works. Similar to the relationship between the local diffusion problems (1.3) and (1.4), problem (1.1) is closely related to the following nonlocal version of (1.4):

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle u_t=d\Big [\int _{\mathbb {R}}J(x-y)u(t,y)dy-u\Big ]+f(u), &{} t>0,~x\in \mathbb {R},\\ \displaystyle u(0,x)=u_0(x), &{} x\in \mathbb {R}. \end{array}\right. } \end{aligned}$$
(1.6)

It was proved in [16] (see Theorem 5.3 there) that as \(\mu \rightarrow \infty \), the limiting problem of (1.1) is (1.6). Problem (1.6) and its many variations have been extensively studied in the literature; see, for example, [1, 3,4,5,6, 11, 13, 14, 22, 23, 25, 27, 28, 30, 31, 33, 36] and the references therein. In particular, if (J) and (f) are satisfied, and if the nonnegative initial function \(u_0\) has non-empty compact support, then the basic long-time dynamical behaviour of (1.6) is given by

$$\begin{aligned} \lim _{t\rightarrow \infty } u(t,x)=1 \ \ \hbox { locally uniformly for } x\in \mathbb {R}. \end{aligned}$$

Similar to (1.4), the nonlocal Cauchy problem (1.6) does not give a finite population range when \(t>0\). To understand the spreading behaviour of (1.6), one examines the level set

$$\begin{aligned} E_\lambda (t):=\{x\in \mathbb {R}: u(t,x)=\lambda \} \hbox { with fixed } \lambda \in (0,1), \end{aligned}$$

by considering the large time behaviour of

$$\begin{aligned} x_\lambda ^+(t):=\sup E_\lambda (t) \ \ \hbox { and } \;\; x_\lambda ^-(t):=\inf E_\lambda (t). \end{aligned}$$

As \(t\rightarrow \infty \), \(|x^{\pm }_\lambda (t)|\) may go to \(\infty \) linearly or super-linearly in t, depending on whether, in addition to (J), the kernel function satisfies the following threshold condition:

$$\begin{aligned} \mathbf{(J2)}: \ \ \text { There exists }\gamma >0 \text { such that } \displaystyle \int _{\mathbb {R}} J(x)e^{\gamma x}dx<\infty . \end{aligned}$$

Yagisita [36] has proved the following result on traveling wave solutions to (1.6):

Theorem C

([36]). Suppose that f satisfies (f) and J satisfies (J). If additionally J satisfies (J2), then there is a constant \(c_*>0\) such that (1.6) has a traveling wave solution with speed c if and only if \(c\ge c_*\).

Condition (J2) is often called a “thin tail” condition for J. When f satisfies (f), and J satisfies (J) and (J2), it is well known (see, for example, [22, 34]) that

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{|x_\lambda ^{\pm }(t)|}{t}=c_*, \end{aligned}$$
(1.7)

with \(c_*\) given by Theorem C. On the other hand, if (f) and (J) hold but (J2) is not satisfied, then it follows from Theorem 6.4 of [34] that \(|x_\lambda ^{\pm }(t)|\) grows faster than any linear function of t as \(t\rightarrow \infty \), namely, accelerated spreading happens:

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{|x_\lambda ^{\pm }(t)|}{t}=\infty . \end{aligned}$$

We refer to [1, 6, 7, 9, 21, 23, 25, 26, 32, 35] and references therein for further progress on accelerated spreading for (1.6) and related problems.

It is easily seen that (J2) implies (J1), but the reverse is not true; for example, \(J(x)=C(1+x^2)^{-\sigma }\) with \(\sigma >1\) satisfies (J1) (for some suitable \(C>0\)) but not (J2). Therefore accelerated propagation is more likely to happen in (1.6) than in the corresponding free boundary model (1.1).

The relationship between \(c_0=c_0(\mu )\) in Theorem A and \(c_*\) in (1.7) is given in the following result (see Theorems 5.1 and 5.2 of [16]):

Theorem D

([16]). Suppose that (J), (J1) and (f) hold. Then \(c_0(\mu )\) increases to \(c_*\) as \(\mu \rightarrow \infty \), where we define \(c_*=\infty \) when (J2) does not hold.

Finally we briefly describe the organisation of the paper. Throughout the remainder of this paper, unless otherwise specified, we assume that J satisfies (J) and either

$$\begin{aligned} \hbox { for some } \alpha \in (1,2],\ {\left\{ \begin{array}{ll} \underline{\lambda }:=\liminf _{|x|\rightarrow \infty }J(x)|x|^\alpha >0,\\ \bar{\lambda }:=\limsup _{|x|\rightarrow \infty }J(x)|x|^\alpha <\infty , \end{array}\right. } \end{aligned}$$
(1.8)

or

$$\begin{aligned} \hbox { for some } \beta>1,\ {\left\{ \begin{array}{ll} \underline{\lambda }:=\liminf _{|x|\rightarrow \infty }J(x)|x|(\ln |x|)^{\beta }>0,\\ \bar{\lambda }:=\limsup _{|x|\rightarrow \infty }J(x)|x|(\ln |x|)^{\beta }<\infty .\end{array}\right. } \end{aligned}$$
(1.9)

We will prove some sharp estimates (see Lemmas 3.1, 3.2 and 4.1) under the above assumptions for J; Theorem 1.1 is a direct consequence of these more general results.

To be precise, in Sect. 2, we give some crucial preparatory results which will be used in the later sections; Lemma 2.1 contains key ingredients of the strategy of estimates toward the precise values of the limits in Theorem 1.1, while Lemma 2.2 reveals the right structure the lower solution should take in order to obtain these precise rates. In Sect. 3, we obtain sharp lower bounds for h(t) and \(-g(t)\), which constitute Lemma 3.1 (for the case that (1.8) holds with \(\alpha \in (1,2)\) and the case that (1.9) is satisfied) and Lemma 3.2 (for the case that (1.8) holds with \(\alpha =2\)); the proofs are based on subtle constructions of lower solutions of (1.1), which turn out to be the right refinements of those used in [20]. In Sect. 4, we prove the sharp upper bounds for h(t) and \(-g(t)\), which is technically much less demanding.

2 Some preparatory results

We prove two lemmas in this section, which will play a crucial role in Sects. 3 and 4. The first contains important information on the strategy of estimating the key terms to reach the precise limiting values in Theorem 1.1, while the second is a rather general result, for which only (J) is needed for the kernel function J; neither (1.8) nor (1.9) is required. The function \(\phi (t,x)\) in Lemma 2.2 dictates the structure of the lower solutions to be used to obtain the desired precise rates in the main results.

Lemma 2.1

For \(k>1\), \(\delta \in [0,1)\), define

$$\begin{aligned} \displaystyle A=A(k,\delta ,J):= {\left\{ \begin{array}{ll} \displaystyle \int _{-k}^{-\delta k}\int _{0}^\infty J(x-y)\textrm{d}y\textrm{d}x &{}\hbox { if } (1.8) \hbox { holds with } \alpha \in (1,2) \hbox { or if}\, (1.9) \hbox { holds },\\ \displaystyle \int _{-k}^{-k^\delta }\int _{0}^\infty J(x-y)\textrm{d}y\textrm{d}x &{}\hbox { if } (1.8) \hbox { holds with }\ \alpha =2. \end{array}\right. } \end{aligned}$$

Then

$$\begin{aligned} \begin{aligned}&{\left\{ \begin{array}{ll} \displaystyle \liminf _{k\rightarrow \infty } \frac{A}{k^{2-\alpha }}\ge \frac{1-\delta ^{2-\alpha }}{(\alpha -1)(2-\alpha )}\underline{\lambda }, \\ \displaystyle \limsup _{k\rightarrow \infty } \frac{A}{k^{2-\alpha }} \le \frac{1-\delta ^{2-\alpha }}{(\alpha -1)(2-\alpha )} \bar{\lambda }, \end{array}\right. }{} & {} \hbox { if }(1.8) \hbox { holds with } \alpha \in (1,2),\\&{\left\{ \begin{array}{ll} \displaystyle \liminf _{k\rightarrow \infty } \frac{A}{\ln k}\ge (1-\delta )\underline{\lambda }, \\ \displaystyle \limsup _{k\rightarrow \infty } \frac{A}{\ln k} \le (1-\delta )\bar{\lambda }, \end{array}\right. }{} & {} \hbox { if } (1.8) \hbox { holds with } \alpha =2,\\&{\left\{ \begin{array}{ll} \displaystyle \liminf _{k\rightarrow \infty }\frac{A}{k(\ln k)^{1-\beta }}\ge \frac{(1-\delta )\underline{\lambda }}{\beta -1}, \\ \displaystyle \limsup _{k\rightarrow \infty }\frac{A}{k(\ln k)^{1-\beta }}\le \frac{(1-\delta )\bar{\lambda }}{\beta -1}, \end{array}\right. }{} & {} \hbox { if }(1.9) \hbox { holds }. \end{aligned} \end{aligned}$$

Proof

Case 1: (1.8) holds with \(\alpha \in (1,2)\).

Denote

$$\begin{aligned} D_\delta :=\frac{1 }{\alpha -1}\int _{0}^{\infty }[(y+\delta )^{1-\alpha }-(y+1)^{1-\alpha }]\textrm{d}y. \end{aligned}$$
(2.1)

A direct calculation gives

$$\begin{aligned} D_\delta =\lim _{M\rightarrow \infty }\frac{(M+\delta )^{2-\alpha }-(M+1)^{2-\alpha }+1-\delta ^{2-\alpha }}{(\alpha -1)(2-\alpha )}=\frac{1-\delta ^{2-\alpha }}{(\alpha -1)(2-\alpha )}. \end{aligned}$$
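This closed form can be double-checked numerically; the following sketch is our own addition, comparing a midpoint-rule approximation of the integral in (2.1), truncated at \(y=M\) (the discarded tail contributes \(O(M^{1-\alpha })\)), with the formula just derived.

```python
# Numerical sanity check (our addition) of the closed form derived above:
#   D_delta = (1 - delta^(2-alpha)) / ((alpha - 1)(2 - alpha)).

def D_num(alpha, delta, M=2000.0, n=400000):
    # midpoint rule for (2.1) truncated at y = M
    h = M / n
    s = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        s += (y + delta) ** (1 - alpha) - (y + 1) ** (1 - alpha)
    return s * h / (alpha - 1)

def D_exact(alpha, delta):
    return (1 - delta ** (2 - alpha)) / ((alpha - 1) * (2 - alpha))

results = {(a, d): (D_num(a, d), D_exact(a, d)) for (a, d) in [(1.5, 0.25), (1.8, 0.5)]}
for key, (num, exact) in results.items():
    print(key, num, exact)    # the two values agree up to the truncation error
```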

Moreover,

$$\begin{aligned} A=&\int _{-k}^{-\delta k}\int _{0}^\infty J(x-y)\textrm{d}y\textrm{d}x=\int _{\delta k}^{k}\int _{0}^\infty J(x+y)\textrm{d}y\textrm{d}x\\ =&\int _{\delta k}^{k}\int _{0}^2 J(x+y)\textrm{d}y\textrm{d}x+\int _{\delta k}^{k}\int _{2}^\infty J(x+y)\textrm{d}y\textrm{d}x=:A_1+A_2, \end{aligned}$$

and by (J),

$$\begin{aligned} 0\le A_1\le \int _{0}^2 1 \textrm{d}y= 2. \end{aligned}$$

Clearly,

$$\begin{aligned} A_2&= \int _{\delta k}^{k}\int _{2}^\infty J(x+y)\textrm{d}y\textrm{d}x= \int _{2}^\infty \int _{\delta k}^{k} J(x+y)\textrm{d}x \textrm{d}y\\&=k^{2-\alpha }\left( \int _{2k^{-1}}^{k^{-1/2}}+\int _{k^{-1/2}}^\infty \right) \int _{\delta +y}^{1+y} \frac{ J(kx)}{(kx)^{-\alpha }}x^{-\alpha }\textrm{d}x \textrm{d}y. \end{aligned}$$

We have

$$\begin{aligned} 0\le&\int _{2k^{-1}}^{k^{-1/2}}\int _{\delta +y}^{1+y} \frac{ J(kx)}{(kx)^{-\alpha }}x^{-\alpha }\textrm{d}x \textrm{d}y\le \sup _{\xi \ge 1}[J(\xi )\xi ^\alpha ] \int _{2k^{-1}}^{k^{-1/2}}\int _{\delta +y}^{1+y} x^{-\alpha }\textrm{d}x \textrm{d}y\\ \le&\ \frac{\sup _{\xi \ge 1}[J(\xi )\xi ^\alpha ]}{\alpha -1} \int _{2k^{-1}}^{k^{-1/2}}y^{1-\alpha } \textrm{d}y\rightarrow 0 \ \hbox { as }k\rightarrow \infty . \end{aligned}$$

By this and (1.8), we deduce

$$\begin{aligned}&\limsup _{k\rightarrow \infty }\frac{A_2}{k^{2-\alpha }}=\limsup _{k\rightarrow \infty }\int _{k^{-1/2}}^\infty \int _{\delta +y}^{1+y} \frac{ J(kx)}{(kx)^{-\alpha }}x^{-\alpha }\textrm{d}x \textrm{d}y\\&\le \bar{\lambda }\int _{0}^\infty \int _{\delta +y}^{1+y} x^{-\alpha }\textrm{d}x \textrm{d}y=\frac{\bar{\lambda }}{\alpha -1}\int _{0}^\infty [(\delta +y)^{1-\alpha }-(1+y)^{1-\alpha }] \textrm{d}y=\bar{\lambda }D_\delta . \end{aligned}$$

Thus,

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{A}{k^{2-\alpha }}=\limsup _{k\rightarrow \infty }\frac{A_2}{k^{2-\alpha }}\le \bar{\lambda }D_\delta . \end{aligned}$$

Similarly,

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{A}{k^{2-\alpha }}=\liminf _{k\rightarrow \infty }\frac{A_2}{k^{2-\alpha }}\ge \underline{\lambda } D_\delta . \end{aligned}$$

Case 2: (1.9) holds.

Let \(A_1\) and \(A_2\) be as in Case 1. Clearly, \(0\le A_1\le 2\). A simple calculation gives

$$\begin{aligned} A_2&= \int _{\delta k}^{k}\int _{2+x}^\infty J(y)\textrm{d}y\textrm{d}x= \int _{\delta k+2}^{k+2}\int _{\delta k}^{y-2} J(y)\textrm{d}x\textrm{d}y+\int _{k+2}^\infty \int _{\delta k}^{k} J(y)\textrm{d}x\textrm{d}y\\&=\int _{\delta k+2}^{k+2}(y-2-\delta k) J(y)\textrm{d}y+(1-\delta )k\int _{k+2}^\infty J(y)\textrm{d}y. \end{aligned}$$

By (1.9), there exists \(C>0\) such that for all large \(k>0\),

$$\begin{aligned}&\int _{\delta k+2}^{k+2}(y-2-\delta k) J(y)\textrm{d}y\le C\int _{\delta k+2}^{k+2} (\ln y)^{-\beta }\textrm{d}y\le C(1-\delta )k\Big [\ln (\delta k+2)\Big ]^{-\beta }, \end{aligned}$$

and

$$\begin{aligned} k\int _{k+2}^\infty J(y)\textrm{d}y\le \bar{\lambda }[1+o_k(1)]k\int _{k+2}^\infty y^{-1}(\ln y)^{-\beta }\textrm{d}y=\frac{\bar{\lambda }[1+o_k(1)]k}{\beta -1}[\ln (k+2)]^{1-\beta }, \end{aligned}$$

where \(o_k(1)\rightarrow 0\) as \(k\rightarrow \infty \). Hence,

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{A}{k(\ln k)^{1-\beta }}=\limsup _{k\rightarrow \infty }\frac{A_2}{k(\ln k)^{1-\beta }}\le \frac{(1-\delta )\bar{\lambda }}{\beta -1}. \end{aligned}$$

Similarly,

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{A}{k(\ln k)^{1-\beta }}=\liminf _{k\rightarrow \infty }\frac{A_2}{k(\ln k)^{1-\beta }}\ge \frac{(1-\delta )\underline{\lambda }}{\beta -1}. \end{aligned}$$

Case 3: (1.8) holds with \(\alpha =2\).

By direct calculation,

$$\begin{aligned} A=&\int _{-k}^{-k^\delta }\int _{0}^\infty J(x-y)\textrm{d}y\textrm{d}x=\int _{k^\delta }^{k}\int _{0}^\infty J(x+y)\textrm{d}y\textrm{d}x\\ =&\int _{k^\delta }^{k}\int _{0}^1 J(x+y)\textrm{d}y\textrm{d}x+\int _{k^\delta }^{k}\int _{1}^\infty J(x+y)\textrm{d}y\textrm{d}x=:\widetilde{A}_1+\widetilde{A}_2, \end{aligned}$$

and by (J),

$$\begin{aligned} 0\le \widetilde{A}_1\le \int _{0}^1 1 \textrm{d}y=1. \end{aligned}$$

By (1.8), we have

$$\begin{aligned}&\widetilde{A}_2=\int _{1}^{\infty }\int _{k^{\delta }+y}^{k+y} J(x)\textrm{d}x\textrm{d}y\le \bar{\lambda }[1+o_k(1)] \int _{1}^{\infty }\int _{k^{\delta }+y}^{k+y} x^{-2}\textrm{d}x\textrm{d}y = \bar{\lambda }[1+o_k(1)] \ln \left( \frac{k+1}{k^\delta +1}\right) , \end{aligned}$$

where \(o_k(1)\rightarrow 0\) as \(k\rightarrow \infty \). Therefore,

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{A}{\ln k}=\limsup _{k\rightarrow \infty }\frac{\widetilde{A}_2}{\ln k}\le \lim _{k\rightarrow \infty }\bar{\lambda }\ \frac{\ln (k+1)-\ln (k^\delta +1)}{\ln k}=(1-\delta )\bar{\lambda }. \end{aligned}$$

Similarly,

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{A}{\ln k}=\liminf _{k\rightarrow \infty }\frac{\widetilde{A}_2}{\ln k}\ge \lim _{k\rightarrow \infty }\underline{\lambda }\ \frac{\ln (k+1)-\ln (k^\delta +1)}{\ln k}=(1-\delta )\underline{\lambda }. \end{aligned}$$

The proof is finished. \(\square \)
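For a kernel whose tail is exactly \(\lambda |x|^{-2}\), the Case 3 limit can also be seen very concretely. The sketch below is our own illustration (the kernel is not normalised, which is irrelevant for the asymptotics): for \(x\ge 1\) one has \(\int _0^\infty J(x+y)\textrm{d}y=\lambda /x\), so \(A=\int _{k^\delta }^{k}\lambda x^{-1}\textrm{d}x=\lambda (1-\delta )\ln k\), and a quadrature of the outer integral on a log-spaced grid reproduces the limit \((1-\delta )\lambda \) of Lemma 2.1.

```python
import math

# Concrete check (our illustration) of Case 3 with the model tail
#   J(x) = lam * x^{-2} for x >= 1  (normalisation ignored; only the tail matters).
# Then \int_0^infty J(x+y) dy = lam/x for x >= 1, so A = lam*(1-delta)*ln k,
# and A / ln k should approach (1 - delta)*lam.

lam, delta = 0.7, 0.3

def A_num(k, n=100000):
    a, b = k ** delta, k
    r = (b / a) ** (1.0 / n)          # log-spaced grid on [k^delta, k]
    total, x = 0.0, a
    for _ in range(n):
        x2 = x * r
        total += lam / math.sqrt(x * x2) * (x2 - x)   # geometric-midpoint rule
        x = x2
    return total

for k in (1e3, 1e6, 1e9):
    print(k, A_num(k) / math.log(k))   # -> (1 - delta)*lam = 0.49
```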

Lemma 2.2

Suppose that J satisfies (J) but neither (1.8) nor (1.9) is required. Let \(1<\xi (t)<L(t)\) be functions in \(C([0,\infty ))\), \(\rho \ge 2\) a constant, and define

$$\begin{aligned} \phi (t,x):=\min \left\{ 1, \Big [1-\frac{|x|}{L(t)}\Big ]^\rho \xi (t)^\rho \right\} \hbox { for } x\in [-L(t),L(t)],\ t\in [0,\infty ). \end{aligned}$$

Then, for any \(\epsilon \in (0,1)\), there exists a constant \(\theta ^*=\theta ^*(\epsilon ,J)>1\), such that

$$\begin{aligned} \int _{-L(t)}^{L(t)}J(x-y)\phi (t,y)\textrm{d}y\ge (1-\epsilon ) \phi (t,x) \hbox { for } x\in [-L(t),L(t)],\ t\ge 0 \end{aligned}$$
(2.2)

provided that

$$\begin{aligned} L(t)\ge \theta ^*\xi (t) \ \ \ \mathrm{for\ all}\ t\ge 0. \end{aligned}$$
(2.3)

Proof

Since \(||J||_{L^1}=1\), there is \(L_0>0\) depending only on J and \(\epsilon \) such that

$$\begin{aligned} \int _{-L_0}^{L_0}J(x) \textrm{d}x\ge 1-\epsilon /2. \end{aligned}$$
(2.4)

Define

$$\begin{aligned} \psi (t,x):=\phi (t, L(t)x)=\min \left\{ 1, (1-|x|)^\rho \xi (t)^\rho \right\} ,\ \ x\in [-1,1],\ t\ge 0. \end{aligned}$$

We note that \(\rho \ge 2\) implies that \(\psi (t,x)\) is a convex function of x when

$$\begin{aligned} 1-\frac{1}{\xi (t)}\le |x|\le 1. \end{aligned}$$

Clearly

$$\begin{aligned} \psi (t,x)={\left\{ \begin{array}{ll} 1 &{}\hbox { for } |x|\le 1-\xi (t)^{-1},\\ \big [(1-|x|) \xi (t) \big ]^{\rho } &{}\hbox { for } 1-\xi (t)^{-1}<|x|\le 1. \end{array}\right. } \end{aligned}$$

It is also easy to check that

$$\begin{aligned} \frac{|\psi (t,x)-\psi (t,y) |}{|x-y|}\le M(t):= \rho \xi (t) \hbox { for } x, y\in [-1,1],\ x\not =y,\ t\ge 0, \end{aligned}$$

which implies

$$\begin{aligned} |\phi (t,x)-\phi (t,y)|&=|\psi (t, x/L(t))-\psi (t, y/L(t))|\nonumber \\&\le \frac{M(t)}{L(t)} |x-y| \hbox { for } \ x,y\in [-L(t), L(t)]. \end{aligned}$$
(2.5)
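The Lipschitz bound behind (2.5) can be checked numerically for sample parameters; the sketch below is our own addition (the values of \(\rho \) and \(\xi \) are arbitrary choices), confirming that the difference quotients of \(\psi (t,\cdot )\) on \([-1,1]\) never exceed \(M(t)=\rho \xi (t)\).

```python
# Numerical check (our addition) that the difference quotients of
#   psi(x) = min{1, ((1 - |x|) * xi)^rho}   on [-1, 1]
# are bounded by M = rho * xi, for sample parameter values.

def psi(x, rho, xi):
    return min(1.0, (max(0.0, 1 - abs(x)) * xi) ** rho)

results = {}
for rho, xi in [(2, 5.0), (4, 20.0)]:
    xs = [i / 5000 - 1 for i in range(10001)]        # uniform grid on [-1, 1]
    quot = max(abs(psi(b, rho, xi) - psi(a, rho, xi)) / (b - a)
               for a, b in zip(xs, xs[1:]))
    results[(rho, xi)] = quot
    print(rho, xi, quot, "<=", rho * xi)
```

The maximal quotient is attained near \(|x|=1-1/\xi \), where the slope of \(\psi \) approaches \(\rho \xi \).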

Since \(\psi (t, x)>0\) for \(x\in (-1,1)\), \(\psi (t,\pm 1)=0\), and \(\psi (t,x)\) is convex in x for \(x\in [-1, -1+1/\xi (t)]\) and for \(x\in [1-1/\xi (t), 1]\), if we extend \(\psi (t,x)\) by \(\psi (t,x)=0\) for \(|x|>1\), then

$$\begin{aligned} \psi (t, x)\hbox { is convex for } x\in [1-1/\xi (t),\infty ) \hbox { and for } x\in (-\infty , -1+1/\xi (t)]. \end{aligned}$$

We now verify (2.2) for \(x\in [0, L(t)]\); the proof for \(x\in [-L(t), 0]\) is parallel and will be omitted. We will divide the proof into two cases:

$$\begin{aligned} \mathbf{(a)}\, x\in \left[ 0, (1-\frac{1}{2\xi (t)})L(t)\right] \hbox { and } \mathbf{(b)}\, x\in \left[ (1-\frac{1}{2\xi (t)})L(t), L(t)\right] . \end{aligned}$$

Case (a). For

$$\begin{aligned} x\in \left[ 0, (1-\frac{1}{2\xi (t)})L(t)\right] , \end{aligned}$$

a direct calculation gives

$$\begin{aligned} \int _{-L(t)}^{L(t)}J(x-y)\phi (t, y)\textrm{d}y=\int _{-L(t)-x}^{L(t)-x}J(y)\phi (t, x+y)\textrm{d}y\ge \int _{-L_0}^{L_0}J(y)\phi (t, x+y)\textrm{d}y, \end{aligned}$$

where \(L_0\) is given by (2.4) and we have used

$$\begin{aligned} L(t)-x\ge \frac{L(t)}{2\xi (t)}\ge L_0, \hbox { which is guaranteed if we assume } L(t)\ge {2L_0}{\xi (t)}. \end{aligned}$$

Then by (2.4), (2.5) and (J),

$$\begin{aligned}&\int _{-L_0}^{L_0}J(y)\phi (t, x+y)\textrm{d}y\\&\quad = \int _{-L_0}^{L_0}J(y)\phi (t,x)\textrm{d}y+\int _{-L_0}^{L_0}J(y)[\phi (t,x+y)-\phi (t,x)]\textrm{d}y\\&\quad \ge \int _{-L_0}^{L_0}J(y)\phi (t,x)\textrm{d}y-\frac{M(t)}{L(t)}\int _{-L_0}^{L_0}J(y)|y|\textrm{d}y\\&\quad \ge (1-\epsilon /2)\phi (t, x)-\frac{M(t)}{L(t)}L_0. \end{aligned}$$

Clearly

$$\begin{aligned} M_1(t):=\min _{x\in [0,(1-\frac{1}{2\xi (t)})L(t)]}\phi (t, x)=\Big (\frac{1}{2}\Big )^\rho . \end{aligned}$$

Then from the above calculations we obtain, for \( x\in [0, (1-\frac{1}{2\xi (t)})L(t)]\),

$$\begin{aligned} \int _{-L(t)}^{L(t)}J(x-y)\phi (t,y)\textrm{d}y&\ge (1-\epsilon /2)\phi (t, x)-\frac{M(t)}{L(t)}L_0\\&= (1-\epsilon )\phi (t, x)+\frac{\epsilon }{2} \phi (t, x) -\frac{M(t)}{L(t)}L_0\\&\ge (1-\epsilon )\phi (t, x)+ \frac{\epsilon }{2} M_1(t)-\frac{M(t)}{L(t)}L_0\\ {}&\ge (1-\epsilon )\phi (t, x) \end{aligned}$$

provided that

$$\begin{aligned} {L(t)}\ge \frac{2L_0M(t)}{\epsilon M_1(t)}=\frac{2^{\rho +1}L_0\rho }{\epsilon }\xi (t). \end{aligned}$$

Case (b). For

$$\begin{aligned} x\in \left[ \left( 1-\frac{1}{2\xi (t)}\right) L(t), L(t)\right] , \end{aligned}$$

we have, using \(-L(t)-x<-L_0\) and \(\phi (t, x)=0\) for \(x\ge L(t)\),

$$\begin{aligned} \int _{-L(t)}^{L(t)}J(x-y)\phi (t, y)\textrm{d}y&\ge \int _{-L_0}^{\min \{L_0, L(t)-x\}}J(y)\phi (t, x+y)\textrm{d}y\\&=\int _{-L_0}^{L_0}J(y)\phi (t, x+y)\textrm{d}y\\&=\int _{0}^{L_0}J(y)[\phi (t, x+y)+\phi (t,x-y)]\textrm{d}y. \end{aligned}$$

Since \(\phi (t,s)\) is convex in s for \(s\ge L(t)[1-\xi (t)^{-1}]\), and for \(x\in \left[ (1-\frac{1}{2\xi (t)})L(t), L(t)\right] \), \(y\in [0, L_0]\), we have

$$\begin{aligned} x+y\ge x-y\ge \Big (1-\frac{1}{2\xi (t)}\Big )L(t)-L_0\ge \Big (1-\frac{1}{\xi (t)}\Big )L(t), \end{aligned}$$

which follows from our earlier assumption \(L(t)\ge 2L_0\xi (t)\).

Then, we can use the convexity of \(\phi (t,\cdot )\) and (2.4) to obtain

$$\begin{aligned} \int _{0}^{L_0}J(y)[\phi (t,x+y)+\phi (t,x-y)]\textrm{d}y\ge 2\phi (t,x)\int _{0}^{L_0}J(y)\textrm{d}y\ge (1-\epsilon /2) \phi (t,x). \end{aligned}$$

Thus

$$\begin{aligned} \int _{-L(t)}^{L(t)}J(x-y)\phi (t,y)\textrm{d}y\ge (1-\epsilon ) \phi (t, x). \end{aligned}$$

Summarising, from the above conclusions in cases (a) and (b), we see that (2.2) holds if \(L(t)\ge \theta ^* \xi (t)\) for all \(t\ge 0\) with \(\theta ^*:=\frac{2^{\rho +1}L_0\rho }{\epsilon }>2L_0\). The proof is finished. \(\square \)
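Inequality (2.2) can also be verified directly for a concrete kernel. The sketch below is our own illustration: the kernel \(J(x)=\frac{1}{2}e^{-|x|}\) (which satisfies (J)) and all parameter values are our choices, with L taken large enough that \(L\ge \theta ^*\xi \) for the \(\theta ^*\) constructed in the proof; the inequality is then tested on a grid by quadrature.

```python
import numpy as np

# Direct numerical check (our illustration) of inequality (2.2) for the kernel
# J(x) = 0.5*exp(-|x|), with sample parameters eps = 0.4, rho = 2, xi = 2, and
# L chosen so that L >= theta* xi with theta* = 2^(rho+1) * L0 * rho / eps.

eps, rho, xi = 0.4, 2, 2.0
L0 = np.log(2 / eps)                  # so that \int_{-L0}^{L0} J = 1 - eps/2
theta = 2 ** (rho + 1) * L0 * rho / eps
L = 1.5 * theta * xi                  # comfortably satisfies (2.3)

J = lambda z: 0.5 * np.exp(-np.abs(z))
phi = lambda x: np.minimum(1.0, (np.clip(1 - np.abs(x) / L, 0, None) * xi) ** rho)

y = np.linspace(-L, L, 40001)         # quadrature grid on [-L, L]
dy = y[1] - y[0]
xs = np.linspace(-L, L, 501)          # points at which (2.2) is tested
lhs = np.array([np.sum(J(x - y) * phi(y)) * dy for x in xs])
margin = lhs - (1 - eps) * phi(xs)
print(margin.min())                   # nonnegative up to quadrature error
```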

3 Lower bounds

Recall that J(x) satisfies (J) and either (1.8) or (1.9). The case where (1.8) holds with \(\alpha \in (1,2)\) and the case where (1.9) holds will be considered in subsection 3.1, while the case where (1.8) holds with \(\alpha =2\) will be considered in subsection 3.2.

From now on, in all our stated results, we will only list the conclusions for h(t); the corresponding conclusions for \(-g (t)\) follow directly by considering the problem with initial function \(u_0(-x)\), whose unique solution is given by \(({\tilde{u}}(t,x), {\tilde{g}} (t), {\tilde{h}}(t))=(u(t,-x), -h(t), -g (t))\).

3.1 The case (1.8) holds with \(\alpha \in (1,2)\) and the case (1.9) holds

Lemma 3.1

Assume that J satisfies (J) and either (1.8) with \(\alpha \in (1,2)\) or (1.9), f satisfies (f), and spreading happens to (1.1). Then

$$\begin{aligned}{\left\{ \begin{array}{ll} \displaystyle \liminf _{t\rightarrow \infty }\frac{h(t)}{t^{1/({\alpha }-1)}}\ge \frac{2^{2-\alpha }}{2-\alpha }\mu \underline{\lambda } &{} \hbox { if } (1.8) \hbox { holds with }\ \alpha \in (1,2),\\ \displaystyle \liminf _{t\rightarrow \infty }\frac{\ln h(t)}{t^{1/\beta }}\ge \left( \frac{2\beta \mu \underline{\lambda }}{\beta -1}\right) ^{1/\beta }&{} \hbox { if } (1.9) \hbox { holds.} \end{array}\right. } \end{aligned}$$

Proof

We construct a suitable lower solution to (1.1), which will lead to the desired estimate by the comparison principle.

Let \(\rho > 2\) be a large constant to be determined. For any given small \(\epsilon >0\), define for \(t\ge 0\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \underline{h}(t):=(K_1t+\theta )^{\frac{1}{\alpha -1}},~~\underline{g}(t):=-\underline{h}(t) &{}\hbox { if }(1.8) \hbox { holds with }\alpha \in (1,2),\\ \underline{h}(t):=e^{K_1(t+\theta )^{1/\beta }},~~\underline{g}(t):=-\underline{h}(t) &{}\hbox { if }(1.9)\hbox { holds }, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \underline{u}(t,x):=K_2 \min \left\{ 1, \Big [K_3\displaystyle \frac{\underline{h}(t)-|x|}{\underline{h}(t)}\Big ]^{\rho }\right\}&\hbox { for } t\ge 0,\ |x|\le \underline{h}(t), \end{aligned}$$

where

$$\begin{aligned} K_1:= {\left\{ \begin{array}{ll} \displaystyle (1-\epsilon )^2(2-\epsilon )^{2-\alpha } D_{\epsilon /(2-\epsilon )}(\alpha -1)\mu \underline{\lambda }&{}\hbox { if } (1.8) \hbox { holds with } \alpha \in (1,2),\\ \displaystyle \Big [(1-\epsilon )^4\frac{2\beta \mu \underline{\lambda }}{\beta -1}\Big ]^{1/\beta }&{}\hbox { if } (1.9) \hbox { holds },\\ \end{array}\right. } \end{aligned}$$
$$\begin{aligned} K_2:=1-\epsilon ,\ \ K_3:=1/\epsilon , \ \ \theta \gg 1, \hbox { and } D_{\epsilon /(2-\epsilon )} \hbox { is given by }(2.1). \end{aligned}$$

It is easily seen that \(\underline{u}(t,x)\equiv K_2\) for \(|x|\le (1-\epsilon )\underline{h}(t)\). Moreover, \(\underline{u}\) is continuous, and \(\underline{u}_t\) exists and is continuous except along \(|x|=(1-\epsilon )\underline{h}(t)\), where \(\underline{u}_t\) has a jump discontinuity. In what follows, we check in three steps that \((\underline{u}, \underline{g}, \underline{h})\) defined above is a lower solution to (1.1).

Step 1. We prove the inequality

$$\begin{aligned} \underline{h}'(t)\le \mu \int _{-\underline{h}(t)}^{\underline{h}(t)}\int ^{+\infty }_{\underline{h}(t)}J(x-y)\underline{u}(t,x)\textrm{d}y \textrm{d}x, \end{aligned}$$
(3.1)

which immediately gives

$$\begin{aligned} \underline{g}'(t)\ge -\mu \int _{-\underline{h}(t)}^{\underline{h}(t)}\int _{-\infty }^{-\underline{h}(t)}J(x-y)\underline{u}(t,x)\textrm{d}y \textrm{d}x, \end{aligned}$$

due to \(\underline{u}(t,x)=\underline{u}(t,-x)\) and \(J(x)=J(-x)\).

Using the definition of \(\underline{u}\), we have

$$\begin{aligned} \mu \int _{-\underline{h}}^{\underline{h}} \int _{\underline{h}}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x&\ge (1-\epsilon )\mu \int _{-(1-\epsilon )\underline{h}}^{(1-\epsilon )\underline{h}} \int _{\underline{h}}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\\&= (1-\epsilon )\mu \int _{-(2-\epsilon )\underline{h}}^{-\epsilon \underline{h}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x. \end{aligned}$$

Using Lemma 2.1, we obtain for large \(\underline{h}\) (guaranteed by \(\theta \gg 1\)),

$$\begin{aligned} \int _{-(2-\epsilon )\underline{h}}^{-\epsilon \underline{h}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\ge (1-\epsilon )\underline{\lambda } D_{\epsilon /(2-\epsilon )}[(2-\epsilon ) \underline{h}]^{2-\alpha }\quad \hbox { if } (1.8) \hbox { holds with } \alpha \in (1,2), \end{aligned}$$

and

$$\begin{aligned} \displaystyle \int _{-(2-\epsilon )\underline{h}}^{-\epsilon \underline{h}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x&\ge (1-\epsilon ) \frac{(1-\frac{\epsilon }{2-\epsilon })\underline{\lambda }}{\beta -1}(2-\epsilon ) \underline{h} \big [\ln (2-\epsilon ) \underline{h}\big ]^{1-\beta }\\&=(1-\epsilon )^2\frac{2\underline{\lambda }}{\beta -1} \underline{h} \big [\ln (2-\epsilon ) \underline{h}\big ]^{1-\beta }\\ {}&\ge (1-\epsilon )^3\frac{2\underline{\lambda }}{\beta -1} \underline{h} (\ln \underline{h})^{1-\beta } \ \hbox { if } (1.9) \hbox { holds }. \end{aligned}$$

Therefore, by the definition of \(K_1\), when (1.8) holds with \(\alpha \in (1,2)\), we have

$$\begin{aligned}&{\mu } \int _{-\underline{h}(t)}^{\underline{h}(t)} \int _{\underline{h}(t)}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\\&\ge (1-\epsilon )^2\mu \underline{\lambda } D_{\epsilon /(2-\epsilon )}[(2-\epsilon )\underline{h}(t)]^{2-\alpha }\\&= (1-\epsilon )^2\mu \underline{\lambda } D_{\epsilon /(2-\epsilon )}(2-\epsilon )^{2-\alpha }(K_1t+\theta )^{(2-\alpha )/({\alpha }-1)}\\&= \frac{K_1}{{\alpha }-1} (K_1t+\theta )^{(2-\alpha )/({\alpha }-1)}=\underline{h}'(t); \end{aligned}$$

and when (1.9) holds, we have

$$\begin{aligned}&{\mu } \int _{-\underline{h}(t)}^{\underline{h}(t)} \int _{\underline{h}(t)}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\\&\ge \mu (1-\epsilon )^4\frac{2\underline{\lambda }}{\beta -1} \underline{h} (\ln \underline{h})^{1-\beta }\\&= \frac{K_1^\beta }{\beta }\underline{h} (\ln \underline{h})^{1-\beta }=\underline{h}'(t). \end{aligned}$$

This proves (3.1).

Step 2. We prove the following inequality for \(t>0\) and \(|x|\in [0, \underline{h}(t)]\setminus \{(1-\epsilon )\underline{h}(t)\}\),

$$\begin{aligned} \underline{u}_t\le&\ d \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y -d\underline{u}+f(\underline{u}). \end{aligned}$$
(3.2)

From the definition of \(\underline{u}\), we see that

$$\begin{aligned} \underline{u}_t=0 \ \hbox { for } |x|< (1-\epsilon )\underline{h}(t), \end{aligned}$$

and for \( (1-\epsilon )\underline{h}(t)<|x|<\underline{h}(t)\), if (1.8) holds with \(\alpha \in (1,2)\), then

$$\begin{aligned} \underline{u}_t=K_2K_3^{\rho }\rho \left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}\frac{\underline{h}'|x|}{\underline{h}^2}=\frac{K_1K_2K_3^{\rho }\rho }{\alpha -1}\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}\frac{|x|}{\underline{h}}\underline{h}^{1-\alpha }, \end{aligned}$$
(3.3)

where we have used \(\underline{h}'=\frac{K_1}{\alpha -1}\underline{h}^{2-\alpha }\); and if (1.9) holds, then

$$\begin{aligned} \underline{u}_t=K_2K_3^{\rho }\rho \left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}\frac{\underline{h}'|x|}{\underline{h}^2}=\frac{K_1^\beta K_2K_3^{\rho }\rho }{\beta }\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}\frac{|x|}{\underline{h}}(\ln \underline{h})^{1-\beta }, \end{aligned}$$

where we have utilized \(\underline{h}'=\frac{K_1^\beta }{\beta }\underline{h} (\ln \underline{h})^{1-\beta }\).
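For the reader's convenience, the latter identity follows by differentiating \(\underline{h}(t)=e^{K_1(t+\theta )^{1/\beta }}\) and then substituting \((t+\theta )^{1/\beta }=K_1^{-1}\ln \underline{h}(t)\):

$$\begin{aligned} \underline{h}'(t)=\frac{K_1}{\beta }(t+\theta )^{(1-\beta )/\beta }\,\underline{h}(t)=\frac{K_1}{\beta }\Big (\frac{\ln \underline{h}(t)}{K_1}\Big )^{1-\beta }\underline{h}(t)=\frac{K_1^\beta }{\beta }\underline{h}(t)\big (\ln \underline{h}(t)\big )^{1-\beta }. \end{aligned}$$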

Claim. There is \(C_1=C_1(\epsilon )>0\) such that for \(x\in [-\underline{h}(t),\underline{h}(t)]\) and \(t\ge 0\),

$$\begin{aligned} d \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y -d \underline{u}+f(\underline{u}) \ge C_1\left[ \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y+\underline{u}\right] . \end{aligned}$$

The definition of \(\underline{u}\) indicates \(0\le \underline{u}(t,x)\le K_2=1-\epsilon <1\). By the properties of f, there exists \( \widetilde{C}_1:= \widetilde{C}_1(\epsilon )\in (0, d)\) such that

$$\begin{aligned} f(s)\ge \widetilde{C}_1 s \hbox { for } s\in [0,K_2]. \end{aligned}$$

Using Lemma 2.2 with

$$\begin{aligned} (L(t),\phi (t,x),\xi (t))=(\underline{h}(t), \underline{u}(t,x)/K_2,K_3), \end{aligned}$$

for any given small \(\delta >0\), we can find large \(h_*=h_*(\delta ,\epsilon )\) such that for \(\underline{h}\ge h_*\) and \(|x|\le \underline{h}\),

$$\begin{aligned}&\int ^{\underline{h}}_{-\underline{h}}J(x-y)\underline{u}(t,y)dy\ge (1-\delta )\underline{u}(t,x). \end{aligned}$$

Hence, due to \(d>\widetilde{C}_1\),

$$\begin{aligned}&d \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y -d \underline{u}(t,x)+f(\underline{u}(t,x))\\&\ge d\int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y +(\widetilde{C}_1-d) \underline{u}(t,x)\\&{\ge \frac{\widetilde{C}_1}{3} \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y+(d-\frac{\widetilde{C}_1}{3})(1-\delta )\underline{u}(t,x)+(\widetilde{C}_1-d) \underline{u}(t,x)}\\&\ge {\frac{\widetilde{C}_1}{3}}\left[ \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y+\underline{u}(t,x)\right] , \end{aligned}$$

provided that \(\delta =\delta (\epsilon )>0\) is sufficiently small. Thus the claim holds with \(C_1=\widetilde{C}_1/3\).

To verify (3.2), it remains to prove

$$\begin{aligned} \underline{u}_t\le C_1\left[ \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y+\underline{u}\right] \ \hbox { for } \ |x|\in [0, \underline{h}(t)]\setminus \{(1-\epsilon )\underline{h}(t)\}. \end{aligned}$$
(3.4)

Since \(\underline{u}(t,x)\equiv 1-\epsilon \) for \(|x|<(1-\epsilon )\underline{h}(t)\), (3.4) holds trivially for such x. Hence we only need to consider the case \((1-\epsilon )\underline{h}(t)<|x|<\underline{h}(t)\).

Since \(\theta \gg 1\) and \(0<\epsilon \ll 1\), for \(x\in [7\underline{h}(t)/8,\underline{h}(t)]\supset [(1-\epsilon )\underline{h}(t), \underline{h}(t)]\), we have

$$\begin{aligned} \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y&\ge \int _{-7\underline{h}/8}^{7\underline{h}/8} {J}(x-y) \underline{u}(t,y)\textrm{d}y\ge K_2\int _{-7\underline{h}/8}^{7\underline{h}/8} {J}(x-y) \textrm{d}y\\&= (1-\epsilon )\int _{-7\underline{h}/8-x}^{7\underline{h}/8-x} {J}(y) \textrm{d}y\ge (1-\epsilon )\int _{-\underline{h}/4}^{-\underline{h}/8} {J}(y) \textrm{d}y\\&=(1-\epsilon )\int ^{\underline{h}/4}_{\underline{h}/8} {J}(y) \textrm{d}y. \end{aligned}$$

Hence, when (1.8) holds with \(\alpha \in (1,2)\), we obtain

$$\begin{aligned} \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y\ge \frac{\underline{\lambda }}{2}\int _{\underline{h}/8}^{\underline{h}/4} y^{-\alpha }\textrm{d}y =\frac{(8^{\alpha -1}-4^{\alpha -1})\underline{\lambda }}{2(\alpha -1)}\underline{h}^{1-\alpha }=:C_2\underline{h}^{1-\alpha }, \end{aligned}$$

and when (1.9) holds, we have

$$\begin{aligned} \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y&\ge \frac{\underline{\lambda }}{2}\int _{\underline{h}/8}^{\underline{h}/4} y^{-1} (\ln y)^{-\beta }\textrm{d}y> \frac{\underline{\lambda }\,\underline{h}}{16}\, y^{-1} (\ln y)^{-\beta }\Big |_{y=\underline{h}/4}\\&\ge \frac{\underline{\lambda }}{4}(\ln \underline{h})^{-\beta }=:\widetilde{C}_2(\ln \underline{h})^{-\beta }. \end{aligned}$$

Similar estimates hold for \(x\in [-\underline{h}(t),-7\underline{h}(t)/8]\).

Now, if (1.8) holds with \(\alpha \in (1,2)\), then for \(|x|\in [(1-C_\epsilon )\underline{h}(t), \underline{h}(t)]\) with

$$\begin{aligned}C_\epsilon :=\Big [\frac{C_1C_2(\alpha -1)}{K_1K_2\rho }\epsilon ^{\rho }\Big ]^{1/(\rho -1)}, \end{aligned}$$

we have

$$\begin{aligned} \underline{u}_t-C_1 \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y&\le \frac{K_1K_2K_3^{\rho }\rho }{\alpha -1}\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}\underline{h}^{1-\alpha }-C_1C_2\underline{h}^{1-\alpha }\\&\le \Big [\frac{K_1K_2K_3^{\rho }\rho }{\alpha -1}C_\epsilon ^{\rho -1}-C_1C_2\Big ]\underline{h}^{1-\alpha }= 0, \end{aligned}$$

and for \((1-\epsilon )\underline{h}(t)<|x|\le (1-C_\epsilon )\underline{h}(t)\), using the definition of \(\underline{u}\), we obtain

$$\begin{aligned} \underline{u}_t-C_1 \underline{u}=&\left[ \frac{K_1\rho }{\alpha -1}\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{-1}\frac{|x|}{\underline{h}}\underline{h}^{1-\alpha }-C_1\right] \underline{u}\\ \le&\left[ \frac{K_1\rho }{C_\epsilon (\alpha -1)} \underline{h}^{1-\alpha }-C_1\right] \underline{u}\le 0 \end{aligned}$$

since \(\theta \gg 1\), \(\underline{h}(t)\ge \theta ^{1/(\alpha -1)}\) and \(1-\alpha <0\). We have thus proved (3.4).

We next deal with the case that (1.9) holds. If |x| satisfies

$$\begin{aligned} \underline{h}(t)\ge |x|\ge \Big [1-\frac{\widetilde{C}_\epsilon }{(\ln \underline{h}(t))^{1/(\rho -1)}}\Big ]\underline{h}(t) \end{aligned}$$

with

$$\begin{aligned} \widetilde{C}_\epsilon := \left[ \frac{C_1\widetilde{C}_2\beta }{K_1^\beta K_2K_3^{\rho }\rho }\right] ^{1/(\rho -1)}= \left[ \frac{C_1\widetilde{C}_2\beta \epsilon ^\rho }{K_1^\beta K_2\rho }\right] ^{1/(\rho -1)}, \end{aligned}$$

then \(|x|\in [7\underline{h}(t)/8,\underline{h}(t)]\) and

$$\begin{aligned} \underline{u}_t-C_1 \int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y&\le \frac{K_1^\beta K_2K_3^{\rho }\rho }{\beta }\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{\rho -1}(\ln \underline{h})^{1-\beta }-C_1\widetilde{C}_2(\ln \underline{h})^{-\beta }\\&\le \Big [\frac{K_1^\beta K_2K_3^{\rho }\rho }{\beta }\widetilde{C}_\epsilon ^{\rho -1}-C_1\widetilde{C}_2\Big ](\ln \underline{h})^{-\beta }= 0. \end{aligned}$$

For \((1-\epsilon )\underline{h}<|x|\le [1-\frac{\widetilde{C}_\epsilon }{(\ln \underline{h})^{1/(\rho -1)}}]\underline{h}\), from the definition of \(\underline{u}\), we deduce

$$\begin{aligned} \underline{u}_t-C_1 \underline{u}=&\left[ \frac{K_1^\beta \rho }{\beta }\left( \frac{\underline{h}-|x|}{\underline{h}}\right) ^{-1}\frac{|x|}{\underline{h}}(\ln \underline{h})^{1-\beta }-C_1\right] \underline{u}\\ \le&\left[ \frac{K_1^\beta \rho }{\widetilde{C}_\epsilon \beta }(\ln \underline{h})^{1-\beta +(\rho -1)^{-1}}-C_1\right] \underline{u}\le 0 \end{aligned}$$

since \(\underline{h}(t)\ge e^{K_1\theta ^{1/\beta }}\gg 1\) and we may choose \(\rho \) large enough such that \(1-\beta +(\rho -1)^{-1}<0\). The desired inequality (3.4) is thus proved.

Step 3. Completion of the proof by the comparison principle.

Since spreading happens, there is \(t_0>0\) large enough such that \([g(t_0),h(t_0)]\supset [-\underline{h}(0),\underline{h}(0)]\), and also

$$\begin{aligned} u(t_0,x)\ge K_2=1-\epsilon \ge \underline{u}(0,x)\ \ \hbox { for } x\in [-\underline{h}(0),\underline{h}(0)]. \end{aligned}$$

Moreover, from the definition of \(\underline{u}\), we see \(\underline{u}(t,x)=0\) for \(x=\pm \underline{h}(t)\) and \(t\ge 0\). Thus we are in a position to apply the comparison principle (see Theorem 3.1 in [10] and Remark 2.4 in [19]; the latter explains why the jump discontinuity of \(\underline{u}_t\) along \(|x|=(1-\epsilon ) \underline{h}(t)\) does not affect the conclusion) to conclude that

$$\begin{aligned} -\underline{h}(t)\ge g(t_0+t), \ \ \underline{h}(t)\le h(t_0+t) \hbox { for }t\ge 0. \end{aligned}$$

The desired conclusions then follow from the arbitrariness of \(\epsilon >0\) and the fact that \(D_{\epsilon /(2-\epsilon )}\rightarrow D_0\) as \(\epsilon \rightarrow 0\). The proof is finished. \(\square \)
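Although not needed for the proof, the accelerated rate produced in Step 1 can be illustrated numerically. The following sketch (with hypothetical sample values \(\alpha =3/2\), \(K_1=1\), \(\theta =4\), chosen only for this illustration) integrates the boundary ODE \(\underline{h}'=\frac{K_1}{\alpha -1}\underline{h}^{2-\alpha }\) by forward Euler and confirms that it reproduces the exact solution \(\underline{h}(t)=(K_1t+\theta )^{1/(\alpha -1)}\), which grows like \(t^{1/(\alpha -1)}\).

```python
# Illustration only (hypothetical sample values, not from the paper):
# forward-Euler integration of the lower-solution boundary ODE
#     h'(t) = (K1/(a-1)) * h(t)**(2-a),   h(0) = theta**(1/(a-1)),
# whose exact solution is h(t) = (K1*t + theta)**(1/(a-1)).
a, K1, theta = 1.5, 1.0, 4.0      # a plays the role of alpha in (1,2)
h = theta ** (1 / (a - 1))        # h(0) = 4**2 = 16
T, n = 100.0, 200_000
dt = T / n
for _ in range(n):
    h += dt * (K1 / (a - 1)) * h ** (2 - a)
exact = (K1 * T + theta) ** (1 / (a - 1))   # (104)**2 = 10816
rel_err = abs(h - exact) / exact            # Euler tracks the exact solution closely
```

For \(\alpha =3/2\) the exponent \(1/(\alpha -1)=2\), so the free boundary advances superlinearly, in contrast with the linear spreading rate of thin-tailed kernels.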

3.2 The case that (1.8) holds with \({\alpha }=2\)

Lemma 3.2

If the conditions in Lemma 3.1 are satisfied except that J satisfies (1.8) with \({\alpha }=2\), then

$$\begin{aligned} \displaystyle \liminf _{t\rightarrow \infty }\frac{h(t)}{t\ln t}\ge \displaystyle \mu \underline{\lambda }. \end{aligned}$$
(3.5)

Proof

For fixed \(\rho > 2\), \(0<\epsilon \ll 1\), \(0<\tilde{\epsilon }\ll 1\) and \(\theta \gg 1\), define

$$\begin{aligned} {\left\{ \begin{array}{ll} \underline{h}(t):=K_1(t+\theta )\ln (t+\theta ), &{} t\ge 0,\\ \displaystyle \underline{u}(t,x):=K_2\min \left\{ 1, \left[ \frac{\underline{h}(t)-|x|}{(t+\theta )^{\tilde{\epsilon }}}\right] ^{\rho }\right\} , &{} t\ge 0,\ x\in [-\underline{h}(t), \underline{h}(t)], \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} K_1:=(1-\epsilon )^3 (1-\tilde{\epsilon })\mu \underline{\lambda },\ \ K_2:=1-\epsilon . \end{aligned}$$
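Direct differentiation gives \(\underline{h}'(t)=K_1[\ln (t+\theta )+1]\), and hence

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{\underline{h}(t)}{t\ln t}=\lim _{t\rightarrow \infty }\frac{K_1(t+\theta )\ln (t+\theta )}{t\ln t}=K_1=(1-\epsilon )^3 (1-\tilde{\epsilon })\mu \underline{\lambda }, \end{aligned}$$

which is where the rate in (3.5) will come from once the comparison principle is applied in Step 3.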

Obviously, for any \(t> 0\), \(\partial _t \underline{u}(t,x)\) exists for \(x\in [-\underline{h}(t), \underline{h}(t)]\) except when \(|x|=\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}\). However, the one-sided partial derivatives \(\partial _t\underline{u}(t\pm 0, x)\) always exist.

Step 1. We show that for \(\theta \gg 1\),

$$\begin{aligned}&\underline{h}'(t)\le \mu \int _{-\underline{h}(t)}^{\underline{h}(t)} \int _{\underline{h}(t)}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\ \hbox { for } t> 0, \end{aligned}$$
(3.6)

which clearly implies, due to \(\underline{u}(t,x)=\underline{u}(t,-x)\) and \({J}(x)={J}(-x)\), that

$$\begin{aligned} -\underline{h}'(t)\ge - \mu \int _{-\underline{h}(t)}^{\underline{h}(t)} \int _{-\infty }^{-\underline{h}(t)} J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\ \hbox { for } t> 0. \end{aligned}$$

Making use of the definition of \(\underline{u}\) and

$$\begin{aligned} \big [-2(1-\epsilon )\underline{h},-[2(1-\epsilon )\underline{h}]^{\tilde{\epsilon }}\big ]\subset [-2\underline{h}+(t+\theta )^{\tilde{\epsilon }},-(t+\theta )^{\tilde{\epsilon }}] \end{aligned}$$

for \(\theta \gg 1\), we obtain

$$\begin{aligned}&\mu \int _{-\underline{h}}^{\underline{h}} \int _{\underline{h}}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\ge (1-\epsilon )\mu \int _{-\underline{h}+(t+\theta )^{\tilde{\epsilon }}}^{\underline{h}-(t+\theta )^{\tilde{\epsilon }}} \int _{\underline{h}}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\\&\quad =\ (1-\epsilon )\mu \int _{-2\underline{h}+(t+\theta )^{\tilde{\epsilon }}}^{-(t+\theta )^{\tilde{\epsilon }}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\\&\quad \ge (1-\epsilon )\mu \int _{-2(1-\epsilon )\underline{h}}^{-[2(1-\epsilon )\underline{h}]^{\tilde{\epsilon }}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x. \end{aligned}$$

Thanks to Lemma 2.1, for large \(\underline{h}\) (which is guaranteed by \(\theta \gg 1\)),

$$\begin{aligned} \int _{-2(1-\epsilon )\underline{h}}^{-[2(1-\epsilon )\underline{h}]^{\tilde{\epsilon }}} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\ge (1-\epsilon )(1-\tilde{\epsilon })\underline{\lambda }\ln [2(1-\epsilon )\underline{h}]. \end{aligned}$$

Hence, with \(\theta \gg 1\), we have

$$\begin{aligned}&{\mu } \int _{-\underline{h}(t)}^{\underline{h}(t)} \int _{\underline{h}(t)}^{+\infty } J(x-y) \underline{u}(t,x)\textrm{d}y\textrm{d}x\\&\ge (1-\epsilon )^2\mu (1-\tilde{\epsilon })\underline{\lambda }\ln [2(1-\epsilon )\underline{h}]\\&= (1-\epsilon )^2\mu (1-\tilde{\epsilon })\underline{\lambda }\big \{\ln (t+\theta )+\ln [ \ln (t+\theta )]+\ln [2(1-\epsilon )K_1]\big \}\\&\ge K_1\ln (t+\theta )+K_1=\underline{h}'(t) \hbox { for all }t>0, \end{aligned}$$

which proves (3.6).

Step 2. We show that for \(t>0\) and \(x\in [-\underline{h}(t),\underline{h}(t)]\) with \(|x|\not =\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}\),

$$\begin{aligned} \underline{u}_t(t,x)\le d \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y -d\underline{u}(t,x)+f(\underline{u}(t,x)) \end{aligned}$$
(3.7)

for \(\theta \gg 1\).

From the definition of \(\underline{u}\), we obtain by direct calculation that, for \(t>0\),

$$\begin{aligned} \underline{u}_t(t,x)={\left\{ \begin{array}{ll} \rho K_2^{\rho ^{-1}} \underline{u}^{1-\rho ^{-1}}\left[ K_1\frac{(1-\tilde{\epsilon })\ln (t+\theta )+1}{(t+\theta )^{\tilde{\epsilon }}} +\frac{\tilde{\epsilon } |x|}{(t+\theta )^{1+\tilde{\epsilon }}}\right] &{} \hbox { if } \underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}<|x|\le \underline{h}(t),\\ 0 &{} \hbox { if } 0\le |x|< \underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}. \end{array}\right. } \nonumber \\ \end{aligned}$$
(3.8)

Making use of Lemma 2.2 with

$$\begin{aligned} (L(t),\phi (t,x),\xi (t))=(\underline{h}(t),\underline{u}(t,x)/K_2, \frac{\underline{h}(t)}{(t+\theta )^{\tilde{\epsilon }}}), \end{aligned}$$

for any given small \(\delta >0\), we can find a large \(\theta _*=\theta _*(\delta ,\epsilon )\) such that for \(\theta \ge \theta _*\) and \(|x|\le \underline{h}(t)\),

$$\begin{aligned}&\int ^{\underline{h}(t)}_{-\underline{h}(t)}J(x-y)\underline{u}(t,y)\textrm{d}y\ge (1-\delta )\underline{u}(t,x). \end{aligned}$$

Then, a similar analysis as in the proof of Lemma 3.1 shows that there exists \(C_1>0\), depending on \(\epsilon \) and \(\delta \), such that for \(\theta \gg 1\), \(x\in [-\underline{h}(t),\underline{h}(t)]\) and \(t\ge 0\),

$$\begin{aligned} d \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y -d \underline{u}+f(\underline{u}) \ge C_1\left[ \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y+\underline{u}\right] . \end{aligned}$$

Hence, to verify (3.7), we only need to show that

$$\begin{aligned} \underline{u}_t\le C_1\left[ \int _{-\underline{h}(t)}^{\underline{h}(t)} {J}(x-y) \underline{u}(t,y)\textrm{d}y+\underline{u}\right] \ \hbox { for } \ |x|\in [0, \underline{h}(t)]\setminus \{ \underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}\}. \end{aligned}$$
(3.9)

Clearly, (3.9) holds trivially for \(0\le |x|<\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}\) due to \(\underline{u}_t=0\) for such x. We next consider the remaining case \(\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}<|x|<\underline{h}(t)\).

Denote \(\eta =\eta (t):=(t+\theta )^{{\tilde{\epsilon }}}\). Using \(\theta \gg 1\) and (1.8), we obtain, for \(x\in [\underline{h}(t)-\eta (t),\underline{h}(t)]\),

$$\begin{aligned}&\int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y\ge \int _{-\underline{h}+\eta }^{\underline{h}-\eta } {J}(x-y) \underline{u}(t,y)\textrm{d}y= K_2\int _{-\underline{h}+\eta }^{\underline{h}-\eta } {J}(x-y) \textrm{d}y\nonumber \\&=K_2\int _{-\underline{h}+\eta -x}^{\underline{h}-\eta -x} {J}(y) \textrm{d}y\ge K_2\int _{-\underline{h}}^{-\eta } {J}(y) \textrm{d}y\ge \frac{K_2\underline{\lambda }}{2}\int _{\eta }^{\underline{h}} y^{-2}\textrm{d}y\nonumber \\&=\frac{K_2\underline{\lambda }}{2}(\eta ^{-1}-\underline{h}^{-1})\ge \frac{(1-\epsilon )\underline{\lambda }}{4}\eta ^{-1}=:C_2(t+\theta )^{-\tilde{\epsilon }}. \end{aligned}$$

The same estimate also holds for \(x\in [-\underline{h}(t),-\underline{h}(t)+\eta (t)]\). Therefore, for \(|x|\in [\underline{h}(t)-\eta (t),\underline{h}(t)]\), due to \(\rho >2\) and \(0<\tilde{\epsilon }\ll 1\), we have

$$\begin{aligned}&\underline{u}_t(t,x)-C_1\int _{-\underline{h}}^{\underline{h}} {J}(x-y) \underline{u}(t,y)\textrm{d}y\\&\le \rho K_2^{1/\rho } \underline{u}^{(\rho -1)/\rho }\left[ K_1\frac{(1-{\tilde{\epsilon }})\ln (t+\theta )+1}{(t+\theta )^{{\tilde{\epsilon }}}} +\frac{{\tilde{\epsilon }} \underline{h}}{(t+\theta )^{1+{\tilde{\epsilon }}}}\right] -C_1C_2(t+\theta )^{-\tilde{\epsilon }}\\&\le 2K_1 \rho K_2^{1/\rho } \underline{u}^{(\rho -1)/\rho }\frac{\ln (t+\theta )}{(t+\theta )^{{\tilde{\epsilon }}}} -C_1C_2(t+\theta )^{-{\tilde{\epsilon }}}\\&=\frac{2K_1 \rho K_2 [(\underline{h}-|x|)/(t+\theta )^{{\tilde{\epsilon }}}]^{\rho -1}\ln (t+\theta )-C_1C_2}{(t+\theta )^{{\tilde{\epsilon }}}} \le 0 \end{aligned}$$

if |x| further satisfies

$$\begin{aligned} |x|\ge \underline{h}(t)-\left( \frac{C_1C_2}{2K_1 \rho K_2}\right) ^{1/(\rho -1)}\frac{(t+\theta )^{\tilde{\epsilon }}}{[\ln (t+\theta )]^{1/(\rho -1)}}=:\underline{h}(t)-C_3\frac{(t+\theta )^{{\tilde{\epsilon }}}}{[\ln (t+\theta )]^{1/(\rho -1)}}. \end{aligned}$$

On the other hand, for \(\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}<|x|<\underline{h}(t)-C_3{(t+\theta )^{{\tilde{\epsilon }}}}/[\ln (t+\theta )]^{1/(\rho -1)}\), using (3.8) and \(0<\tilde{\epsilon }\ll 1\), \(\theta \gg 1\), we deduce

$$\begin{aligned} \underline{u}_t-C_1\underline{u}&\le 2K_1 \rho K_2^{1/\rho } \underline{u}^{(\rho -1)/\rho }\frac{\ln (t+\theta )}{(t+\theta )^{{\tilde{\epsilon }}}} -C_1\underline{u}\\&=\underline{u} \left( \frac{2K_1 \rho [(\underline{h}-|x|)/(t+\theta )^{{\tilde{\epsilon }}}]^{-1/\rho }\ln (t+\theta )}{(t+\theta )^{{\tilde{\epsilon }}}}-C_1\right) \\&\le \underline{u} \left( \frac{2K_1 \rho [\ln (t+\theta )]^{1+\frac{1}{\rho (\rho -1)}}}{C_3^{1/\rho }(t+\theta )^{{\tilde{\epsilon }}}}-C_1\right) <0. \end{aligned}$$

Hence, (3.9) holds true. This concludes Step 2.

Step 3. We finally prove (3.5).

The definition of \(\underline{u}\) clearly gives \(\underline{u}(t,\pm \underline{h}(t))=0\) for \(t\ge 0\). Since spreading happens for \((u,g,h)\) and \(K_2=1-\epsilon <1\), there is a large constant \(t_0>0\) such that

$$\begin{aligned}&{[}-\underline{h}(0),\underline{h}(0)]\subset (g (t_0),h(t_0)),\\&\underline{u}(0,x)\le K_2\le u(t_0,x)\ \hbox { for } \ x\in {[}-\underline{h}(0),\underline{h}(0)]. \end{aligned}$$

By Remark 2.4 in [19], we see that the comparison principle (Theorem 3.1 in [10]) applies to our situation here, even though \( \underline{u}_t(t,x)\) has a jump discontinuity at \(|x|=\underline{h}(t)-(t+\theta )^{\tilde{\epsilon }}\). It follows that

$$\begin{aligned}&[-\underline{h}(t),\underline{h}(t)]\subset [g(t+t_0),h(t+t_0)] \ {}{} & {} \hbox { for } t\ge 0,\\&\underline{u}(t,x)\le u(t+t_0,x){} & {} \hbox { for } t\ge 0,\ x\in [-\underline{h}(t),\underline{h}(t)], \end{aligned}$$

which implies

$$\begin{aligned} \displaystyle \liminf _{t\rightarrow \infty }\frac{h(t)}{t\ln t}\ge (1-\epsilon )^3 (1-{\tilde{\epsilon }})\mu \underline{\lambda }. \end{aligned}$$

Since \(\epsilon >0\) and \({\tilde{\epsilon }}>0\) can be arbitrarily small, we thus obtain (3.5) by letting \(\epsilon \rightarrow 0\) and \({\tilde{\epsilon }}\rightarrow 0\). This completes the proof of the lemma. \(\square \)

4 Upper bounds

Recall that we will only state and prove the conclusions for h(t), as the corresponding conclusion for \(-g (t)\) follows directly by considering the problem with initial function \(u_0(-x)\).

Lemma 4.1

Assume that J satisfies (J) and one of the conditions (1.8) and (1.9), f satisfies (f), and spreading happens to (1.1). Then

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \limsup _{t\rightarrow \infty }\frac{h(t)}{t^{1/({\alpha }-1)}}\le \Big (\frac{2^{2-\alpha }}{2-\alpha } \mu \bar{\lambda }\Big )^{1/(\alpha -1)}&{}\hbox { if } (1.8) \hbox { holds with }\ {\alpha }\in (1,2),\\ \displaystyle \limsup _{t\rightarrow \infty }\frac{h(t)}{t\ln t}\le \displaystyle \mu \bar{\lambda }&{}\hbox { if } (1.8) \hbox { holds with } \alpha =2,\\ \displaystyle \limsup _{t\rightarrow \infty }\frac{\ln h(t)}{t^{1/\beta }}\le \left( \frac{2\beta \mu \bar{\lambda }}{\beta -1}\right) ^{1/\beta } &{} \hbox { if } (1.9) \hbox { holds. } \end{array}\right. } \end{aligned}$$
(4.1)

Proof

For any given small \(\epsilon >0\), define, for \(t\ge 0\),

$$\begin{aligned}&{\bar{h}}(t):={\left\{ \begin{array}{ll} (Kt+\theta )^{1/({\alpha }-1)}&{}\hbox { if } (1.8) \hbox { holds with }\ {\alpha }\in (1,2),\\ K(t+\theta )\ln (t+\theta )&{}\hbox { if } (1.8) \hbox { holds with }\ {\alpha }=2,\\ e^{K(t+\theta )^{1/\beta }}&{}\hbox { if }(1.9) \hbox { holds }, \end{array}\right. } \\&\overline{u}(t,x):=1+\epsilon ,\ \ \ x\in [-{\bar{h}}(t), {\bar{h}}(t)], \end{aligned}$$

where \(\theta \gg 1\) and

$$\begin{aligned} K:={\left\{ \begin{array}{ll} \displaystyle (1+\epsilon )^3\frac{2^{2-\alpha }}{2-\alpha }\mu \bar{\lambda }&{} \hbox { if } (1.8) \hbox { holds with } \alpha \in (1, 2),\\ \displaystyle (1+\epsilon )^3\mu \bar{\lambda }&{} \hbox { if } (1.8) \hbox { holds with }\ \alpha =2,\\ \displaystyle \Big [\frac{2(1+\epsilon )^3\beta \mu \overline{\lambda }}{\beta -1}\Big ]^{1/\beta }&{}\hbox { if }(1.9) \hbox { holds }, \end{array}\right. } \end{aligned}$$
(4.2)
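Direct differentiation of \({\bar{h}}\) gives, for \(t>0\),

$$\begin{aligned} {\bar{h}}'(t)={\left\{ \begin{array}{ll} \displaystyle \frac{K}{\alpha -1}(Kt+\theta )^{(2-\alpha )/(\alpha -1)}=\frac{K}{\alpha -1}\,{\bar{h}}(t)^{2-\alpha }&{}\hbox { if } (1.8) \hbox { holds with } \alpha \in (1,2),\\ \displaystyle K\ln (t+\theta )+K&{}\hbox { if } (1.8) \hbox { holds with } \alpha =2,\\ \displaystyle \frac{K^\beta }{\beta }\,{\bar{h}}(t)\big (\ln {\bar{h}}(t)\big )^{1-\beta }&{}\hbox { if }(1.9) \hbox { holds }, \end{array}\right. } \end{aligned}$$

and these expressions are matched against the kernel estimates in the verification of (4.3) below.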

We verify that for \(t>0\),

$$\begin{aligned} {\bar{h}}'(t)\ge \mu \int _{-{\bar{h}}(t)}^{{\bar{h}}(t)} \int _{{\bar{h}}(t)}^{+\infty } J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x, \end{aligned}$$
(4.3)

which clearly implies

$$\begin{aligned} -{\bar{h}}'(t)\le - {\mu } \int _{-{\bar{h}}(t)}^{{\bar{h}}(t)} \int _{-\infty }^{-{\bar{h}}(t)} J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x \end{aligned}$$

since \(\overline{u}(t,x)=\overline{u}(t,-x)\) and \(J(x)=J(-x)\).

Using \({\bar{u}}=1+\epsilon \), we have

$$\begin{aligned} \mu \int _{-{\bar{h}}}^{{\bar{h}}} \int _{{\bar{h}}}^{+\infty } J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x&= (1+\epsilon )\mu \int _{-{\bar{h}}}^{{\bar{h}}} \int _{{\bar{h}}}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\\&= (1+\epsilon )\mu \int _{-2{\bar{h}}}^{0} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x. \end{aligned}$$

By Lemma 2.1 with \(\delta =0\), we see that for large \({\bar{h}}\), which is guaranteed by \(\theta \gg 1\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \int _{-2{\bar{h}}}^{0} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\le (1+\epsilon )\frac{\bar{\lambda }}{(\alpha -1)(2-\alpha )}(2{\bar{h}})^{2-\alpha }, &{} \hbox { if } (1.8) \hbox { holds with } \alpha \in (1,2),\\ \displaystyle \int _{-2{\bar{h}}}^{0} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\le (1+\epsilon )\bar{\lambda }\ln (2{\bar{h}}), &{} \hbox { if } (1.8) \hbox { holds with } \alpha =2,\\ \displaystyle \int _{-2{\bar{h}}}^{0} \int _{0}^{+\infty } J(x-y) \textrm{d}y\textrm{d}x\le \displaystyle (1+\epsilon )(2{\bar{h}}) [\ln (2{\bar{h}}) ]^{1-\beta }\frac{\bar{\lambda }}{\beta -1} &{}\hbox { if }(1.9) \hbox { holds }. \end{array}\right. } \end{aligned}$$

Therefore, when (1.8) holds with \(\alpha \in (1,2)\), by the definition of K, we have

$$\begin{aligned} {\mu } \int _{-{\bar{h}}}^{{\bar{h}}} \int _{{\bar{h}}}^{+\infty } J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x&\le (1+\epsilon )^2\mu \frac{\bar{\lambda }}{(\alpha -1)(2-\alpha )}(2{\bar{h}})^{2-\alpha }\\&= (1+\epsilon )^2\mu \frac{\bar{\lambda }}{(\alpha -1)(2-\alpha )} 2^{2-\alpha }(Kt+\theta )^{(2-\alpha )/({\alpha }-1)}\\&\le \frac{K}{{\alpha }-1} (Kt+\theta )^{(2-\alpha )/({\alpha }-1)}={\bar{h}}'(t). \end{aligned}$$

When (1.8) holds with \({\alpha }=2\), we similarly obtain, due to \(\theta \gg 1\),

$$\begin{aligned} {\mu } \int _{-{\bar{h}}}^{{\bar{h}}} \int _{{\bar{h}}}^{+\infty } J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x&\le (1+\epsilon )^2\mu \bar{\lambda }\ln (2{\bar{h}})\\&= (1+\epsilon )^2\mu \bar{\lambda }\big \{\ln (t+\theta )+\ln [ \ln (t+\theta )]+\ln 2K\big \}\\&\le K\ln (t+\theta )+K={\bar{h}}'(t). \end{aligned}$$

Finally, when (1.9) holds, we have

$$\begin{aligned} {\mu } \int _{-{\bar{h}}}^{{\bar{h}}} \int _{{\bar{h}}}^{+\infty } J(x-y) {\bar{u}}(t,x)\textrm{d}y\textrm{d}x&\le (1+\epsilon )^2\mu (2{\bar{h}}) [\ln (2{\bar{h}}) ]^{1-\beta }\frac{\bar{\lambda }}{\beta -1} \\&\le (1+\epsilon )^3\mu (2{\bar{h}}) (\ln {\bar{h}})^{1-\beta }\frac{\bar{\lambda }}{\beta -1}\\&= \frac{K^\beta }{\beta }{\bar{h}} (\ln {\bar{h}})^{1-\beta }={\bar{h}}'(t). \end{aligned}$$

Thus (4.3) always holds.
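As an aside, not part of the proof, the coefficient \(\frac{1}{(\alpha -1)(2-\alpha )}\) in the above application of Lemma 2.1 can be checked numerically. The sketch below assumes the model tail \(J(z)=|z|^{-\alpha }\) for \(|z|\ge 1\) (an assumption made purely for this illustration; the normalisation \(\int _{\mathbb {R}}J=1\) is ignored since it only rescales \(\bar{\lambda }\)). For \(x\le -1\) the inner integral is exact, \(\int _0^{+\infty }J(x-y)\textrm{d}y=|x|^{1-\alpha }/(\alpha -1)\), and the double integral over \(x\in [-L,-1]\) should behave like \(L^{2-\alpha }/[(\alpha -1)(2-\alpha )]\) as \(L\rightarrow \infty \).

```python
import math

def double_tail_integral(a, L, n=100_000):
    """Midpoint rule for int_{-L}^{-1} int_0^inf |x-y|**(-a) dy dx,
    using the exact inner integral |x|**(1-a)/(a-1) and the
    substitution x = -exp(s), s in [0, ln L] (so dx = |x| ds)."""
    S = math.log(L)
    h = S / n
    total = 0.0
    for i in range(n):
        x = math.exp((i + 0.5) * h)          # |x|
        inner = x ** (1 - a) / (a - 1)       # int_0^inf J(x - y) dy
        total += inner * x * h
    return total

a, L = 1.5, 1e4
predicted = L ** (2 - a) / ((a - 1) * (2 - a))   # = 400.0
ratio = double_tail_integral(a, L) / predicted   # tends to 1 as L grows
```

Here the exact value of the truncated integral is \((L^{2-\alpha }-1)/[(\alpha -1)(2-\alpha )]\), so the computed ratio sits slightly below 1 at finite \(L\).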

Recalling that \(\overline{u}\equiv 1+\epsilon >1\) is a constant, that \(\int _{{\mathbb {R}}}J=1\), and that (f) implies \(f(s)\le 0\) for \(s\ge 1\), we get, for \(t>0\), \(x\in [-{\bar{h}}(t),{\bar{h}}(t)]\),

$$\begin{aligned} \overline{u}_t(t,x)\equiv 0\ge d \int _{-{\bar{h}}}^{{\bar{h}}} {J}(x-y) \overline{u}(t,y)\textrm{d}y -d\overline{u}(t,x)+f(\overline{u}(t,x)). \end{aligned}$$

Note that condition (f) implies, by simple comparison with ODE solutions,

$$\begin{aligned}\limsup _{t\rightarrow \infty } \max _{x\in [g(t),h(t)]}u(t,x)\le 1; \end{aligned}$$

hence there is \(t_0>0\) such that

$$\begin{aligned} u(t_0,x)\le 1+\epsilon = \overline{u}(t_0,x) \ \hbox { for } x\in [g(t_0),h(t_0)]\subset [-{\bar{h}}(0),{\bar{h}}(0)] \end{aligned}$$

where the inclusion \([g(t_0),h(t_0)]\subset [-{\bar{h}}(0),{\bar{h}}(0)]\) holds provided \(\theta \) is sufficiently large.

We are now in a position to use the comparison principle (Theorem 3.1 in [10]) to conclude that

$$\begin{aligned}&[g(t+t_0),h(t+t_0)]\subset [-{\bar{h}}(t),{\bar{h}}(t)]\ \textrm{for}\ t\ge 0,\ \\&u(t+t_0,x)\le \overline{u}(t,x)\ \textrm{for}\ t\ge 0,\ x\in [ g (t+t_0), h(t+t_0)]. \end{aligned}$$

By the arbitrariness of \(\epsilon >0\), we get (4.1). The proof is finished. \(\square \)