1 Introduction

This paper concerns the following nonlocal reaction–diffusion problem with Stefan type free boundary conditions:

$$\begin{aligned} \mathbf {(P)}\ \ \left\{ \begin{array}{ll} u_t =u_{xx} - \alpha u + w(\tau ,x;t), &{} t>0,\ x\in (g(t),h(t)),\\ u(t,g(t))=u(t,h(t))=0, &{} t>0,\\ g'(t)=-\mu u_x(t,g(t)),\ \ h'(t)=-\mu u_x(t,h(t)), &{} t>0,\\ u(\theta ,x) =\phi (\theta ,x),&{} \theta \in [-\tau ,0],\ x\in [g(\theta ),h(\theta )], \end{array} \right. \end{aligned}$$

where \(w(\tau ,x;t)\) is the solution \(w(s,x)\), evaluated at \(s=\tau \), of the following initial boundary value problem

$$\begin{aligned} \mathbf {(Q)}\ \ \left\{ \begin{array}{ll} w_s=D w_{xx}-\beta w, &{} s\in (0,\tau ],\ x\in (g(s+t-\tau ),h(s+t-\tau )),\\ w(s,x)=0, &{} s\in (0,\tau ],\ x\in \{g(s+t-\tau ),h(s+t-\tau )\},\\ w(0,x)=f(u(t-\tau ,x)), &{} x\in [g(t-\tau ),h(t-\tau )]. \end{array} \right. \end{aligned}$$

Here \(\alpha \), \(\beta \), \(\mu \), D and \(\tau \) are positive constants and f is a nonlinear function. Clearly \(w(\tau ,x;t)\) depends on \(u(t-\tau ,\cdot )\) and g(s), h(s) with \(s\in [t-\tau , t]\). Therefore (P) is highly nonlocal.

Such a problem is used here to model the biological invasion of an age-structured species in which the juveniles diffuse within an expanding habitat whose expansion is driven by the diffusing adults. More precisely, u represents the density of the adult population, \(\tau \) is the time it takes a newborn to grow into an adult, f is the birth function, and \(w(\tau ,x;t)\) is the density of the newly matured adults at time t. A derivation of problem (P) under the aforementioned biological assumptions will be presented in the next section.

Problem (P) reduces to some existing problems in the literature when some of the parameters in \(\{\tau , \mu , D\}\) are sent to certain limiting values. If \(\tau \rightarrow 0\), then \(w(\tau ,x;t)\rightarrow f(u(t,x))\) and the model reduces to

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=u_{xx}- \alpha u+ f(u), &{} t>0,\ x\in (g(t),h(t)),\\ u(t,g(t))=u(t, h(t))=0, &{} t>0,\\ g'(t)=-\mu u_x(t, g(t)), &{} t>0,\\ h'(t)=-\mu u_x (t, h(t)), &{} t>0,\\ u(0,x) =\phi (x),&{} x\in [g(0),h(0)], \end{array} \right. \end{aligned}$$
(1.1)

which was introduced by Du and Lin [9] in 2010, where they revealed a vanishing–spreading dichotomy when the nonlinearity f is of KPP type. Problem (1.1) has been extended in several directions (e.g. [7, 10]), and we mention in particular that very recently, a new phenomenon was found in [5, 8] for (1.1) when the local diffusion term \(u_{xx}\) is replaced by a suitable nonlocal diffusion operator. Our problem (P), however, is a very different nonlocal problem.

If \(\tau \rightarrow \infty \), then \(w(\tau ,x;t)\rightarrow 0\) and the model reduces to a linear problem with the Stefan free boundary condition.

If \(D\rightarrow 0\), then \(w(\tau ,x;t)\rightarrow e^{-\beta \tau }f(u(t-\tau ,x))\) and the model becomes a local free boundary problem with time delay, which was studied recently by Sun and Fang [22].
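
This limit is easily checked: when \(D=0\) the diffusion term in the problem defining w disappears, so w solves the ODE \(w_s=-\beta w\) with \(w(0,x)=f(u(t-\tau ,x))\), and hence

$$\begin{aligned} w(s,x)=e^{-\beta s}f(u(t-\tau ,x)),\ \ \ \text{ so } \text{ that }\ \ w(\tau ,x;t)=e^{-\beta \tau }f(u(t-\tau ,x)). \end{aligned}$$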

If \(\mu \rightarrow \infty \), then the free boundary condition disappears and the model becomes a nonlocal Cauchy problem with time delay on the whole real line:

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=u_{xx}- \alpha u+e^{-\beta \tau } \int _\mathbb {R}G(\tau , y)f(u(t-\tau ,x-y))dy, &{} t>0,\ x\in \mathbb {R},\\ u(\theta ,x) =\phi (\theta , x),&{} \theta \in [-\tau ,0], x\in \mathbb {R}, \end{array} \right. \end{aligned}$$
(1.2)

where

$$\begin{aligned} G(\tau ,y):=\frac{1}{\sqrt{4\pi D\tau }}e^{-\frac{y^2}{4D\tau }}. \end{aligned}$$
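
Indeed, once the habitat becomes the whole line, the problem defining w is posed on \(\mathbb {R}\) without boundary conditions, and since G is the heat kernel of \(\partial _s-D\partial _x^2\), w is given explicitly by

$$\begin{aligned} w(s,x;t)=e^{-\beta s}\int _\mathbb {R}G(s,y)f(u(t-\tau ,x-y))dy; \end{aligned}$$

evaluating at \(s=\tau \) yields the nonlocal term in (1.2).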

Model (1.2) was introduced by So et al. [21] in 2001. Further studies on (1.2) can be found in [2, 11, 16, 17, 19, 28] when f has a monostable structure and in [1, 3, 12, 19, 20, 24, 25, 27] when f is bistable.

If \(\mu \rightarrow 0\), then the expanding domain reduces to a fixed one and the model becomes a nonlocal problem with zero Dirichlet boundary condition and time delay:

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=u_{xx}- \alpha u+e^{-\beta \tau } \int _{-\ell }^{\ell } {\textbf {K}}_\ell (\tau ,x-y)f(u(t-\tau ,y))dy, &{} t>0,\ x\in (-\ell , \ell ),\\ u(t, \pm \ell )=0, &{} t>0,\\ u(\theta ,x) =\phi (\theta , x),&{} \theta \in [-\tau ,0], x\in (-\ell , \ell ), \end{array} \right. \end{aligned}$$
(1.3)

where

$$\begin{aligned} {\textbf {K}}_\ell (\tau ,x)=\sum _{n\in \mathbb {Z}}(-1)^nG(\tau ,x-2n\ell ). \end{aligned}$$

We refer to a survey by Gourley and Wu [13] in 2006 for more details on the research on (1.3). With \(f(u)=pue^{-qu},\ p,q>0\), problems (1.2) and (1.3) are often called diffusive Nicholson's blowflies models. For early work on the classical (ODE) Nicholson's blowflies model we refer to [14, 18].

From the above discussion we see that the nonlocal terms in (1.2) and (1.3) are induced by the joint effect of diffusion (i.e., \(D>0\)) and time delay (i.e., \(\tau >0\)). For our problem (P), besides these two factors, the nonlocal term \(w(\tau ,x; t)\) also involves the to-be-determined varying domain over a time period of length \(\tau \). This is a key distinguishing feature of (P).

The first result of this paper is the well-posedness of the problem.

Theorem 1.1

(Well-posedness) Assume that f satisfies

$$\begin{aligned} \mathbf {(H)}\ \left\{ \begin{array}{l} f(u)\in C^{1}([0,\infty )),\ \ f(0)=0,\ \ f'(0)>\alpha e^{\beta \tau };\\ f(u) \text{ is } \text{ monotonically } \text{ increasing } \text{ in } u \geqslant 0;\\ b(u):= \frac{f(u)}{u} \text{ is } \text{ monotonically } \text{ decreasing } \text{ in } u > 0 \text{ and } \\ b(\infty ):=\lim _{u\rightarrow \infty }b(u)\in [0,\alpha e^{\beta \tau }), \end{array} \right. \end{aligned}$$

and the initial data \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} g, h\in C^1([-\tau , 0]),\ \phi \in C^{1,2} ([-\tau ,0]\times [g,h]),\\ \phi (\theta ,x)>0\ \text{ for } (\theta ,x)\in [-\tau ,0]\times (g, h),\\ \phi (\theta ,x) = 0\ \text{ for } \theta \in [-\tau ,0],\; x=g(\theta ) \text{ or } h(\theta ), \end{array} \right. \end{aligned}$$
(1.4)

as well as the compatibility condition

$$\begin{aligned}{}[g(\theta ), h(\theta )]\subset [g(0), h(0)]\ \ \ \text{ for } \ \theta \in [-\tau ,0]. \end{aligned}$$
(1.5)

Then for any \(\gamma \in (0,1)\), problem (P) admits a unique solution

$$\begin{aligned} (u, g, h)\in C^{(1+\gamma )/2, 1+\gamma }([0,\infty )\times [g,h])\times C^{1+\gamma /2}([0,\infty ))\times C^{1+\gamma /2}([0,\infty )). \end{aligned}$$

Here and throughout this paper, for constants \(a<b\) and functions \(g(t)<h(t)\), we define

$$\begin{aligned}{}[a,b]\times [g, h]:=\{(t,x): t\in [a,b],\ x\in [g(t), h(t)]\}. \end{aligned}$$

The sets \((a,b)\times [g,h]\), \([a,b]\times (g, h)\), etc., are defined similarly.

It follows from (H) that \(f(s)-\alpha e^{\beta \tau } s=0\) has a unique positive root \(s=u^*\). A simple example of such a nonlinearity is \(f(s)=\frac{ps}{1+qs}\) with \(p>\alpha e^{\beta \tau }\) and \(q>0\).
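
Let us note the reason, and illustrate it with this example. Since \(b(u)=f(u)/u\) decreases strictly from \(b(0^+)=f'(0)>\alpha e^{\beta \tau }\) to \(b(\infty )<\alpha e^{\beta \tau }\), the equation \(b(u)=\alpha e^{\beta \tau }\) has exactly one positive root, which is \(u^*\). For the example \(f(s)=\frac{ps}{1+qs}\), the root can be computed explicitly:

$$\begin{aligned} \frac{p}{1+qu^*}=\alpha e^{\beta \tau }\ \ \Longleftrightarrow \ \ u^*=\frac{p-\alpha e^{\beta \tau }}{\alpha q e^{\beta \tau }}>0, \end{aligned}$$

and (H) holds here since f is increasing and \(b(u)=\frac{p}{1+qu}\) is decreasing with \(b(0^+)=p>\alpha e^{\beta \tau }\) and \(b(\infty )=0\).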

The second result is on the long-time behaviour of the solution, which is determined by a dichotomy of spreading and vanishing.

Theorem 1.2

Assume that (H) holds. For any given triple \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfying (1.4) and (1.5), let \((u, g, h)\) be the solution of (P) with \(u(\theta ,x)=\sigma \phi (\theta ,x)\) in \([-\tau ,0]\times [g,h]\) for some \(\sigma >0\). Then there exists \(\sigma ^* \in [0,\infty ]\), depending on the initial data, with the following properties:

  1. (i)

    Spreading happens when \(\sigma >\sigma ^*\) in the sense that \((g_\infty , h_\infty )=\mathbb {R}\), where \(g_\infty :=\lim _{t\rightarrow \infty }g(t)\) and \(h_\infty :=\lim _{t\rightarrow \infty }h(t)\), and

    $$\begin{aligned} \lim _{t\rightarrow \infty }u(t,x)=u^* \text{ locally } \text{ uniformly } \text{ in } \,\mathbb {R}; \end{aligned}$$
  2. (ii)

    Vanishing happens when \(\sigma \leqslant \sigma ^*\) in the sense that \((g_\infty , h_\infty )\) is a finite interval and

    $$\begin{aligned} \lim _{t\rightarrow \infty }\max _{g(t)\leqslant x\leqslant h(t)} u(t,x)=0. \end{aligned}$$
  3. (iii)

    There exists a unique \(\ell ^*>0\) independent of the initial data such that \(\sigma ^*=0\) if and only if \(h(0)-g(0)\geqslant 2\ell ^*\). Moreover, \(\sigma ^*<\infty \) if \(b(\infty )>0\).

When spreading happens, we will determine the spreading speed of the fronts by making use of the nonlinear and nonlocal semi-wave problem

$$\begin{aligned} \left\{ \begin{array}{ll} U_{\xi \xi }+cU_{\xi }- \alpha U+\int _{-\infty }^0\mathcal {K}(c,\xi ,x)f(U(x))dx=0, &{} \quad \xi<0,\\ U(0)=0,\ \ \ U(-\infty )=u^*,\ \ U(\xi )>0, &{} \quad \xi <0,\\ -\mu U_\xi (0)=c, \end{array} \right. \end{aligned}$$
(1.6)

where

$$\begin{aligned} \mathcal {K}(c,\xi ,x):=e^{-\beta \tau -\frac{c^2}{4D}\tau -\frac{c}{2D}(\xi -x)}\big [G(\tau ,\xi -x)-G(\tau ,\xi +x)\big ],\ \ \xi ,\ x\leqslant 0, \end{aligned}$$

with \( G(\tau ,y)\) as given before. It will be shown in Sect. 4 that problem (1.6) admits a unique solution pair \((c,U)=(c^*, U^{c^*})\). With this semi-wave in hand, we can construct various super- and subsolutions to estimate the spreading fronts h(t) and g(t), and obtain the third result of this paper.
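
To indicate where the kernel \(\mathcal {K}\) comes from, here is the formal computation behind the heuristic analysis of Sect. 4. Near the right front we have \(h(t)\approx ct\) and \(u(t,x)\approx U(x-ct)\); setting \(\xi =x-c(t-\tau +s)\) and \(W(s,\xi ):=w(s,\xi +c(t-\tau +s))\), the problem defining w becomes

$$\begin{aligned} W_s=DW_{\xi \xi }+cW_{\xi }-\beta W\ \text{ for }\ \xi<0,\ \ \ W(s,0)=0,\ \ \ W(0,\xi )=f(U(\xi )). \end{aligned}$$

The substitution \(W(s,\xi )=e^{-(\beta +\frac{c^2}{4D})s-\frac{c}{2D}\xi }V(s,\xi )\) reduces this to the heat equation \(V_s=DV_{\xi \xi }\) on \(\xi <0\) with \(V(s,0)=0\), whose solution is given by the reflected heat kernel \(G(s,\xi -x)-G(s,\xi +x)\). Undoing the substitution and taking \(s=\tau \) yields

$$\begin{aligned} W(\tau ,\xi )=\int _{-\infty }^0 e^{-\beta \tau -\frac{c^2}{4D}\tau -\frac{c}{2D}(\xi -x)}\big [G(\tau ,\xi -x)-G(\tau ,\xi +x)\big ]f(U(x))dx=\int _{-\infty }^0\mathcal {K}(c,\xi ,x)f(U(x))dx, \end{aligned}$$

which is exactly the nonlocal term in (1.6).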

Theorem 1.3

(Spreading speed) Assume that (H) holds. Let \((u, g, h)\) be a solution satisfying Theorem 1.2 (i). Then

$$\begin{aligned} -\lim \limits _{t\rightarrow \infty } \frac{g(t)}{t}=\lim \limits _{t\rightarrow \infty } \frac{h(t)}{t}=c^*, \end{aligned}$$
(1.7)

where \((c^*,U^{c^*})\) is the unique solution of (1.6).

The rest of the paper is organised as follows. In Sect. 2, we first explain how problem (P) can be deduced from some reasonable biological assumptions, and then give a few comparison results for (P) to be used later in the paper. The main technical part of this section is the proof of the well-posedness of (P) (Theorem 1.1), which follows existing strategies but requires considerable changes. Section 3 examines the long-time behaviour of the solution of (P), which relies on a good understanding of the corresponding problem over a fixed interval and involves a nonlocal eigenvalue problem. The latter is treated in Sect. 3.1, while the former is the main task of Sect. 3.2. Based on these preparations, we obtain sufficient conditions for the solution of (P) to vanish in Sect. 3.3 and sufficient conditions for spreading to happen in Sect. 3.4, where the spreading–vanishing dichotomy (Theorem 3.6) is also proved. These pave the way for completing the proof of Theorem 1.2 in Sect. 3.5. The approach in Sect. 3 is based mainly on comparison arguments involving various innovative constructions of sub- and supersolutions. Section 4 is devoted to finding the spreading speed when spreading is successful, and is perhaps one of the most innovative parts of the paper. We first introduce a semi-wave problem based on a heuristic analysis, and we then prove that this problem has a unique solution, namely a semi-wave with profile \(U^{c^*}\) and speed \(c^*\). This is the content of Sect. 4.1, where a completely new approach is used; in particular, it involves the introduction of a sequence of bistable problems which converge to the monostable problem at hand, and the traveling waves of these auxiliary bistable problems are used to construct subsolutions of our semi-wave problem. In Sect. 4.2, we show that the semi-wave profile \(U^{c^*}\) can be suitably modified to produce super- and subsolutions of problem (P) that eventually yield the spreading speed, which is precisely \(c^*\), as stated in Theorem 1.3.

2 Model formulation, comparison principle and well-posedness

2.1 Model formulation

To formulate problem (P), we start from the age-structured population growth law

$$\begin{aligned} p_t+p_a=D(a)p_{xx}-d(a)p, \end{aligned}$$
(2.1)

where \(p=p(t,x;a)\) denotes the density of the species at age a, time t and location x, while D(a) and d(a) denote the diffusion rate and death rate of individuals of age a, respectively.

We assume that the species has the following biological characteristics:

  1. (A1)

    The species can be classified into two stages according to age: mature and immature. An individual at time t belongs to the mature class if and only if its age exceeds the maturation time \(\tau >0\). Within each stage, all individuals have the same diffusion rate and death rate.

  2. (A2)

    The immature population moves in space within the habitat of the mature population, but does not contribute to the expansion of the habitat.

The total mature population density u at time t and location x is then given by the integral

$$\begin{aligned} u(t,x)=\int _\tau ^\infty p(t,x;a)da. \end{aligned}$$
(2.2)

We assume that the mature population u lives in the habitat [g(t), h(t)] and vanishes outside of it, so that

$$\begin{aligned} u(t,x)=0,\quad t>0,\ x\not \in (g(t),h(t)); \end{aligned}$$
(2.3)

moreover, the habitat expands according to the Stefan type moving boundary conditions:

$$\begin{aligned} h'(t)=-\mu u_x(t,h(t)),\ \ g'(t)=-\mu u_x(t,g(t)), \quad t>0, \end{aligned}$$
(2.4)

where \(\mu \) is a given positive constant. The equations in (2.4) can be deduced from some reasonable biological assumptions as in [4], where it is assumed that certain sacrifices (in terms of population loss at the range boundary) are made by the species in order to expand the population range, with \(1/\mu \) proportional to this loss.

By (A2), the immature population also lives in [g(t), h(t)] and vanishes outside of it. However, the immature population disperses passively over the range of the adult population, contributing nothing to the expansion of [g(t), h(t)]. Since in many species the sacrifices made to expand the population range are borne mostly by the adults in raising and protecting the young, it appears reasonable to assume that the young do not contribute to the expansion of the population range.

According to (A1) we may assume that

$$\begin{aligned} D(a)=\left\{ \begin{array}{ll} 1,&{} a\geqslant \tau ,\\ D, &{} 0\leqslant a<\tau , \end{array} \right. \ \ \ \ d(a)=\left\{ \begin{array}{ll} \alpha ,&{} a\geqslant \tau ,\\ \beta , &{} 0\leqslant a<\tau , \end{array} \right. \end{aligned}$$

where D, \(\alpha \) and \(\beta \) are three positive constants. Differentiating both sides of (2.2) with respect to t, and using (2.1) with \(D(a)=1\) and \(d(a)=\alpha \) for \(a\geqslant \tau \), we obtain

$$\begin{aligned} u_t= & {} \int _\tau ^\infty p_t da = \int _\tau ^\infty [-p_a+ p_{xx}-\alpha p]da\nonumber \\= & {} u_{xx}-\alpha u+p(t,x;\tau ) -p(t,x;\infty ). \end{aligned}$$
(2.5)

Since no individual lives forever, it is natural to assume that

$$\begin{aligned} p(t,x;\infty )=0. \end{aligned}$$
(2.6)

To obtain a closed model, one then needs to express \(p(t,x;\tau )\) in terms of u. Note that \(p(t,x;\tau )\) represents the newly matured population at time t, grown from the newborns at time \(t-\tau \). In other words, there is an evolution relation between the quantities \(p(t,x;\tau )\) and \(p(t-\tau ,x;0)\). This relation is governed by the growth law (2.1) for \(0<a<\tau \), and hence is given by the time-\(\tau \) solution map of the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} w_s=D w_{xx}-\beta w, &{} s\in (0,\tau ],\ x\in (g(s+t-\tau ),h(s+t-\tau )),\\ w(s,x)=0, &{} s\in (0,\tau ],\ x= g(s+t-\tau ) \text{ or } h(s+t-\tau ),\\ w(0,x)=p(t-\tau ,x;0), &{} x\in [g(t-\tau ),h(t-\tau )]. \end{array} \right. \end{aligned}$$
(2.7)

Further, if b(u) is the birth rate function of the mature population and \(f(u)=b(u)u\), then

$$\begin{aligned} p(t-\tau , x;0)=f(u(t-\tau , x)). \end{aligned}$$

Thus problem (2.7) can be formulated as an initial boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} w_s=D w_{xx}-\beta w, &{} s\in (0,\tau ],\ x\in (g(s+t-\tau ),h(s+t-\tau )),\\ w(s,x)=0, &{} s\in (0,\tau ],\ x\in \{g(s+t-\tau ),h(s+t-\tau )\},\\ w(0,x)=f(u(t-\tau ,x)), &{} x\in [g(t-\tau ),h(t-\tau )]. \end{array} \right. \end{aligned}$$
(2.8)

If we regard \((u, g, h)\) as given and denote the unique solution of (2.8) by \(w(s,x;t)\), then

$$\begin{aligned} p(t,x;\tau )=w(\tau ,x; t). \end{aligned}$$
(2.9)

Combining (2.3)–(2.6) and (2.9), we are led to the following:

$$\begin{aligned} \left\{ \begin{array}{lll} u_t =u_{xx} - \alpha u + w(\tau ,x;t), &{} t>0,\ x\in (g(t),h(t)),\\ u(t,x)=0, &{} t>0,\ x\in \{g(t),h(t)\},\\ u(0, x)=\phi (0, x), &{} x\in [g(0), h(0)], \end{array} \right. \end{aligned}$$
(2.10)

and

$$\begin{aligned}{\left\{ \begin{array}{ll} g'(t)=-\mu u_x(t,g(t)),\ h'(t)=-\mu u_x(t,h(t)), \ t>0,\\ u(\theta ,x)=\phi (\theta , x),\; \theta \in [-\tau , 0),\; x\in [g(\theta ), h(\theta )]. \end{array}\right. } \end{aligned}$$

which together are equivalent to problem (P).

By the maximum principle and the Hopf boundary lemma, it is easily seen from (2.10) that \(u_x(t,g(t))>0>u_x(t,h(t))\) for \(t>0\), and hence \(g'(t)<0<h'(t)\); namely, the habitat is expanding for \(t\geqslant 0\). Therefore it is natural to assume that

$$\begin{aligned}{}[g(\theta ), h(\theta )]\subset [g(0), h(0)]\ \ \ \text{ for } \ \theta \in [-\tau ,0], \end{aligned}$$

which is the aforementioned compatibility condition (1.5).

2.2 Comparison principle

In this subsection, we give some comparison principles, which will be used in the rest of this paper.

Lemma 2.1

Suppose that (H) holds, \(T\in (0,\infty )\), \(\overline{g}, \overline{h}\in C^1([-\tau ,T])\), \(\overline{u}, \overline{w}\in C(\overline{D}_T)\cap C^{1,2}(D_T)\) with \(D_T:=(-\tau , T]\times (\overline{g},\overline{h})\), and

$$\begin{aligned} \left\{ \begin{array}{lll} \overline{u}_{t} \geqslant \overline{u}_{xx} -\alpha \overline{u}+\overline{w}(t,x),\; &{} 0<t \leqslant T,\ \overline{g}(t)<x<\overline{h}(t), \\ \overline{u}= 0,\quad \overline{g}'(t)\leqslant -\mu \overline{u}_x,\quad &{} 0<t \leqslant T, \ x=\overline{g}(t),\\ \overline{u}= 0,\quad \overline{h}'(t)\geqslant -\mu \overline{u}_x,\quad &{}0<t \leqslant T, \ x=\overline{h}(t), \end{array} \right. \end{aligned}$$

where \(\overline{w}(t,x)=v(\tau ,x; t)\) with \(v(s,x; t)=v(s,x)\) satisfying

$$\begin{aligned} \left\{ \begin{array}{lll} v_s \geqslant Dv_{xx} -\beta v,\; &{} s\in (0,\tau ],\ x\in (\overline{g}(t-\tau +s),\overline{h}(t-\tau +s)), \\ v\geqslant 0, &{} s\in (0,\tau ], \ x=\overline{g}(t-\tau +s),\overline{h}(t-\tau +s),\\ v(0,x)\geqslant f(\overline{u}(t-\tau ,x)), &{} x\in [\overline{g}(t-\tau ), \overline{h}(t-\tau )]. \end{array} \right. \end{aligned}$$

If \((u, g, h)\) is a solution to (P) with

$$\begin{aligned}{}[g(\theta ), h(\theta )]\subseteq [\overline{g}(\theta ), \overline{h}(\theta )] \quad \hbox {and} \quad u(\theta , x)\leqslant \overline{u}(\theta ,x)\quad \hbox {for} \quad \theta \in [-\tau ,0], x\in [g(\theta ), h(\theta )], \end{aligned}$$

then

$$\begin{aligned}{}[g(t),h(t)]\subseteq (\overline{g}(t),\overline{h}(t)) \quad \hbox {and}\quad u(t,x)\leqslant \overline{u}(t,x)\quad \hbox {for} \quad t\!\in \! (0, T], \quad x\!\in \! (g(t), h(t)). \end{aligned}$$

Lemma 2.2

Suppose that (H) holds, \(T\in (0,\infty )\), \(\overline{g},\, \overline{h}\in C^1([-\tau ,T])\), \(\overline{u}, \overline{w}\in C(\overline{D}_T)\cap C^{1,2}(D_T)\) with \(D_T=( -\tau ,T]\times (\overline{g}, \overline{h})\), and

$$\begin{aligned} \left\{ \begin{array}{lll} \overline{u}_{t} \geqslant \overline{u}_{xx} -\alpha \overline{u}+\overline{w}(t,x), &{}0<t \leqslant T,\ \overline{g}(t)<x<\overline{h}(t), \\ \overline{u}\geqslant u, &{}0<t \leqslant T, \ x= \overline{g}(t),\\ \overline{u}= 0,\quad \overline{h}'(t)\geqslant -\mu \overline{u}_x,\quad &{}0<t \leqslant T, \ x=\overline{h}(t), \end{array} \right. \end{aligned}$$

where \(\overline{w}(t,x)=v(\tau ,x; t)\) with \(v(s,x; t)=v(s,x)\) satisfying

$$\begin{aligned} \left\{ \begin{array}{lll} v_s \geqslant D v_{xx} -\beta v,\; &{} s\in (0,\tau ],\ x\in (\overline{g}(t-\tau +s),\overline{h}(t-\tau +s)), \\ v(s,x)\geqslant w(s,x; t), &{} s\in (0,\tau ], \ x=\overline{g}(t-\tau +s),\\ v \geqslant 0, &{} s\in (0,\tau ], \ x=\overline{h}(t-\tau +s),\\ v(0,x)\geqslant f(\overline{u}(t-\tau ,x)), &{} x\in (\overline{g}(t-\tau ), \overline{h}(t-\tau )), \end{array} \right. \end{aligned}$$

and

$$\begin{aligned}{} & {} \overline{g}(t)\geqslant g(t) \hbox { in }[-\tau ,T],\quad h(\theta )\leqslant \overline{h}(\theta )\hbox { in }[-\tau ,0],\\{} & {} u(\theta ,x)\leqslant \overline{u}(\theta ,x)\hbox { for } \theta \in [-\tau ,0], x\in [\overline{g}(0),h(0)], \end{aligned}$$

where \((u, g, h)\) solves (P) and w solves (Q). Then

$$\begin{aligned} h(t)\leqslant \overline{h}(t)\hbox { in }(0, T],\quad u(t,x)\leqslant \overline{u}(t,x)\hbox { for }t\in (0, T]\hbox { and } \overline{g}(t)<x< h(t). \end{aligned}$$

The proof of Lemma 2.1 is a simple modification of those of Lemma 5.7 in [9] and Lemma 2.3 in [22], and with some further minor changes of this proof, one obtains Lemma 2.2.

Remark 2.3

The function \(\overline{u}\), or the triple \((\overline{u},\overline{g},\overline{h})\), in Lemmas 2.1 and 2.2 is often called an upper solution to (P). A lower solution can be defined analogously by reversing all the inequalities. There is a symmetric version of Lemma 2.2, where the conditions on the left and right boundaries are interchanged. We also have corresponding comparison results for lower solutions in each case.

2.3 Well-posedness

We employ the Banach and Schauder fixed point theorems to establish the local existence and uniqueness of a solution to (P); we then extend the solution to all time by an estimate on the free boundaries.

Theorem 2.4

(Local existence) Assume that (H) holds. Then for any \(\gamma \in (0,1)\), there exists \(T>0\) such that problem (P), with initial data \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfying (1.4) and (1.5), admits a unique solution \((u, g, h)\) for \(t\in [0, T]\) with

$$\begin{aligned} u\in W^{1,2}_p([0,T]\times [g, h])\cap C^{\frac{1+\gamma }{2}, 1+\gamma }([0,T]\times [g, h]),\ \; g, h\in C^{1+\frac{\gamma }{2}}([0,T]), \end{aligned}$$

for some \( p>1\).

Proof

We use a change of variables to transform problem (P) into a problem with straightened boundaries but a more complicated differential operator, as in [6, 9]. Denote \(g_0:=g(0)\) and \(h_0:=h(0)\) for convenience, and set \(l_0:=\frac{1}{2}(h_0-g_0)\). Let \(\xi _{1}(y)\) and \(\xi _{2}(y)\) be two nonnegative functions in \(C^{3}(\mathbb {R})\) such that

$$\begin{aligned}{} & {} \xi _{1}(y)=1\ \text{ if }\ | y-g_0|< \frac{l_{0}}{4},\ \xi _{1}(y)=0\ \text{ if } \ |y-g_0|> \frac{l_{0}}{2},\ |\xi _{1}'(y)|<\frac{6}{l_{0}}\ \text{ for }\ y\in \mathbb {R};\\{} & {} \xi _{2}(y)=1\ \text{ if }\ | y-h_0|< \frac{l_{0}}{4},\ \xi _{2}(y)=0\ \text{ if }\ | y-h_0| > \frac{l_{0}}{2},\ |\xi _{2}'(y)| < \frac{6}{l_{0}}\ \text{ for }\ y\in \mathbb {R}. \end{aligned}$$

For \(0<T\leqslant \min \big \{\frac{h_{0}-g_0}{16(1 +\mu \phi _x(0,g_0)-\mu \phi _x(0,h_0))},\ \tau \big \}\), we define

$$\begin{aligned}&\mathcal {D}^{g}_{T}=\{g\in C^{1}([0,T]):\ g(0)=g_0,\ g'(0)=-\mu \phi _x(0,g_0),\ \Vert g'-g'(0)\Vert _{C([0,T])} \leqslant 1\},\\&\mathcal {D}^{h}_{T}=\{h\in C^{1}([0,T]):\ h(0)=h_0,\ h'(0)=-\mu \phi _x(0,h_0),\ \Vert h'-h'(0)\Vert _{C([0,T])} \leqslant 1\}. \end{aligned}$$

Clearly, \(\mathcal {D}_T:=\mathcal {D}^{g}_{T}\times \mathcal {D}^{h}_{T}\) is a bounded, closed and convex subset of \(C^1([0,T])\times C^1([0,T])\).

For each pair \((g,h)\in \mathcal D_T\), we can define \(y= y(t,x)\) for \(t\in [0, T]\) through the identity

$$\begin{aligned} x=x(t,y):=y+\xi _{1}(y)(g(t)-g_0)+\xi _{2}(y)(h(t)-h_0), \end{aligned}$$
(2.11)

which clearly changes the set \([0,T]\times [g, h]\) in the (t, x) plane to \([0,T]\times [g_0, h_0]\) in the (t, y) plane.
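
Let us note why (2.11) is a legitimate change of variables. Since \(h_0-g_0=2l_0\), the supports of \(\xi _1\) and \(\xi _2\) are disjoint, and clearly \(x(t,g_0)=g(t)\), \(x(t,h_0)=h(t)\). Moreover, for \((g,h)\in \mathcal {D}_T\) and T as above, \(|g(t)-g_0|\leqslant T(1+\mu \phi _x(0,g_0))\leqslant \frac{l_0}{8}\) and similarly \(|h(t)-h_0|\leqslant \frac{l_0}{8}\), so that

$$\begin{aligned} \frac{\partial x}{\partial y}=1+\xi _{1}'(y)(g(t)-g_0)+\xi _{2}'(y)(h(t)-h_0)\geqslant 1-\frac{6}{l_0}\cdot \frac{l_0}{8}=\frac{1}{4}>0, \end{aligned}$$

and hence \(y\mapsto x(t,y)\) is a diffeomorphism from \([g_0,h_0]\) onto \([g(t),h(t)]\) for each \(t\in [0,T]\).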

If \((u(t,x), g(t), h(t))\) solves (P), then under the transformation defined above,

$$\begin{aligned} \tilde{u}(t,y):=u(t,x(t,y))=u(t,x) \end{aligned}$$

satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{u}_t -A(t,y)\tilde{u}_{yy} + B(t,y)\tilde{u}_{y}=\tilde{w}(\tau ,y; t)-\alpha \tilde{u}, &{}\quad t\in (0, T],\ y\in (g_0, h_0),\\ \tilde{u}(t,g_0)=\tilde{u}(t,h_0)=0, &{} \quad t\in (0, T],\\ \tilde{u}(0,y) =\phi (0,y),&{} \quad y\in [g_0,h_0], \end{array} \right. \end{aligned}$$
(2.12)

and

$$\begin{aligned} g'(t)=-\mu \, \tilde{u}_y(t,g_0), \ \ h'(t) = -\mu \tilde{u}_{y}(t,h_0) \ \text{ for } t\in (0, T], \end{aligned}$$
(2.13)

where

$$\begin{aligned}\begin{array}{rl} A(t,y):&{}=[1+\xi _1'(y)(g(t)-g_0)+\xi _2'(y)(h(t)-h_0)]^{-2}, \\ B(t,y):&{}=[\xi _1''(y)(g(t)-g_0)+\xi _2''(y)(h(t)-h_0)]A(t,y)^{\frac{3}{2}}-[\xi _1(y)g'(t)\\ {} &{}\quad +\xi _2(y)h'(t)]A(t,y)^{\frac{1}{2}}, \\ \tilde{w}(\tau ,y; t):&{}=w(\tau ,x(t,y); f(u(t-\tau ,\cdot )))=w(\tau ,x; f(u(t-\tau ,\cdot ))). \end{array} \end{aligned}$$
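
The coefficients A and B above come from the chain rule: writing \(x_y, x_{yy}, x_t\) for the partial derivatives of x(t, y) in (2.11), we have \(u_x=\tilde{u}_y/x_y\), \(u_{xx}=\tilde{u}_{yy}/x_y^2-\tilde{u}_yx_{yy}/x_y^3\) and \(u_t=\tilde{u}_t-\tilde{u}_yx_t/x_y\), so the equation for u transforms into the first equation of (2.12) with

$$\begin{aligned} A=\frac{1}{x_y^{2}},\ \ \ \ B=\frac{x_{yy}}{x_y^{3}}-\frac{x_t}{x_y}=x_{yy}A^{\frac{3}{2}}-\big [\xi _1(y)g'(t)+\xi _2(y)h'(t)\big ]A^{\frac{1}{2}}, \end{aligned}$$

in agreement with the formulas above (note that \(x_y>0\), so \(x_y^{-3}=A^{3/2}\)).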

To straighten the boundaries in (Q), we need to extend y(t, x) to \(t\in [-\tau , 0)\). Note that for t in this range, g(t) and h(t) are given as part of the initial data. Since no free boundary conditions are involved for t in this range, we simply define

$$\begin{aligned} y(t,x):=g_0+\frac{x-g(t)}{h(t)-g(t)}(h_0-g_0) \text{ for } t\in [-\tau , 0),\; x\in [g(t), h(t)], \end{aligned}$$

whose inverse is given by

$$\begin{aligned} x=x(t,y):=g(t)+\frac{y-g_0}{h_0-g_0}[h(t)-g(t)]. \end{aligned}$$
(2.14)

We define

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{u}(t,y):=\phi (t, x(t,y)) \text{ for } t\in [-\tau , 0),\; y\in [g_0, h_0], \\ A(t,y):=\left[ \frac{h(t)-g(t)}{h_0-g_0}\right] ^2 \text{ for } t\in [-\tau , 0),\\ B(t,y):=\frac{y-g_0}{h(t)-g(t)}[h'(t)-g'(t)]+\frac{h_0-g_0}{h(t)-g(t)}g'(t) \text{ for } t\in [-\tau , 0),\\ \tilde{w}(s, y; t):=w(s, x(s+t-\tau ,y); f(u(t-\tau ,\cdot ))) \text{ for } s\in [0, \tau ], \ t\in [0, T]. \end{array}\right. } \end{aligned}$$
(2.15)

Then \(\tilde{w}(s,y; t)\) satisfies, for every \(t\in [0, T]\),

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{w}_s -DA(s+t-\tau ,y)\tilde{w}_{yy} + B(s+t-\tau ,y)\tilde{w}_{y}=-\beta \tilde{w}, &{} s\in (0,\tau ],\ y\in (g_0, h_0),\\ \tilde{w}(s,g_0;t)=\tilde{w}(s,h_0; t)=0, &{} s\in (0,\tau ],\\ \tilde{w}(0,y; t) =f(\tilde{u}(t-\tau ,y)), &{} y\in [g_0, h_0]. \end{array} \right. \end{aligned}$$
(2.16)

Let us note that A(t, y) is Lipschitz continuous in \([-\tau , T]\times [g_0, h_0]\), while B(t, y) is continuous and bounded in \(([-\tau , T]\setminus \{0\})\times [g_0, h_0]\) with a jump discontinuity at \(t=0\).

For any given \(\gamma \in (0,1)\) and U(t, y) in \(C([0,T]\times [g_0, h_0])\), extended to \(t\in [-\tau , 0)\) by

$$\begin{aligned} U(\theta , y)=\phi (\theta , x(\theta , y)) \text{ for } \theta \in [-\tau , 0],\; y\in [g_0, h_0], \end{aligned}$$

problem (2.16) with \(f(\tilde{u}(t-\tau ,y))\) replaced by \(f({U}(t-\tau ,y))\) has a unique solution \(\tilde{W}(s,y; t)\) in \(W_p^{1,2}([0,\tau ]\times [g_0, h_0])\hookrightarrow C^{\frac{1+\gamma }{2},{1+\gamma }}([0,\tau ]\times [g_0, h_0])\), provided that p is sufficiently large.

With \(\tilde{W}\) obtained above, (2.12) with \(\tilde{w}(\tau ,y; t)\) replaced by \(\tilde{W}(\tau ,y; t)\) has a unique solution \(\tilde{U}(t,y)\) in \(W_p^{1,2}([0,T]\times [g_0, h_0])\hookrightarrow C^{\frac{1+\gamma }{2},{1+\gamma }}([0,T]\times [g_0, h_0])\).

This defines an operator \(\mathcal K: C([0,T]\times [g_0,h_0])\rightarrow C^{\frac{1+\gamma }{2},{1+\gamma }}([0,T]\times [g_0, h_0])\) by

$$\begin{aligned} \mathcal K[U](t,y):=\tilde{U}(t,y). \end{aligned}$$

Using the extension trick in [26], the \(L^p\) estimates, the Sobolev embedding theorem and the Banach fixed point theorem, it can be shown (as in [26]) that \(\mathcal K\) has a unique fixed point in a suitable subset of \(C([0,T]\times [g_0, h_0])\), provided that \(T>0\) is sufficiently small, say \(T\in (0, T_0]\). We denote this fixed point by \(\tilde{u}(t,y)\), and extend it to \(t\in [-\tau , 0]\) by

$$\begin{aligned} \tilde{u}(\theta , y)=\phi (\theta , x(\theta , y)) \text{ for } \theta \in [-\tau , 0],\; y\in [g_0, h_0]. \end{aligned}$$

Let us note that with \(U=\tilde{u}\), the above obtained \(\tilde{W}(s,y; t)\) solves the original (2.16) and so if we denote this special \(\tilde{W}(s,y; t)\) by \(\tilde{w}(s,y; t)\), then the pair \((\tilde{u},\tilde{w})\) solves (2.12) (for \(t\in [0, T]\)) and (2.16) simultaneously. Moreover,

$$\begin{aligned} \Vert \tilde{w}(\cdot ; t)\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0,\tau ]\times [g_0, h_0])}\leqslant C_0 \end{aligned}$$
(2.17)

and

$$\begin{aligned} \Vert \tilde{u}\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([-\tau ,T]\times [g_0, h_0])}\leqslant C_{1}, \end{aligned}$$
(2.18)

where \({C}_0\) and \( C_1\) are positive constants depending on \( g|_{[-\tau ,0]}\), \(h|_{[-\tau ,0]}\), \(\gamma \) and \(\phi \), but independent of \(t, T\in (0, T_0]\) and \((g,h)\in \mathcal D_T\).

We now define \(\tilde{g}(t)\) and \(\tilde{h}(t)\) for \(t\in [0, T]\) by

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{g}(t):=g_0-\int _0^t \mu \tilde{u}_{y}(s, g_0)ds,\\ \tilde{h}(t):=h_0-\int _0^t \mu \tilde{u}_{y}(s, h_0)ds. \end{array}\right. } \end{aligned}$$

Then clearly

$$\begin{aligned} \tilde{g}'(t)=-\mu \tilde{u}_{y}(t, g_0),\ \tilde{g}(0)=g_0,\ \tilde{g}'(0)=-\mu \phi _{y}(0, g_0), \end{aligned}$$

and thus \(\tilde{g}'\in C^{\frac{\gamma }{2}}([0,T])\) and

$$\begin{aligned} \Vert \tilde{g}'\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant \mu C_1=:C_2. \end{aligned}$$
(2.19)

Similarly, \(\tilde{h}'\in C^{\frac{\gamma }{2}}([0,T])\) and

$$\begin{aligned} \Vert \tilde{h}'\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant C_2. \end{aligned}$$
(2.20)

Therefore, for any \(T\in (0, T_0]\) and any given pair \((g,h)\in \mathcal {D}_T\), we can define an operator \( \mathcal {F}\) by

$$\begin{aligned} \mathcal {F}(g,h)=(\tilde{g},\tilde{h}). \end{aligned}$$

From the above discussion, it is easily seen that \(\mathcal {F}\) is completely continuous in \(\mathcal {D}_T\), and \((g,h)\in \mathcal {D}_T\) is a fixed point of \(\mathcal {F}\) if and only if the corresponding \((\tilde{u},\tilde{g},\tilde{h})\) solves (2.12)–(2.13) for \(t\in [0, T]\). We will show, by means of the Schauder fixed point theorem, that \(\mathcal {F}\) has a fixed point provided \(T>0\) is small enough.

Firstly, it follows from (2.19) and (2.20) that

$$\begin{aligned} \Vert \tilde{h}'-\tilde{h}'(0)\Vert _{C([0,T])}\leqslant C_{2}T^{\frac{\gamma }{2}},\ \Vert \tilde{g}'-\tilde{g}'(0)\Vert _{C([0,T])}\leqslant C_{2}T^{\frac{\gamma }{2}}. \end{aligned}$$

Thus if we choose \(T\leqslant T_1:=\min \big \{T_0,\ C^{-\frac{2}{\gamma }}_{2}\big \}\), then \(\mathcal {F}\) maps the closed convex set \(\mathcal {D}_T\) into itself. Consequently, \(\mathcal {F}\) has at least one fixed point by the Schauder fixed point theorem, which implies that (2.12)–(2.13) has at least one solution \((\tilde{u},\tilde{g},\tilde{h})\) defined on [0, T].

We now prove the uniqueness of such a solution. Let \((u_i,g_i,h_i)\) (\(i=1,2\)) be two solutions of (P) (for \(t\in [0, T]\)), let \(w_i\) be the corresponding solutions of (Q), and set

$$\begin{aligned} \tilde{u}_i(t,y)&:=u_i(t,y+\xi _{1}(y)(g_i(t)-g_0)+\xi _{2}(y)(h_i(t)-h_0)),\\ \tilde{w}_i(s,y; t)&:=w_i(s,y+\xi _{1}(y)(g_i(s+t-\tau )-g_0)+\xi _{2}(y)(h_i(s+t-\tau )-h_0)). \end{aligned}$$

Then it follows from (2.17)–(2.20) that, for \(i=1,2\) and \(t\in [0, T]\),

$$\begin{aligned}{} & {} \Vert \tilde{w}_i(\cdot ; t)\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0,\tau ]\times [g_0, h_0])}\leqslant C_{0},\\{} & {} \Vert \tilde{u}_i\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0,T]\times [g_0, h_0])}\leqslant C_{1},\\{} & {} \Vert h'_i\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant C_{2},\ \ \Vert g'_i\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant C_{2}. \end{aligned}$$

Set

$$\begin{aligned} \hat{w}:=\tilde{w}_{1}-\tilde{w}_{2},\ \ \hat{u}:=\tilde{u}_{1}-\tilde{u}_{2},\ \ \hat{g}(t):=g_1(t)-g_2(t),\ \text{ and } \ \hat{h}(t):=h_1(t)-h_2(t). \end{aligned}$$

Then we find that for any \(t\in [0,T]\) (noting that \(T\leqslant T_1\leqslant \tau \)),

$$\begin{aligned} \left\{ \begin{array}{ll} \hat{w}_{s} -DA_{2}(s+t-\tau ,y)\hat{w}_{yy} + B_{2}(s+t-\tau ,y)\hat{w}_{y}\\ \qquad =\!F_1(s\!+\!t\!-\!\tau ,y)\!-\! \beta \hat{w}, &{} s\in (0,\tau ], y\in (g_0, h_0),\\ \hat{w}(s,g_0)=\hat{w}(s, h_0)= 0, &{} s\in (0,\tau ],\\ \hat{w}(0,y) =0, &{} y\in [g_0, h_0], \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} F_1=D(A_{1}-A_{2})(\tilde{w}_1)_{yy}-(B_{1}-B_{2})(\tilde{w}_1)_{y}, \end{aligned}$$

and \(A_i\), \(B_i\) are the coefficients of problem (2.16) with \((g_i,h_i)\) in place of (g, h). We can apply the \(L^p\) estimates for parabolic equations to deduce that, for \(t\in [0, T]\),

$$\begin{aligned} \Vert \hat{w}(\cdot ; t)\Vert _{W^{1,2}_p([0,\tau ]\times [g_0, h_0])}\leqslant C_4 (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}) \end{aligned}$$
(2.21)

with \(C_4\) depending on \(C_0\), \(C_1\) and \(C_2\).

It is easy to see that \(\hat{u}(t,y)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \hat{u}_{t} -A_{2}(t,y)\hat{u}_{yy} + B_{2}(t,y)\hat{u}_{y}=F_2(t,y)- \alpha \hat{u}, &{} y\in (g_0, h_0),\ t\in (0, T],\\ \hat{u}(t,g_0)=\hat{u}(t,h_0)= 0, &{} t\in (0, T],\\ \hat{u}(0,y) =0 ,&{} y\in [g_0, h_0], \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} F_2=(A_{1}-A_{2})(\tilde{u}_1)_{yy}-(B_{1}-B_{2})(\tilde{u}_1)_{y}+\hat{w}. \end{aligned}$$

Thanks to (2.21), we can apply the extension trick of [26], the \(L^p\) estimates for parabolic equations, and the Sobolev embedding theorem much as before, to deduce that

$$\begin{aligned} \Vert \hat{u}\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0, T]\times [g_0, h_0])}\leqslant C_5 (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}) \end{aligned}$$
(2.22)

with \(C_5\) depending on \(C_0\), \(C_1\), \(C_2\) and \(C_4\), but independent of \(T\in (0, T_1]\).

Since \(\hat{h}'(0)=h'_{1}(0)-h'_{2}(0)=0\), we have

$$\begin{aligned} \Vert \hat{h}'\Vert _{C^{\frac{\gamma }{2}}([0,T])}=\mu \Vert \hat{u}_{y}(\cdot , h_0)\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant \mu \Vert \hat{u}\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0, T]\times [g_0, h_0])}. \end{aligned}$$

This, together with (2.22), implies that

$$\begin{aligned} \Vert \hat{h}\Vert _{C^1([0,T])}\leqslant 2T^{\frac{\gamma }{2}} \Vert \hat{h}'\Vert _{C^{\frac{\gamma }{2}}([0,T])}\leqslant C_6T^{\frac{\gamma }{2}} (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}), \end{aligned}$$

where \(C_6=2\mu C_5\). Similarly, we have

$$\begin{aligned} \Vert \hat{g}\Vert _{C^1([0,T])}\leqslant C_6T^{\frac{\gamma }{2}} (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}). \end{aligned}$$

As a consequence, we deduce that

$$\begin{aligned} \Vert \hat{g}\Vert _{C^1([0,T])} +\Vert \hat{h}\Vert _{C^1([0,T])} \leqslant 2C_6 T^{\frac{\gamma }{2}} (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}). \end{aligned}$$

Hence for

$$\begin{aligned} T\leqslant T_2:=\min \Big \{ T_1,\ (4C_6)^{-\frac{2}{\gamma }}\Big \}, \end{aligned}$$

we have

$$\begin{aligned} \Vert \hat{g}\Vert _{C^1([0,T])} +\Vert \hat{h}\Vert _{C^1([0,T])} \leqslant \frac{1}{2} (\Vert \hat{g}\Vert _{C^1([0,T])}+\Vert \hat{h}\Vert _{C^1([0,T])}). \end{aligned}$$

This shows that \(\hat{g}\equiv 0 \equiv \hat{h}\) for \(0\leqslant t\leqslant T\); thus \(F_1\equiv 0\), which implies \(\hat{w}\equiv 0\); then \(F_2\equiv 0\) and hence \(\hat{u}\equiv 0\). Consequently, the local solution of (P) is unique, which ends the proof of this theorem.

Theorem 2.5

Assume that (H) holds. Then the local solution \((u,g,h)\) of problem (P) can be extended to all \(t\in (0, \infty )\).

Proof

Fix a \(\gamma \in (0,1)\) and let \([0, T_{max})\) be the maximal time interval in which the solution as described in Theorem 2.4 exists. In view of Theorem 2.4, we have \(T_{max}>0\). Using an indirect argument, we assume that \(T_{max}<\infty \).

Thanks to the choice of the initial data, we can use the comparison principle to bound the solution by the corresponding ODE problems to obtain

$$\begin{aligned} u(t,x)\leqslant K:=u^*+\Vert \phi \Vert _{L^\infty ([-\tau ,0]\times [g,h])}\ \text{ for } t\in [0,T_{max}),\ x\in [g(t),h(t)], \end{aligned}$$

and for fixed \(t\in (0, T_{max})\),

$$\begin{aligned} w(s,x; f(u(t-\tau ,\cdot )))\leqslant f'(0) K \ \text{ for } s\in [0,\tau ],\ x\in [g(s+t-\tau ), h(s+t-\tau )]. \end{aligned}$$

To bound g(t) and h(t), we construct two auxiliary functions

$$\begin{aligned}{\left\{ \begin{array}{ll} \bar{u}(t,x):=K\big [2M(h(t)-x)-M^{2}(h(t)-x)^{2}\big ]\\ \bar{w}(t,x):=f'(0)K\end{array}\right. } \text{ for } (t,x)\in [-\tau ,T_{max})\times [h(t)-M^{-1},h(t)], \end{aligned}$$

where

$$\begin{aligned} M:=\max \left\{ \sqrt{\frac{f'(0)}{2}},\ \frac{2}{h(-\tau )-g(-\tau )},\ \frac{4}{3K}\max _{-\tau \leqslant \theta \leqslant 0}\Vert \phi (\theta ,\cdot )\Vert _{C^1([g(\theta ),h(\theta )])}\right\} . \end{aligned}$$

Clearly

$$\begin{aligned} K\geqslant \bar{u}(t,x)\geqslant KM(h(t)-x) \text{ for } x\in [h(t)-M^{-1}, h(t)],\ t\in [-\tau , T_{max}), \end{aligned}$$

and

$$\begin{aligned} \phi (\theta , x)\leqslant \max _{-\tau \leqslant \theta \leqslant 0}\Vert \phi (\theta ,\cdot )\Vert _{C^1([g(\theta ),h(\theta )])}\,(h(\theta )-x) \text{ for } x\in [h(\theta )-M^{-1}, h(\theta )],\ \theta \in [-\tau , 0]. \end{aligned}$$

It thus follows from the definition of M that

$$\begin{aligned} \bar{u}(\theta , x)\geqslant \phi (\theta , x) \text{ for } x\in [h(\theta )-M^{-1}, h(\theta )],\ \theta \in [-\tau , 0]. \end{aligned}$$

After a simple calculation we obtain

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{u}_t-\bar{u}_{xx}+ \alpha \bar{u}-\bar{ w}(t,x)\geqslant K(2M^2-f'(0))\geqslant 0, &{} t>0,\ x\in [h(t)-M^{-1},h(t)),\\ \bar{u}(t,h(t)-M^{-1})=K\geqslant u(t,h(t)-M^{-1}), &{} t>0,\\ \bar{u}(t,h(t))=0=u(t,h(t)), &{} t>0,\\ \bar{u}(\theta ,x)\geqslant u(\theta ,x),&{} \theta \in [-\tau ,0],\ x\in [h(\theta )-M^{-1},h(\theta )], \end{array} \right. \end{aligned}$$

and for \(t>0\), \(\bar{w}(t,x)=v(\tau ,x)\) with \(v(s,x):=e^{\beta (\tau -s)}f'(0)K\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} v_s-Dv_{xx}+\beta v= 0, &{} s\in (0,\tau ],\ x\in [h(s+t-\tau )-\frac{1}{M},h(s+t-\tau )),\\ v(s,x)\geqslant w(s,x ;f(u(t-\tau ,x))), &{} s\in (0,\tau ],\ x=h(s+t-\tau )-\frac{1}{M} \text{ or } h(s+t-\tau ),\\ v(0,x)=f'(0)Ke^{\beta \tau } \geqslant f (\bar{u}(t-\tau ,x)),&{} x\in [h(t-\tau )-\frac{1}{M},h(t-\tau )]. \end{array} \right. \end{aligned}$$
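The differential inequality for \(\bar u\) displayed above can be spot-checked numerically. The following sketch is not part of the argument: \(h(t)=1+0.1t\) and all parameter values are illustrative stand-ins, chosen so that \(h'\geqslant 0\) and \(M\geqslant \sqrt{f'(0)/2}\) as required. It approximates \(\bar u_t\) and \(\bar u_{xx}\) by finite differences on the strip \([h(t)-M^{-1},h(t)]\) and confirms the lower bound \(K(2M^2-f'(0))\geqslant 0\).

```python
import math

# Hypothetical sample data (not from the paper): h(t) = 1 + 0.1 t is increasing,
# and K, M, alpha, fp0 = f'(0) satisfy 2*M**2 >= fp0, i.e. M >= sqrt(f'(0)/2).
K, M, alpha, fp0 = 2.0, 1.5, 0.3, 4.0   # 2*M**2 = 4.5 >= fp0 = 4.0
h = lambda t: 1.0 + 0.1*t

def ubar(t, x):
    # \bar u(t,x) = K [ 2M(h(t)-x) - M^2 (h(t)-x)^2 ]
    return K*(2*M*(h(t) - x) - M**2*(h(t) - x)**2)

eps = 1e-5
def check(t, x):
    # residual \bar u_t - \bar u_xx + alpha \bar u - \bar w, with \bar w = f'(0) K
    ut  = (ubar(t + eps, x) - ubar(t - eps, x)) / (2*eps)
    uxx = (ubar(t, x + eps) - 2*ubar(t, x) + ubar(t, x - eps)) / eps**2
    return ut - uxx + alpha*ubar(t, x) - fp0*K

# On the strip x in [h(t)-1/M, h(t)] (where M(h(t)-x) <= 1), the residual
# should be bounded below by K(2M^2 - f'(0)) >= 0.
for t in [0.5, 1.0, 2.0]:
    for s in [0.0, 0.25, 0.5, 0.75, 1.0]:
        x = h(t) - s/M
        assert check(t, x) >= K*(2*M**2 - fp0) - 1e-3
```

The bound uses only \(\bar u_t=2KMh'(t)(1-M(h(t)-x))\geqslant 0\), \(\bar u_{xx}=-2KM^2\) and \(\alpha \bar u\geqslant 0\), mirroring the choice of M in the proof.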

So we can apply the comparison principle to deduce that \(u(t, x)\leqslant \bar{u}(t, x)\) for \(t\in (0, T_{max})\) and \(x\in [h(t)-M^{-1},h(t)]\). It follows that \(u_x(t, h(t)) \geqslant \bar{u}_x(t, h(t)) = -2MK\), and hence

$$\begin{aligned} h'(t) =-\mu u_x(t, h(t))\leqslant C_0:= 2\mu M K. \end{aligned}$$

We can similarly prove \(-g'(t)\leqslant C_0\) for \(t\in (0, T_{max})\).

With the above estimate on \(h'(t)\) and \(g'(t)\), and the bounds

$$\begin{aligned} 0\leqslant u\leqslant K,\qquad 0\leqslant w\leqslant f'(0)K, \end{aligned}$$

we are able to show that the solution \((u,g,h)\) can be defined beyond \(t=T_{max}\).

To do so, we straighten the boundaries of (2.10) via the transformation

$$\begin{aligned} \tilde{u}(t,y):=u(t, x(t,y)) \end{aligned}$$

for \( t\in [0, T_{max}),\ y\in [g_0, h_0]\), with \(x(t,y)\) given by (2.14). Then \(\tilde{u}\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{u}_t -A(t)\tilde{u}_{yy} + B(t,y)\tilde{u}_{y}=\tilde{w}(\tau ,y; t)-\alpha \tilde{u}, &{} t\in (0, T_{max}),\ y\in (g_0, h_0),\\ \tilde{u}(t,g_0)=\tilde{u}(t,h_0)=0, &{} t\in (0, T_{max}),\\ \tilde{u}(0,y) =\phi (0,y),&{} y\in [g_0,h_0], \end{array} \right. \end{aligned}$$
(2.23)

with A and B given by the formulas in (2.15), and \(\tilde{w}(\tau ,y; t):=w(\tau , x(t,y); f( u(t-\tau ,\cdot )))\).

Applying the \(L^p\) theory to (2.23) we obtain \(\tilde{u}\in W^{1,2}_p([0,T]\times [g_0, h_0])\) for any \(p>1\) and \(T\in (T_{max}/2, T_{max})\), and by the Sobolev embedding theorem we obtain, for any \(\gamma \in (0,1)\) and some large enough \(p>1\) depending on \(\gamma \),

$$\begin{aligned} \Vert \tilde{u}\Vert _{W^{1,2}_p([0, T]\times [g_0, h_0])}+\Vert \tilde{u}\Vert _{C^{\frac{1+\gamma }{2},{1+\gamma }}([0, T]\times [g_0, h_0])}\leqslant C_{p,\gamma } \end{aligned}$$
(2.24)

for some \(C_{p,\gamma }>0\) independent of \(T\in (T_{max}/2, T_{max})\).

Choose \(t_n\in (0,T_{max})\) satisfying \(t_n\nearrow T_{max}\), and regard \((u(t_n-\theta , x), g(t_n-\theta ), h(t_n-\theta ))\) for \(\theta \in [0,\tau ]\) as the initial data. Due to (2.24) and the properties of g and h proved earlier, we can repeat the proof of Theorem 2.4 to conclude that there exists \(s_0>0\) depending on \(C_{p,\gamma }\) and f but independent of n such that problem (P) has a unique solution \((u,g,h)\) for \(t\in [t_n, t_n+s_0]\). This gives a solution \((u,g,h)\) of (P) defined for \(t\in [0,t_n+s_0]\). Since \(t_n+s_0>T_{max}\) when n is large, this contradicts the definition of \(T_{max}\), and hence we must have \(T_{max}=\infty \), as desired. The proof is complete.

3 Long time behavior of the solutions

In this section we study the asymptotic behavior of the solutions of (P).

3.1 A nonlocal eigenvalue problem

For any given \(\ell >0\), we consider the following eigenvalue problem:

$$\begin{aligned} \left\{ \begin{array}{ll} - \varphi '' +\alpha \varphi -f'(0)e^{(\lambda -\beta )\tau }\int _{-\ell }^\ell {\textbf {K}}_\ell (\tau , x-y)\varphi (y)dy=\lambda \varphi , &{} x\in (-\ell ,\ell ),\\ \varphi (\pm \ell )=0, \end{array} \right. \end{aligned}$$
(3.1)

where

$$\begin{aligned} {\textbf {K}}_\ell (\tau ,x)=\sum _{n\in \mathbb {Z}}(-1)^nG(\tau ,x-2n\ell ),\ \ \ x\in \mathbb {R}, \end{aligned}$$

with

$$\begin{aligned} G(\tau ,y)=\frac{1}{\sqrt{4\pi D \tau }}e^{-\frac{y^2}{4D\tau }},\ \ y\in \mathbb {R}. \end{aligned}$$
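The key property of this image-series kernel, used in the proof of Lemma 3.1 below, is that it maps the half-cosine \(\cos (\frac{\pi }{2\ell }x)\) to an exponentially damped copy of itself. A minimal numerical sanity check (not part of the paper; illustrative parameter values, plain trapezoid quadrature, truncated image sum):

```python
import math

# Illustrative sample values (our choice, not from the paper)
D, tau, ell = 1.0, 0.5, 1.0

def G(t, y):
    # Gaussian heat kernel G(t, y)
    return math.exp(-y*y/(4*D*t)) / math.sqrt(4*math.pi*D*t)

def K(t, x, N=10):
    # truncated image sum for K_ell; the Gaussian tails make truncation harmless
    return sum((-1)**n * G(t, x - 2*n*ell) for n in range(-N, N + 1))

def phi(x):
    # half-cosine cos(pi x / (2 ell)), the principal eigenfunction
    return math.cos(math.pi*x/(2*ell))

def conv(x, m=4000):
    # trapezoid rule for \int_{-ell}^{ell} K(tau, x - y) phi(y) dy
    step = 2*ell/m
    ys = [-ell + i*step for i in range(m + 1)]
    vals = [K(tau, x - y)*phi(y) for y in ys]
    return step*(sum(vals) - 0.5*(vals[0] + vals[-1]))

# Expected: conv(x) = exp(-D pi^2 tau / (4 ell^2)) * phi(x)
decay = math.exp(-D*math.pi**2*tau/(4*ell**2))
for x in [-0.9, -0.3, 0.0, 0.4, 0.8]:
    assert abs(conv(x) - decay*phi(x)) < 1e-6
```

This is exactly the identity \(\psi _0(t,x)=e^{-\frac{D\pi ^2}{4\ell ^2}t}\phi _\ell (x)\) exploited in the proof of Lemma 3.1.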

We note that

$$\begin{aligned} \psi (t,x;\lambda ,\varphi ):=e^{\lambda t}\int _{-\ell }^\ell {\textbf {K}}_\ell (t,x-y)\varphi (y)dy \end{aligned}$$

satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi _t=D\psi _{xx}+\lambda \psi , &{} t>0,\ x\in (-\ell , \ell ),\\ \psi (t,\pm \ell )=0,&{} t>0,\\ \psi (0,x)=\varphi (x), &{} x\in [-\ell ,\ell ]. \end{array}\right. } \end{aligned}$$

Therefore the first equation in (3.1) can be rewritten as

$$\begin{aligned} - \varphi '' +\alpha \varphi -f'(0)e^{-\beta \tau }\psi (\tau , x;\lambda ,\varphi )=\lambda \varphi . \end{aligned}$$

By the Krein–Rutman theorem and the spectral mapping theorems for semigroups, it follows from [23] that (3.1) possesses a unique principal eigenvalue, namely a real eigenvalue \(\lambda =\lambda _1^\ell \) with a positive eigenfunction \(\varphi _\ell \), unique up to a positive multiple, which we normalize by \(\Vert \varphi _\ell \Vert _\infty =1\):

$$\begin{aligned} \left\{ \begin{array}{ll} - \varphi _\ell '' +\alpha \varphi _\ell -f'(0)e^{(\lambda _1^\ell -\beta )\tau }\int _{-\ell }^\ell {\textbf {K}}_\ell (\tau ,x-y)\varphi _\ell (y)dy=\lambda _1^\ell \varphi _\ell , &{} x\in (-\ell ,\ell ),\\ \varphi _\ell (\pm \ell )=0. \end{array} \right. \end{aligned}$$
(3.2)

We have the following conclusions for \((\lambda _1^{\ell },\varphi _\ell )\).

Lemma 3.1

Assume that (H) holds. The principal eigen-pair \((\lambda ^\ell _1,\varphi _{\ell })\) of (3.1) has the following properties:

  1. (i)

    \(\varphi _\ell (x)=\cos (\frac{\pi }{2\ell }x)\).

  2. (ii)

    \(\lambda _1^\ell \) is strictly decreasing and continuous in \(\ell >0\), with \(\lambda _1^0:=\lim _{\ell \rightarrow 0}\lambda ^\ell _1=+\infty \) and \(\lambda _1^\infty :=\lim _{\ell \rightarrow \infty }\lambda _1^\ell <0\).

  3. (iii)

    There exists a unique constant \(\ell ^* = \ell ^*(f'(0), D, \alpha , \beta , \tau )>0\) such that the principal eigenvalue \(\lambda _1^\ell \) is negative (resp. 0, or positive) when \(\ell >\ell ^*\) (resp. \(\ell =\ell ^*\), or \(\ell <\ell ^*\)).

Proof

Let \(\phi _\ell (x):=\cos (\frac{\pi }{2\ell }x)\). Clearly \(\phi _\ell (x)>0\) in \((-\ell , \ell )\) and \(\phi _\ell (\pm \ell )=0\). Moreover, since the unique solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi _t=D\psi _{xx}, &{} t>0,\ x\in (-\ell , \ell ),\\ \psi (t,\pm \ell )=0,&{} t>0,\\ \psi (0,x)=\phi _\ell (x), &{} x\in [-\ell ,\ell ] \end{array}\right. } \end{aligned}$$

is given by

$$\begin{aligned} \psi _0(t,x)=e^{-\frac{D\pi ^2}{4\ell ^2}t}\phi _\ell (x), \end{aligned}$$

we obtain

$$\begin{aligned} \psi (t,x;\lambda ,\phi _\ell )=e^{\lambda t}\psi _0(t,x)=e^{\lambda t-\frac{D\pi ^2}{4\ell ^2}t}\phi _\ell (x), \end{aligned}$$

and thus

$$\begin{aligned} - \phi _\ell '' +\alpha \phi _\ell -f'(0)e^{-\beta \tau }\psi (\tau , x;\lambda ,\phi _\ell )=\left[ \frac{\pi ^2}{4\ell ^2}+\alpha -f'(0)e^{-\beta \tau -\frac{D\pi ^2}{4\ell ^2}\tau }e^{\lambda \tau }\right] \phi _\ell . \end{aligned}$$

Therefore \((\lambda ,\varphi )=(\lambda ,\phi _\ell )\) will solve (3.1) if

$$\begin{aligned} \frac{\pi ^2}{4\ell ^2}+\alpha -f'(0)e^{-\beta \tau -\frac{D\pi ^2}{4\ell ^2}\tau }e^{\lambda \tau }=\lambda , \end{aligned}$$

or, equivalently, if

$$\begin{aligned} \alpha =F_\ell (\lambda ):=-\frac{\pi ^2}{4\ell ^2}+\lambda +f'(0)e^{-\beta \tau -\frac{D\pi ^2}{4\ell ^2}\tau }e^{\lambda \tau }. \end{aligned}$$
(3.3)

Clearly \(F_\ell \) is strictly increasing and continuous on \(\mathbb {R}\), with \(F_\ell (-\infty )=-\infty \) and \(F_\ell (+\infty )=+\infty \). Therefore there exists a unique \(\lambda =\lambda (\ell )\in \mathbb {R}\) satisfying

$$\begin{aligned} \alpha =F_\ell (\lambda (\ell )). \end{aligned}$$

By the uniqueness of the principal eigen-pair \((\lambda _1^\ell , \varphi _\ell )\), we necessarily have

$$\begin{aligned} \varphi _\ell =\phi _\ell ,\; \lambda _1^\ell =\lambda (\ell ). \end{aligned}$$

This proves part (i).

For any fixed \(\lambda _0>0\),

$$\begin{aligned} \lim _{\ell \rightarrow 0}F_\ell (\lambda _0)=-\infty <\alpha . \end{aligned}$$

It follows that \(\lambda (\ell )>\lambda _0\) for all small \(\ell >0\), which implies \(\lim _{\ell \rightarrow 0}\lambda (\ell )=+\infty \).

On the other hand, from

$$\begin{aligned} \lim _{\ell \rightarrow \infty }F_\ell (0)=f'(0)e^{-\beta \tau }>\alpha , \end{aligned}$$

we see that for all large \(\ell >0\), \(\lambda (\ell )<0\). Moreover, as \(F_\infty (\lambda ):=\lim _{\ell \rightarrow \infty }F_\ell (\lambda )=\lambda +f'(0)e^{-\beta \tau }e^{\lambda \tau }\), the limit \(\lambda (\infty ):=\lim _{\ell \rightarrow \infty } \lambda (\ell )\) exists, and is the unique solution of \(\alpha =F_\infty (\lambda )\), which is negative. The conclusions in part (ii) are now proved.

Let us observe that the uniqueness of \(\lambda (\ell )\) and the continuous dependence of \(F_\ell (\lambda )\) on \(\ell \) imply that \(\ell \rightarrow \lambda (\ell )\) is continuous. Moreover, since \(\lambda \rightarrow F_\ell (\lambda )\) is strictly increasing and \(\ell \rightarrow F_\ell (\lambda )\) is also strictly increasing, \(\ell \rightarrow \lambda (\ell )\) is strictly decreasing. Therefore the conclusions in part (ii) guarantee the existence of a unique constant \(\ell ^* = \ell ^*(f'(0),D, \alpha , \beta , \tau )>0\) such that the principal eigenvalue \(\lambda (\ell )\) is negative (resp. 0, or positive) when \(\ell >\ell ^*\) (resp. \(\ell =\ell ^*\), or \(\ell <\ell ^*\)). Part (iii) is now proved.
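The proof above is constructive: \(\lambda _1^\ell \) is the unique root of \(\alpha =F_\ell (\lambda )\) from (3.3), and \(\ell ^*\) is the unique zero of \(\ell \rightarrow \lambda _1^\ell \), so both can be computed by bisection. A sketch with illustrative parameter values satisfying \(f'(0)e^{-\beta \tau }>\alpha \) (our sample choice, not from the paper):

```python
import math

# Illustrative parameters with f'(0) e^{-beta tau} > alpha, as in (H)
D, tau, alpha, beta, fp0 = 1.0, 0.5, 0.2, 0.4, 1.0

def F(ell, lam):
    # F_ell(lambda) from (3.3)
    c = math.pi**2/(4*ell**2)
    return -c + lam + fp0*math.exp(-beta*tau - D*c*tau + lam*tau)

def lam1(ell):
    # F(ell, .) is strictly increasing, so alpha = F(ell, lam) has a unique root
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(ell, mid) < alpha else (lo, mid)
    return 0.5*(lo + hi)

# lambda_1^ell is strictly decreasing in ell ...
ells = [0.5, 1.0, 2.0, 4.0, 8.0]
vals = [lam1(l) for l in ells]
assert all(a > b for a, b in zip(vals, vals[1:]))

# ... so ell^* (the unique zero of ell -> lambda_1^ell) is found by a second bisection
lo, hi = 1e-2, 1e3
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if lam1(mid) > 0 else (lo, mid)
ell_star = 0.5*(lo + hi)
assert lam1(ell_star*0.9) > 0 > lam1(ell_star*1.1)
```

The two nested bisections rely only on the monotonicity statements proved in parts (ii)–(iii).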

3.2 Positive solutions on bounded intervals

Using Lemma 3.1, we can obtain the asymptotic behavior of the solutions to

$$\begin{aligned} \left\{ \begin{array}{ll} v_t=v_{xx} -\alpha v +e^{-\beta \tau }\int _{-\ell }^\ell {\textbf {K}}_\ell (\tau ,x-y)f(v(t-\tau ,y))dy, &{} t>0,\ x\in (-\ell ,\ell ),\\ v(t, \pm \ell )=0, &{} t>0,\\ v(\theta , x)=\psi (\theta ,x)\geqslant ,\not \equiv 0, &{} \theta \in [-\tau ,0],\ x\in (-\ell ,\ell ). \end{array} \right. \end{aligned}$$
(3.4)

Lemma 3.2

Suppose (H) holds and let \(\ell ^*\) be given in Lemma 3.1. Then for \(\ell >\ell ^*\), the unique solution \(v(t,x)\) of problem (3.4) satisfies

$$\begin{aligned} \lim _{t\rightarrow \infty }|v(t,x)-U_0(x; \ell )|=0 \ \ \text{ uniformly } \text{ in } (-\ell ,\ell ), \end{aligned}$$
(3.5)

where \(U_0(x; \ell )\) is the unique positive solution of the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} V_{xx} -\alpha V +e^{-\beta \tau }\int _{-\ell }^\ell {\textbf {K}}_\ell (\tau ,x-y)f(V(y))dy=0, &{} x\in (-\ell ,\ell ),\\ V(\pm \ell )=0, \end{array} \right. \end{aligned}$$
(3.6)

which can be shown to satisfy \(0<U_0(x;\ell )< u^*\). Moreover, \(U_0(x; \ell )\) is strictly increasing in \(\ell \) and \(U_0(x; \ell ) \rightarrow u^*\) as \(\ell \rightarrow \infty \) in \(L^\infty _{loc}(\mathbb {R})\). When \(\ell \leqslant \ell ^*\), the unique solution \(v(t,x)\) of problem (3.4) satisfies \(v(t,x)\rightarrow 0\) uniformly in \((-\ell ,\ell )\) as \(t\rightarrow \infty \).

Proof

We first prove that when \(\ell >\ell ^*\) the problem (3.6) admits a unique positive solution. We shall use the sub-supersolution argument to establish its existence. Obviously, \(\bar{v}=u^*\) is a supersolution to (3.6). To construct a positive subsolution, we recall from Lemma 3.1 that if \(\ell >\ell ^*\), the principal eigenvalue \(\lambda _1^{\ell }\) of (3.1) is negative, whose corresponding positive eigenfunction is \(\varphi =\cos (\frac{\pi }{2\ell }x)\). Set

$$\begin{aligned} \underline{v}=\delta \varphi \ \ \text{ for } \ x\in [-\ell , \ell ], \end{aligned}$$

where \(\delta >0\) is small such that

$$\begin{aligned} f(s)\geqslant f'(0)e^{\lambda _1^{\ell }\tau }s\ \text{ for } s\in [0, \delta ],\ \underline{v}<u^*. \end{aligned}$$
(3.7)

A simple calculation yields that for \(x\in (-\ell , \ell )\),

$$\begin{aligned}{} & {} \underline{v}_{xx}-\alpha \underline{v}+e^{-\beta \tau }\int _{-\ell }^{\ell } {\textbf {K}}_\ell (\tau ,x-y)f(\underline{v}(y))dy \\{} & {} \quad = -\lambda _1^{\ell } \underline{v} +e^{-\beta \tau }\int _{-\ell }^{\ell } {\textbf {K}}_\ell (\tau ,x-y) \big [f(\underline{v}(y))-f'(0)e^{\lambda _1^{\ell }\tau }\underline{v}(y)\big ]dy \geqslant 0, \end{aligned}$$

so \(\underline{v}\) is a positive subsolution. Hence, by a standard iteration argument, problem (3.6) with \(\ell >\ell ^*\) admits a positive solution.

We then verify the uniqueness of the positive solution to (3.6). Fix \(\ell >\ell ^*\) and suppose that problem (3.6) has two different positive solutions \(v_1\) and \(v_2\). With the help of the Hopf boundary lemma, we can find \(M_0 >1\) such that

$$\begin{aligned} M_0^{-1}v_1 \leqslant v_2 \leqslant M_0v_1\ \ \text{ in } \ (-\ell , \ell ). \end{aligned}$$

It is easily seen that \(M_0v_1\) is a supersolution of (3.6) and \(M_0^{-1}v_1\) is a subsolution. As a result, there exist a minimal and a maximal solution to (3.6) in the order interval \([M_0^{-1}v_1, M_0v_1]\), which we denote by \(v_*\) and \(v^*\), respectively. Thus \(v_*\leqslant v_i\leqslant v^*\leqslant u^*\) for \(i = 1, 2\). Hence it suffices to show that \(v_*=v^*\).

To achieve this goal, let us define

$$\begin{aligned} \varrho ^* := \sup \{\varrho \in \mathbb {R}:\ \varrho v^*\leqslant v_*\}. \end{aligned}$$

Clearly \(0<\varrho ^*\leqslant 1\) and \(\varrho ^*v^*\leqslant v_*\). We next prove \(\varrho ^*=1\), which will yield \(v_*=v^*\). Suppose for contradiction that \(\varrho ^*< 1\). Then for

$$\begin{aligned} \eta := v_*-\varrho ^* v^*, \end{aligned}$$

it is easy to check that \(\eta \geqslant ,\not \equiv 0\), \(\eta (\pm \ell ) = 0\), and \(\eta \) satisfies

$$\begin{aligned} \eta ''-\alpha \eta= & {} e^{-\beta \tau }\int _{-\ell }^{\ell } {\textbf {K}}_\ell (\tau ,x-y)[\varrho ^*f(v^*(y))-f(v_*(y))]dy \\\leqslant & {} e^{-\beta \tau }\int _{-\ell }^{\ell } {\textbf {K}}_\ell (\tau ,x-y)[f(\varrho ^*v^*(y))-f(v_*(y))]dy \leqslant 0, \end{aligned}$$

where the sub-linearity and monotonicity of f(z) for \(z\geqslant 0\) are used. Hence we can use the strong maximum principle and Hopf boundary lemma to deduce that \(\eta > 0\) in \((-\ell , \ell )\), and \(\eta '(-\ell )>0>\eta '(\ell )\). It follows that \(\eta \geqslant \epsilon v_*\) for some \(\epsilon > 0\) small, and hence \(v_*\geqslant (1 - \epsilon )^{-1}\varrho ^* v^*\), which contradicts the definition of \(\varrho ^*\). Consequently, we must have \(\varrho ^* =1\), and the uniqueness conclusion is proved.

In what follows, let us denote by \(U_0(x; \ell )\) the unique positive solution of (3.6) for \(\ell >\ell ^*\). It follows from the strong maximum principle and Hopf boundary lemma that

$$\begin{aligned} 0<U_0(x;\ell )< u^* \ \text{ in } (-\ell , \ell )\ \ \text{ and } \ \ (U_0)_x(-\ell ; \ell )>0. \end{aligned}$$

Observe that \(U_0(x; \ell _1)\) is a supersolution of (3.6) provided that \(\ell _1>\ell \). On the other hand, we can choose a small \(0<\delta <1\) so that (3.7) holds and \(\delta \varphi < U_0(x; \ell _1)\) in \((-\ell ,\ell )\), where \(\varphi \) is the unique positive eigenfunction of (3.1). Furthermore, \(\delta \varphi \) is a subsolution of (3.6). Thus, due to the uniqueness, \(U_0(x; \ell )<U_0(x; \ell _1)\) in \((-\ell ,\ell )\). Hence \(U_0(x; \ell )\) is increasing in \(\ell \) for any \(\ell >\ell ^*\), and \(U^*(x) :=\lim _{\ell \rightarrow \infty }U_0(x; \ell )\leqslant u^*\) is well defined on \(\mathbb {R}\). Furthermore, by standard regularity considerations, we see that \(v=U^*\) satisfies

$$\begin{aligned} v_{xx} -\alpha v +e^{-\beta \tau }\int _{\mathbb {R}} G(\tau ,x-y)f(v(y))dy=0, \ \ x\in \mathbb {R}. \end{aligned}$$
(3.8)

As \(U^*(x)> U_0(x; \ell )> 0\) in \((-\ell , \ell )\) for each \(\ell >\ell ^*\), we know that \(U^*(x)\) is a positive solution of (3.8).

We claim that \(U^*(x)\) is a constant function. Indeed, the above argument leading to \(U_0(x;\ell )\leqslant U_0(x;\ell _1)\) for \(\ell <\ell _1\) can also be used to show that for any \(x_0\in \mathbb {R}\), \(U_0(x+x_0;\ell )\leqslant U^*(x)\) for \(x\in [-\ell -x_0, \ell -x_0]\). Letting \(\ell \rightarrow \infty \) we obtain \(U^*(x+x_0)\leqslant U^*(x)\) for all \(x, x_0\in \mathbb {R}\), which implies that \(U^*(x)\) is a constant function. Thus we must have \(U^*(x)\equiv u^*\), which yields that \(U_0(x; \ell ) \rightarrow u^*\) as \(\ell \rightarrow \infty \) in \(L^\infty _{loc}(\mathbb {R})\).

Next, we prove (3.5). Fix \(\ell >\ell ^*\). By the strong maximum principle and the Hopf boundary lemma, we have

$$\begin{aligned} v(\tau ,x)>0 \text{ in } (-\ell , \ell ) \text{ and } v_x(\tau ,\ell )<0<v_x(\tau ,-\ell ). \end{aligned}$$

Therefore we can find \(M>1\) such that

$$\begin{aligned} \underline{v}(x):=M^{-1}U_0(x;\ell )\leqslant v(\tau , x)\leqslant \overline{v}(x):=M U_0(x;\ell ) \text{ in } [-\ell ,\ell ]. \end{aligned}$$

Let \(v_1(t,x)\) and \(v_2(t,x)\) be the solutions of (3.4) with \(\psi (\theta ,x)\) replaced by \(\underline{v}(x)\) and by \(\overline{v}(x)\), respectively. It then follows from the comparison principle that

$$\begin{aligned} v_1(t,x)\leqslant v(t+\tau ,x)\leqslant v_2(t,x) \text{ for } (t,x)\in (0,\infty )\times [-\ell , \ell ]. \end{aligned}$$
(3.9)

Since \(M>1\) and f is sublinear, \(\underline{v}\) is a lower solution of (3.6) and \(\overline{v}\) is an upper solution. It follows that \(v_1(t,x)\) is increasing in t and \(v_2(t,x)\) is decreasing in t. Therefore \(\lim _{t\rightarrow \infty } v_1(t,x) = \underline{V}(x)\) exists and \(\underline{V}(x)\) is a positive solution of (3.6). As \(U_0(x;\ell )\) is the unique positive solution to this problem, we obtain \(\underline{V}=U_0\). Hence \(\lim _{t\rightarrow \infty } v_1(t,x) = U_0(x;\ell )\). Similarly \(\lim _{t\rightarrow \infty } v_2(t,x) = U_0(x;\ell )\). Moreover, a compactness consideration shows that these convergences are uniform for \(x\in [-\ell ,\ell ]\). Hence (3.5) follows from (3.9).

Finally, it follows from [29, Theorem 2.2] that when \(\ell \leqslant \ell ^*\), the unique solution \(v(t,x)\) of problem (3.4) satisfies \(v(t,x)\rightarrow 0\) uniformly in \((-\ell ,\ell )\) as \(t\rightarrow \infty \).

3.3 Vanishing phenomenon

In this subsection, we study the vanishing phenomenon of (P). First, we give the following equivalence result.

Lemma 3.3

Assume that (H) holds and let \(\ell ^*\) be given in Lemma 3.1. Then the following three assertions are equivalent:

$$\begin{aligned} \mathrm{(i)}\ h_\infty \text{ or } g_\infty \text{ is } \text{ finite };\quad \mathrm{(ii)}\ h_\infty -g_\infty \leqslant 2\ell ^*; \quad \mathrm{(iii)}\ \Vert u(t,\cdot )\Vert _{L^\infty ([g(t),h(t)])} \rightarrow 0 \text{ as } t\rightarrow \infty . \end{aligned}$$

Proof

“(i)\(\Rightarrow \)(ii)”. Without loss of generality we assume \(g_\infty > -\infty \) and prove (ii) by contradiction. Assume, on the contrary, that \(h_\infty -g_\infty >2\ell ^*\). Then for sufficiently large \(t_1>\tau \), we have \(h(t_1-\tau ) - g(t_1-\tau ) > 2\ell ^*\).

Now we consider an auxiliary problem:

$$\begin{aligned} \left\{ \begin{array}{ll} \underline{u}_t = \underline{u}_{xx} - \alpha \underline{u} +\underline{w}(\tau ,x;f(\underline{u}(t-\tau , x))), &{} t> t_1,\ x\in (k(t), l(t)),\\ \underline{u}(t, k(t)) = 0,\quad k'(t)= -\mu \underline{u}_x(t, k(t)),&{} t>t_1,\\ l(t)\equiv h(t_1),\quad \underline{u} (t,l(t))=0, &{} t> t_1,\\ k(\vartheta ) = g(\vartheta ),\ \ l(\vartheta )=h(\vartheta ),\ \underline{u}(\vartheta , x)= u(\vartheta , x), &{} \vartheta \in [t_1-\tau ,t_1],\ x\in [k(\vartheta ), l(\vartheta )], \end{array} \right. \end{aligned}$$
(3.10)

where for any \(t>t_1\), \(\underline{w}(\tau ,x;f(\underline{u}(t-\tau , x)))\) is given by the following problem:

$$\begin{aligned} \left\{ \begin{array}{ll} \underline{w}_s = D\underline{w}_{xx} - \beta \underline{w}, &{} s\in (0,\tau ],\ x\in (k(\omega ), l(\omega )),\\ \underline{w}(s, k(\omega )) = 0=\underline{w}(s, l(\omega )),&{} s\in (0,\tau ],\\ \underline{w} (0,x)=f(\underline{u} (t-\tau ,x)), &{} x\in (k(t-\tau ), l(t-\tau )), \end{array} \right. \end{aligned}$$
(3.11)

with \(\omega :=s+t-\tau \). Clearly, \(\underline{u}\) is a lower solution of (P). So \(k(t)\geqslant g(t)\) and \(k(\infty )>-\infty \) by our assumption. Using a similar argument as in [7, Lemma 2.2] by straightening the free boundary one can show that

$$\begin{aligned} \Vert \underline{u}(t,\cdot )- U_0(\cdot -k(\infty )-\ell ;\ell )\Vert _{C^1([k(t), l(t)])} \rightarrow 0\quad \text{ as } \ t\rightarrow \infty , \end{aligned}$$

where \(U_0(x;\ell )\) is the positive solution of (3.6) with \(\ell := \frac{l(\infty ) - k(\infty )}{2}>\frac{h(t_1) -k(t_1) }{2}>\ell ^*\). Therefore,

$$\begin{aligned} \lim _{t\rightarrow \infty } k'(t) = \lim _{t\rightarrow \infty } [ -\mu \underline{u}_x(t,k(t))] =-\mu (U_0)_x(-\ell ;\ell ) <0. \end{aligned}$$

This contradicts the assumption \(k(\infty ) >-\infty \).

“(ii)\(\Rightarrow \)(i)”. When (ii) holds, (i) is obvious.

“(ii)\(\Rightarrow \)(iii)”. By the assumption and Lemma 3.2 we see that the unique solution of the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{u}_t=\bar{u}_{xx}-\alpha \bar{u}+e^{-\beta \tau }\int _{-\tilde{\ell }}^{\tilde{\ell }} {\textbf {K}}_{\tilde{\ell }}(\tau ,x-y)f(\bar{u}(t-\tau ,y))dy, &{} t>0,\ x\in [-\tilde{\ell },\tilde{\ell }],\\ \bar{u}(t, \pm \tilde{\ell })= 0, &{} t>0,\\ \bar{u}(0,x)\geqslant ,\not \equiv 0, &{} x\in [-\tilde{\ell },\tilde{\ell }], \end{array} \right. \end{aligned}$$
(3.12)

with \(\tilde{\ell }:=\frac{h_\infty -g_\infty }{2}\leqslant \ell ^*\) and \( \bar{u}(0,x-\frac{h_\infty +g_\infty }{2})\geqslant u(0,x)\) in \([g(0), h(0)]\), satisfies \(\bar{u}\rightarrow 0\) uniformly for \(x\in [-\tilde{\ell },\tilde{\ell }]\) as \(t\rightarrow \infty \). The conclusion (iii) now follows from the comparison principle.

“(iii)\(\Rightarrow \)(ii)”: We proceed by a contradiction argument. Assume that (ii) fails. Then for some small \(\varepsilon >0\) there exists a large number \(t_2\) such that \(h(t)-g(t)>2\ell ^*+ 4\varepsilon \) for all \(t>t_2-\tau \). It is known that the eigenvalue problem (3.1), with \(\ell = \ell ^*+\varepsilon \), admits a negative principal eigenvalue, denoted by \(\lambda _\varepsilon \), whose corresponding positive eigenfunction is \(\varphi _\varepsilon =\cos (\frac{\pi }{2(\ell ^*+\varepsilon )}x)\). Set

$$\begin{aligned} v(x) :=\delta \varphi _\varepsilon (x)\ \text{ for } x\in [-\ell ^*-\varepsilon ,\ell ^*+\varepsilon ], \end{aligned}$$

with \(\delta >0\) small such that

$$\begin{aligned} f(s)\geqslant f'(0)e^{\lambda _\varepsilon \tau }s\ \text{ for } s\in [0, \delta ]. \end{aligned}$$

A direct calculation yields that for \(x\in [-\ell ^*-\varepsilon , \ell ^*+\varepsilon ]\),

$$\begin{aligned}{} & {} v_t-v_{xx}+\alpha v-e^{-\beta \tau }\int _{-\ell ^*-\varepsilon }^{\ell ^*+\varepsilon } \textbf{K}_{\ell ^*+\varepsilon }(\tau ,x-y)f(v(y))dy \\{} & {} \quad = \lambda _\varepsilon v +e^{-\beta \tau }\int _{-\ell ^*-\varepsilon }^{\ell ^*+\varepsilon } \textbf{K}_{\ell ^*+\varepsilon }(\tau ,x-y) \big [f'(0)e^{\lambda _\varepsilon \tau }v(y)-f(v(y))\big ]dy \leqslant 0. \end{aligned}$$

Furthermore, one can choose \(\delta \) sufficiently small such that for \(x\in [-\ell ^*-\varepsilon , \ell ^*+\varepsilon ]\),

$$\begin{aligned} 0\leqslant \delta \varphi _\varepsilon (x)\leqslant \min _{[-\tau ,0]\times [-\ell ^*-\varepsilon , \ell ^*+\varepsilon ]} u(t_2+\theta , x +\ell ^*+g(t_2+\theta )+2\varepsilon ), \end{aligned}$$

since the last function is positive for \(x\in [-\ell ^*-\varepsilon ,\ell ^* +\varepsilon ]\), which implies that

$$\begin{aligned} v(x)\leqslant u(t_2+\theta , x +\ell ^*+g(t_2+\theta )+2\varepsilon )\ \ \text{ for } \theta \in [-\tau ,0],\ x\in [-\ell ^*-\varepsilon , \ell ^*+\varepsilon ]. \end{aligned}$$

By the comparison principle we have, for all \(t>0\),

$$\begin{aligned} u(t+t_2,\ell ^*+g(t_2)+2\varepsilon ) \geqslant v(0)=\delta \varphi _\varepsilon (0)>0, \end{aligned}$$

contradicting (iii).

This proves the lemma.

Next, we give a sufficient condition for vanishing, which indicates that if the initial domain and the initial function are both small, then the species dies out eventually.

Lemma 3.4

Suppose (H) holds and let \(\ell ^*\) be given in Lemma 3.1. If \(h(0)-g(0)<2\ell ^*\) and if \(\Vert \phi \Vert _{L^\infty ([-\tau ,0]\times [g,h])}\) is sufficiently small, then vanishing happens for the solution \((u,g,h)\) of (P).

Proof

It suffices to construct an appropriate supersolution that converges to 0 as \(t\rightarrow \infty \). Without loss of generality, we may assume that \(g(0)+h(0)=0\).

Set \(\ell _0:=\frac{h(0)-g(0)}{2}\). For \(\delta >0\) sufficiently small, we define

$$\begin{aligned} k(t):=\ell _0\big (1+\delta -\frac{\delta }{2}e^{-\delta t}\big ),\quad t\ge -\tau . \end{aligned}$$
(3.13)

Clearly, k(t) is increasing and

$$\begin{aligned} \ell _0<\ell _0\left( 1+\delta -\frac{\delta }{2}e^{\delta \tau }\right) \le k(t)<\ell _0 (1+\delta ),\quad t\ge -\tau . \end{aligned}$$
(3.14)

For fixed \(t\ge 0\), it is easily checked that

$$\begin{aligned} \bar{v}(s,x):=e^{-\left( \beta +\frac{D\pi ^2}{4k^2(t)}\right) s}\cos \frac{\pi }{2k(t)}x \end{aligned}$$

satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{v}_s=D\bar{v}_{xx}- \beta \bar{v}, &{} s>0,\ x\in (-k(t),k(t)),\\ \bar{v}(s,x)\equiv 0, &{} s>0,\ x =\pm k(t),\\ \bar{v}(0,x)=\cos \frac{\pi }{2k(t)}x, &{} x\in [-k(t), k(t)]. \end{array} \right. \end{aligned}$$

Since k(t) is increasing, we thus obtain

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{v}_s=D\bar{v}_{xx}- \beta \bar{v}, &{} s\in (0,\tau ],\ x\in (-k(t-\tau +s),k(t-\tau +s)),\\ \bar{v}(s,x)\geqslant 0, &{} s\in (0,\tau ],\ x =\pm k(t-\tau +s),\\ \bar{v}(0,x)\geqslant \cos \frac{\pi }{2k(t-\tau )}x, &{} x\in [-k(t-\tau ), k(t-\tau )]. \end{array} \right. \end{aligned}$$

For \(\epsilon >0\), define

$$\begin{aligned} \bar{u}(t,x)= \epsilon e^{-\delta t}\cos \frac{\pi }{2 k(t)}x. \end{aligned}$$
(3.15)

Next we verify that \(\bar{u}\) is a supersolution of (P) when \(\delta \) is small enough and \(\epsilon \) is suitably chosen. Indeed, in view of assumption (H) and (3.15), we have

$$\begin{aligned} f(\bar{u}(t-\tau ,x))\leqslant f'(0)\bar{u}(t-\tau ,x)\leqslant f'(0)\epsilon e^{-\delta (t-\tau )}\bar{v}(0,x) \end{aligned}$$

for \(x\in [-k(t-\tau ), k(t-\tau )]\). Thus

$$\begin{aligned} \bar{w}(s,x;t):=f'(0)\epsilon e^{-\delta (t-\tau )}\bar{v}(s,x) \end{aligned}$$

satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{w}_s=D\bar{w}_{xx}- \beta \bar{w}, &{} s\in (0,\tau ],\ x\in (-k(t-\tau +s),k(t-\tau +s)),\\ \bar{w}(s,x)\geqslant 0, &{} s\in (0,\tau ],\ x =\pm k(t-\tau +s),\\ \bar{w}(0,x)\geqslant f(\bar{u}(t-\tau ,x)), &{} x\in [-k(t-\tau ), k(t-\tau )]. \end{array} \right. \end{aligned}$$

Moreover, using \(x\tan x\ge 0\) for \(|x|<\frac{\pi }{2}\), \(k'(t)>0\), (3.3) and (3.14), we can infer that for \((t,x)\in (0,+\infty )\times (-k(\cdot ),k(\cdot ))\),

$$\begin{aligned}{} & {} \left( \bar{u}_t-\bar{u}_{xx}+\alpha \bar{u}-\bar{w}(\tau ,x; t) \right) \left( \epsilon e^{-\delta t} \cos \frac{\pi }{2k(t)}x\right) ^{-1}\\{} & {} \quad = \Big (\alpha -\delta +\frac{\pi ^2}{4k^2(t)}-f'(0)e^{-\beta \tau -\frac{D\pi ^2}{4k^2(t)}\tau }\Big ) +\frac{\pi x k'(t)}{2k^2(t)} \tan \frac{\pi }{2k(t)}x\\{} & {} \quad \geqslant \alpha -\delta +\frac{\pi ^2}{4k^2(t)}-f'(0)e^{\delta \tau -\beta \tau -\frac{D\pi ^2}{4k^2(t)}\tau }\\{} & {} \quad \geqslant \alpha -\delta +\frac{\pi ^2}{4\ell _0^2(1+\delta )^2}-f'(0)e^{\delta \tau -\beta \tau -\frac{D\pi ^2}{4\ell _0^2(1+\delta )^2}\tau }\\{} & {} \quad = \lambda _1^{\ell _0}-\delta -\frac{\pi ^2}{4\ell _0^2}\left( 1-\frac{1}{(1+\delta )^2}\right) \\{} & {} \qquad -f'(0)e^{\delta \tau -\beta \tau -\frac{D\pi ^2}{4\ell _0^2}\tau }\left( e^{\frac{D\pi ^2}{4\ell _0^2}\tau \times \left( 1-\frac{1}{(1+\delta )^2}\right) }-e^{(\lambda _1^{\ell _0}-\delta )\tau }\right) , \end{aligned}$$

which is positive for all small \(\delta >0\) as \(\lambda _1^{\ell _0}>0\).

Furthermore, for \(t>0\) we have \(\bar{u}(t,\pm k(t))=0\), \(- \mu \bar{u}_x(t, k(t)) = \frac{\mu \epsilon \pi }{2k(t)} e^{-\delta t} \leqslant \frac{\mu \epsilon \pi }{2\ell _0} e^{-\delta t}\), and \(k'(t)=\frac{\delta ^2}{2}e^{-\delta t}\). Thus, choosing \(\epsilon =\frac{\ell _0 \delta ^2}{\mu \pi }\) yields

$$\begin{aligned} - \mu \bar{u}_x(t, k(t)) \le k'(t),\quad t>0. \end{aligned}$$
(3.16)

Similarly, \(- \mu \bar{u}_x(t, -k(t)) \ge -k'(t)\). Therefore, such a \(\bar{u}\) is a supersolution for (P). Consequently, for initial data \((\phi ,g,h)\) satisfying the extra condition

$$\begin{aligned} 0\le \phi (\theta ,x)\le \frac{\ell _0 \delta ^2}{\mu \pi } e^{-\delta \theta } \cos \frac{\pi }{2k(\theta )}x,\quad (\theta ,x)\in [-\tau ,0]\times [g, h], \end{aligned}$$
(3.17)

the corresponding solution \((u,g,h)\) of (P) satisfies \(u\le \bar{u}\) and \(-\ell _0(1+\delta )<-k(t)\le g(t)< h(t)\le k(t)<\ell _0(1+\delta )\).

3.4 Spreading phenomenon

In this subsection, we study the spreading phenomenon of (P) and give some sufficient conditions for spreading to happen.

Lemma 3.5

Assume that (H) holds and let \(\ell ^*\) be given in Lemma 3.1. If \(h(0)-g(0)\geqslant 2\ell ^*\), then spreading always happens for the solution \((u,g,h)\) of (P), i.e., \(-g_\infty =h_\infty =\infty \) and

$$\begin{aligned} \lim _{t\rightarrow \infty }u(t,x)=u^* \ \text{ locally } \text{ uniformly } \text{ in } \mathbb {R}, \end{aligned}$$
(3.18)

where \(u^*\) is the unique positive root of \(f(v)-\alpha e^{\beta \tau }v=0\).

Proof

Since \(h(0)-g(0)\geqslant 2\ell ^*\), from \(g'(t)<0<h'(t)\) for \(t>0\) we deduce \(h(t)-g(t)>2\ell ^*\) for any \(t>0\). So the conclusion \(-g_\infty = h_\infty =\infty \) follows from Lemma 3.3. In what follows we prove (3.18).

First, we choose an increasing sequence of positive numbers \(\ell _m\) such that \(\ell _m\rightarrow \infty \) as \(m\rightarrow \infty \) and \(\ell _m>\ell ^*\) for all \(m\geqslant 1\). As \(-g_\infty = h_\infty =\infty \), we can find \(t_m\) large such that \([-\ell _m,\ell _m]\subset (g(t),h(t))\) for \(t\geqslant t_m-\tau \). It follows from Lemma 3.2 that the following problem

$$\begin{aligned} \left\{ \begin{array}{ll} \underline{u}_t =\underline{u}_{xx}-\alpha \underline{u}+e^{-\beta \tau }\int _{-\ell _m}^{\ell _m}\textbf{K}_{\ell _m}(\tau ,x-y)f(\underline{u}(t-\tau ,y))dy, &{} t>t_m,\ x\in (-\ell _m,\ell _m),\\ \underline{u}(t,\pm \ell _m)=0,&{} t>t_m,\\ \underline{u}(t_m+\theta ,x)=u(t_m+\theta ,x), &{} \theta \in [-\tau ,0],\ x\in [-\ell _m,\ell _m], \end{array} \right. \end{aligned}$$

admits a unique positive solution \(\underline{u}_m(t,x)\), which satisfies

$$\begin{aligned} \underline{u}_m(t,x)\rightarrow U_0(x;\ell _m) \ \text{ uniformly } \text{ for } \ x\in [-\ell _m,\ell _m] \ \text{ as } t\rightarrow \infty , \end{aligned}$$

where \(U_0(x;\ell _m)\) is the unique positive solution of (3.6) with \(\ell =\ell _m\). Moreover, as \(\ell _m\rightarrow \infty \), \(U_0(x;\ell _m)\rightarrow u^*\) in \(L^\infty _{loc}(\mathbb {R})\). By the comparison principle we have

$$\begin{aligned} \underline{u}_m(t,x)\leqslant u(t,x)\ \ \text{ for } t\geqslant t_m,\ x\in [-\ell _m,\ell _m]. \end{aligned}$$

Thus

$$\begin{aligned} \liminf _{t\rightarrow \infty } u(t,x) \geqslant u^*\ \text{ locally } \text{ uniformly } \text{ for } x\in \mathbb {R}. \end{aligned}$$
(3.19)

On the other hand, consider the problem

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{u}_t =-\alpha \bar{u}+ \bar{w}(\tau ;t), &{} t>0,\\ \bar{u}(\theta )\equiv u^*+\Vert \phi \Vert _{L^\infty ([-\tau ,0]\times [g,h])}, &{} \theta \in [-\tau ,0], \end{array} \right. \end{aligned}$$

where for any \(t>0\), \(\bar{w}(s;t)=\bar{w}(s)\) is the unique solution of

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{w}_s =- \beta \bar{w}, &{} s\in (0,\tau ],\\ \bar{w}(0)=f(\bar{u}(t-\tau )). \end{array} \right. \end{aligned}$$

It follows from [15, chap. 4, Theorem 9.4] that the above problem has a unique solution \(\bar{u}(t)\) and

$$\begin{aligned} \bar{u}(t)\rightarrow u^* \ \ \text{ as } t\rightarrow \infty . \end{aligned}$$

It thus follows from the comparison principle that

$$\begin{aligned} \limsup _{t\rightarrow \infty } u(t ,x) \leqslant u^*\ \text{ locally } \text{ uniformly } \text{ for } x\in \mathbb {R}. \end{aligned}$$

Combining this with (3.19) we obtain

$$\begin{aligned} \lim _{t\rightarrow \infty } u(t,x) = u^*\ \text{ locally } \text{ uniformly } \text{ for } x\in \mathbb {R}. \end{aligned}$$

The proof is complete.

Using (3.18) and \(-g_\infty =h_\infty =\infty \), it is also easy to show that the corresponding solution \(w\big (s,x; t\big )\) to (Q) satisfies

$$\begin{aligned} \lim _{t\rightarrow \infty }w\big (s,x; t\big )=f(u^*)e^{-\beta s} \end{aligned}$$
(3.20)

locally uniformly for \((s,x)\in [0,\tau ]\times \mathbb {R}\).
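As a quick numerical illustration, the constant \(u^*\) can be computed by bisection. The sketch below uses a sample Beverton–Holt birth function \(f(u)=pu/(1+u)\) with parameter values of our own choosing (none of them come from the paper); for this \(f\), \(u^*\) also has a closed form, against which the bisection result is checked.

```python
import math

# Hypothetical illustration: compute u*, the unique positive root of
# f(v) - alpha * exp(beta * tau) * v = 0, for a sample Beverton-Holt
# birth function f(u) = p*u/(1+u) (our choice, not from the paper).
# Since f(u)/u = p/(1+u) is decreasing, the positive root is unique
# whenever f'(0) = p > alpha * exp(beta * tau).

alpha, beta, tau, p = 1.0, 0.5, 1.0, 5.0

def f(u):
    return p * u / (1.0 + u)

def F(v):
    return f(v) - alpha * math.exp(beta * tau) * v

# F(v) > 0 for small v > 0 and F(v) -> -infinity as v grows,
# so a sign-change bracket exists and bisection applies.
lo, hi = 1e-12, 1.0
while F(hi) > 0:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if F(mid) > 0:
        lo = mid
    else:
        hi = mid
u_star = 0.5 * (lo + hi)

# For this sample f, u* = p / (alpha * exp(beta*tau)) - 1 in closed form.
u_star_exact = p / (alpha * math.exp(beta * tau)) - 1.0
print(u_star, u_star_exact)
```

The same bisection loop works for any birth function satisfying (H), since \(f(v)/v\) being decreasing guarantees a single sign change.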

We are now in a position to prove the following spreading–vanishing dichotomy result.

Theorem 3.6

(Spreading–vanishing dichotomy) Assume that (H) holds and \(\ell ^*\) is given in Lemma 3.1. Let \((u,g,h)\) be the solution of (P) with the initial data \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfying (1.4) and (1.5). Then one of the following alternatives holds:

(i) Spreading: \((g_\infty , h_\infty )=\mathbb {R}\) and

$$\begin{aligned} \lim _{t\rightarrow \infty }u(t,x)=u^* \text{ locally } \text{ uniformly } \text{ in } \mathbb {R}, \end{aligned}$$

(ii) Vanishing: \((g_\infty , h_\infty )\) is a finite interval with \(h_\infty -g_\infty \leqslant 2\ell ^*\) and

$$\begin{aligned} \lim _{t\rightarrow \infty }\max _{g(t)\leqslant x\leqslant h(t)} u(t,x)=0. \end{aligned}$$

Proof

It is easy to see that there are two possibilities: (a) \(h_\infty -g_\infty \leqslant 2\ell ^*\); (b) \(h_\infty -g_\infty >2\ell ^*\). In case (a), it follows from Lemma 3.3 that \(\lim _{t\rightarrow \infty } \Vert u(t,\cdot )\Vert _{L^\infty ([g(t),h(t)])}=0\), i.e., vanishing happens. In case (b), it follows from Lemma 3.5 and its proof that \((g_\infty , h_\infty )=\mathbb {R}\) and \(u(t,\cdot )\rightarrow u^*\) as \(t\rightarrow \infty \) locally uniformly in \(\mathbb {R}\), i.e., spreading happens. This ends the proof.

Under an additional condition on f, namely

$$\begin{aligned} b(\infty )=\lim _{u\rightarrow \infty } f(u)/u>0, \end{aligned}$$
(3.21)

we can show that spreading happens if the initial function \(\phi (\theta ,x)\) is large enough, as described in the following result.

Lemma 3.7

Assume that (H) holds. For any given triple \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfying (1.4) and (1.5), let \((u,g,h)\) be the solution of (P) with \(u(\theta ,x)=\sigma \phi (\theta ,x)\) in \([-\tau ,0]\times [g,h]\) for some \(\sigma >0\). If (3.21) holds, then there exists \(\sigma _0>0\) such that spreading happens when \(\sigma \geqslant \sigma _0\).

Proof

To stress the dependence of the solution \((u,g,h)\) on \(\sigma \), we will denote it by \((u_\sigma , g_\sigma , h_\sigma )\). Recall that

$$\begin{aligned} u_\sigma (t,x)=\sigma \phi (t,x),\; g_\sigma (t)=g(t),\; h_\sigma (t)=h(t) \text{ for } t\in [-\tau , 0]. \end{aligned}$$

By the comparison principle we easily see that \(u_\sigma (t,x), h_\sigma (t)\) and \(-g_\sigma (t)\) are all increasing in \(\sigma \) for fixed \(t>0\) and \(x\in (g_\sigma (t), h_\sigma (t))\). Therefore if spreading happens for \(\sigma =\sigma _1\), then spreading happens for all \(\sigma \geqslant \sigma _1\).

Assume, by way of contradiction, that the desired conclusion is false; then by Theorem 3.6 vanishing happens for all \(\sigma >0\), and hence

$$\begin{aligned} h_\sigma (t)-g_\sigma (t)<2\ell ^*\hbox { for all }t\geqslant 0\hbox { and } \sigma >0. \end{aligned}$$
(3.22)

We now let \(g_*(t)\) and \(h_*(t)\) be continuous extensions of g(t) and h(t) from \([-\tau ,0]\) to \([-\tau , \tau ]\), respectively, with the following properties: \(g_*|_{[0,\tau ]}, h_*|_{[0,\tau ]}\in C^1([0,\tau ])\), they are constant in \([0,\epsilon ]\) for some small \(\epsilon >0\), and

$$\begin{aligned} g_*'(t)\leqslant 0\leqslant h_*'(t) \text{ in } (0,\tau ],\ \ h_*(\tau )-g_*(\tau )>2\ell ^*. \end{aligned}$$

By the monotonicity of f(u)/u and (3.21), we have

$$\begin{aligned} f(u)/u\geqslant \sigma _\infty :=\lim _{\xi \rightarrow \infty }f(\xi )/\xi>0 \hbox { for any }u>0. \end{aligned}$$

Then for \(t\in [0,\tau ]\), let \(w_*(s,x;t)\) denote the unique solution of the initial boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} w_s=D w_{xx}-\beta w, &{} s\in (0,\tau ],\ x\in (g_*(s+t-\tau ),h_*(s+t-\tau )),\\ w(s,x)=0, &{} s\in (0,\tau ],\ x\in \{g_*(s+t-\tau ),h_*(s+t-\tau )\},\\ w(0,x)= \sigma _\infty \phi (t-\tau ,x), &{} x\in [g_*(t-\tau ),h_*(t-\tau )], \end{array} \right. \end{aligned}$$
(3.23)

and let \(u_*(t,x)\) be the unique solution of

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=u_{xx}-\alpha u+w_*(\tau , x;t), &{} t\in (0,\tau ],\ x\in (g_*(t),h_*(t)),\\ u(t,x)=0, &{} t\in (0,\tau ],\ x\in \{g_*(t),h_*(t)\},\\ u(\theta ,x)= \phi (\theta ,x), &{} \theta \in [-\tau , 0],\ x\in [g_*(\theta ),h_*(\theta )]. \end{array} \right. \end{aligned}$$
(3.24)

We now define, for any \(k\geqslant 1\), \(t\in [0,\tau ]\) and \(s\in [0,\tau ]\),

$$\begin{aligned} \underline{u}_k(t,x):=k u_*(t,x),\; \underline{w}_k(s,x; t):=k w_*(s,x;t). \end{aligned}$$

Then

$$\begin{aligned} \left\{ \begin{array}{ll} (\underline{u}_k)_t=(\underline{u}_k)_{xx}-\alpha \underline{u}_k+\underline{w}_k(\tau , x;t), &{} t\in (0,\tau ],\ x\in (g_*(t),h_*(t)),\\ \underline{u}_k(t,x)=0, &{} t\in (0,\tau ],\ x\in \{g_*(t),h_*(t)\},\\ \underline{u}_k(0,x)= k \phi (0,x), &{} x\in [g_*(0),h_*(0)], \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{array}{ll} (\underline{w}_k)_s=D (\underline{w}_k)_{xx}-\beta \underline{w}_k, &{} s\in (0,\tau ],\ x\in (g_*(s+t-\tau ),h_*(s+t-\tau )),\\ \underline{w}_k(s,x)=0, &{} s\in (0,\tau ],\ x\in \{g_*(s+t-\tau ),h_*(s+t-\tau )\},\\ \underline{w}_k (0,x)= k\sigma _\infty \phi (t-\tau ,x)\leqslant f(k\phi (t-\tau ,x))=f(ku_*(t-\tau ,x)), &{} x\in [g_*(t-\tau ),h_*(t-\tau )]. \end{array} \right. \end{aligned}$$

Since \(w_*\geqslant 0\), we can apply the parabolic Hopf boundary lemma to (3.24) to obtain

$$\begin{aligned} \partial _xu_*(t, h_*(t))<0<\partial _x u_*(t,g_*(t)) \text{ for } t\in (0, \tau ]. \end{aligned}$$

Thus we can find \(\delta >0\) such that

$$\begin{aligned} \partial _xu_*(t, h_*(t))\leqslant -\delta \text{ and } \partial _x u_*(t,g_*(t))\geqslant \delta \text{ for } t\in [\epsilon , \tau ]. \end{aligned}$$

It follows that, for all large k,

$$\begin{aligned} -\mu \partial _x\underline{u}_k(t, h_*(t))\geqslant \mu k\delta \geqslant h_*'(t) \text{ and } -\mu \partial _x \underline{u}_k(t,g_*(t))\leqslant -\mu k\delta \leqslant g'_*(t) \text{ for } t\in [\epsilon , \tau ]. \end{aligned}$$

Since \(h_*'(t)=g_*'(t)=0\) for \(t\in [0,\epsilon ]\), the above inequalities also hold for \(t\in [0,\epsilon ]\). Thus we see that \((\underline{u}_k, g_*,h_*)\) forms a lower solution to the problem satisfied by \((u_\sigma , g_\sigma , h_\sigma )\) (for \(t\leqslant \tau \)) with \(\sigma =k\), for all large k. It follows that

$$\begin{aligned} h_\sigma (\tau )-g_\sigma (\tau )\geqslant h_*(\tau )-g_*(\tau )>2\ell ^* \text{ for } \text{ all } \text{ large } \sigma , \end{aligned}$$

which contradicts (3.22). Therefore the desired conclusion holds.

3.5 Proof of Theorem 1.2

With the preparation of the previous subsections, we are now ready to complete the proof of Theorem 1.2.

By Lemma 3.5, we find that spreading happens when \(h(0)-g(0)\geqslant 2\ell ^*\), where \(\ell ^*\) is given in Lemma 3.1. Hence in this case we have \(\sigma ^*=0\) for any given \((\phi (\theta ,x), g(\theta ), h(\theta ))\) satisfying (1.4) and (1.5).

In what follows we consider the remaining case \(h(0)-g(0)< 2\ell ^*\). Define

$$\begin{aligned} \Sigma :=\big \{\sigma _0: \text{ vanishing } \text{ happens } \text{ for } \sigma \in (0,\sigma _0]\big \}. \end{aligned}$$

By Lemma 3.4, we see that vanishing happens for all small \(\sigma >0\), thus \(\Sigma \ne \varnothing \), and

$$\begin{aligned} \sigma ^*:=\sup \Sigma \in (0,+\infty ]. \end{aligned}$$

If \(\sigma ^*=\infty \), then there is nothing left to prove. Suppose \(\sigma ^*\in (0,\infty )\). Then by definition vanishing happens when \(\sigma \in (0,\sigma ^*)\). By the comparison principle we see that spreading happens for \(\sigma >\sigma ^*\).

It remains to prove that vanishing happens when \(\sigma =\sigma ^*\). Otherwise it follows from Theorem 3.6 that spreading must happen when \(\sigma =\sigma ^*\) and we can find \(t_0>0\) such that \(h(t_0)-g(t_0)>2\ell ^*+1\). By the continuous dependence of the solution of (P) on its initial values, we find that if \(\epsilon >0\) is sufficiently small, then the solution of (P) with \(u(\theta ,x)=(\sigma ^*-\epsilon )\phi (\theta ,x)\) in \([-\tau ,0]\times [g,h]\), denoted by \((u^*_\epsilon , g^*_\epsilon , h^*_\epsilon )\), satisfies

$$\begin{aligned} h^*_\epsilon (t_0)-g^*_\epsilon (t_0)>2\ell ^*. \end{aligned}$$

But by Lemma 3.5, this implies that spreading happens to \((u^*_\epsilon , g^*_\epsilon , h^*_\epsilon )\), a contradiction to the definition of \(\sigma ^*\).

Finally, if (3.21) holds, then by Lemma 3.7 we have \(\sigma ^*<\infty \). \(\square \)

4 Asymptotic spreading speed

Throughout this section we assume that (H) holds and that \((u,g,h)\) is a solution of (P) for which spreading happens.

4.1 A semi-wave problem

Let \(c\geqslant 0\). Introducing the transform

$$\begin{aligned} (t,x)\rightarrow (t,\xi ) \ \text{ with } \ \ \xi =\xi (t,x)=x-h(t) \end{aligned}$$

and writing

$$\begin{aligned} \tilde{u}(t,\xi )=u(t,x),\ \ \tilde{w}(t,\xi )=w(t,x), \end{aligned}$$

problem (P) is transformed into the following form:

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{u}_t=\tilde{u}_{\xi \xi }+h'(t)\tilde{u}_{\xi }- \alpha \tilde{u}+ \tilde{w}(\tau ,\xi ; t), &{} t>0,\ \xi \in (g(t)-h(t),0),\\ \tilde{u}(t,\xi )=0,\ \ g'(t)=-\mu \tilde{u}_\xi (t, \xi ), &{} t>0,\ \xi =g(t)-h(t),\\ \tilde{u}(t,0)=0,\ \ h'(t)=-\mu \tilde{u}_\xi (t, 0) , &{} t>0,\\ \tilde{u}(\theta ,\xi ) =\phi (\theta ,\xi ),&{} \theta \in [-\tau ,0],\ \xi \in (g(\theta )-h(\theta ),0), \end{array} \right. \end{aligned}$$
(4.1)

where \(\tilde{w}(s,\xi ; t)=\tilde{w}(s,\xi )\) is the solution to

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{w}_s=D\tilde{w}_{\xi \xi }+h'(s+t-\tau )\tilde{w}_\xi - \beta \tilde{w}, &{} s\in (0,\tau ],\ \xi \in (g(s+t-\tau )-h(s+t-\tau ),0),\\ \tilde{w}(s,0)=0=\tilde{w}(s,\xi ), &{} s\in (0,\tau ],\ \xi =g(s+t-\tau )-h(s+t-\tau ),\\ \tilde{w}(0,\xi )=f(\tilde{u}(t-\tau ,\xi )), &{} \xi \in (g(t-\tau )-h(t-\tau ),0). \end{array} \right. \end{aligned}$$
(4.2)

Since spreading happens, we have

$$\begin{aligned} \lim _{t\rightarrow \infty } [g(t)-h(t)]=-\infty . \end{aligned}$$

If we heuristically assume that \(\lim _{t\rightarrow \infty } h'(t)=c\) and there exists \(U\in C^2((-\infty ,0],[0,\infty ))\) with \(U(-\infty )=u^*\) such that

$$\begin{aligned} \lim _{t\rightarrow \infty } \tilde{u}(t,\xi )=U(\xi )\ \text{ locally } \text{ uniformly } \text{ in } \xi \in (-\infty ,0], \end{aligned}$$

then letting \(t\rightarrow \infty \) in (4.1) and (4.2), we obtain a limiting elliptic problem for U in \((-\infty ,0]\):

$$\begin{aligned} \left\{ \begin{array}{ll} U_{\xi \xi }+cU_\xi - \alpha U+W=0, &{} \xi <0,\\ U(0)=0,\ \ \ U(-\infty )=u^*, \end{array} \right. \end{aligned}$$
(4.3)

where \(W(\xi )=v(\tau ,\xi )\) and \(v(s,\xi )\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} v_s=Dv_{\xi \xi }+cv_\xi - \beta v, &{} s\in (0,\tau ],\ \xi <0,\\ v(s,0)=0, &{} s\in (0,\tau ],\\ v(0,\xi )=f(U(\xi )), &{} \xi \leqslant 0. \end{array} \right. \end{aligned}$$
(4.4)

Using the reflection method, we can solve \(v(\tau ,\xi )\) explicitly to obtain

$$\begin{aligned} v(\tau ,\xi )=\int _{-\infty }^0\mathcal {K}(c,\xi ,x)f(U(x))dx,\ \ \xi <0, \end{aligned}$$
(4.5)

where

$$\begin{aligned} \mathcal {K}(c,\xi ,x):=e^{-\beta \tau -\frac{c^2}{4D}\tau -\frac{c}{2D}(\xi -x)}\big [G(\tau ,\xi -x)-G(\tau ,\xi +x)\big ],\ \ \xi ,\ x\leqslant 0, \end{aligned}$$
(4.6)

and

$$\begin{aligned} G(\tau ,y):=\frac{1}{\sqrt{4D\pi \tau }}e^{-\frac{y^2}{4D\tau }},\ \ y\in \mathbb {R}. \end{aligned}$$
(4.7)

Substituting (4.5) into (4.3), we obtain a nonlocal elliptic problem

$$\begin{aligned} \left\{ \begin{array}{ll} U_{\xi \xi }+cU_{\xi }- \alpha U+\int _{-\infty }^0\mathcal {K}(c,\xi ,x)f(U(x))dx=0, &{} \xi <0,\\ U(0)=0,\ \ \ U(-\infty )=u^*. \end{array} \right. \end{aligned}$$
(4.8)

Lemma 4.1

The following statements are valid:

(i) For any \(c\geqslant 0\), \(\mathcal {K}(c,\xi ,x)<e^{-\beta \tau }G(\tau ,\xi +c\tau -x)\) for \(\xi \leqslant 0\) and \(x\leqslant 0\).

(ii) For any \(c\geqslant 0\), \(\mathcal {K}(c,\xi ,x)>0\) for \(\xi < 0\) and \(x<0\), and \(\mathcal {K}(c,\xi ,x)=0\) when \(x\xi =0\).

(iii) If \(\varphi \in L^\infty ((-\infty ,0])\) is non-increasing, then \(\int _{-\infty }^0 \mathcal {K}(c,\xi ,x)\varphi (x)dx\) is non-increasing in \(\xi \leqslant 0\) and in \(c\geqslant 0\).

Proof

Items (i) and (ii) follow directly from the definition of \(\mathcal {K}\). As for item (iii), fix \(0\leqslant c_1<c_2\). Note that \(\int _{-\infty }^0 \mathcal {K}(c_i,\xi ,x)\varphi (x)dx\) for \(i=1, 2\) are the time-\(\tau \) solutions \(v^i(\tau ,\xi )\) of the following problems, respectively:

$$\begin{aligned} \left\{ \begin{array}{ll} v^i_s=Dv^i_{\xi \xi }+c_iv^i_\xi - \beta v^i, &{} s\in (0,\tau ],\ \xi <0,\\ v^i(s,0)=0, &{} s\in (0,\tau ],\\ v^i(0,\xi )=\varphi (\xi ),&{} \xi \leqslant 0. \end{array} \right. \end{aligned}$$
(4.9)

Since \(\varphi \) is non-increasing in \(\xi \leqslant 0\), so are \(v^i(\tau ,\xi )\), \(i=1, 2\), thanks to the parabolic comparison principle. Hence, \(v:=v^1-v^2\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} v_s-Dv_{\xi \xi }+\beta v=c_1v^1_\xi - c_2v^2_\xi \geqslant c_1v_\xi , &{} s\in (0,\tau ],\ \xi <0,\\ v(s,0)=0, &{} s\in (0,\tau ],\\ v(0,\xi )=0,&{} \xi \leqslant 0. \end{array} \right. \end{aligned}$$

The parabolic comparison principle then implies that \(v(\tau ,\xi )\geqslant 0\), i.e., \(\int _{-\infty }^0 \mathcal {K}(c,\xi ,x)\varphi (x)dx\) is non-increasing in \(c\geqslant 0\). The proof is complete.
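The kernel bounds in Lemma 4.1 (i)–(ii) are also easy to check numerically. The following sketch evaluates \(\mathcal {K}\) and \(G\) from (4.6)–(4.7) on a grid of negative \(\xi ,x\); the parameter values are sample choices of ours, not taken from the paper.

```python
import math

# Numerical sanity check of Lemma 4.1 (i)-(ii) for the kernel in
# (4.6)-(4.7); the parameter values below are arbitrary sample choices.
D, beta, tau, c = 1.0, 0.5, 1.0, 0.8

def G(t, y):
    # Gaussian heat kernel G(t, y) from (4.7)
    return math.exp(-y * y / (4.0 * D * t)) / math.sqrt(4.0 * D * math.pi * t)

def K(c, xi, x):
    # Reflected kernel from (4.6)
    pref = math.exp(-beta * tau - c * c * tau / (4.0 * D)
                    - c * (xi - x) / (2.0 * D))
    return pref * (G(tau, xi - x) - G(tau, xi + x))

grid = [-3.0, -2.0, -1.0, -0.5, -0.1]
for xi in grid:
    for x in grid:
        k = K(c, xi, x)
        assert k > 0.0                                               # (ii): interior positivity
        assert k < math.exp(-beta * tau) * G(tau, xi + c * tau - x)  # (i): Gaussian upper bound
    assert abs(K(c, xi, 0.0)) < 1e-15   # (ii): K vanishes when x = 0
    assert abs(K(c, 0.0, xi)) < 1e-15   # (ii): K vanishes when xi = 0
print("kernel checks passed")
```

The positivity in (ii) simply reflects \((\xi +x)^2>(\xi -x)^2\) when \(\xi x>0\), so the reflected Gaussian is always the smaller of the two.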

Define

$$\begin{aligned} c_0:=\inf _{\gamma >0}\frac{\lambda (\gamma )}{\gamma }, \end{aligned}$$
(4.10)

where \(\lambda =\lambda (\gamma )\) is the unique positive root of the following equation

$$\begin{aligned} \lambda =\gamma ^2-\alpha +f'(0)e^{(D\gamma ^2-\beta -\lambda )\tau }. \end{aligned}$$
(4.11)

It follows from (4.11) and (H) that \(\lambda (\gamma )\) is positive at \(\gamma =0\) and greater than \(\gamma ^2-\alpha \) for all large \(\gamma \). Therefore,

$$\begin{aligned} \lim _{\gamma \downarrow 0}\frac{\lambda (\gamma )}{\gamma }=\infty =\lim _{\gamma \uparrow \infty }\frac{\lambda (\gamma )}{\gamma }. \end{aligned}$$

Hence, \(c_0:=\inf _{\gamma >0}\frac{\lambda (\gamma )}{\gamma }\) is attained at some \(\gamma ^*\).
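The quantities \(\lambda (\gamma )\) and \(c_0\) can be approximated numerically: the map \(H(\lambda )=\lambda -\gamma ^2+\alpha -f'(0)e^{(D\gamma ^2-\beta -\lambda )\tau }\) is strictly increasing in \(\lambda \), so its unique root can be found by bisection, and \(\lambda (\gamma )/\gamma \) can then be minimized over a grid. In the sketch below all parameter values are sample choices of ours satisfying \(f'(0)e^{-\beta \tau }>\alpha \).

```python
import math

# Hypothetical numerical sketch: approximate c_0 = inf_{gamma>0} lambda(gamma)/gamma
# from (4.10)-(4.11). Sample parameters (our choice) with fp0 = f'(0)
# satisfying fp0 * exp(-beta*tau) > alpha, so lambda(gamma) > 0 near gamma = 0.
D, alpha, beta, tau, fp0 = 1.0, 1.0, 0.5, 1.0, 5.0

def lam(gamma):
    # lambda(gamma) is the unique root of
    #   H(l) = l - gamma^2 + alpha - fp0 * exp((D*gamma^2 - beta - l)*tau) = 0;
    # H is strictly increasing in l, so bisection applies.
    def H(l):
        return l - gamma**2 + alpha - fp0 * math.exp((D * gamma**2 - beta - l) * tau)
    lo = gamma**2 - alpha                       # H(lo) < 0: the exp term is positive
    hi = gamma**2 - alpha + fp0 * math.exp((D * gamma**2 - beta) * tau)  # H(hi) >= 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if H(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# lambda(gamma)/gamma blows up at both ends, so minimize over an interior grid.
gammas = [0.01 * k for k in range(1, 400)]
c0, gamma_star = min((lam(g) / g, g) for g in gammas)
print(c0, gamma_star)
```

Refining the grid around `gamma_star` would sharpen the estimate; the coarse grid already exhibits the interior minimizer \(\gamma ^*\).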

Theorem 4.2

Assume that (H) holds. Let \(c_0\) be given in (4.10). Then problem (4.8) admits a unique positive solution (denoted by \(U^c\)) if and only if \(0\leqslant c<c_0\). Further, \(U^c\) is continuous in \(c\in [0,c_0)\) and \(U^c_\xi (\xi )<0\) for \(\xi \leqslant 0\). For \(0\leqslant c_1<c_2<c_0\), \(U^{c_1}(\xi )>U^{c_2}(\xi )\) for \(\xi <0\), \(0<-U^{c_2}_\xi (0)<-U^{c_1}_\xi (0)\) and \(\lim _{c\uparrow c_0} U_\xi ^{c}(0)=0\).

Proof

Assume that \(U^{c}(\xi )\geqslant 0\), \(\xi \leqslant 0\), is a solution of (4.8); then by the strong maximum principle we can infer that \(U^{c}(\xi )>0\) for \(\xi <0\). The rest of the proof is divided into five parts.

Part 1. Existence of a positive and decreasing solution when \(c\in [0,c_0)\).

We assume that \(c\in [0,c_0)\) and employ the super and subsolution method. For this purpose, we first construct a monotone operator.

Let

$$\begin{aligned} \mathcal {M}:=\{\text{ all } \text{ non-increasing } \text{ functions } \text{ from } (-\infty ,0] \text{ to } [0,u^*]\}. \end{aligned}$$

Let \(\lambda _1<\lambda _2\) be the two distinct roots of \(\lambda ^2+c\lambda -\alpha =0\) for \(c\in [0,c_0)\). Clearly, \(\lambda _1<0<\lambda _2\). Define

$$\begin{aligned} \mathcal {Q}[\varphi ](\xi ):=&\ \frac{1}{\lambda _2-\lambda _1} \big (e^{\lambda _1\xi }-e^{\lambda _2\xi }\big )\int _{-\infty }^\xi e^{-\lambda _1 s}\int _{-\infty }^0 \mathcal {K}(c,s,x)f(\varphi (x))dxds\\ &+ \frac{1}{\lambda _2-\lambda _1}e^{\lambda _2\xi }\int _\xi ^0\big (e^{-\lambda _2s}-e^{-\lambda _1s}\big )\int _{-\infty }^0 \mathcal {K}(c,s,x)f(\varphi (x))dxds, \ \ \xi \leqslant 0. \end{aligned}$$

Since \(f(0)=0\), we have \(\mathcal {Q}[0](\xi )\equiv 0\) for \(\xi \leqslant 0\). In view of Lemma 4.1 (i), we have

$$\begin{aligned} \mathcal {Q}[u^*](\xi )\leqslant { \frac{1}{\lambda _2-\lambda _1}}\Big [\big (e^{\lambda _1\xi }-e^{\lambda _2\xi }\big )\int _{-\infty }^\xi e^{-\lambda _1 s}ds+e^{\lambda _2\xi }\int _\xi ^0\big (e^{-\lambda _2s}-e^{-\lambda _1s}\big )ds\Big ]e^{-\beta \tau }f(u^*). \end{aligned}$$

After a simple calculation, we obtain

$$\begin{aligned} \mathcal {Q}[u^*](\xi )\leqslant { \frac{1}{\lambda _2-\lambda _1}}\Big (\frac{1}{\lambda _2}-\frac{1}{\lambda _1}\Big )e^{-\beta \tau }f(u^*) =-\frac{1}{\lambda _1\lambda _2 }e^{-\beta \tau }f(u^*), \end{aligned}$$

which, combined with \(\lambda _1\lambda _2=-\alpha \) and \(e^{-\beta \tau }f(u^*)=\alpha u^*\), yields

$$\begin{aligned} \mathcal {Q}[u^*](\xi )\leqslant u^* \ \ \ \text{ for } \ \xi \leqslant 0. \end{aligned}$$

Hence, it follows from Lemma 4.1 that \(\mathcal {Q}:\ \mathcal {M}\rightarrow \mathcal {M}\) satisfies

$$\begin{aligned} u^* \geqslant \mathcal {Q}[u^*]\geqslant \mathcal {Q}[\varphi _1]\geqslant \mathcal {Q}[\varphi _2]\geqslant \mathcal {Q}[0]=0\ \ \text{ for } \ \ u^* \geqslant \varphi _1\geqslant \varphi _2\geqslant 0. \end{aligned}$$

Moreover, it is not difficult to check that

$$\begin{aligned} \left\{ \begin{array}{ll} (\mathcal {Q}[\varphi ])''(\xi )+c(\mathcal {Q}[\varphi ])'(\xi )-\alpha (\mathcal {Q}[\varphi ])(\xi )+\int _{-\infty }^0 \mathcal {K}(c,\xi ,x)f(\varphi (x))dx=0, &{} \xi <0,\\ \mathcal {Q}[\varphi ](0)=0. \end{array} \right. \end{aligned}$$
(4.12)

Therefore, a fixed point of \(\mathcal {Q}\) satisfies the first equation of (4.8).

Next, we construct a lower fixed point for \(\mathcal {Q}\). We introduce a family of bistable problems, the unique traveling wave solutions of which will be used. For \(\varepsilon >0\), we define

$$\begin{aligned} f_\varepsilon (u):= \left\{ \begin{array}{ll} (1-\varepsilon )f\big (\frac{u}{1-\varepsilon }\big ), &{} u\geqslant 0,\\ -\varepsilon f\big (-\frac{u}{\varepsilon }\big ), &{} u<0. \end{array} \right. \end{aligned}$$

Let

$$\begin{aligned} \delta _0:=\frac{1}{2\tau }\ln {\frac{f'(0)}{\alpha e^{\beta \tau }}}. \end{aligned}$$

Since \(f'(0)>\alpha e^{\beta \tau }\), we have \(\delta _0>0\). Consider the following problem

$$\begin{aligned} u_t^\varepsilon =u^\varepsilon _{xx}-\alpha u^\varepsilon +e^{-(\beta +\delta _0\varepsilon )\tau }\int _{\mathbb {R}}G(\tau ,y) f_\varepsilon (u^\varepsilon (t-\tau ,x-y))dy,\ \ t>0,\ x\in \mathbb {R}. \end{aligned}$$
(4.13)

Claim: The following statements are valid:

(i) For \(\varepsilon \in (0,1)\), \(e^{-(\beta +\delta _0\varepsilon )\tau }f_\varepsilon (u)=\alpha u\) admits exactly three roots \(u^{\varepsilon }_-<0<u^{\varepsilon }_+\) with the properties that \(u^{\varepsilon }_+\uparrow u^*\) and \(u^{\varepsilon }_-\uparrow 0\) as \(\varepsilon \downarrow 0\), and \(u_+^\varepsilon \downarrow 0\) and \(u_-^\varepsilon \downarrow -u^*_1<0\) as \(\varepsilon \uparrow 1\), where \(u_1^*\) is the unique positive root of the equation

$$\begin{aligned} f(u)-\alpha e^{(\beta +\delta _0)\tau } u=0, \end{aligned}$$

whose existence is guaranteed by the choice of \(\delta _0\) and (H).

(ii) There exist a unique \(c^\varepsilon \) and a unique monotone function \(U^\varepsilon \in C^2(\mathbb {R},\mathbb {R})\) with

$$\begin{aligned} U^\varepsilon (-\infty )=u^{\varepsilon }_+,\ \ U^\varepsilon (+\infty )=u^{\varepsilon }_-,\ \ U^\varepsilon (0)=0 \end{aligned}$$

such that \(u^\varepsilon (t,x):=U^\varepsilon (x-c^\varepsilon t)=U^\varepsilon (\xi )\) solves (4.13), that is,

$$\begin{aligned} U^\varepsilon _{\xi \xi }+c^\varepsilon U^\varepsilon _{\xi }-\alpha U^\varepsilon +e^{-(\beta +\delta _0\varepsilon )\tau }\int _{\mathbb {R}}G(\tau ,y)f_\varepsilon (U^\varepsilon (\xi -y+c^\varepsilon \tau ))dy=0,\ \ \xi \in \mathbb {R}. \end{aligned}$$
(4.14)

(iii) \(c^\varepsilon \) is continuous in \(\varepsilon \in (0,1)\), \(c^\varepsilon \leqslant c_0\), \(\lim _{\varepsilon \rightarrow 0}c^\varepsilon = c_0\), and there exists \(\varepsilon _1\in (0,1)\) such that \(c^{\varepsilon }<0\) for \(\varepsilon \in (\varepsilon _1,1)\).

Let us postpone the proof of the claim to Part 5. In the following we construct a lower fixed point of \(\mathcal {Q}\). For \(l^\varepsilon >0\) to be specified later, we define

$$\begin{aligned} \underline{U}^\varepsilon (\xi ):= \left\{ \begin{array}{ll} U^\varepsilon (\xi +l^\varepsilon ), &{} \xi <-l^\varepsilon ,\\ 0, &{} \xi \in [-l^\varepsilon ,0]. \end{array} \right. \end{aligned}$$

By the claim we know that for any \(c\in [0,c_0)\) there exists \(\varepsilon \in (0,1)\) such that

$$\begin{aligned} c^\varepsilon =c. \end{aligned}$$

Then with such an \(\varepsilon \) we show that \(\underline{U}^\varepsilon (\xi )\) is a sub-solution of (4.8) provided that \(l^\varepsilon \) is sufficiently large. Set

$$\begin{aligned} \mathcal {L}[\underline{U}^\varepsilon ](\xi ):=\underline{U}^\varepsilon _{\xi \xi }+c^\varepsilon \underline{U}^\varepsilon _\xi - \alpha \underline{U}^\varepsilon +\int _{-\infty }^0 \mathcal {K}(c^\varepsilon ,\xi ,x)f(\underline{U}^\varepsilon (x))dx. \end{aligned}$$

For \(\xi >-l^\varepsilon \), clearly \(\mathcal {L}[\underline{U}^\varepsilon ](\xi )=\int _{-\infty }^0 \mathcal {K}(c^\varepsilon ,\xi ,x)f(\underline{U}^\varepsilon (x))dx\geqslant 0\). For \(\xi <-l^\varepsilon \),

$$\begin{aligned} \mathcal {L}[\underline{U}^\varepsilon ](\xi )=&\ U_{\xi \xi }^\varepsilon (\xi +l^\varepsilon )+c^\varepsilon U_{\xi }^\varepsilon (\xi +l^\varepsilon )-\alpha U^\varepsilon (\xi +l^\varepsilon )\nonumber \\ &+\int _{-\infty }^{-l^\varepsilon } \mathcal {K}(c^\varepsilon ,\xi ,x)f(U^\varepsilon (x+l^\varepsilon ))dx. \end{aligned}$$
(4.15)

In view of (4.14), and due to \(U^\varepsilon (\xi )\leqslant 0\) for \(\xi \geqslant 0\) and the concavity of \(f_\varepsilon (u)\) for \(u\geqslant 0\), we have

$$\begin{aligned} &U_{\xi \xi }^\varepsilon (\xi +l^\varepsilon )+c^\varepsilon U_{\xi }^\varepsilon (\xi +l^\varepsilon )-\alpha U^\varepsilon (\xi +l^\varepsilon )\\ &\quad = -e^{-(\beta +\delta _0\varepsilon )\tau }\int _{\mathbb {R}}G(\tau ,y) f_\varepsilon \big (U^\varepsilon \big (\xi +l^\varepsilon -y+c^\varepsilon \tau \big )\big )dy\\ &\quad = -e^{-(\beta +\delta _0\varepsilon )\tau }\int _{\mathbb {R}} G\big (\tau ,\xi +l^\varepsilon +c^\varepsilon \tau -y\big ) f_\varepsilon \big (U^\varepsilon (y)\big )dy\\ &\quad \geqslant -e^{-(\beta +\delta _0\varepsilon )\tau }\int _{-\infty }^{-l^\varepsilon } G\big (\tau ,\xi +c^\varepsilon \tau -y\big ) f_\varepsilon (U^\varepsilon (y+l^\varepsilon ))dy\\ &\quad \geqslant -e^{-(\beta +\delta _0\varepsilon )\tau }\int _{-\infty }^{-l^\varepsilon }G\big (\tau ,\xi +c^\varepsilon \tau -y\big ) f(U^\varepsilon (y+l^\varepsilon ))dy. \end{aligned}$$

This, together with (4.15), implies that \(\mathcal {L}[\underline{U}^\varepsilon ](\xi )\geqslant 0\) for \(\xi <-l^\varepsilon \) provided that

$$\begin{aligned} \mathcal {K}(c^\varepsilon ,\xi ,x)\geqslant e^{-(\beta +\delta _0\varepsilon )\tau }G\big (\tau ,\xi +c^\varepsilon \tau -x\big ) \text{ for } \xi ,\ x<-l^\varepsilon , \end{aligned}$$

which, in view of the definitions of \(\mathcal {K}\) in (4.6) and G in (4.7), is equivalent to

$$\begin{aligned} e^{-\frac{(c^\varepsilon )^2}{4D}\tau -\frac{c^\varepsilon }{2D}(\xi -x)}\Big [e^{-\frac{(\xi -x)^2}{4D\tau }}-e^{-\frac{(\xi +x)^2}{4D\tau }}\Big ] \geqslant e^{- \delta _0\varepsilon \tau -\frac{(\xi +c^\varepsilon \tau -x)^2}{4D\tau }} \text{ for } \xi ,\ x<-l^\varepsilon . \end{aligned}$$

The above inequality can be simplified into the form

$$\begin{aligned} \Big (1-e^{- \delta _0\varepsilon \tau }-e^{-\frac{\xi x}{D\tau }}\Big )e^{-\frac{(\xi +c^\varepsilon \tau -x)^2}{4D\tau }}\geqslant 0 \ \text{ for } \xi ,\ x<-l^\varepsilon , \end{aligned}$$

which is true provided that \(1-e^{- \delta _0\varepsilon \tau }-e^{-\frac{(l^\varepsilon )^2}{D\tau }}\geqslant 0\), that is

$$\begin{aligned} l^\varepsilon \geqslant \sqrt{-D\tau \ln \big (1-e^{-\delta _0\varepsilon \tau }\big )}. \end{aligned}$$
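The threshold for \(l^\varepsilon \) can be checked numerically: at \(l^\varepsilon =\sqrt{-D\tau \ln (1-e^{-\delta _0\varepsilon \tau })}\) the factor \(1-e^{-\delta _0\varepsilon \tau }-e^{-(l^\varepsilon )^2/(D\tau )}\) vanishes, and it is positive for every larger \(l^\varepsilon \). The parameter values below are sample choices of ours, with \(f'(0)>\alpha e^{\beta \tau }\) so that \(\delta _0>0\).

```python
import math

# Hypothetical numerical check of the l^eps threshold. Sample parameters
# (our choice, not from the paper) with fp0 = f'(0) > alpha * exp(beta*tau).
D, alpha, beta, tau, fp0 = 1.0, 1.0, 0.5, 1.0, 5.0
eps = 0.5

# delta_0 = (1/(2*tau)) * ln( f'(0) / (alpha * exp(beta*tau)) )
delta0 = math.log(fp0 / (alpha * math.exp(beta * tau))) / (2.0 * tau)
assert delta0 > 0.0

# Smallest admissible l^eps from the displayed inequality.
l_eps = math.sqrt(-D * tau * math.log(1.0 - math.exp(-delta0 * eps * tau)))

def factor(l):
    # The quantity that must be nonnegative for the subsolution inequality.
    return 1.0 - math.exp(-delta0 * eps * tau) - math.exp(-l * l / (D * tau))

assert abs(factor(l_eps)) < 1e-12   # vanishes exactly at the threshold
assert factor(2.0 * l_eps) > 0.0    # and is positive beyond it
print(delta0, l_eps)
```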

Now we are ready to verify that \(\underline{U}^\varepsilon \) is a lower fixed point of \(\mathcal {Q}\). In view of \(\mathcal {L}[\underline{U}^\varepsilon ](\xi )\geqslant 0\) for \(\xi \in (-\infty , 0)\setminus \{-l^\varepsilon \}\), and \(\underline{U}^\varepsilon _\xi (-l^\varepsilon -0)\leqslant 0=\underline{U}^\varepsilon _\xi (-l^\varepsilon +0)\), we can easily deduce

$$\begin{aligned} \mathcal {Q}[\underline{U}^\varepsilon ](\xi )\geqslant \underline{U}^\varepsilon (\xi )\ \ \text{ for } \ \xi \leqslant 0. \end{aligned}$$

Define the iterative scheme

$$\begin{aligned} U^n(\xi ):=\mathcal {Q}[U^{n-1}](\xi )\ (n\geqslant 1), \ \text{ with } \ U^0(\xi )=\underline{U}^\varepsilon (\xi )\ \text{ for } \xi \leqslant 0. \end{aligned}$$

Then \(\{U^n\}\) is non-decreasing in \(n\) and non-increasing in \(\xi \leqslant 0\), with \(U^0\leqslant U^n\leqslant u^*\) for \(n\geqslant 1\). Being monotone in \(n\) and bounded, the sequence \(U^n\) converges pointwise; let \(U^\infty \in \mathcal {M}\) denote its limit. Then \(U^0\leqslant U^\infty \leqslant u^*\). By Lebesgue’s dominated convergence theorem, we infer that

$$\begin{aligned} \mathcal {Q}[U^\infty ](\xi )=U^\infty (\xi )\ \ \text{ for } \ \xi \leqslant 0, \end{aligned}$$

from which we can further infer that \(U^\infty (0)=0\) and \(U^\infty \in C^2((-\infty ,0))\) with \(U^\infty _{\xi }(-\infty )=0= U^\infty _{\xi \xi }(-\infty )\). Letting \(\xi \rightarrow -\infty \) in the first equation of (4.8), which \(U^\infty \) satisfies by (4.12), we obtain

$$\begin{aligned} -\alpha U^\infty (-\infty )+\lim _{\xi \rightarrow -\infty }\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) f(U^\infty (x))dx=0. \end{aligned}$$
(4.16)

Using the explicit form of \(\mathcal {K}\), we compute

$$\begin{aligned} &e^{\beta \tau }\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) f(U^\infty (x))dx\\ &\quad = \int _{-\infty }^0 G(\tau ,\xi +c\tau -x) f(U^\infty (x))dx-\int _{-\infty }^0 G(\tau ,\xi +c\tau +x) e^{\frac{c}{D}x}f(U^\infty (x))dx\\ &\quad = \int _{\xi }^{\infty } G(\tau ,x+c\tau ) f(U^\infty (\xi -x))dx-\int _{-\infty }^{\xi } G(\tau ,x+c\tau )e^{\frac{c}{D}(x-\xi )} f(U^\infty (x-\xi ))dx\\ &\quad =: I(\xi )-II(\xi ). \end{aligned}$$

By Lebesgue’s dominated convergence theorem, we obtain, for any \(\xi _0<0\),

$$\begin{aligned} f(U^\infty (-\infty ))\int _{-\infty }^{\infty } G(\tau ,x) dx\geqslant \limsup _{\xi \rightarrow -\infty } I(\xi )\geqslant \liminf _{\xi \rightarrow -\infty } I(\xi ) \geqslant f(U^\infty (-\infty ))\int _{\xi _0}^{\infty } G(\tau ,x) dx, \end{aligned}$$

and

$$\begin{aligned} 0\leqslant \limsup _{\xi \rightarrow -\infty } II(\xi ) \leqslant f(u^*)\limsup _{\xi \rightarrow -\infty }\int _{-\infty }^\xi G(\tau ,x+c\tau ) dx=0, \end{aligned}$$

and hence, (4.16) becomes \(-\alpha U^\infty (-\infty )+e^{-\beta \tau }f(U^\infty (-\infty ))=0\), from which we see that \(U^{\infty }(-\infty )=0\) or \(u^*\). However, \(U^\infty (-\infty )\geqslant U^0(-\infty )=u^\varepsilon _+>0\). Therefore, \(U^{\infty }(-\infty )=u^*\).

Thus we have shown that \(U^\infty \) is a solution of (4.8). By the elliptic strong maximum principle, we infer that \(U^\infty (\xi )\) is decreasing in \(\xi \leqslant 0\), and positive for \(\xi < 0\).

Part 2. Non-existence when \(c \geqslant c_0\).

We employ a sliding argument. It follows from (4.10) and (4.11) that for any \(c\geqslant c_0\), there exists \(\gamma _1>0\) such that \(c=\frac{\lambda (\gamma _1)}{\gamma _1}\). Consequently,

$$\begin{aligned} \gamma _1^2-c\gamma _1-\alpha +f'(0)e^{(D\gamma _1^2-\beta -c\gamma _1)\tau }=0. \end{aligned}$$
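This characteristic equation also determines the minimal speed, \(c_0=\min _{\gamma>0}\lambda (\gamma )/\gamma \). As a numerical illustration (all parameter values and the stand-in for \(f'(0)\) below are assumed, chosen only so that (H) holds), one can solve the equation for \(c\) at each \(\gamma \) and minimize:

```python
import math

# Numerical sketch of the dispersion relation behind c_0.  All parameter
# values are ASSUMED, chosen so that (H) holds: f'(0) > alpha*exp(beta*tau).
D, alpha, beta, tau = 1.0, 1.0, 0.5, 1.0
f0p = 2.0 * alpha * math.exp(beta * tau)          # stands for f'(0)

def F(gamma, c):
    """gamma^2 - c*gamma - alpha + f'(0)*exp((D*gamma^2 - beta - c*gamma)*tau)."""
    return gamma**2 - c*gamma - alpha + f0p * math.exp((D*gamma**2 - beta - c*gamma) * tau)

def c_of_gamma(gamma):
    """F is strictly decreasing in c, with F(gamma, 0) > 0 under (H), so
    bisection finds the unique root c = lambda(gamma)/gamma."""
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(gamma, mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# c_0 = min_{gamma>0} lambda(gamma)/gamma, approximated on a grid
gammas = [0.05 * k for k in range(1, 400)]
c0, g0 = min((c_of_gamma(g), g) for g in gammas)
print(c0, g0)          # minimal speed and (approximate) minimizing gamma
```

At the minimizer the characteristic equation holds, so \(F(\gamma _1,c)=0\) with \(c=c_0\) is recovered to numerical precision; for \(c>c_0\) the same bisection produces the root \(\gamma _1\) used in the supersolution below.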

Next we show that for any \(\epsilon >0\), \(\overline{U}^\epsilon (\xi ):=\epsilon e^{-\gamma _1\xi }\) is a supersolution of (4.8). Indeed, thanks to the inequalities

$$\begin{aligned} f(u)\leqslant f'(0)u\ \text{ for } u\geqslant 0,\ \ \ \mathcal {K}(c,\xi ,x)\leqslant e^{-\beta \tau }G(\tau ,\xi -x+c\tau )\ \ \text{ for } \xi \leqslant 0 \text{ and } x\leqslant 0, \end{aligned}$$

we have

$$\begin{aligned} \overline{U}^\epsilon _{\xi \xi }+c\overline{U}^\epsilon _{\xi }-\alpha \overline{U}^\epsilon +\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) f(\overline{U}^\epsilon (x))dx&\leqslant \overline{U}^\epsilon _{\xi \xi }+c\overline{U}^\epsilon _{\xi }-\alpha \overline{U}^\epsilon +\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) f'(0)\overline{U}^\epsilon (x)dx\\ &\leqslant \overline{U}^\epsilon _{\xi \xi }+c\overline{U}^\epsilon _{\xi }-\alpha \overline{U}^\epsilon +e^{-\beta \tau }f'(0)\int _{\mathbb {R}} G(\tau ,\xi -x+c\tau )\overline{U}^\epsilon (x)dx\\ &= \overline{U}^\epsilon (\xi )\Big [\gamma _1^2-c\gamma _1-\alpha +f'(0)e^{(D\gamma _1^2-\beta -c\gamma _1)\tau }\Big ]=0. \end{aligned}$$
(4.17)

Suppose, for the sake of contradiction, that U is a solution of (4.8). Define \(\tilde{W}^\epsilon (\xi ):=\overline{U}^\epsilon (\xi )-U(\xi )\) for \(\xi \leqslant 0\). Since \(\tilde{W}^\epsilon (-\infty )=\infty \) and \(\tilde{W}^\epsilon (0)=\epsilon >0\), we may decrease \(\epsilon \) to a critical value at which \(\tilde{W}^\epsilon \geqslant 0\) on \((-\infty ,0]\) and \(\tilde{W}^\epsilon (\xi _*)=0\) for some \(\xi _*<0\). Hence, by (4.17) we can infer that for \(\xi <0\),

$$\begin{aligned} \tilde{W}^\epsilon _{\xi \xi }+c\tilde{W}^\epsilon _{\xi }-\alpha \tilde{W}^\epsilon \leqslant -\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) \big [f\big (\overline{U}^\epsilon (x)\big )-f(U(x))\big ]dx\leqslant 0, \end{aligned}$$

where the monotonicity of f(s) in \(s\geqslant 0\) and Lemma 4.1 (iii) are used. By the elliptic strong maximum principle, we infer that \(\tilde{W}^\epsilon (\xi )\equiv 0\) for \(\xi \leqslant 0\), contradicting \(\tilde{W}^\epsilon (0)=\epsilon >0\). The non-existence is thus proved.

Part 3. Uniqueness when \(c\in [0,c_0)\).

Fix \(c\in [0,c_0)\). Assume that \(U^1\) and \(U^2\) are two positive solutions of (4.8). Then \(U_{\xi }^{i}(\xi )<0\) for \(\xi \leqslant 0\) with \(U^{i}(-\infty )=u^*\) and \(U^{i}(0)=0\) for \(i=1,\ 2\). Hence, we can define the number

$$\begin{aligned} \rho ^*:=\inf \{\rho \geqslant 1:\ \rho U^1(\xi )> U^2(\xi ),\ \ \forall \xi <0 \}. \end{aligned}$$

We show that \(\rho ^*=1\). Otherwise, \(\rho ^*>1\). Define \(\tilde{V}:=\rho ^*U^1-U^2\). Then \(\tilde{V}\geqslant 0\), \(\tilde{V}(0)= 0\) and \(\tilde{V}(-\infty )=(\rho ^*-1)u^*>0\). By the concavity of f and \(\rho ^*>1\), we have \(\rho ^*f\big (U^{1}\big )\geqslant f\big (\rho ^*U^{1}\big )\), and hence

$$\begin{aligned} \tilde{V}_{\xi \xi }+c\tilde{V}_{\xi }-\alpha \tilde{V} \leqslant \int _{-\infty }^0 \mathcal {K}(c,\xi ,x)\big [f\big (U^{2}(x)\big ) -f\big (\rho ^* U^{1}(x)\big )\big ]dx\leqslant 0,\ \xi <0. \end{aligned}$$

We may now use the elliptic strong maximum principle and Hopf’s boundary lemma to deduce that \(\tilde{V}\geqslant \delta U^2\) in \((-\infty ,0]\) for some small \(\delta >0\), which implies that

$$\begin{aligned} \frac{\rho ^*}{1+\delta } U^1(\xi ) \geqslant U^2(\xi )\ \ \forall \xi <0, \end{aligned}$$

which contradicts the definition of \(\rho ^*\). Therefore, \(\rho ^*=1\) and \(U^1(\xi )\geqslant U^2(\xi )\) for \(\xi \leqslant 0\). Exchanging the roles of \(U^1\) and \(U^2\), we obtain \(U^2(\xi )\geqslant U^1(\xi )\) for \(\xi \leqslant 0\). Therefore, \(U^1\equiv U^2\) on \((-\infty ,0]\), and the uniqueness is proved.
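The inequality \(\rho ^*f\big (U^{1}\big )\geqslant f\big (\rho ^*U^{1}\big )\) used above (and again in Part 4 with \(M^*\) in place of \(\rho ^*\)) is the standard superhomogeneity of concave functions vanishing at the origin; for completeness, for \(\rho \geqslant 1\) and \(u\geqslant 0\):

```latex
% concavity of f and f(0)=0: evaluate f at the convex combination
% u = \tfrac{1}{\rho}\,(\rho u) + \bigl(1-\tfrac{1}{\rho}\bigr)\cdot 0
f(u)\;\geqslant\;\tfrac{1}{\rho}\,f(\rho u)+\Bigl(1-\tfrac{1}{\rho}\Bigr)f(0)
     \;=\;\tfrac{1}{\rho}\,f(\rho u),
\qquad\text{hence}\qquad \rho\,f(u)\;\geqslant\;f(\rho u).
```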

Part 4. Monotonicity and continuity of \(U^c\) in \(c\in [0,c_0)\) with \(\lim _{c\uparrow c_0}U^c_{\xi }(0)=0\).

First, we prove the monotonicity of \(U^c\) in \(c\in [0,c_0)\). Let \(0\leqslant c_1<c_2<c_0\). We see that \(U^{c_i}_\xi (\xi )<0\) for \(\xi \leqslant 0\) with \(U^{c_i}(-\infty )=u^*\) and \(U^{c_i}(0)=0\), \(i=1,\, 2\). Hence, we may define

$$\begin{aligned} M^*:=\inf \{M \geqslant 1:\ M U^{c_1}(\xi )> U^{c_2}(\xi ),\ \ \forall \xi <0 \}. \end{aligned}$$

We show that \(M^*=1\). Otherwise, \(M^*>1\). Define \(\Psi :=M^*U^{c_1}-U^{c_2}\). Then \(\Psi \geqslant 0\), \(\Psi (0)= 0\) and \(\Psi (-\infty )=(M^*-1)u^*>0\). By direct computations, we have

$$\begin{aligned} \Psi _{\xi \xi }+c_1 \Psi _{\xi }-\alpha \Psi&= (c_2-c_1) U^{c_2}_{\xi }+\int _{-\infty }^0\mathcal {K}(c_2,\xi ,x)f(U^{c_2}(x))dx-\int _{-\infty }^0\mathcal {K}(c_1,\xi ,x)M^*f(U^{c_1}(x))dx\\ &\leqslant (c_2-c_1) U^{c_2}_{\xi }+\int _{-\infty }^0\mathcal {K}(c_2,\xi ,x)f(U^{c_2}(x))dx-\int _{-\infty }^0\mathcal {K}(c_1,\xi ,x)f\big (M^*U^{c_1}(x)\big )dx\\ &\leqslant (c_2-c_1) U^{c_2}_{\xi }+\int _{-\infty }^0\mathcal {K}(c_1,\xi ,x)\big [f(U^{c_2}(x))-f\big (M^*U^{c_1}(x)\big )\big ]dx \leqslant 0, \end{aligned}$$

where the concavity of f(s) in \(s\geqslant 0\), the monotonicity of \( U^{c_2}(\xi )\) in \(\xi \leqslant 0\) and Lemma 4.1 (iii) are used. By a similar argument as in Part 3, we obtain a contradiction with the definition of \(M^*\). Thus \(M^*=1\) and \(U^{c_1}(\xi )\geqslant U^{c_2}(\xi )\) for \(\xi \leqslant 0\). Repeating the above argument with \(M^*=1\), and using the uniqueness of the solution to (4.8), the elliptic strong maximum principle and Hopf’s boundary lemma, we obtain

$$\begin{aligned} U^{c_1}(\xi )> U^{c_2}(\xi )\ \text{ for } \xi<0\ \text{ and } \ U^{c_1}(0)=0= U^{c_2}(0)\ \text{ with } \ U^{c_1}_\xi (0)< U^{c_2}_\xi (0), \end{aligned}$$

which completes the proof of the monotonicity of \(U^c\) in \(c\in [0,c_0)\).

Next, we employ a contradiction argument to show that \(\lim _{c\uparrow c_0}U^c_{\xi }(0)=0\); note that the limit exists by the monotonicity just established. Suppose that \(\lim _{c\uparrow c_0}U^c_{\xi }(0)<0\). Then, as \(c\uparrow c_0\), \(U^c(\xi )\) converges to some non-increasing function \(U^*(\xi )\), and \(U^*\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} U^*_{\xi \xi }+c_0 U^*_{\xi }-\alpha U^* +\int _{-\infty }^0 \mathcal {K}(c_0,\xi ,x) f(U^*(x))dx=0, &{} \xi<0,\\ U^*_\xi (0)<0=U^*(0). \end{array} \right. \end{aligned}$$

By the monotonicity of \(U^*\) we can infer that \(U^*(-\infty )=u^*\) as in Part 1. Therefore, \(U^*\) is a solution of (4.8) with \(c=c_0\), contradicting the non-existence result of Part 2 for \(c\geqslant c_0\).

Finally, let us prove the continuity of \(U^c\) in \(c\in [0,c_0)\). Fix \(\bar{c} \in (0,c_0)\) and choose a sequence \(\{c_n\}\subset (0,c_0)\) with \(c_n \nearrow \bar{c}\) as \(n\rightarrow \infty \). It follows from Parts 3 and 4 that \(U^{c_n}\), the unique positive solution of (4.8) with \(c=c_n\), is decreasing in n and \(U^{\bar{c}}(\xi )\leqslant U^{c_n}(\xi )\leqslant u^*\) for \(\xi \leqslant 0\). Using standard regularity theory for elliptic equations (up to the boundary), we see that

$$\begin{aligned} U^{c_n}\rightarrow Z\ \ \text{ locally } \text{ uniformly } \text{ in } C^2((-\infty ,0]) \text{ as } n\rightarrow \infty , \end{aligned}$$

where Z is a positive solution of (4.8) with \(c=\bar{c}\) and \(Z\geqslant U^{\bar{c}}\). The uniqueness established in Part 3 then yields \(Z \equiv U^{\bar{c}}\).

Similarly, fix \(\underline{c}\in [0,c_0)\) and choose a sequence \(\{\tilde{c}_n\}\subset (0,c_0)\) with \(\tilde{c}_n \searrow \underline{c}\) as \(n\rightarrow \infty \). We can obtain that

$$\begin{aligned} U^{\tilde{c}_n}\rightarrow U^{\underline{c}}\ \ \text{ locally } \text{ uniformly } \text{ in } C^2((-\infty ,0]) \text{ as } n\rightarrow \infty , \end{aligned}$$

where \(U^{\tilde{c}_n}\) and \(U^{\underline{c}}\) are, respectively, the unique positive solution of (4.8) with \(c=\tilde{c}_n\) and \(c=\underline{c}\). Thus the continuity of \(U^c\) in \(c\in [0,c_0)\) is proved.

Part 5. Proof of the claim.

Statement (i) follows from direct computations. Statement (ii) follows from [27]. As for statement (iii), the continuity of \(c^\varepsilon \) follows from [3]. It then remains to show that

$$\begin{aligned} c^{\varepsilon }\leqslant c_0, \quad \lim _{\varepsilon \rightarrow 0}c^\varepsilon = c_0,\quad \lim _{\varepsilon \rightarrow 1} c^{\varepsilon }<0. \end{aligned}$$

Indeed, by the definition of \(c_0\), we see that there exists \(\gamma ^*>0\) such that for any \(M>0\), the function \(\bar{u}(t,x):=Me^{\gamma ^*(c_0t-x)}\) satisfies

$$\begin{aligned} -\bar{u}_t+\bar{u}_{xx}-\alpha \bar{u}+e^{-\beta \tau }f'(0) \int _\mathbb {R}G(\tau ,y) \bar{u}(t-\tau ,x-y)dy=0 \end{aligned}$$

for all \((t,x)\). Choose M appropriately so that \(\bar{u}(0,x)\geqslant U^{\varepsilon }(x)\) with \(\bar{u}(0,x_0)= U^{\varepsilon }(x_0)\) for some \(x_0\). Since \(f_\varepsilon (s)\leqslant f(s)\leqslant f'(0)s\) for \(s\geqslant 0\), the comparison principle yields \(U^{\varepsilon }(x-c^\varepsilon t)\leqslant \bar{u}(t,x)\) for \(t>0\) and \(x\in \mathbb {R}\); hence \(c^\varepsilon \leqslant c_0\), and the limit \(c^0:=\lim _{\varepsilon \rightarrow 0}c^{\varepsilon }\) satisfies \(c^0\leqslant c_0\). Next, we show that \(c^0 \geqslant c_0\). Denote by \(U^0\) the limit (along a subsequence) of \(U^\varepsilon \) as \(\varepsilon \rightarrow 0\); then \((U^0, c^0)\) satisfies (4.14) with \(\varepsilon =0\). Since \(U^0(0)=\frac{u^*}{2}\), we have \(U^0(-\infty )=u^*\) and \(U^0(\infty )=0\). This implies that \(U^0(x-c^0t)\) is a traveling wave solution of \(u_t=u_{xx}-\alpha u+e^{-\beta \tau } \int _\mathbb {R}G(\tau ,y) f(u(t-\tau ,x-y))dy\), for which the minimal wave speed has been shown to be \(c_0\) (see e.g. [11]). Hence, \(c^0\geqslant c_0\). Finally, we show that \(\lim _{\varepsilon \rightarrow 1} c^{\varepsilon }<0\). By [3], \(c^{\varepsilon }\) has the same sign as the integral \(\int _{u_-^\varepsilon }^{u_+^\varepsilon } \left[ e^{-(\beta +\delta _0 \varepsilon )\tau }f_\varepsilon (s)-\alpha s\right] ds\). Due to the choice of \(\delta _0\) and \(f'(0)>\alpha e^{\beta \tau }\), the equation \(f(u)/u=\alpha e^{(\beta +\delta _0)\tau }\) has a unique positive root \(u_1^*\). Moreover, it is easily seen that, as \(\varepsilon \rightarrow 1\), we have \(u_+^\varepsilon \rightarrow 0\), \(u_-^\varepsilon \rightarrow -u_1^*\) and

$$\begin{aligned} \int _{u_-^\varepsilon }^{u_+^\varepsilon } \left[ e^{-(\beta +\delta _0 \varepsilon )\tau }f_\varepsilon (s)-\alpha s\right] ds\rightarrow \int _0^{u_1^*}\left[ -e^{-(\beta +\delta _0)\tau }f(s)+\alpha s\right] ds. \end{aligned}$$

Since \(-e^{-(\beta +\delta _0)\tau }f(s)+\alpha s<0\) for \(s\in (0,u_1^*)\), the limit integral is negative, and therefore \(c^{\varepsilon }<0\) when \(\varepsilon \) is close to 1.

Based on Lemma 4.1 and Theorem 4.2, we obtain the following result.

Proposition 4.3

Assume that (H) holds. Let \(c_0\) be given in (4.10) and \(U^c\) be the unique positive solution of problem (4.8) with \(c\in [0,c_0)\). Set

$$\begin{aligned} W^c(\xi ):=\int _{-\infty }^0 \mathcal {K}(c,\xi ,x) f(U^c(x))dx; \end{aligned}$$

then \(W^c(\xi )\) is non-increasing in \(\xi \leqslant 0\), with \(W^c(-\infty )=f(u^*)e^{-\beta \tau }\) and \(W^c(0)=0>W^c_\xi (0)\). Further, \(W^c(\xi )=v(\tau ,\xi )\), where \(v(s,\xi )\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} v_s=Dv_{\xi \xi }+cv_\xi - \beta v, &{} s\in (0,\tau ],\ \xi <0,\\ v(s,0)=0, &{} s\in (0,\tau ],\\ v(0,\xi )=f(U^c(\xi )), &{} \xi \leqslant 0. \end{array} \right. \end{aligned}$$
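Proposition 4.3 thus says that \(W^c\) is produced by running a linear drift–diffusion–decay equation for time \(\tau \) from the initial datum \(f(U^c)\). The following finite-difference sketch illustrates this; since \(U^c\) has no closed form, the profile, the birth function and all parameter values below are assumed stand-ins. It reproduces the stated qualitative properties: \(v(\tau,0)=0\), monotonicity in \(\xi \), and the far-field value \(f(u^*)e^{-\beta \tau }\).

```python
import math

# Explicit finite-difference sketch for v_s = D v_xx + c v_x - beta v on xi<0,
# v(s,0)=0, v(0,xi)=f(U^c(xi)).  U^c, f and all parameters are ASSUMED.
D, c, beta, tau, ustar = 1.0, 0.5, 0.5, 1.0, 1.0

def f(u):            # assumed concave birth function with f(0)=0
    return 2.0 * u / (1.0 + u)

def Uc(xi):          # assumed decreasing profile with Uc(0)=0, Uc(-inf)=ustar
    return ustar * (1.0 - math.exp(xi))

L, N = 10.0, 200
dx = L / N
dt = 1e-3            # satisfies the explicit-scheme stability bound dt*(2D/dx^2+beta) < 1
xi = [-L + i * dx for i in range(N + 1)]
v = [f(Uc(x)) for x in xi]          # v(0, xi) = f(U^c(xi)), decreasing in xi
v[-1] = 0.0                         # Dirichlet condition at xi = 0

s = 0.0
while s < tau - 1e-12:
    s += dt
    new = v[:]
    new[0] = f(ustar) * math.exp(-beta * s)   # far-field value, cf. (4.21)
    for i in range(1, N):
        diff = D * (v[i+1] - 2*v[i] + v[i-1]) / dx**2
        adv  = c * (v[i+1] - v[i-1]) / (2*dx)
        new[i] = v[i] + dt * (diff + adv - beta * v[i])
    new[-1] = 0.0
    v = new

# W^c(xi) ~ v(tau, xi): vanishes at xi = 0, non-increasing in xi
print(v[0], v[-1])
```

The scheme is monotone at this resolution (all update coefficients are positive), so the spatial monotonicity of the initial datum is preserved, mirroring the comparison-principle argument used for (4.20) below.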

Theorem 4.4

Assume that (H) holds. Let \(c_0\) be given in (4.10). For each \(\mu >0\), there exists a unique \(c^*=c^*_\mu \in (0, c_0)\) such that \(-\mu U^{c^*}_\xi (0)=c^*\), where \(U^{c^*}\) is the unique positive solution of (4.8) with c replaced by \(c^*\). Moreover, \(c^*_\mu \) is increasing in \(\mu \) and

$$\begin{aligned} \lim _{\mu \rightarrow \infty }c^*_\mu =c_0. \end{aligned}$$

Proof

By Theorem 4.2, for each \(c\in [0,c_0)\), problem (4.8) admits a unique solution \(U^c(\xi )>0\) for \(\xi <0\) and \(U^c_\xi (0)<0\). Consider the function

$$\begin{aligned} \mathcal {J}(c)=\mathcal {J}_\mu (c):=U^c_\xi (0)+\frac{c}{\mu }\ \ \text{ for } \ c\in [0,c_0). \end{aligned}$$

It follows from Theorem 4.2 again that \(\mathcal {J}(c)\) is continuous and strictly increasing in \(c\in [0, c_0)\), and \(\lim _{c\uparrow c_0}\mathcal {J}(c)=\frac{c_0}{\mu }>0\). Moreover, \(\mathcal {J}(0)=U^0_\xi (0)<0\). Thus there exists a unique \(c^*=c^*_\mu \in (0, c_0)\) such that \(\mathcal {J}(c^*)=0\), which means that

$$\begin{aligned} -U^{c^*}_\xi (0)=\frac{c^*}{\mu }. \end{aligned}$$

Finally, viewing \(\big (c^*_\mu , \frac{c^*_\mu }{\mu }\big )\) as the unique intersection point of the decreasing curve \(y=-U^c_\xi (0)\) and the increasing line \(y=\frac{c}{\mu }\) in the \((c,y)\)-plane, it is clear that \(c^*_\mu \) increases to \(c_0\) as \(\mu \) increases to \(\infty \). The proof is complete.
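The intersection argument can be visualized numerically. Since \(U^c\) is not explicit, the decreasing curve \(y=-U^c_\xi (0)\) is replaced below by an assumed surrogate \(a(c)\) sharing its qualitative features from Theorem 4.2 (continuous and decreasing on \([0,c_0)\), vanishing as \(c\uparrow c_0\)); the sketch recovers a unique \(c^*_\mu \), increasing in \(\mu \) and approaching \(c_0\).

```python
# Toy version of the selection mechanism of Theorem 4.4: c*_mu is the unique
# intersection of a decreasing curve y = a(c) (an ASSUMED surrogate for
# -U^c_xi(0), which has no closed form) with the line y = c/mu.
c0 = 2.0                              # illustrative value of the minimal speed

def a(c):                             # decreasing on [0, c0], a(c0) = 0
    return 1.5 * (1.0 - c / c0) ** 2

def c_star(mu):
    """Bisection for the unique root of J(c) = c/mu - a(c) on [0, c0)."""
    lo, hi = 0.0, c0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / mu < a(mid):         # still below the curve: move right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

speeds = [c_star(mu) for mu in (0.5, 1.0, 5.0, 50.0, 5000.0)]
print(speeds)        # increasing in mu, staying below and approaching c0
```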

4.2 Asymptotic spreading speed

In order to determine the spreading speed, we will construct some suitable sub- and supersolutions based on the semi-waves.

Theorem 4.5

(Spreading speed) Assume that (H) holds and that spreading happens for a solution \((u,g,h)\) of (P) as in Theorem 3.6 (i). Let \((c^*, U^{c^*})\) be the unique solution of (4.8) with \(-\mu U^{c^*}_\xi (0)=c^*\) and \(W^{c^*}(\xi ):=\int _{-\infty }^0 \mathcal {K}(c^*,\xi ,x) f\big (U^{c^*}(x)\big )dx\). Then

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{h(t)}{t}=-\lim _{t\rightarrow \infty }\frac{g(t)}{t}=c^*. \end{aligned}$$
(4.18)

Proof

We will prove (4.18) for h(t) only, since the proof for g(t) is parallel.

For any given small \(\epsilon >0\), we define

$$\begin{aligned}&\bar{h}(t):=(1+2\epsilon )c^*t+L\ \text{ for } t\geqslant 0,\\&\bar{u}(t,x):=(1+2\epsilon )U^{c^*}(x-\bar{h}(t))\ \text{ for } t\geqslant 0,\ x\in (-\infty ,\bar{h}(t)],\\&\bar{w}(t,x):=(1+2\epsilon )W^{c^*}(x-\bar{h}(t))\ \text{ for } t\geqslant 0,\ x\in (-\infty ,\bar{h}(t)], \end{aligned}$$

where \(L>0\) is a constant to be determined.

Recall that \(W^{c^*}(\xi )=V(\tau ,\xi )\) where \(V(s,\xi )\) is the solution of

$$\begin{aligned} \left\{ \begin{array}{ll} V_s=DV_{\xi \xi }+c^*V_\xi - \beta V, &{} s\in (0,\tau ],\ \xi <0,\\ V(s,0)=0, &{} s\in (0,\tau ],\\ V(0,\xi )=f(U^{c^*}(\xi )), &{} \xi \leqslant 0. \end{array} \right. \end{aligned}$$
(4.19)

Thanks to the monotonicity of \(U^{c^*}(\xi )\) in \(\xi \leqslant 0\) and f(u) in \(u\geqslant 0\), respectively, the parabolic comparison principle implies that the solution \(V(s,\xi )\) of (4.19) satisfies

$$\begin{aligned} V_\xi (s,\xi )\leqslant 0\ \text{ for } \ \xi \leqslant 0, s\in (0,\tau ]. \end{aligned}$$
(4.20)

Moreover, it follows from \(U^{c^*}(-\infty )=u^*\) and (4.19) that

$$\begin{aligned} V(s,-\infty )=f(u^*)e^{-\beta s}\ \text{ for } \ s\in [0,\tau ]. \end{aligned}$$
(4.21)

Define, for any fixed \(t>0\),

$$\begin{aligned} \bar{v}(s,x;t):=(1+2\epsilon )V(s,x-\bar{h}(s+t-\tau )). \end{aligned}$$

Then clearly \(\bar{w}(t,x)=\bar{v}(\tau ,x; t)\); writing \(\bar{v}(s,x):=\bar{v}(s,x; t)\) for brevity, we have, for \(x<\bar{h}(s+t-\tau )\) and \(s\in (0, \tau ]\),

$$\begin{aligned} \bar{v}_s-D\bar{v}_{xx}+\beta \bar{v}&= (1+2\epsilon )(V_s-\bar{h}'V_\xi -DV_{\xi \xi }+\beta V)\\ &= (1+2\epsilon )(V_s-(1+2\epsilon )c^*V_\xi -DV_{\xi \xi }+\beta V)\\ &= -2\epsilon (1+2\epsilon )c^*V_\xi \geqslant 0. \end{aligned}$$

Moreover,

$$\begin{aligned} \bar{v}(s,x)\geqslant 0 \text{ for } x\leqslant \bar{h}(s+t-\tau ),\, s\in (0,\tau ], \end{aligned}$$

and

$$\begin{aligned} \bar{v}(0,x)&= (1+2\epsilon )V(0,x-\bar{h}(t-\tau ))=(1+2\epsilon )f(U^{c^*}(x-\bar{h}(t-\tau )))\\ &\geqslant f\big ((1+2\epsilon )U^{c^*}(x-\bar{h}(t-\tau ))\big )\\ &= f\big (\bar{u}(t-\tau , x)\big )\ \text{ for } x\leqslant \bar{h}(t-\tau ). \end{aligned}$$

As before, a comparison argument involving the corresponding ODE problem shows that there is \(T_1>\tau \) large enough such that

$$\begin{aligned} u(t+T_1,x)\leqslant (1+\epsilon )u^*\ \text{ for } \ t\geqslant 0,\ x\in [g(T_1+t),h(T_1+t)]. \end{aligned}$$

As \(U^{c^*}(-\infty )=u^*\), there exists \(L>0\) large such that \(\bar{h}(0)=L>h(T_1+\tau )-g(T_1+\tau )\), and for \(s\in [0,\tau ]\), \(x\in [g(T_1+s),h(T_1+s)]\),

$$\begin{aligned} \bar{u}(s,x)&\geqslant (1+2\epsilon )U^{c^*}(x-L) \\ &\geqslant (1+2\epsilon )U^{c^*}\big (h(T_1+\tau )-L\big ) \\ &\geqslant (1+\epsilon )u^*\\ &\geqslant u(T_1+s,x). \end{aligned}$$

For \(t\geqslant 0\), we have \(\bar{u}\big (t,\bar{h}(t)\big )=0\), \(\bar{u}(t,g(T_1+t))>0=u(t+T_1,g(T_1+t))\), and

$$\begin{aligned} \bar{h}'(t)=(1+2\epsilon )c^*=-\mu (1+2\epsilon )U^{c^*}_\xi (0)=-\mu \bar{u}_x(t,\bar{h}(t)). \end{aligned}$$

A direct calculation shows, for \(t\geqslant \tau \) and \(x\in (g(T_1+t),\bar{h}(t))\),

$$\begin{aligned} \bar{u}_t-\bar{u}_{xx}+\alpha \bar{u}-\bar{w}(t,x)&= (1+2\epsilon ) \big [-(1+2\epsilon )c^*U^{c^*}_{\xi }-U^{c^*}_{\xi \xi }+\alpha U^{c^*}-W^{c^*}\big ] \\ &= -2\epsilon (1+2\epsilon )c^*U^{c^*}_{\xi }\geqslant 0. \end{aligned}$$

We may now use the comparison principle to conclude that

$$\begin{aligned} u(t+T_1, x)\leqslant \bar{u}(t, x),\ \ h(t + T_1) \leqslant \bar{h}(t)\ \text{ for } \ t \geqslant \tau ,\ x\in [g(t+T_1),h(t + T_1)]. \end{aligned}$$

As a consequence, we have

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{h(t)}{t}\leqslant \limsup _{t\rightarrow \infty }\frac{\bar{h}(t-T_1)}{t}=(1+2\epsilon )c^*. \end{aligned}$$

Letting \(\epsilon \rightarrow 0\), we immediately obtain

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{h(t)}{t}\leqslant c^*. \end{aligned}$$
(4.22)

To obtain a lower bound for h(t)/t, we define, for small \(\epsilon >0\),

$$\begin{aligned}&\underline{h}(t):=(1-2\epsilon )c^*t+h(0)\ \text{ for } t\geqslant 0,\\&\underline{u}(t,x):=(1-2\epsilon )U^{c^*}(x-\underline{h}(t))\ \text{ for } t\geqslant 0,\ x\in [g(0),\underline{h}(t)],\\&\underline{w}(t,x):=(1-2\epsilon )W^{c^*}(x-\underline{h}(t))\ \text{ for } t\geqslant 0,\ x\in [g(0),\underline{h}(t)]. \end{aligned}$$

Recall that \(W^{c^*}(\xi )=V(\tau ,\xi )\), where \(V(s,\xi )\) is the solution of problem (4.19). If we define

$$\begin{aligned} \underline{v}(s,x;t):=(1-2\epsilon )V(s, x-\underline{h}(s+t-\tau )), \end{aligned}$$

then \(\underline{w}(t,x)=\underline{v}(\tau ,x; t)\), and \(\underline{v}(s,x):=\underline{v}(s,x; t)\) satisfies, for \(s\in (0,\tau ]\) and \(x<\underline{h}(s+t-\tau )\),

$$\begin{aligned} \underline{v}_s-D\underline{v}_{xx}+\beta \underline{v}&= (1-2\epsilon )(V_s-\underline{h}'V_\xi -DV_{\xi \xi }+\beta V)\\ &= (1-2\epsilon )(V_s-(1-2\epsilon )c^*V_\xi -DV_{\xi \xi }+\beta V)\\ &= 2\epsilon (1-2\epsilon )c^*V_\xi \leqslant 0. \end{aligned}$$

Moreover, due to the definition of \(\underline{v}\), (4.20) and (4.21),

$$\begin{aligned} \underline{v}(s,\underline{h}(s+t-\tau ))=0,\; \underline{v}(s, g(0))\leqslant (1-2\epsilon )V(s,-\infty )=(1-2\epsilon )f(u^*)e^{-\beta s} \text{ for } s\in (0,\tau ], \end{aligned}$$

and

$$\begin{aligned} \underline{v}(0,x)&= (1-2\epsilon )V(0,x-\underline{h}(t-\tau ))=(1-2\epsilon )f(U^{c^*}(x-\underline{h}(t-\tau )))\\ &\leqslant f\big ((1-2\epsilon )U^{c^*}(x-\underline{h}(t-\tau ))\big )\\ &= f\big (\underline{u}(t-\tau , x)\big )\ \text{ for } x\leqslant \underline{h}(t-\tau ). \end{aligned}$$

Since spreading happens, there is \(T_2\gg 1\) such that \(h(T_2)>(1-2\epsilon )c^*\tau +h(0)\),

$$\begin{aligned} u(t+T_2,x)\geqslant (1-\epsilon )u^* \ \text{ for } t\geqslant 0, x\in [g(0),\underline{h}(\tau )], \end{aligned}$$

and for \(t\geqslant T_2\), due to (3.20),

$$\begin{aligned} w(s, x; t)\geqslant (1-\epsilon )f(u^*)e^{-\beta s}\ \ \text{ for } s\in [0,\tau ],\ x\in [g(0),\underline{h}(\tau )]. \end{aligned}$$

Clearly \(h(T_2+s)\geqslant h(T_2)>\underline{h}(\tau )\geqslant \underline{h}(s)\) for \(s\in [0,\tau ]\). Moreover, we have that

$$\begin{aligned} u(T_2+s,x)\geqslant (1-2\epsilon )u^*\geqslant \underline{u}(s,x)\ \text{ for } s\in [0,\tau ],\ x\in [g(0), \underline{h}(\tau )]. \end{aligned}$$

For \(t\geqslant 0\), we have \(\underline{u}(t,\underline{h}(t))=0\), \(u\big (t+T_2,g(0)\big )\geqslant (1-\epsilon )u^*>\underline{u}\big (t,g(0)\big )\), and

$$\begin{aligned} w\big (s, g(0);t+T_2\big )\geqslant (1-\epsilon )f(u^*)e^{-\beta s}> \underline{v}(s, g(0))\ \text{ for } s\in [0,\tau ]. \end{aligned}$$

Moreover,

$$\begin{aligned} \underline{h}'(t)=(1-2\epsilon )c^*=-\mu (1-2\epsilon )U^{c^*}_{\xi }(0) =-\mu \underline{u}_x(t,\underline{h}(t))\ \text{ for } \ t\geqslant 0. \end{aligned}$$

A direct calculation shows that for \(t\geqslant \tau \) and \(x\in [g(0),\underline{h}(t))\),

$$\begin{aligned} \underline{u}_t-\underline{u}_{xx}+\alpha \underline{u}-\underline{w}(t,x)&= (1-2\epsilon ) \big [-(1-2\epsilon )c^*U^{c^*}_{\xi }-U^{c^*}_{\xi \xi }+\alpha U^{c^*}-W^{c^*}\big ] \\ &= 2\epsilon (1-2\epsilon )c^*U^{c^*}_{\xi }\leqslant 0. \end{aligned}$$

Thus we can use the comparison principle to obtain

$$\begin{aligned} u(t+T_2, x)\geqslant \underline{u}(t, x),\ \ h(t + T_2) \geqslant \underline{h}(t)\ \text{ for } \ t \geqslant \tau ,\ x\in [g(0),\underline{h}(t)]. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{h(t)}{t}\geqslant \liminf _{t\rightarrow \infty }\frac{\underline{h}(t-T_2)}{t}=(1-2\epsilon )c^*. \end{aligned}$$

Letting \(\epsilon \rightarrow 0\), we immediately obtain

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{h(t)}{t}\geqslant c^*. \end{aligned}$$

This, together with (4.22), yields that

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{h(t)}{t}= c^*, \end{aligned}$$

which completes the proof.