A delay induced nonlocal free boundary problem

We study the dynamics of a population with an age structure whose population range expands with time, where the adult population is assumed to satisfy a reaction–diffusion equation over a changing interval determined by a Stefan type free boundary condition, while the juvenile population satisfies a reaction–diffusion equation whose evolving domain is determined by the adult population. The interactions between the adult and juvenile populations involve a fixed time-delay, which renders the model nonlocal in nature. After establishing the well-posedness of the model, we obtain a rather complete description of its long-time dynamical behaviour, which is shown to follow a spreading–vanishing dichotomy. When spreading persists, we show that the population range expands with an asymptotic speed, which is uniquely determined by an associated nonlocal elliptic problem over the half line. We hope this work will inspire further research on age-structured population models with an evolving population range.


Introduction
This paper concerns the following nonlocal reaction–diffusion problem with Stefan type free boundary conditions:
\[
(P)\qquad
\begin{cases}
u_t = u_{xx} - \alpha u + w(\tau, x; t), & t > 0,\ x \in (g(t), h(t)),\\
u(t, g(t)) = u(t, h(t)) = 0, & t > 0,\\
g'(t) = -\mu\, u_x(t, g(t)),\quad h'(t) = -\mu\, u_x(t, h(t)), & t > 0,
\end{cases}
\]
where w(τ, x; t) is the solution w(s, x), evaluated at s = τ, of the following initial boundary value problem:
\[
\begin{cases}
w_s = D\, w_{xx} - \beta w, & 0 < s \le \tau,\ x \in (g(t-\tau+s), h(t-\tau+s)),\\
w(s, g(t-\tau+s)) = w(s, h(t-\tau+s)) = 0, & 0 < s \le \tau,\\
w(0, x) = f(u(t-\tau, x)), & x \in [g(t-\tau), h(t-\tau)].
\end{cases}
\]
Here α, β, μ, D and τ are positive constants and f is a nonlinear function. Clearly w(τ, x; t) depends on u(t − τ, ·) and on g(s), h(s) with s ∈ [t − τ, t]; therefore (P) is highly nonlocal.
Such a problem is used here to model the biological invasion of an age-structured species when the juveniles diffuse in an expanding habitat whose expansion is determined by the diffusive adults. More precisely, u represents the density of the adult population, τ is the time length for a newborn to grow to an adult, f is the birth function and w(τ, x; t) is the density of the newly added adult at time t. A derivation of problem (P) with the aforementioned biological assumptions will be presented in the next section.
Problem (P) reduces to some existing problems in the literature when the parameters in {τ, μ, D} are sent to certain limiting values. If τ → 0, then w(τ, x; t) → f(u(t, x)) and the model reduces to
\[
(1.1)\qquad
\begin{cases}
u_t = u_{xx} - \alpha u + f(u), & t > 0,\ x \in (g(t), h(t)),\\
u(t, g(t)) = u(t, h(t)) = 0, & t > 0,\\
g'(t) = -\mu\, u_x(t, g(t)),\quad h'(t) = -\mu\, u_x(t, h(t)), & t > 0,
\end{cases}
\]
which was introduced by Du and Lin [9] in 2010, where they revealed a spreading–vanishing dichotomy when the nonlinearity is of KPP type. Problem (1.1) has been extended in several directions (e.g. [7,10]); we mention in particular that very recently a new phenomenon was found in [5,8] for (1.1) when the local diffusion term u_xx is replaced by a suitable nonlocal diffusion operator. Our problem (P), however, is a very different nonlocal problem. If τ → ∞, then w(τ, x; t) → 0 and the model reduces to a linear problem with the Stefan free boundary condition.
If D → 0, then w(τ, x; t) → e^{−βτ} f(u(t − τ, x)) and the model becomes a local free boundary problem with time delay, which was studied recently by Sun and Fang [22].
If μ → ∞, then the free boundary condition disappears and the model becomes a nonlocal Cauchy problem on the whole line with time delay:
\[
(1.2)\qquad u_t = u_{xx} - \alpha u + e^{-\beta\tau}\int_{\mathbb R} G(\tau, x - y)\, f(u(t-\tau, y))\, dy,\qquad t > 0,\ x \in \mathbb R,
\]
where
\[
G(\tau, y) := \frac{1}{\sqrt{4\pi D\tau}}\, e^{-\frac{y^2}{4D\tau}}.
\]
If μ → 0, then the expanding domain reduces to a fixed one and the model becomes a nonlocal problem with zero Dirichlet boundary condition and time delay:
\[
(1.3)\qquad
\begin{cases}
u_t = u_{xx} - \alpha u + w(\tau, x; t), & t > 0,\ x \in (g_0, h_0),\\
u(t, g_0) = u(t, h_0) = 0, & t > 0,
\end{cases}
\]
where w(s, x) solves w_s = D w_{xx} − βw in (g_0, h_0) with w(s, g_0) = w(s, h_0) = 0 and w(0, x) = f(u(t − τ, x)). We refer to the survey by Gourley and Wu [13] from 2006 for more details on the research of (1.3). With f(u) = pue^{−qu}, p, q > 0, problems (1.2) and (1.3) are often called diffusive Nicholson blowfly models. For early work on the classical (ODE) Nicholson blowfly model we refer to [14,18].
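For the Nicholson blowfly nonlinearity the relevant positive equilibrium can be written in closed form: the equation e^{−βτ} f(u) = αu with f(u) = pue^{−qu} forces u* = q^{−1} ln(pe^{−βτ}/α), provided pe^{−βτ} > α. A quick numerical sanity check (with illustrative parameter values, not values taken from the paper):

```python
import math

def u_star(p, q, alpha, beta, tau):
    """Positive root of e^{-beta*tau} * p*u*e^{-q*u} = alpha*u,
    i.e. the equilibrium u* for the Nicholson blowfly birth function."""
    assert p * math.exp(-beta * tau) > alpha, "need a super-critical birth rate"
    return math.log(p * math.exp(-beta * tau) / alpha) / q

def residual(u, p, q, alpha, beta, tau):
    # e^{-beta*tau} f(u) - alpha*u with f(u) = p*u*e^{-q*u}
    return math.exp(-beta * tau) * p * u * math.exp(-q * u) - alpha * u

p, q, alpha, beta, tau = 2.0, 1.0, 1.0, 0.1, 0.5
u = u_star(p, q, alpha, beta, tau)
print(u)                                          # approx 0.643 for these values
print(abs(residual(u, p, q, alpha, beta, tau)))   # essentially zero
```

The assertion pe^{−βτ} > α is the condition under which a positive equilibrium exists; below this threshold the only nonnegative root is u = 0.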
From the above discussions we see that the nonlocal terms in (1.2) and (1.3) are induced by the joint effect of diffusion (i.e., D > 0) and time delay (i.e., τ > 0). For our problem (P), besides these two factors, the nonlocal term w(τ, x; t) also involves the to-be-determined varying domain over a time period of length τ. This is a main distinguishing feature of (P).
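To see this joint effect concretely, one can evaluate the whole-line nonlocal term e^{−βτ} ∫ G(τ, x − y) f(u(t − τ, y)) dy numerically, with G(τ, y) = (4πDτ)^{−1/2} e^{−y²/(4Dτ)} the Gaussian kernel appearing above: as D → 0 the kernel concentrates at 0 and the term collapses to the local delayed reaction e^{−βτ} f(u(t − τ, x)). A small sketch (illustrative delayed profile and parameters of our own choosing):

```python
import math

def nonlocal_term(u_vals, xs, D, tau, beta, f):
    """Riemann-sum approximation of e^{-beta*tau} * int G(tau, x-y) f(u(y)) dy
    with the Gaussian kernel G(tau, y) = exp(-y^2/(4*D*tau)) / sqrt(4*pi*D*tau)."""
    dx = xs[1] - xs[0]
    fu = [f(u) for u in u_vals]
    out = []
    for x in xs:
        s = 0.0
        for y, v in zip(xs, fu):
            G = math.exp(-(x - y) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)
            s += G * v * dx
        out.append(math.exp(-beta * tau) * s)
    return out

f = lambda u: 2.0 * u * math.exp(-u)            # Nicholson birth function (illustrative)
xs = [-5 + 0.02 * i for i in range(501)]
u0 = [math.exp(-x * x) for x in xs]             # sample delayed profile u(t - tau, .)

wide = nonlocal_term(u0, xs, D=1.0, tau=1.0, beta=0.1, f=f)
narrow = nonlocal_term(u0, xs, D=1e-3, tau=1.0, beta=0.1, f=f)
local = [math.exp(-0.1) * f(u) for u in u0]     # the D -> 0 limit e^{-beta*tau} f(u)

err = max(abs(a - b) for a, b in zip(narrow, local))
print(err)   # small: with a narrow kernel the convolution is nearly local
```

With D = 1 the convolution visibly smears the profile, while with D = 10⁻³ it is almost indistinguishable from the local delayed term; the delay alone (D = 0) produces no spatial nonlocality.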
The first result of this paper is the well-posedness of the problem: for any γ ∈ (0, 1), problem (P) admits a unique solution. Here and throughout this paper, for constants a < b and functions g(t) < h(t), we define (i) Spreading happens when σ > σ* in the sense that (g∞, h∞) = R and lim_{t→∞} u(t, x) = u* locally uniformly in R; (ii) Vanishing happens when σ ≤ σ* in the sense that (g∞, h∞) is a finite interval. When spreading happens, we will determine the spreading speed of the fronts by making use of the nonlinear and nonlocal semi-wave problem (1.6), with G(τ, y) as given before. It follows from Sect. 4 that problem (1.6) admits a unique solution pair (c, U) = (c*, U_{c*}). With the semi-wave established above, we can construct various super- and sub-solutions to estimate the spreading fronts h(t) and g(t), and obtain the third result of this paper.
where (c * , U c * ) is the unique solution of (1.6).
The rest of the paper is organised as follows. In Sect. 2, we first explain how problem (P) can be deduced from some reasonable biological assumptions, and then we give a few comparison results for (P) to be used later in the paper. The main technical part of this section is the proof of the well-posedness of (P) (Theorem 1.1), which follows existing strategies but with considerable changes. Section 3 examines the long-time behaviour of the solution of (P), which relies on a good understanding of the corresponding problem over a fixed interval and involves a nonlocal eigenvalue problem. The latter is treated in Sect. 3.1 while the former is the main task of Sect. 3.2. Based on these preparations, we obtain sufficient conditions for the solution of (P) to vanish in Sect. 3.3, and sufficient conditions for spreading to persist in Sect. 3.4, where the spreading–vanishing dichotomy (Theorem 3.6) is also proved. These pave the way to complete the proof of Theorem 1.2 in Sect. 3.5. The approach in Sect. 3 is based mainly on comparison arguments involving various innovative constructions of sub- and super-solutions. Section 4 is devoted to finding the spreading speed when spreading is successful, and is perhaps the most innovative part of the paper. We first introduce a semi-wave problem based on a heuristic analysis, and then prove that the semi-wave problem has a unique solution, namely a semi-wave with profile U_{c*} and speed c*. This is the content of Sect. 4.1, where a completely new approach is used; in particular, it involves the introduction of a sequence of bistable problems which converge to the monostable problem at hand, and the traveling waves of these auxiliary bistable problems are used to construct sub-solutions of our semi-wave problem. In Sect. 4.2, we show that the semi-wave profile U_{c*} can be suitably modified to produce super- and sub-solutions of problem (P) that eventually give the spreading speed, which is precisely c*, as stated in Theorem 1.3.

Model formulation
To formulate problem (P), we start from the age-structured population growth law
\[
(2.1)\qquad \frac{\partial p}{\partial t} + \frac{\partial p}{\partial a} = D(a)\,\frac{\partial^2 p}{\partial x^2} - d(a)\, p,
\]
where p = p(t, x; a) denotes the density of the concerned species of age a at time t and location x, and D(a) and d(a) denote the diffusion rate and death rate of the species at age a, respectively. We assume that the species has the following biological characteristics:

(A1) The species can be classified into two stages according to age: mature and immature. An individual at time t belongs to the mature class if and only if its age exceeds the maturation time τ > 0. Within each stage, all individuals have the same diffusion rate and death rate.

(A2) The immature population moves in space within the habitat of the mature population, but does not contribute to the expansion of the habitat.
The total mature population u at time t and location x can be represented by the integral
\[
(2.2)\qquad u(t, x) = \int_\tau^\infty p(t, x; a)\, da.
\]
We assume that the mature population u lives in the habitat [g(t), h(t)] and vanishes outside the habitat, so that
\[
(2.3)\qquad u(t, g(t)) = u(t, h(t)) = 0,\qquad t > 0;
\]
moreover, the habitat expands according to the Stefan type moving boundary conditions
\[
(2.4)\qquad g'(t) = -\mu\, u_x(t, g(t)),\qquad h'(t) = -\mu\, u_x(t, h(t)),
\]
where μ is a given positive constant. The equations in (2.4) can be deduced from some reasonable biological assumptions as in [4], where it is assumed that certain sacrifices (in terms of population loss at the range boundary) are made by the species in order to expand the population range, with 1/μ proportional to this loss. By (A2), the immature population also lives in [g(t), h(t)] and vanishes outside of it. However, the immature population disperses over the population range of the adult population passively, with no contribution to the expansion of [g(t), h(t)]. Considering that in many species the sacrifices made to expand the population range consist mostly of the adults raising and protecting the young, it appears reasonable to assume that the young do not contribute to the expansion of the population range.
According to (A1) we may assume that
\[
D(a) = \begin{cases} D, & 0 < a \le \tau,\\ 1, & a > \tau,\end{cases}
\qquad
d(a) = \begin{cases} \beta, & 0 < a \le \tau,\\ \alpha, & a > \tau,\end{cases}
\]
where D, α and β are three positive constants (the diffusion rate of the mature population is normalised to 1). Differentiating both sides of (2.2) in time and using (2.1) yields
\[
u_t = \int_\tau^\infty \big[\, p_{xx} - \alpha p - p_a \,\big]\, da
    = u_{xx} - \alpha u + p(t, x; \tau) - p(t, x; \infty).
\]
Since no individual lives forever, it is natural to assume that p(t, x; ∞) = 0. To obtain a closed form of the model, one then needs to express p(t, x; τ) in terms of u. Note that p(t, x; τ) represents the newly matured population at time t, arising from the newborns at time t − τ. In other words, there is an evolution relation between the quantities p(t, x; τ) and p(t − τ, x; 0). Such a relation is governed by the growth law (2.1) for 0 < a < τ, and hence p(t, x; τ) is given by the time-τ solution map of problem (2.7). Further, if b(u) is the birth rate function of the mature population and f(u) = b(u)u, then p(t − τ, x; 0) = f(u(t − τ, x)). Thus problem (2.7) can be formulated as the initial boundary value problem
\[
(2.8)\qquad
\begin{cases}
w_s = D\, w_{xx} - \beta w, & 0 < s \le \tau,\ x \in (g(t-\tau+s), h(t-\tau+s)),\\
w(s, g(t-\tau+s)) = w(s, h(t-\tau+s)) = 0, & 0 < s \le \tau,\\
w(0, x) = f(u(t-\tau, x)), & x \in [g(t-\tau), h(t-\tau)].
\end{cases}
\]
If we regard (u, g, h) as given and denote the unique solution of (2.8) by w(s, x; t), then p(t, x; τ) = w(τ, x; t). Combining (2.3)–(2.6) and (2.9), we are led to a closed system for (u, g, h), which is equivalent to problem (P).
By the maximum principle it is easily seen from (2.10) that g′(t) < 0 < h′(t) for t > 0; namely, the habitat is expanding for t ≥ 0. Therefore it is natural to impose on the initial data the aforementioned compatibility condition (1.5).

Comparison principle
In this subsection, we give some comparison principles, which will be used in the rest of this paper.
where (u, g, h) solves (P) and w solves (Q). The proof of Lemma 2.1 is a simple modification of the proofs of Lemma 5.7 in [9] and Lemma 2.3 in [22]; with some further minor changes of this proof, one obtains Lemma 2.2.

Remark 2.3
The function u, or the triple (u, g, h), in Lemmas 2.1 and 2.2 is often called an upper solution to (P). A lower solution can be defined analogously by reversing all the inequalities. There is a symmetric version of Lemma 2.2, where the conditions on the left and right boundaries are interchanged. We also have corresponding comparison results for lower solutions in each case.

Well-posedness
We employ the Banach and Schauder fixed point theorems to establish the local existence and uniqueness of a solution to (P); we then extend the solution to all time by means of an estimate on the free boundaries.
for some p > 1.

Proof
We use a change of variable argument to transform problem (P) into a problem with straight boundaries but a more complicated differential operator, as in [6,9]. Denote g_0 := g(0) and h_0 := h(0) for convenience, and set l_0 := (h_0 − g_0)/2. Let ξ_1(y) and ξ_2(y) be two nonnegative functions in C³(R) such that For 0 < T ≤ min Clearly, For each pair (g, h) ∈ D_T, we can define y = y(t, x) for t ∈ [0, T] through the identity If (u, g, h) solves (P), then with the above defined transformation, where To straighten the boundaries in (Q), we need to extend y(t, x) to t ∈ [−τ, 0). Note that for t in this range, g(t) and h(t) are given as part of the initial data. Since no free boundary conditions are involved for t in this range, we simply define y(t, x) accordingly, whose inverse is given by (2.14). We define For any given γ ∈ (0, 1) and With W̃ obtained above, (2.12) with w̃(τ, y; t) replaced by W̃(τ, y; t) has a unique solution. Using the extension trick in [26], the L^p estimate, the Sobolev embedding theorem and the Banach fixed point theorem, it can be shown (as in [26]) that K has a unique fixed point in a suitable subset. We denote this fixed point by ũ(t, y), and extend it to t ∈ [−τ, 0]. Let us note that with U = ũ, the above obtained W̃(s, y; t) solves the original (2.16), and so if we denote this special W̃(s, y; t) by w̃(s, y; t), then the pair (ũ, w̃) solves (2.12) (for t ∈ [0, T]) and (2.16) simultaneously. Moreover, Then clearly g̃ Therefore, for any T ∈ (0, T_0] and any given pair (g, h) ∈ D_T, we can define an operator F by From the above discussions, it is easily seen that F is completely continuous in D_T. We will show that if T > 0 is small enough, then F has a fixed point, by using the Schauder fixed point theorem.
Firstly, it follows from (2.19) and (2.20) that, if we choose T sufficiently small, then F maps the closed convex set D_T into itself. Consequently, F has at least one fixed point by the Schauder fixed point theorem, which implies that (2.12) has at least one solution (ũ, g̃, h̃) defined on [0, T].
We now prove the uniqueness of such a solution. Let (u i , g i , h i ) (i = 1, 2) be two solutions of (P) (for t ∈ [0, T ]), and let w i be the corresponding solutions of (Q), and set Then it follows from (2.17)-(2.20) that, for i = 1, 2 and t ∈ [0, T ], Then we find that for any t ∈ [0, T ] (noting that T A i and B i are the coefficients of problem (2.16) with (g i , h i ) in place of (g, h). We can apply the L p estimates for parabolic equations to deduce that, for t ∈ [0, T ], with C 4 depending on C 0 , C 1 and C 2 .
It is easy to see that û(t, y) satisfies Thanks to (2.21), we can apply the extension trick of [26], the L^p estimates for parabolic equations, and the Sobolev embedding theorem much as before, to deduce a bound on û, with C_5 depending on C_0, C_1, C_2 and C_4, but independent of T ∈ (0, T_1]. This, together with (2.22), implies that where C_6 = 2μC_5. Similarly, we have As a consequence, we deduce that Hence, for T sufficiently small, we have This shows that ĝ ≡ 0 ≡ ĥ for 0 ≤ t ≤ T; thus F_1 ≡ 0 and F_2 ≡ 0, which imply ŵ ≡ 0 and hence û ≡ 0. Consequently, the local solution of (P) is unique, which completes the proof of this theorem.

Theorem 2.5 Assume that (H) holds. Then the local solution of (P) obtained in Theorem 2.4 exists for all t > 0.
Proof Fix a γ ∈ (0, 1) and let [0, T max ) be the maximal time interval in which the solution as described in Theorem 2.4 exists. In view of Theorem 2.4, we have T max > 0. Using an indirect argument, we assume that T max < ∞.
Thanks to the choice of the initial data, we can use the comparison principle to bound the solution by the corresponding ODE problems, to obtain and for fixed t ∈ (0, T_max), To bound g′(t) and h′(t), we construct two auxiliary functions It thus follows from the definition of M that After a simple calculation we obtain So we can apply the comparison principle to deduce the desired pointwise bound on u(t, x), which yields h′(t) ≤ C_0 for t ∈ (0, T_max). We can similarly prove −g′(t) ≤ C_0 for t ∈ (0, T_max).
With the above estimate on h (t) and g (t), and the bounds we are able to show that the solution (u, g, h) can be defined beyond t = T max .
To do so, we straighten the boundaries of (2.10) via the transformation x = x(t, y) given by (2.14). Then ũ satisfies Applying the L^p theory to (2.23), we obtain ũ ∈ W^{1,2}_p for any p > 1 and T ∈ (T_max/2, T_max), and by the Sobolev embedding theorem we obtain, for any γ ∈ (0, 1) and some large enough p > 1 depending on γ, Choose t_n ∈ (0, T_max) satisfying t_n ↑ T_max, and regard (u(t_n − θ, x), g(t_n − θ), h(t_n − θ)) for θ ∈ [0, τ] as the initial data. Due to (2.24) and the properties of g and h proved earlier, we can repeat the proof of Theorem 2.4 to conclude that there exists s_0 > 0, depending on C_{p,γ} and f but independent of n, such that problem (P) has a unique solution (u, g, h) for t ∈ [t_n, t_n + s_0]. This gives a solution (u, g, h) of (P) defined for t ∈ [0, t_n + s_0]. Since t_n + s_0 > T_max when n is large, this contradicts the definition of T_max, and hence we must have T_max = ∞, as desired. The proof is complete.

Long time behavior of the solutions
In this section we study the asymptotic behavior of the solutions of (P).
By the uniqueness of the principal eigen-pair (λ 1 , ϕ ), we necessarily have This proves part (i).

Positive solutions on bounded intervals
Using Lemma 3.1, we can obtain the asymptotic behavior of the solutions to (3.4), where U_0(x; ℓ) is the unique positive solution of the following problem (3.6), which can be shown to satisfy 0 < U_0(x; ℓ) < u*. Moreover, U_0(x; ℓ) is strictly increasing in ℓ, and U_0(x; ℓ) → u* locally uniformly in R as ℓ → ∞. Proof We first prove that for ℓ > ℓ* problem (3.6) admits a unique positive solution. We shall use the sub-supersolution argument to establish its existence. Obviously, v̄ = u* is a supersolution to (3.6). To construct a positive subsolution, we recall from Lemma 3.1 that if ℓ > ℓ*, the principal eigenvalue λ_1 of (3.1) is negative, and its corresponding positive eigenfunction is φ(x) = cos(πx/(2ℓ)). Set v := δφ, where δ > 0 is small enough that A simple calculation yields that, for x ∈ (−ℓ, ℓ), v is a positive subsolution. Thus, by a standard iteration technique, problem (3.6) with ℓ > ℓ* admits a positive solution.
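The local ingredient behind the eigenfunction used in this construction is the classical principal Dirichlet eigenpair on (−ℓ, ℓ):

```latex
\varphi(x) = \cos\!\Big(\frac{\pi}{2\ell}\,x\Big),\qquad
-\varphi'' = \Big(\frac{\pi}{2\ell}\Big)^{2}\varphi \ \text{ in } (-\ell,\ell),\qquad
\varphi(\pm\ell)=0,\quad \varphi>0 \ \text{ in } (-\ell,\ell).
```

The Dirichlet contribution (π/(2ℓ))² decreases to 0 as ℓ grows, which is what drives the principal eigenvalue λ_1 of (3.1) negative once ℓ > ℓ*.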
We then verify the uniqueness of the positive solution to (3.6). Fix ℓ > ℓ* and suppose that problem (3.6) has two distinct positive solutions v_1 and v_2. With the help of the Hopf boundary lemma, we can find M_0 > 1 such that M_0^{−1} v_1 ≤ v_2 ≤ M_0 v_1. It is easily seen that M_0 v_1 is a supersolution of (3.6) and M_0^{−1} v_1 is a subsolution. As a result, there exist a minimal and a maximal solution to (3.6) in the order interval [M_0^{−1} v_1, M_0 v_1], which we denote by v_* and v^*, respectively. Thus v_* ≤ v_i ≤ v^* ≤ u* for i = 1, 2. Hence it suffices to show that v_* = v^*.
Hence U_0(x; ℓ) is increasing in ℓ for ℓ > ℓ*, and U^*(x) := lim_{ℓ→∞} U_0(x; ℓ) ≤ u* is well defined on R. Furthermore, by standard regularity considerations, we see that U^* is a positive solution of (3.8).
We claim that U^*(x) is a constant function. Indeed, the argument leading to U_0(x; ℓ) ≤ U_0(x; ℓ_1) for ℓ < ℓ_1 can also be used to show that U^*(x) = U^*(x_0) for all x, x_0 ∈ R, which implies that U^*(x) is a constant function. Thus we must have U^* ≡ u*, which yields that U_0(x; ℓ) → u* as ℓ → ∞ in L^∞_loc(R). Next, we prove (3.5). Fix ℓ > ℓ*. We have v(τ, x) > 0 in (−ℓ, ℓ) and v_x(τ, ℓ) < 0 < v_x(τ, −ℓ).

Therefore we can find
Let v_1(t, x) and v_2(t, x) be the solutions of (3.4) with ψ(θ, x) replaced by the subsolution v(x) and the supersolution v̄(x), respectively. It then follows from the comparison principle that (3.9) holds. Since M > 1, v is a lower solution of (3.6) and v̄ is an upper solution; it follows that v_1(t, x) is increasing in t and v_2(t, x) is decreasing in t. Therefore lim_{t→∞} v_1(t, x) = V(x) exists and V(x) is a positive solution of (3.6). As U_0(x; ℓ) is the unique positive solution of this problem, we obtain V = U_0.

Vanishing phenomenon
In this subsection, we study the vanishing phenomenon of (P). First, we give the following equivalence result.

Lemma 3.3
Assume that (H) holds and let ℓ* be given in Lemma 3.1. Then the following three assertions are equivalent: Proof "(i)⇒(ii)". Without loss of generality we assume g_∞ > −∞ and prove (ii) by contradiction. Suppose h_∞ − g_∞ > 2ℓ*; then for sufficiently large t_1, Now we consider an auxiliary problem (3.10), where for any t > t_1, w(τ, x; f(u(t − τ, x))) is given by the following problem: with ω := s + t − τ. Clearly, u is a lower solution of (P), so k(t) ≥ g(t) and k(∞) > −∞ by our assumption. Using an argument similar to that of [7, Lemma 2.2], by straightening the free boundary one can show that where U_0(x; ℓ) is the positive solution of (3.6). This contradicts the assumption k(∞) > −∞.
"(ii)⇒(iii)". By the assumption and Lemma 3.2 we see that the unique positive solution of the following problem as t → ∞. The conclusion (iii) now follows from the comparison principle.
"(iii)⇒(ii)": We proceed by a contradiction argument. Assume that, for some small ε > 0 there exists a large number t 2 such that h(t) − g(t) > 2 * + 4ε for all t > t 2 − τ . It is known that the eigenvalue problem (3.1), with = * + ε, admits a negative principal eigenvalue, denoted by λ ε , whose corresponding positive eigenfunction is A direct calculation yields that for x ∈ [− * − ε, Furthermore, one can choose δ sufficiently small such that for x ∈ [− * − ε, * + ε], By the comparison principle we have, for all t > 0, contradicting (iii). This proves the lemma.
Next, we give a sufficient condition for vanishing, which indicates that if the initial domain and the initial function are both small, then the species dies out eventually.

Spreading phenomenon
In this subsection, we study the spreading phenomenon of (P) and give some sufficient conditions for spreading to happen.

Lemma 3.5 Assume that (H) holds and let ℓ* be given in Lemma 3.1. If h(0) − g(0) ≥ 2ℓ*, then spreading always happens for the solution (u, g, h) of (P), i.e.,
−g_∞ = h_∞ = ∞ and lim_{t→∞} u(t, x) = u* locally uniformly in R, (3.18)
where u* is the unique positive root of e^{−βτ} f(u) = αu. In what follows we prove (3.18).
First, we choose an increasing sequence of positive numbers ℓ_m such that ℓ_m → ∞ as m → ∞ and ℓ_m > ℓ* for all m ≥ 1. As −g_∞ = h_∞ = ∞, we can find t_m large such that [−ℓ_m, ℓ_m] ⊂ (g(t), h(t)) for t ≥ t_m − τ. It follows from Lemma 3.2 that the following problem admits a unique positive solution u_m(t, x), which satisfies where U_0(x; ℓ_m) is the unique positive solution of (3.6) with ℓ = ℓ_m. By the comparison principle we have Thus lim inf_{t→∞} u(t, x) ≥ u* locally uniformly for x ∈ R. On the other hand, consider the problem where for any t > 0, w̄(s; t) = w̄(s) is the unique solution of It follows from [15, Chap. 4, Theorem 9.4] that the above problem has a unique solution ū(t), and ū(t) → u* as t → ∞. It thus follows from the comparison principle that lim sup_{t→∞} u(t, x) ≤ u* locally uniformly for x ∈ R.
Combining this with (3.19) we obtain lim t→∞ u(t, x) = u * locally uniformly for x ∈ R.
The proof is complete.
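Since w̄ solves w̄_s = −βw̄ with w̄(0) = f(ū(t − τ)), the spatially homogeneous comparison problem used above reduces to the delay equation ū′(t) = −αū(t) + e^{−βτ} f(ū(t − τ)). A forward Euler sketch (illustrative parameters chosen in the monostable regime of (H), not the paper's computation) showing convergence to u*:

```python
import math

# Delayed ODE  u'(t) = -alpha*u(t) + exp(-beta*tau) * f(u(t - tau)),
# the spatially homogeneous bound used in the proof of Lemma 3.5.
alpha, beta, tau = 1.0, 0.1, 0.5
p, q = 2.0, 1.0
f = lambda u: p * u * math.exp(-q * u)          # Nicholson birth function (illustrative)

dt = 1e-3
lag = int(round(tau / dt))                      # number of Euler steps in one delay
hist = [0.3] * (lag + 1)                        # constant history on [-tau, 0]

t_end = 60.0
for _ in range(int(t_end / dt)):
    u_now, u_delay = hist[-1], hist[-1 - lag]
    u_next = u_now + dt * (-alpha * u_now + math.exp(-beta * tau) * f(u_delay))
    hist.append(u_next)

u_star = math.log(p * math.exp(-beta * tau) / alpha) / q   # root of e^{-beta*tau} f(u) = alpha*u
print(hist[-1], u_star)   # the trajectory settles at u*
```

For these values f is increasing on the relevant range, so the delayed equation is monotone and the trajectory increases steadily to u*; for larger p or τ the Nicholson nonlinearity can instead produce oscillations, which is outside the monotone regime assumed here.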
We are now in a position to prove the following spreading–vanishing dichotomy result. Theorem 3.6 (Spreading–vanishing dichotomy) Assume that (H) holds and ℓ* is given in Lemma 3.1. Let (u, g, h) be the solution of (P) with the initial data (φ(θ, x), g(θ), h(θ)) satisfying (1.4) and (1.5). Then one of the following alternatives holds: (i) Spreading: (g_∞, h_∞) = R and lim_{t→∞} u(t, x) = u* locally uniformly in R; (ii) Vanishing: (g_∞, h_∞) is a finite interval and lim_{t→∞} u(t, x) = 0. Proof It is easy to see that there are two possibilities: either h_∞ − g_∞ ≤ 2ℓ*, in which case vanishing happens, or h_∞ − g_∞ > 2ℓ*, in which case it follows from Lemma 3.5 and its proof that spreading happens. To emphasize the dependence of (u, g, h) on σ, we will denote it by (u_σ, g_σ, h_σ). Recall that By the comparison principle we easily see that u_σ(t, x), h_σ(t) and −g_σ(t) are all increasing in σ for fixed t > 0 and x ∈ (g_σ(t), h_σ(t)). Therefore, if spreading happens for σ = σ_1, then spreading happens for all σ ≥ σ_1. Assume by way of contradiction that the desired conclusion is false; then by Theorem 3.6 vanishing happens for all σ > 0, and hence
h_σ(t) − g_σ(t) < 2ℓ* for all t ≥ 0 and σ > 0. (3.22)
We now let g*(t) and h*(t) be continuous extensions of g(t) and h(t) from [−τ, 0] to [−τ, τ], respectively, with the following properties: they are constant in [0, ε_0] for some small ε_0 > 0, and By the monotonicity of f(u)/u and (3.21), we have Then for t ∈ [0, τ], let w*(s, x; t) denote the unique solution of the initial boundary value problem (3.23), and let u*(t, x) be the unique solution of (3.24). Since w* ≥ 0, we can apply the parabolic Hopf boundary lemma to (3.24) to obtain Thus we can find δ > 0 such that It follows that, for all large k, Since h*′(t) = g*′(t) = 0 for t ∈ [0, ε_0], the above inequalities also hold for t ∈ [0, ε_0]. Thus we see that (u_k, g*, h*) forms a lower solution to the problem satisfied by (u_σ, g_σ, h_σ) (for t ≤ τ) with σ = k, for all large k. It follows that which contradicts (3.22). Therefore the desired conclusion holds.

Proof of Theorem 1.2
With the preparation of the previous subsections, we are now ready to complete the proof of Theorem 1.2. By Lemma 3.5, spreading happens when h(0) − g(0) ≥ 2ℓ*, where ℓ* is given in Lemma 3.1. Hence in this case we have σ* = 0 for any given (φ(θ, x), g(θ), h(θ)) satisfying (1.4) and (1.5).
In what follows we consider the remaining case h(0) − g(0) < 2ℓ*. Define If σ* = ∞, then there is nothing left to prove. Suppose σ* ∈ (0, ∞). Then by definition vanishing happens when σ ∈ (0, σ*), and by the comparison principle we see that spreading happens for σ > σ*. It remains to prove that vanishing happens when σ = σ*. Otherwise, it follows from Theorem 3.6 that spreading must happen when σ = σ*, and we can find t_0 > 0 such that h(t_0) − g(t_0) > 2ℓ* + 1. By the continuous dependence of the solution of (P) on its initial values, we find that if ε > 0 is sufficiently small, then the solution of (P) with u(θ, But by Lemma 3.5, this implies that spreading happens for (u*, g*, h*), a contradiction to the definition of σ*.

Asymptotic spreading speed
Throughout this section we assume that (H) holds and (u, g, h) is a solution of (P) for which spreading happens.

A semi-wave problem
Let c ≥ 0. Introducing the moving coordinate ξ = x − h(t) and writing ũ(t, ξ) = u(t, ξ + h(t)), problem (P) is changed into the following form: Since spreading happens, we have If we heuristically assume that lim_{t→∞} h′(t) = c and that there exists U with then letting t → ∞ in (4.1) and (4.2), we obtain a limiting elliptic problem for U on (−∞, 0]: Using the reflection method, we can solve v(τ, ξ) explicitly to obtain and Substituting (4.5) into (4.3), we obtain a nonlocal elliptic problem Proof Items (i) and (ii) follow directly from the definition of K. As for item (iii), fix 0 ≤ c_1 < c_2. Note that ∫_{−∞}^0 K(c_i, ξ, x)φ(x) dx for i = 1, 2 are the time-τ solutions v_i(τ, ξ) of the following problems, respectively: Since φ is non-increasing in ξ ≤ 0, so are v_i(τ, ξ), i = 1, 2, thanks to the parabolic comparison principle. Hence, v := The parabolic comparison principle then implies that v(τ, ξ) ≥ 0, i.e., where λ = λ(γ) is the unique positive root of the following equation It follows from (4.11) and (H) that λ(γ) is positive at γ = 0 and greater than γ² − α for all large γ. Therefore, Hence, the supremum over γ is attained at some γ*.
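The reflection (method of images) step used above to solve for v(τ, ξ) explicitly can be checked numerically in a stripped-down setting: the half-line Dirichlet heat equation v_s = D v_xx on x < 0 with v(s, 0) = 0 has the image-formula solution v(s, x) = ∫_{−∞}^0 [G(s, x − y) − G(s, x + y)] v(0, y) dy, with G the Gaussian kernel. The sketch below (illustrative data; the drift and decay terms of the actual problem are omitted) compares this formula with a finite difference solution:

```python
import math

D, s_end = 1.0, 0.5
L, dx = 10.0, 0.05
n = int(L / dx) + 1
xs = [-L + i * dx for i in range(n)]               # grid on [-L, 0]
v0 = [math.exp(-10 * (x + 3.0) ** 2) for x in xs]  # initial bump well inside the domain

def G(s, z):
    return math.exp(-z * z / (4 * D * s)) / math.sqrt(4 * math.pi * D * s)

# Method of images: v(s,x) = int_{-inf}^0 [G(s, x-y) - G(s, x+y)] v0(y) dy
def images(s, x):
    return sum((G(s, x - y) - G(s, x + y)) * w * dx for y, w in zip(xs, v0))

# Explicit finite differences with v = 0 at x = 0 (and at x = -L, where v is negligible)
dt = 0.001                                          # stable: dt <= dx^2 / (2D) = 0.00125
v = v0[:]
for _ in range(int(s_end / dt)):
    v = [0.0] + [v[i] + dt * D * (v[i - 1] - 2 * v[i] + v[i + 1]) / dx ** 2
                 for i in range(1, n - 1)] + [0.0]

err = max(abs(images(s_end, x) - v[i]) for i, x in enumerate(xs))
print(err)   # the two solutions agree closely
```

The subtracted image G(s, x + y) enforces the homogeneous Dirichlet condition at x = 0 by odd reflection; in the semi-wave computation the same device is applied in the moving frame, producing the explicit kernel K in (4.6).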
Proof Assume that U_c(ξ) ≥ 0, ξ ≤ 0, is a solution of (4.8); then by the strong maximum principle we can infer that U_c(ξ) > 0 for ξ < 0. The rest of the proof is divided into five parts. Let λ_1 < λ_2 be the two distinct roots of λ² + cλ − α = 0 for c ∈ [0, c_0). Clearly, λ_1 < 0 < λ_2. After a simple calculation, we obtain which, combined with λ_1 λ_2 = −α and e^{−βτ} f(u*) = αu*, yields Hence, it follows from Lemma 4.1 that Q : M → M satisfies Moreover, it is not difficult to check that (4.12) holds. Therefore, a fixed point of Q satisfies the first equation of (4.8).
Let us postpone the proof of the claim to Part 5. In the following we construct a lower fixed point of Q. For l_ε > 0 to be specified later, we define By the claim we know that for any c ∈ [0, c_0) there exists ε ∈ (0, 1) such that With such an ε, we show that U_ε(ξ) is a sub-solution of (4.8) provided that l_ε is sufficiently large. Set In view of (4.14), and since U_ε(ξ) ≥ 0 for ξ ≤ 0 and f_ε(u) is concave for u ≥ 0, we have This, together with (4.15), implies that L[U_ε](ξ) ≥ 0 for ξ < −l_ε provided that which, in view of the definitions of K in (4.6) and G in (4.7), is equivalent to The above inequality can be simplified into the form Now we are ready to verify that U_ε is a lower fixed point of Q. In view of Define the iterative scheme U_n(ξ) := Q[U_{n−1}](ξ) (n ≥ 1), with U_0(ξ) = U_ε(ξ) for ξ ≤ 0.
Then {U_n} is non-decreasing in n and non-increasing in ξ ≤ 0, with U_0 ≤ U_n ≤ u* for n ≥ 1. By the monotonicity of U_n in n, the sequence is convergent; let U_∞ ∈ M be its limit. Then U_0 ≤ U_∞ ≤ u*. By Lebesgue's dominated convergence theorem, we infer that U_∞ = Q[U_∞]. By (4.12) we see that U_∞(−∞) solves the first equation in (4.8), that is, (4.16). Using the explicit form of K, we compute By Lebesgue's dominated convergence theorem, we obtain, for any ξ_0 < 0, and 0 ≤ lim sup Thus we have shown that U_∞ is a solution of (4.8). By the elliptic strong maximum principle, we infer that U_∞(ξ) is decreasing in ξ ≤ 0 and positive for ξ < 0.
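The monotone iteration for Q used above has a simple zero-dimensional analogue: iterating the increasing map u ↦ (e^{−βτ}/α) f(u) from a small positive "subsolution" produces a non-decreasing sequence converging to the minimal positive fixed point, which is exactly u*. A sketch with illustrative parameters (this toy map is our own simplification, not the operator Q of (4.8)):

```python
import math

alpha, beta, tau = 1.0, 0.1, 0.5
p, q = 2.0, 1.0
f = lambda u: p * u * math.exp(-q * u)               # Nicholson birth function

Q = lambda u: math.exp(-beta * tau) * f(u) / alpha   # toy scalar analogue of the map Q

u = 0.01                       # a small positive "subsolution": Q(u) > u here
seq = [u]
for _ in range(200):
    u = Q(u)
    seq.append(u)

u_star = math.log(p * math.exp(-beta * tau) / alpha) / q   # root of e^{-beta*tau} f(u) = alpha*u
increasing = all(a <= b + 1e-15 for a, b in zip(seq, seq[1:]))
print(increasing, abs(seq[-1] - u_star))   # monotone, and the limit is u*
```

The two structural facts driving convergence are the same as in Part 1: the map is order-preserving, and it pushes the subsolution upward while u* is an upper barrier, so the iterates are squeezed monotonically onto the minimal fixed point.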
Part 2. Non-existence when c ≥ c_0.
where the concavity of f(s) in s ≥ 0, the monotonicity of U_{c_2}(ξ) in ξ ≤ 0, and Lemma 4.1 (iii) are used. By an argument similar to that in Part 3, we can obtain a contradiction with the definition of M*. Thus M* = 1 and U_{c_1}(ξ) ≥ U_{c_2}(ξ) for ξ ≤ 0. Repeating the above argument with M* = 1, by the uniqueness of the solution to (4.8), the strong elliptic maximum principle and the Hopf boundary lemma, we have which completes the proof of the monotonicity of U_c in c ∈ [0, c_0). Next, we employ a contradiction argument to show that lim_{c↑c_0} (U_c)′(0) = 0. So we assume that lim_{c↑c_0} (U_c)′(0) < 0. Then, as c ↑ c_0, U_c(ξ) converges to some non-increasing function U^*(ξ), and U^* satisfies

Asymptotic spreading speed
In order to determine the spreading speed, we will construct suitable sub- and supersolutions based on the semi-waves. Thanks to the monotonicity of U_{c*}(ξ) in ξ ≤ 0 and of f(u) in u ≥ 0, the parabolic comparison principle implies that the solution V(s, ξ) of (4.19) satisfies V_ξ(s, ξ) ≤ 0 for ξ ≤ 0, s ∈ (0, τ].
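Super- and subsolutions of this kind typically take the following form in the free boundary literature (a sketch of the standard ansatz, with constants K, δ, C > 0 to be fixed during the verification; the paper's precise choice is made in Sect. 4.2):

```latex
\bar h(t) := h(0) + c^{*}t + C\bigl(1 - e^{-\delta t}\bigr), \qquad
\bar u(t,x) := \bigl(1 + K e^{-\delta t}\bigr)\, U_{c^{*}}\!\bigl(x - \bar h(t)\bigr).
```

One then checks, using the equation satisfied by U_{c*} and the smallness of the exponentially decaying perturbations, that (ū, h̄) is a supersolution, so that h(t) ≤ h̄(t) = c*t + O(1); a matching subsolution built from a slightly shrunken semi-wave gives the lower bound, and together they yield lim_{t→∞} h(t)/t = c* (and similarly g(t)/t → −c*).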
Data Availability Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Code Availability Not applicable.

Conflict of Interest
On behalf of all authors, Yihong Du states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.