1 Introduction

Epidemic models with nonlinear incidence have been studied by many authors to describe disease transmission and to predict the trend of epidemics. One of the earliest models was proposed by Kermack and McKendrick [25], who used an SIR (susceptible–infected–recovered) compartmental model to describe the plague epidemic in Bombay. In this SIR model, they assumed that the infected and susceptible individuals were completely mixed, so that the density-dependent transmission \(\beta SI\) was used, where \(\beta >0\) is the transmission coefficient. The frequency-dependent transmission function \(\beta SI/N\) (N is the total population), by contrast, applies when the infected and susceptible populations are randomly mixed. For more information about these two kinds of incidence functions, one may further refer to [10, 11, 59]. A more general form \(\beta C(N) SI/N\) combines the density-dependent and frequency-dependent incidence rates, where the contact rate can be taken as the Michaelis–Menten form \(C(N)=\frac{aN}{1+bN}\); see, for example, [63]. In particular, if we take \(C(N)=1\), then the general form reduces to the frequency-dependent transmission, and if we choose \(a=1\) and \(b=0\), then it becomes the density-dependent transmission \(\beta SI\).
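As a quick numerical sanity check of these relations (a minimal sketch; the values of \(\beta\), S, I, N below are illustrative), the three incidence forms can be compared directly:

```python
# Compare the three incidence forms discussed above (illustrative values only).

def density_dependent(beta, S, I):
    return beta * S * I                      # beta*S*I

def frequency_dependent(beta, S, I, N):
    return beta * S * I / N                  # beta*S*I/N

def michaelis_menten(beta, S, I, N, a, b):
    C = a * N / (1.0 + b * N)                # Michaelis-Menten contact rate C(N)
    return beta * C * S * I / N              # beta*C(N)*S*I/N

beta, S, I = 0.3, 40.0, 10.0
N = S + I

# With a=1, b=0 we have C(N)=N, recovering the density-dependent form.
assert abs(michaelis_menten(beta, S, I, N, a=1.0, b=0.0)
           - density_dependent(beta, S, I)) < 1e-12

# Replacing C(N) by the constant 1 recovers the frequency-dependent form.
assert abs(beta * 1.0 * S * I / N - frequency_dependent(beta, S, I, N)) < 1e-12
```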

By studying the cholera epidemic that spread in Bari, Capasso and Serio [5] found that the number of effective contacts between infected and susceptible individuals does not always increase linearly. They therefore introduced a saturated incidence rate g(I)S into the model, in particular \(g(I)=\frac{\beta I}{1+mI}\), which means that the effective contacts may saturate at high infective levels. For the case where the incidence rate is saturated by the susceptible population, May and Anderson [2] introduced another saturated incidence rate \(\frac{\beta SI}{1+\alpha S}\) to study host–parasite dynamics. Here, the positive numbers \(\alpha \) and m are coefficients that measure the inhibitory effect. A more general form of saturated incidence rate can be found in [24]. Related discussions of epidemic models with nonlinear incidence can be found in [6, 22, 23, 47, 52, 56, 57, 61, 62] and the references therein.
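The saturation effect of \(g(I)=\beta I/(1+mI)\) can be illustrated directly: g is increasing in I but bounded above by \(\beta /m\), so the effective contacts level off for large I. A minimal sketch with made-up parameter values:

```python
# Saturated incidence g(I) = beta*I/(1+m*I): increasing in I, bounded by beta/m.

def g(I, beta=0.5, m=0.1):
    return beta * I / (1.0 + m * I)

beta, m = 0.5, 0.1
values = [g(I, beta, m) for I in (1.0, 10.0, 100.0, 1000.0, 10000.0)]

# Monotone increasing in I ...
assert all(v1 < v2 for v1, v2 in zip(values, values[1:]))
# ... but saturating at the level beta/m = 5.
assert all(v < beta / m for v in values)
assert beta / m - g(1e6, beta, m) < 1e-2
```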

In recent decades, it has been recognized that environmental heterogeneity and individual mobility play an important role in the transmission of infectious diseases. Allen et al. [1] proposed the SIS (susceptible–infected–susceptible) epidemic reaction–diffusion model

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=-\frac{\beta (x)SI}{S+I}+\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta (x)SI}{S+I}-\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x),\ I(x,0)=I_0(x), &{} x\in \varOmega ,\\ \displaystyle \int _{\varOmega }(S_0(x)+I_0(x))\mathrm {d}x=N>0, \end{array}\right. } \end{aligned}$$
(1)

where the habitat \(\varOmega \) is a bounded domain in \(\mathbb {R}^n\) with smooth boundary \(\partial \varOmega \); \(S(x,t)\) and \(I(x,t)\) represent, respectively, the densities of susceptible and infected individuals at location \(x\in \varOmega \) and time \(t>0\); the positive constants \(d_S\) and \(d_I\) are the diffusion coefficients of the susceptible and infected individuals, respectively; and the positive Hölder continuous functions \(\beta (x)\) and \(\gamma (x)\) stand for, respectively, the rates of disease transmission and recovery at x. The homogeneous Neumann boundary conditions mean that no flux crosses the boundary \(\partial \varOmega \). Various kinds of SIS epidemic reaction–diffusion systems with standard incidence rates have been extensively studied; one may refer to [7,8,9, 16, 18, 21, 26, 27, 29, 31,32,33,34,35,36,37, 44, 46, 48, 49, 51, 53,54,55, 58] and the references therein.
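Since the reaction terms in (1) cancel in the sum \(S+I\) and no flux crosses the boundary, the total population is conserved; an explicit 1-D finite-difference sketch (illustrative coefficients, reflecting ghost cells for the Neumann conditions) reproduces this at the discrete level:

```python
# Explicit 1-D scheme for system (1) with reflecting (no-flux) boundaries.
import math

n, dx, dt = 50, 0.02, 0.005
dS, dI = 0.01, 0.005                      # need dt <= dx^2/(2*max(dS,dI)) for stability
x = [(i + 0.5) * dx for i in range(n)]
beta = [1.0 + 0.5 * math.cos(2 * math.pi * xi) for xi in x]   # beta(x) > 0
gamma = [0.6] * n                                             # gamma(x)

S = [1.0 + 0.2 * math.sin(2 * math.pi * xi) for xi in x]
I = [0.1] * n

def lap(u, i):
    # Discrete Laplacian with reflecting ghost cells (homogeneous Neumann).
    left = u[i - 1] if i > 0 else u[0]
    right = u[i + 1] if i < n - 1 else u[n - 1]
    return (left - 2.0 * u[i] + right) / dx**2

total0 = sum(S[i] + I[i] for i in range(n)) * dx
for _ in range(200):
    f = [beta[i] * S[i] * I[i] / (S[i] + I[i]) for i in range(n)]  # standard incidence
    S = [S[i] + dt * (dS * lap(S, i) - f[i] + gamma[i] * I[i]) for i in range(n)]
    I = [I[i] + dt * (dI * lap(I, i) + f[i] - gamma[i] * I[i]) for i in range(n)]

total = sum(S[i] + I[i] for i in range(n)) * dx
assert abs(total - total0) < 1e-10        # total population conserved up to roundoff
```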

By direct calculation, we find that in (1) the total population

$$\begin{aligned} \int _{\varOmega }(S(x,t)+I(x,t))\mathrm {d}x=\int _{\varOmega }(S_0(x)+I_0(x))\mathrm {d}x=N \end{aligned}$$

is fixed for all \(t>0\). In general, however, the total population cannot always remain constant in the real world. On the other hand, the birth rate of the susceptible population and the death rate induced by the disease are important factors in the evolution of disease transmission; see [19, 20]. Li, Peng and Wang [35] therefore used a recruitment term to describe the growth of the susceptible population. Moreover, the logistic source is a suitable choice for describing the intrinsic growth of the susceptible individuals; see, for instance, [12, 15]. Therefore, Li et al. [32] introduced the logistic source \(a(x)S-b(x)S^2\) in the first equation of (1) to model the growth of the susceptible population. More precisely, the model proposed in [32] reads as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=a(x)S-b(x)S^2-\frac{\beta (x)SI}{S+I}+\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta (x)SI}{S+I}-\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x),\ I(x,0)=I_0(x), &{} x\in \varOmega , \end{array}\right. } \end{aligned}$$
(2)

where a(x) and b(x) are positive Hölder continuous functions; a(x) and \(\frac{a(x)}{b(x)}\) represent, respectively, the birth rate of the susceptible population and the intrinsic carrying capacity.

On the other hand, Huo et al. [22] discussed the following SIS epidemic model with saturated incidence rate and logistic source:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=a(x)S-b(x)S^2-\frac{\beta (x)SI}{1+mI}+\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta (x)SI}{1+mI}-\gamma (x)I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x),\ I(x,0)=I_0(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$
(3)

Models with the above-mentioned saturated incidence rate have been investigated by many mathematicians; see [24, 52, 56, 60,61,62].

Motivated by the above works, in this paper, we consider the following SI epidemic model:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=a(x)S-b(x)S^2-\frac{\beta (x)SI}{1+\alpha S}-\mu (x)S, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta (x)SI}{1+\alpha S}-\left( \mu (x)+\eta (x)\right) I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x),\ I(x,0)=I_0(x), &{} x\in \varOmega , \end{array}\right. } \end{aligned}$$
(4)

where the parameters \(\mu (x)\) and \(\eta (x)\) are positive Hölder continuous functions on \({\overline{\varOmega }}\): \(\mu (x)\) accounts for the rate of natural death and \(\eta (x)\) is the death rate caused by the disease; \(\alpha \) is a positive constant that measures the inhibitory effect. It is worth mentioning that Zhang et al. [62] studied a time-delay SIR epidemic model with the same incidence rate and a logistic source. One may further refer to [41, Chapter 15] for the derivation of the ODE version of model (4).

As in [4, 28, 64], we introduce the following favorable set:

$$\begin{aligned} F^+=\{x\in \varOmega |\ a(x)>\mu (x)\} \end{aligned}$$

in order to reflect the feature of the heterogeneous environment. We always assume that the favorable set \(F^+\) is nonempty throughout this paper. One can then apply the standard theory of semilinear parabolic systems to conclude that (4) has a classical solution provided that the initial functions \(S_{0}(x)\) and \(I_{0}(x)\) are nonnegative and continuous. If additionally \(\int _{\varOmega }I_0\mathrm {d}x>0\), it follows from the strong maximum principle and the Hopf boundary lemma for parabolic equations that \(S(x,t)>0\) and \(I(x,t)>0\) for \(x\in {{\overline{\varOmega }}}\) and \(t>0\). The steady states of (4) satisfy the following elliptic system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S\varDelta S=a(x)S-b(x)S^2-\frac{\beta (x)SI}{1+\alpha S}-\mu (x)S, &{} x\in \varOmega ,\\ \displaystyle -d_I\varDelta I=\frac{\beta (x)SI}{1+\alpha S}-\left( \mu (x)+\eta (x)\right) I, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(5)

Biologically, only nonnegative solutions of (5) are of interest. A solution (S(x), I(x)) of (5) is called a disease-free equilibrium (DFE) if \(I(x)\equiv 0\) for all \(x\in \varOmega \), and an endemic equilibrium (EE) if \(I(x)>0\) for some \(x\in \varOmega \). By the strong maximum principle and the Hopf boundary lemma for elliptic equations, any EE satisfies \(S(x)>0\) and \(I(x)>0\) for \(x\in {{\overline{\varOmega }}}\).

In the present paper, our first goal is to study the extinction and persistence of the disease via the basic reproduction number \({\mathcal {R}}_0\). Indeed, Theorem 2 tells us that the disease vanishes if \({\mathcal {R}}_0<1\), whereas if \({\mathcal {R}}_0>1\) and \(I_0(x) \not \equiv 0\), the solution of system (4) is uniformly persistent and hence an EE exists. Furthermore, we establish the global stability of the EE by constructing a suitable Lyapunov function; see Theorem 3. As our second goal, we study the asymptotic profile of the EE as the migration rate of the susceptible or infected individuals tends to zero (see Theorems 4 and 5). Our results show that the infectious disease always persists even if the movement rate of the susceptible or infected population is controlled to be sufficiently small. Similar conclusions hold for systems (2) and (3). These theoretical results imply that controlling the mobility of the susceptible or infected population is not an effective strategy to eradicate the infection in epidemic models with logistic sources. In the discussion section, we compare our results with those for the two related models (2) and (3), in order to understand the effects of the incidence rate, the logistic source, and the mobility of the population; see the last section for more details.

The rest of this paper is organized as follows. In Sect. 2, the dynamics of the epidemic model (4) are analyzed in terms of the basic reproduction number: first, the uniform boundedness of solutions to (4) is established; then, the definition and properties of the basic reproduction number are studied; and finally, the long-time behavior of system (4) is characterized by \({\mathcal {R}}_0\). Section 3 is devoted to the global stability of the EE and to the spatial distribution of the disease when the movement rate of the susceptible or infected population is small. In the last section, we discuss our results and compare them for problem (4) with those for the related systems (2) and (3).

2 Threshold Dynamical Behaviors

In this section, we aim to establish the dynamical behaviors of (4) in terms of the basic reproduction number \({\mathcal {R}}_0\). First of all, we study the auxiliary parabolic problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=a(x)S-b(x)S^2-\mu (x)S, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$
(6)

The associated steady-state problem satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S\varDelta S=a(x)S-b(x)S^2-\mu (x)S, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial S}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(7)

Denote by \({\tilde{S}}(x)\) the unique positive solution of (7) whenever it exists. Then, we use [4, Theorem A.1] to derive the following conclusion.

Lemma 1

Suppose that \( F^+\) is nonempty. Consider the eigenvalue problem with indefinite weight:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S\varDelta \varphi (x)=\varLambda \left( a(x)-\mu (x)\right) \varphi ,\; &{} x\in \varOmega ,\\ \displaystyle \frac{\partial \varphi (x)}{\partial \nu }=0,\; &{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(8)

If \(\int _{\varOmega }\left( a(x)-\mu (x)\right) \mathrm {d}x< 0\), let \(\varLambda _1(a(x)-\mu (x))\) be the principal positive eigenvalue of (8), and if \(\int _{\varOmega }\left( a(x)-\mu (x)\right) \mathrm {d}x\ge 0\), set \(\varLambda _1(a(x)-\mu (x))=0\). Then problem (6) has a unique positive steady state \(\tilde{S}\), which is a global attractor for nonnegative nontrivial solutions, when \(0<d_S<1/\varLambda _1(a(x)-\mu (x))\). When \(d_S\ge 1/\varLambda _1(a(x)-\mu (x))\), (6) has no positive steady state, and all nonnegative solutions of (6) decay to 0 as \(t\rightarrow \infty \).

2.1 The Uniform Bound of Solutions to (4)

From now on, for the sake of simplicity, let us denote

$$\begin{aligned} h^{*}=\max _{x\in {\bar{\varOmega }}}h(x),\ \ \ h_{*}=\min _{x\in {\bar{\varOmega }}}h(x), \end{aligned}$$

where h stands for any of the functions \(\beta (x),a(x),b(x),\mu (x),\eta (x)\).

The uniform bounds of solutions of (4) are given as follows.

Theorem 1

There exists a positive constant C independent of the initial data such that

$$\begin{aligned} \Vert S(\cdot ,t)\Vert _{L^{\infty }(\varOmega )}+\Vert I(\cdot ,t)\Vert _{L^{\infty }(\varOmega )}\le C,\ \forall t\ge T, \end{aligned}$$
(9)

for some large time \(T>0\).

Proof

It follows from the first equation of (4) that

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S\le a(x)S-b(x)S^2, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$

Let W be the solution of the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial W}{\partial t}-d_S\varDelta W= a(x)W-b(x)W^2, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial W}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle W(x,0)=S_0(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$

Applying the standard comparison principle, we get

$$\begin{aligned} S(x,t)\le W(x,t),\ \ \text{ for } \text{ all }\ x\in \varOmega \ \text{ and } \ t\ge 0. \end{aligned}$$
(10)

Moreover, it is well known that

$$\begin{aligned} \limsup _{t\rightarrow \infty }S(x,t) \le \limsup _{t\rightarrow \infty }W(x,t)\le \frac{a^*}{b_*}\ \ \text{ uniformly } \text{ for }\ x\in {{\overline{\varOmega }}} . \end{aligned}$$

In what follows, we use C to denote a positive constant that is independent of the initial data but may vary from line to line. Thus, we can find a large time \(T_{1}>0\) such that

$$\begin{aligned} S(x,t)\le 2C,\ \ t\ge T_1,\ \ x\in \varOmega . \end{aligned}$$
(11)

Denote

$$\begin{aligned} U(t)=\int _\varOmega \left[ S(x,t)+I(x,t)\right] \mathrm {d}x. \end{aligned}$$

Then, it follows from (4) that

$$\begin{aligned} \begin{aligned} \frac{\mathrm {d} U(t)}{\mathrm {d}t}&=\int _\varOmega \left[ a(x)S(x,t)-b(x)S^{2}(x,t)\right] \mathrm {d}x-\int _\varOmega \mu (x)S(x,t)\mathrm {d}x\\&\qquad -\int _\varOmega \left( \mu (x)+\eta (x)\right) I(x,t)\mathrm {d}x\\&\le \int _\varOmega a(x)S(x,t) \mathrm {d}x-\int _\varOmega \mu (x)[S(x,t)+I(x,t)]\mathrm {d}x. \end{aligned} \end{aligned}$$

Due to (11), we obtain

$$\begin{aligned} \frac{d U(t)}{dt}\le k-mU(t),\ t\ge T_1, \end{aligned}$$

where \(k= C a^{*}|\varOmega |\) and \(m=\mu _{*}\). This yields

$$\begin{aligned} U(t)\le U(T_1)e^{-m (t-T_1)}+\frac{k}{m}\left[ 1-e^{- m(t-T_1)}\right] ,\ t\ge T_1. \end{aligned}$$

That is,

$$\begin{aligned} \int _\varOmega \left[ S(x,t)+I(x,t)\right] \mathrm {d}x\le e^{- m(t-T_1)}U(T_1)+\frac{k}{m}\left[ 1-e^{- m(t-T_1)}\right] ,\ t\ge T_1.\nonumber \\ \end{aligned}$$
(12)

Therefore, we have

$$\begin{aligned} \limsup _{t\rightarrow \infty } \int _\varOmega I(x,t)\mathrm {d}x\le \limsup _{t\rightarrow \infty } \int _\varOmega \left[ S(x,t)+I(x,t)\right] \mathrm {d}x \le \frac{k}{m}. \end{aligned}$$

By setting

$$\begin{aligned} V(x,S,I)=a(x)S-b(x)S^2-\frac{\beta (x)SI}{1+\alpha S}, \end{aligned}$$

from (11), for all \(x\in {{\overline{\varOmega }}},\ t\ge T_1\), we have

$$\begin{aligned} | V(x,S,I)|\le a^*|S|+b^*|S|^2+\frac{\beta ^*}{\alpha }|I|\le C+C|I|. \end{aligned}$$

Now, we apply [14, Lemma 2.1] (which goes back to [30]) with \(\sigma =p_0=1\) to system (4) to conclude that there exists a constant \(C>0\) independent of the initial data such that

$$\begin{aligned} S(x,t),\ \ I(x,t)\le C, \ \ \forall t>T_{2},\ x\in {{\overline{\varOmega }}} \end{aligned}$$

for some \(T_2\ge T_1\). This completes our proof. \(\square \)
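The comparison step in the proof, from \(\frac{\mathrm {d}U}{\mathrm {d}t}\le k-mU\) to the exponential bound on U, can be checked numerically; in the sketch below, \(q(t)\ge 0\) is an arbitrary stand-in for the neglected nonnegative terms, and all constants are illustrative:

```python
# Check: if U'(t) = k - m*U(t) - q(t) with q >= 0, then
# U(t) <= U(0)*exp(-m*t) + (k/m)*(1 - exp(-m*t)) for all t >= 0.
import math

k, m = 2.0, 0.5

def q(t):
    return 0.3 * (1.0 + math.sin(t))     # arbitrary nonnegative "slack" term

U, t, dt = 4.0, 0.0, 1e-3
U0 = U
for _ in range(20000):                   # forward Euler up to t = 20
    U += dt * (k - m * U - q(t))
    t += dt
    bound = U0 * math.exp(-m * t) + (k / m) * (1.0 - math.exp(-m * t))
    assert U <= bound + 1e-6             # trajectory stays below the bound
```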

2.2 The Basic Reproduction Number

In this subsection, we define the basic reproduction number \({\mathcal {R}}_0\) and study its properties. Linearizing the equation for I in system (4) at the DFE \(({\tilde{S}}(x),0)\), we obtain the parabolic problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta (x){\tilde{S}}(x)}{1+\alpha {\tilde{S}}(x)}I-\left( \mu (x)+\eta (x)\right) I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle I(x,0)=I_0(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$

As in [1], we define the basic reproduction number \({\mathcal {R}}_0\):

$$\begin{aligned} {\mathcal {R}}_0=\sup _{0\ne \varphi \in H^1(\varOmega )}\left\{ \frac{\int _{\varOmega }\frac{\beta (x)\tilde{S}(x)}{1+\alpha {\tilde{S}}(x)}\varphi ^2\mathrm {d}x}{\int _{\varOmega }d_I|\nabla \varphi |^2+\left( \mu (x)+\eta (x)\right) \varphi ^2\mathrm {d}x}\right\} . \end{aligned}$$
(13)

It should be noticed that the basic reproduction number \({\mathcal {R}}_0\) defined by (13) implicitly depends not only on the diffusion rate \(d_{I}\) of the infected population but also on the diffusion rate \(d_{S}\) of the susceptible population and the saturation coefficient \(\alpha \). To stress the dependence of \({\mathcal {R}}_0\) on these parameters, we write \({\mathcal {R}}_0(d_I,d_S,\alpha )\) for the basic reproduction number of system (4).
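In one space dimension, (13) can be approximated by discretizing the operator \(-d_I\varDelta +(\mu +\eta )\) and applying power iteration to its inverse composed with multiplication by \(\beta {\tilde{S}}/(1+\alpha {\tilde{S}})\). The sketch below is illustrative only: the profile standing in for \({\tilde{S}}(x)\) and all coefficients are made up rather than computed from (7).

```python
# Discretized version of (13): R0 is the largest eigenvalue of
# (-dI*Lap + (mu+eta))^{-1} composed with multiplication by w = beta*S~/(1+alpha*S~).
import math

n, dx = 200, 1.0 / 200
dI, alpha = 0.05, 1.0
x = [(i + 0.5) * dx for i in range(n)]
S_tilde = [0.8 + 0.3 * math.cos(2 * math.pi * xi) for xi in x]  # stand-in for S~(x)
mu_eta = [0.5] * n                                              # mu(x)+eta(x)
beta = [1.0 + 0.5 * math.sin(2 * math.pi * xi) for xi in x]
w = [beta[i] * S_tilde[i] / (1.0 + alpha * S_tilde[i]) for i in range(n)]

def solve_neumann(rhs):
    # Thomas algorithm for (-dI*Lap + diag(mu_eta)) u = rhs with no-flux boundaries.
    r = dI / dx**2
    a = [-r] * n                       # sub-diagonal
    c = [-r] * n                       # super-diagonal
    b = [2 * r + mu_eta[i] for i in range(n)]
    b[0] -= r                          # reflecting ghost cells at both ends
    b[-1] -= r
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], rhs[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (rhs[i] - a[i] * dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Power iteration for the positive operator v -> A^{-1}(w*v).
v = [1.0] * n
R0 = 0.0
for _ in range(500):
    u = solve_neumann([w[i] * v[i] for i in range(n)])
    R0 = max(abs(ui) for ui in u)      # eigenvalue estimate
    v = [ui / R0 for ui in u]
```

Consistent with the two limits in Proposition 1(a1) below, the computed value lies between \(\int _\varOmega w\,\mathrm {d}x/\int _\varOmega (\mu +\eta )\mathrm {d}x\) and \(\max _x w/(\mu +\eta )\).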

Let \((\lambda _{1},\psi _{1})\) be the principal eigenpair of the eigenvalue problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle d_I\varDelta \psi +\frac{\beta (x){\tilde{S}}(x)}{1+\alpha {\tilde{S}}(x)}\psi -\left( \mu (x)+\eta (x)\right) \psi +\lambda \psi =0, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial \psi }{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(14)

Then, we have the following properties.

Proposition 1

The following statements hold.

  1. (a)

For fixed \(d_S,\alpha >0\), the following hold:

    1. (a1)

      \({\mathcal {R}}_0(d_I,d_S,\alpha )\) is a monotone decreasing function of \(d_{I}\) with

      $$\begin{aligned} {\mathcal {R}}_0(d_I,d_S,\alpha )\rightarrow \max \limits _{x\in \varOmega }{{\beta (x){\tilde{S}}(x)}\over {[1+\alpha {\tilde{S}}(x)](\mu (x)+\eta (x))}},\ \ \text{ as } \ d_{I}\rightarrow 0, \end{aligned}$$

      and

      $$\begin{aligned} {\mathcal {R}}_0(d_I,d_S,\alpha )\rightarrow {{\int _{\varOmega }{{\beta (x){\tilde{S}}(x)}\over {1+\alpha {\tilde{S}}(x)}}}\mathrm {d}x\Big /{\int _{\varOmega }(\mu (x)+\eta (x))}}\mathrm {d}x,\ \ \text{ as } \ d_{I}\rightarrow \infty . \end{aligned}$$
    2. (a2)

If \(\int _{\varOmega }{{{\beta (x){\tilde{S}}(x)}}\over {1+\alpha {\tilde{S}}(x)}}\mathrm {d}x< \int _{\varOmega }(\mu (x)+\eta (x))\mathrm {d}x\) and \({{\beta (x){\tilde{S}}(x)} \over {1+\alpha \tilde{S}(x)}}-(\mu (x)+\eta (x))\) changes sign in \(\varOmega \), then there exists a threshold value \(d_{I^*}\in (0,\infty )\) such that \({\mathcal {R}}_0(d_I,d_S,\alpha )<1\) for \(d_{I}>d_{I^*}\) and \({\mathcal {R}}_0(d_I,d_S,\alpha )>1\) for \(d_{I}<d_{I^*}\).

    3. (a3)

      If \(\int _{\varOmega }{{{\beta (x){\tilde{S}}}}\over {1+\alpha {\tilde{S}}}}\mathrm {d}x\ge \int _{\varOmega }(\mu (x)+\eta (x))\mathrm {d}x\), then \({\mathcal {R}}_0(d_I,d_S,\alpha )>1\) for all \(d_{I}\).

  2. (b)

For fixed \(d_I,\alpha >0\),

    $$\begin{aligned} {\mathcal {R}}_0(d_I,d_S,\alpha )\rightarrow {\mathcal {R}}_0^*:=\sup _{0\ne \varphi \in H^1(\varOmega )}\left\{ \frac{\int _{\varOmega }\frac{\beta (x)(a(x)-\mu (x))_+}{b(x)+\alpha (a(x)-\mu (x))_+}\varphi ^2\mathrm {d}x}{\int _{\varOmega }d_I|\nabla \varphi |^2+\left( \mu (x)+\eta (x)\right) \varphi ^2\mathrm {d}x}\right\} , \end{aligned}$$

    as \(d_{S}\rightarrow 0\), where

    $$\begin{aligned} (a(x)-\mu (x))_+ =\left\{ \begin{array}{ll} a(x)-\mu (x),\ &{}a(x)>\mu (x),\\ 0,\ &{} a(x)\le \mu (x). \end{array} \right. \end{aligned}$$
  3. (c)

For fixed \(d_I,d_S>0\), \({\mathcal {R}}_0(d_I,d_S,\alpha )\rightarrow 0\) as \(\alpha \rightarrow \infty \).

  4. (d)

    \({\mathcal {R}}_0(d_I,d_S,\alpha )>1\) when \(\lambda _{1}<0\), \({\mathcal {R}}_0(d_I,d_S,\alpha )=1\) when \(\lambda _{1}=0\) and \({\mathcal {R}}_0(d_I,d_S,\alpha )<1\) when \(\lambda _{1}>0\).

The proof of Proposition 1 is similar to that of [1, Lemma 2.3], and hence the details are omitted.

Remark 1

In view of [43, Lemma 3.2], the unique positive solution of (7) satisfies

$$\begin{aligned} {\tilde{S}}(x)\rightarrow \frac{(a(x)-\mu (x))_+}{b(x)}\ \text{ uniformly } \text{ on } \ \;{\overline{\varOmega }},\;\ \text{ as }\ d_S\rightarrow 0. \end{aligned}$$

Then the principal eigenvalue \(\lambda _1\) of the eigenvalue problem (14) converges to \(\lambda _1^*\) as \(d_S\rightarrow 0\), where \(\lambda _1^*\) is the principal eigenvalue of the eigenvalue problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_I\varDelta \phi =\frac{\beta (x)(a(x)-\mu (x))_+}{b(x)+\alpha (a(x)-\mu (x))_+}\phi -\left( \mu (x)+\eta (x)\right) \phi +\lambda \phi , &{} x\in \varOmega ,\\ \displaystyle \frac{\partial \phi }{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(15)

Furthermore, \({\mathcal {R}}_0^*>1\) when \(\lambda _{1}^*<0\), \({\mathcal {R}}_0^*=1\) when \(\lambda _{1}^*=0\) and \({\mathcal {R}}_0^*<1\) when \(\lambda _{1}^*>0\).
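The limit in Remark 1 can be observed numerically by time-stepping the logistic problem with a small \(d_S\); in the sketch below \(a(x)>\mu (x)\) everywhere, so \((a-\mu )_+=a-\mu \), and all coefficients are illustrative:

```python
# Time-step the logistic problem (7) with a small d_S and compare the steady
# state with the pointwise limit (a-mu)_+/b.
import math

n, dx = 50, 0.02
dS, dt = 1e-3, 0.02
x = [(i + 0.5) * dx for i in range(n)]
a_mu = [0.5 + 0.2 * math.cos(2 * math.pi * xi) for xi in x]   # a(x)-mu(x) > 0
b = [1.0] * n

S = [0.5] * n
for _ in range(10000):                   # run to (near) steady state, T = 200
    new = []
    for i in range(n):
        left = S[i - 1] if i > 0 else S[0]     # reflecting (Neumann) boundaries
        right = S[i + 1] if i < n - 1 else S[n - 1]
        lap = (left - 2.0 * S[i] + right) / dx**2
        new.append(S[i] + dt * (dS * lap + a_mu[i] * S[i] - b[i] * S[i] ** 2))
    S = new

limit = [a_mu[i] / b[i] for i in range(n)]        # (a-mu)_+/b since a > mu here
max_diff = max(abs(S[i] - limit[i]) for i in range(n))
assert max_diff < 0.05                   # steady state is close to the d_S -> 0 limit
```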

2.3 The Extinction and Persistence of Solutions to (4)

It turns out that the long-time dynamics of solutions of (4) is completely determined by \({\mathcal {R}}_0\). More precisely, we have

Theorem 2

Let (S, I) be the unique solution of (4). Then the following statements hold.

  1. (i)

    If \({\mathcal {R}}_0<1\), then

    $$\begin{aligned} \lim \limits _{t \rightarrow \infty }(S(x,t)-{\tilde{S}}(x))=0\quad \text{ and }\quad \lim \limits _{t \rightarrow \infty } I(x,t)=0 \end{aligned}$$

uniformly for \(x \in {\overline{\varOmega }}\), where \({\tilde{S}}(x)\) is the unique positive solution of (7).

  2. (ii)

If \({\mathcal {R}}_0>1\), then system (4) is uniformly persistent in the sense that for \(I(\cdot ,0)\not \equiv 0\), there exists a constant \(\epsilon _0>0\) independent of the initial data such that the solution (S, I) satisfies

    $$\begin{aligned} \liminf _{t\rightarrow \infty }S(x,t)\ge \epsilon _0,\ \ \ \liminf _{t\rightarrow \infty }I(x,t)\ge \epsilon _0 \end{aligned}$$

    uniformly for \(x \in {\overline{\varOmega }}\). Furthermore, (4) admits at least one EE.

Proof

By the first equation of (4), we see that

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S\le a(x)S-b(x)S^2-\mu (x)S, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ S(x,0)=S_{0}(x)\ge 0, &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$
(16)

Then, using the standard comparison principle for parabolic equations, we obtain

$$\begin{aligned} S(x,t)\le {\overline{S}}(x,t), \ \ \forall x\in {{\overline{\varOmega }}},\ \ t\ge 0, \end{aligned}$$

where \({\overline{S}}\) is the unique solution of (6). Moreover, it follows from Lemma 1 that

$$\begin{aligned} \limsup _{t\rightarrow \infty }S(x,t)\le \limsup _{t\rightarrow \infty }{\overline{S}}(x,t)=\tilde{S}(x) \ \ \text{ uniformly }\ \ \text{ for }\ \ \text{ all }\ \ x\in {\overline{\varOmega }}. \end{aligned}$$
(17)

For any given small \(\varepsilon > 0\), there exists a large time \(T>0\) such that

$$\begin{aligned} S(x,t)\le \tilde{S}+\varepsilon , \ \ \forall x\in {{\overline{\varOmega }}},\ \ t\ge T. \end{aligned}$$
(18)

Note that \({\mathcal {R}}_0<1\); then Proposition 1 gives \(\lambda _1>0\). Let \(\lambda _{1}(\varepsilon )\) be the principal eigenvalue of (14) with \(\tilde{S}(x)\) replaced by \(\tilde{S}(x)+\varepsilon \), and let \(\psi _{1}(x)>0\) be the corresponding eigenfunction. By the continuous dependence of the principal eigenvalue on the parameters, we can choose \(\varepsilon >0\) small enough that \(\lambda _{1}(\varepsilon )>0\). For such \(\varepsilon \), we apply (18) and the second equation of (4) to see that I satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial I}{\partial t}- d_I\varDelta I\le \frac{\beta (x)(\tilde{S}(x)+\varepsilon )}{1+\alpha (\tilde{S}(x)+\varepsilon )}I-\left( \mu (x)+\eta (x)\right) I, &{} x\in \varOmega ,\ t>T,\\ \displaystyle \frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>T. \end{array}\right. } \end{aligned}$$

Define \(u(x,t)=M_*e^{-\lambda _{1}(\varepsilon )t}\psi _{1}(x)\), where the positive constant \(M_*\) is chosen such that \(M_*\psi _{1}(x)\ge I(x,T)\) for all \(x\in \varOmega \). It is easily seen that u(x, t) satisfies the auxiliary system

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial u}{\partial t}- d_I\varDelta u=\frac{\beta (x)(\tilde{S}(x)+\varepsilon )}{1+\alpha (\tilde{S}(x)+\varepsilon )}u-\left( \mu (x)+\eta (x)\right) u, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial u}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ u(x,0)=M_*\psi _{1}(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$

It follows from the parabolic comparison principle that \(I(x,t+T)\le u(x,t)\) for \(x\in \varOmega \), \(t>0\). Therefore, it holds that

$$\begin{aligned} I(x,t)\rightarrow 0 \ \ \text{ uniformly } \text{ on }\ \ {\overline{\varOmega }}, \ \ \text{ as } \ \ t\rightarrow \infty . \end{aligned}$$

Using this fact, for any small \(\varepsilon >0\), we can find \(\tilde{T}>0\) such that \(I(x,t)\le \varepsilon \) for \(x\in \varOmega \) and \(t\ge \tilde{T}\), and such that \(a(x)-\varepsilon \beta ^*-\mu (x)>0\) for some \(x\in \varOmega \). It is easily seen from the first equation in (4) that

$$\begin{aligned} S_{t}-d_{S}\varDelta S\ge a(x)S-b(x)S^2-\varepsilon \beta ^*S-\mu (x)S,\ \ \ x\in \varOmega ,\ t\ge \tilde{T}. \end{aligned}$$

Denote by \(\tilde{W}\) the unique positive solution of the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial \tilde{W}}{\partial t}-d_S\varDelta \tilde{W}=a(x)\tilde{W}-b(x)\tilde{W}^2-\varepsilon \beta ^*\tilde{W}-\mu (x)\tilde{W}, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial \tilde{W}}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \tilde{W}(x,0)=S(x,\tilde{T})> 0, &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$

Then, we use the standard comparison principle for parabolic equations to infer that

$$\begin{aligned} S(x,t+\tilde{T})\ge \tilde{W}(x,t), \ \ \forall x\in {{\overline{\varOmega }}},\ \ t\ge 0, \end{aligned}$$

which yields that

$$\begin{aligned} \liminf _{t\rightarrow \infty }S(x,t)\ge \tilde{S}_{\varepsilon }(x)\ \ \text{ uniformly } \text{ for } \ \text{ all }\ \ x\in {\overline{\varOmega }}, \end{aligned}$$

where \(\tilde{S}_{\varepsilon }(x)\) is the unique positive solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S\varDelta v=a(x)v-b(x)v^2-\varepsilon \beta ^*v-\mu (x)v, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial v}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$

Here we used the fact that \(a(x)-\varepsilon \beta ^*-\mu (x)>0\) for some \(x\in \varOmega \), which ensures the existence of the positive solution \(\tilde{S}_{\varepsilon }(x)\). By the arbitrariness of \(\varepsilon \), letting \(\varepsilon \rightarrow 0\), we easily obtain

$$\begin{aligned} \liminf _{t\rightarrow \infty }S(x,t)\ge \tilde{S}(x)\ \ \text{ uniformly } \ \text{ for } \ \text{ all }\ \ x\in {\overline{\varOmega }}. \end{aligned}$$

Therefore, in light of (17), one can observe that

$$\begin{aligned} \lim _{t\rightarrow \infty }S(x,t)=\tilde{S}(x)\ \ \ \text{ uniformly } \ \text{ for } \ \text{ all }\ \ x\in {\overline{\varOmega }}. \end{aligned}$$

This completes the proof of (i).

Next, we prove that (ii) holds. Assume that \({\mathcal {R}}_0>1\). We make use of the arguments of [40, Theorem 2.1]. Let \(X=C({\overline{\varOmega }},\mathbb {R}^{2}_{+})\),

$$\begin{aligned} X_{0}=\{(S_0,I_0)\in X:\ I_0\not \equiv 0\},\ \ \ \partial X_{0}=X\setminus X_{0}=\{(S_0,I_0)\in X:\ I_0 \equiv 0\}, \end{aligned}$$

so that \(X=X_{0}\cup \partial X_{0}\). For a given \((S_0,I_0)\in X\), system (4) generates a semiflow, denoted by \(\varPsi (t)\), with

$$\begin{aligned} \varPsi (t)(S_0,I_0)=(S(\cdot ,t),I(\cdot ,t)), \ \forall t\ge 0, \end{aligned}$$

where the \((S(\cdot ,t),I(\cdot ,t))\) is the unique solution of (4). In light of Theorem 1, \(\varPsi (t)\) is point dissipative. It also follows from the standard parabolic \(L^{p}\)-theory and embedding theorems that \(\varPsi (t)\) is compact from X to X for any fixed \(t > 0\).

For \((S_0,I_0)\in \partial X_{0}\), the uniqueness of solutions implies that \(I(x,t)\equiv 0\) for all \(t\ge 0\). Then, by a similar argument to the proof of assertion (i), we get

$$\begin{aligned} S(x,t)\rightarrow \tilde{S}(x)\ \ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }} \ \text{ as } \ t\rightarrow \infty . \end{aligned}$$

This proves that \((\tilde{S}(x),0)\) attracts \((S_0,I_0)\in \partial X_{0}\).

Set \(M_{0}=(\tilde{S}(x),0)\). For sufficiently small \(\epsilon >0\), we are going to show that

$$\begin{aligned} \limsup _{t\rightarrow \infty } d(\varPsi (t)(S_0,I_0),M_{0})=\limsup _{t\rightarrow \infty }\Vert \varPsi (t)(S_0,I_0)-M_{0}\Vert \ge \epsilon ,\ \ \forall (S_0,I_0)\in X_{0}.\nonumber \\ \end{aligned}$$
(19)

Suppose that

$$\begin{aligned} \limsup _{t\rightarrow \infty } d(\varPsi (t)(S_0,I_0),M_{0})<\epsilon , \end{aligned}$$

for some \((S_0,I_0)\in X_{0}\). Without loss of generality, there exists \(T_0>0\) such that \(d(\varPsi (t)(S_0,I_0),M_{0})<\epsilon \) for all \(t>T_0\). Then it is clear that

$$\begin{aligned} \tilde{S}(x)-\epsilon< S(x,t)<\tilde{S}(x)+\epsilon , \ \ \text{ for }\; x\in {\overline{\varOmega }}, \; t>T_0. \end{aligned}$$
(20)

Since \({\mathcal {R}}_0>1\), it follows from Proposition 1(d) that \(\lambda _1<0\). We can choose a positive constant \(\epsilon \) small enough that \(\lambda _1(\epsilon )<0\), where \((\lambda _1(\epsilon ),\varPhi _1)\) is the principal eigenpair of the eigenvalue problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle - d_I\varDelta \varPhi =\frac{\beta (x)(\tilde{S}(x)-\epsilon )}{1+\alpha (\tilde{S}(x)-\epsilon )}\varPhi -\left( \mu (x)+\eta (x)\right) \varPhi +\lambda \varPhi , &{} x\in \varOmega ,\\ \displaystyle \frac{\partial \varPhi }{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$

Let \(\omega (x,t)=\delta e^{-\lambda _1(\epsilon )t}\varPhi _1(x)\), where the positive constant \(\delta \) will be chosen below. Then \(\omega \) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial \omega }{\partial t}- d_I\varDelta \omega =\frac{\beta (x)(\tilde{S}(x)-\epsilon )}{1+\alpha (\tilde{S}(x)-\epsilon )}\omega -\left( \mu (x)+\eta (x)\right) \omega , &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial \omega }{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \omega (x,0)=\delta \varPhi _1(x), &{} x\in \varOmega . \end{array}\right. } \end{aligned}$$
(21)

It follows from (20) that I, governed by the second equation of (4), satisfies

$$\begin{aligned} \displaystyle \frac{\partial I}{\partial t}- d_I\varDelta I\le \frac{\beta (x)(\tilde{S}(x)-\epsilon )}{1+\alpha (\tilde{S}(x)-\epsilon )}I-\left( \mu (x)+\eta (x)\right) I, \end{aligned}$$

for \(x\in \varOmega ,\ t>T_0\). If we choose \(\delta \) small enough such that \(\delta \varPhi _1(x)<I(x,T_0)\) for \(x\in \varOmega \), then I is an upper solution to problem (21), that is, \(\omega (x,t)\le I(x,t+T_0)\) for \(x\in \varOmega \) and \(t>0\). Since \(\lambda _{1}(\epsilon )<0\), we have \(\omega (x,t)\rightarrow \infty \), and hence \(I(x,t)\rightarrow \infty \) uniformly on \({\overline{\varOmega }}\) as \(t\rightarrow \infty \), which contradicts Theorem 1.
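In the special case of constant coefficients, the principal eigenpair is explicit (\(\varPhi _1\equiv 1\) and \(\lambda _1(\epsilon )=\mu +\eta -\beta (\tilde{S}-\epsilon )/(1+\alpha (\tilde{S}-\epsilon ))\)), so the growth mechanism of the subsolution \(\omega \) can be checked directly. A minimal numerical sketch with purely illustrative parameter values (not taken from the paper):

```python
import math

# Illustrative constant coefficients (hypothetical values)
beta, mu, eta, alpha = 2.0, 1.0, 0.5, 0.2
S_tilde, eps = 1.0, 0.05      # DFE susceptible level and a small epsilon

# With constant coefficients and Neumann conditions, Phi_1 is constant and
# the principal eigenvalue reduces to:
lam_eps = (mu + eta) - beta * (S_tilde - eps) / (1 + alpha * (S_tilde - eps))

# The subsolution omega = delta * exp(-lam_eps * t) * Phi_1 grows when lam_eps < 0
delta = 1e-3
def omega(t):
    return delta * math.exp(-lam_eps * t)
```

Since \(\lambda _1(\epsilon )<0\) for these values, \(\omega (t)\rightarrow \infty \) as \(t\rightarrow \infty \), which is exactly the mechanism driving the contradiction in the proof.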

Finally, we can use the argument of [40, 50] to derive the desired conclusion of (ii). The proof is complete. \(\square \)

2.4 The Global Stability of EE

In this subsection, we study the global stability of the EE of problem (4) in the spatially homogeneous environment. That is, we consider

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial S}{\partial t}-d_S\varDelta S=a S-bS^2-\frac{\beta SI}{1+\alpha S}-\mu S, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial I}{\partial t}-d_I\varDelta I=\frac{\beta SI}{1+\alpha S}-\left( \mu +\eta \right) I, &{} x\in \varOmega ,\ t>0,\\ \displaystyle \frac{\partial S}{\partial \nu }=\frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega ,\ t>0,\\ \displaystyle S(x,0)=S_0(x),\ I(x,0)=I_0(x), &{} x\in \varOmega , \end{array}\right. } \end{aligned}$$
(22)

where \(a, b, \alpha , \beta , \mu , \eta \) are positive constants and \(a>\mu \). Clearly, (22) has a unique EE, denoted by \(({\hat{S}},{\hat{I}})\), if and only if \({\mathcal {R}}_0=\frac{\beta (a-\mu )}{[b+\alpha (a-\mu )](\mu +\eta )}>1\).

By elementary calculation, we have from the two equations of (22) that

$$\begin{aligned} \begin{aligned}&{\hat{S}}={{\mu +\eta }\over {\beta -\alpha (\mu +\eta )}}=\frac{a-\mu }{({\mathcal {R}}_0-1)[b+\alpha (a-\mu )]+b},\\&{\hat{I}}={{(a-b{\hat{S}}-\mu )(1+\alpha {\hat{S}})}\over {\beta }}=\frac{({\mathcal {R}}_0-1)[b+\alpha (a-\mu )](\mu +\eta )}{[\beta -\alpha (\mu +\eta )]^2}. \end{aligned} \end{aligned}$$
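These equilibrium expressions can be sanity-checked numerically. A minimal sketch with hypothetical constant parameters verifies that \(({\hat{S}},{\hat{I}})\) annihilates both reaction terms of (22):

```python
# Hypothetical parameter values, used only to sanity-check the EE of (22)
a, b, alpha, beta, mu, eta = 3.0, 1.0, 0.2, 2.0, 1.0, 0.5

R0 = beta * (a - mu) / ((b + alpha * (a - mu)) * (mu + eta))

S_hat = (mu + eta) / (beta - alpha * (mu + eta))
I_hat = (a - b * S_hat - mu) * (1 + alpha * S_hat) / beta

# Both reaction terms of (22) should vanish at (S_hat, I_hat)
f = a * S_hat - b * S_hat**2 - beta * S_hat * I_hat / (1 + alpha * S_hat) - mu * S_hat
g = beta * S_hat * I_hat / (1 + alpha * S_hat) - (mu + eta) * I_hat
```

For these values \({\mathcal {R}}_0\approx 1.90>1\), so the EE exists, and both residuals vanish up to rounding error.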

Now, we consider the global stability of the EE under certain conditions as follows.

Theorem 3

Assume that \({\mathcal {R}}_0>1\) holds, then the EE is globally asymptotically stable if

$$\begin{aligned} {{\alpha \beta {\hat{I}}} \over {1+\alpha {\hat{S}}}}\le b. \end{aligned}$$
(23)

Proof

We choose the following Lyapunov functional

$$\begin{aligned} V(t):=\int _\varOmega L(S(x,t),I(x,t))\mathrm {d}x, \ \ \ \ t>0, \end{aligned}$$

where

$$\begin{aligned} L(S,I)=\int _{{\hat{S}}}^{S} {{\xi -{\hat{S}}}\over {\xi }}\mathrm {d}\xi +\kappa \int _{{\hat{I}}}^{I} {{\xi -{\hat{I}}}\over {\xi }}\mathrm {d}\xi , \end{aligned}$$

with \(\kappa =1+\alpha {\hat{S}}\).

For convenience, let us denote

$$\begin{aligned} f(S,I)=aS-bS^2-{{\beta SI}\over {1+\alpha S}}-\mu S, \ \ \ \ g(S,I)={{\beta SI}\over {1+\alpha S}}-(\mu +\eta )I. \end{aligned}$$

Then we have

$$\begin{aligned} \begin{aligned}&\ L_{S}(S,I)f(S,I)=(S-{\hat{S}})\Big [{{f(S,I)}\over {S}}-{{f({\hat{S}},{\hat{I}})}\over {{\hat{S}}}}\Big ]\\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \quad \quad = \Big (-b+{{\alpha \beta {\hat{I}}}\over {(1+\alpha S)(1+\alpha {\hat{S}})}}\Big )(S-{\hat{S}})^2- {{\beta }\over {1+\alpha S}}(S-{\hat{S}})(I-{\hat{I}}),\\&\ L_{I}(S,I)g(S,I)=(I-{\hat{I}})\Big [{{g(S,I)}\over {I}}-{{g({\hat{S}},{\hat{I}})}\over {{\hat{I}}}}\Big ]={{\kappa \beta }\over {(1+\alpha S)(1+\alpha {\hat{S}})}}(S-{\hat{S}})(I-{\hat{I}}), \end{aligned} \end{aligned}$$

where we used the fact that \(f({\hat{S}},{\hat{I}})=g({\hat{S}},{\hat{I}})=0\). It follows that

$$\begin{aligned} \begin{aligned} {{\mathrm {d}}\over {\mathrm {d}t}}V(t)&=\int _\varOmega \big [L_{S}(S,I)S_{t}+L_{I}(S,I)I_{t}\big ]\mathrm {d}x \\&=\int _\varOmega \Big ({{S-{\hat{S}}}\over {S}}d_{S}\varDelta S+\kappa {{I-{\hat{I}}}\over {I}}d_{I}\varDelta I\Big )\mathrm {d}x \\&\quad +\int _\varOmega \Big (-b+{{\alpha \beta {\hat{I}}}\over {(1+\alpha S)(1+\alpha {\hat{S}})}}\Big )(S-{\hat{S}})^2\mathrm {d}x\\&\quad -\int _\varOmega \Big [{{\beta }\over {1+\alpha S}}-{{\kappa \beta }\over {(1+\alpha S)(1+\alpha {\hat{S}})}}\Big ](S-{\hat{S}})(I-{\hat{I}})\mathrm {d}x\\&=-\int _\varOmega \Big ({{{\hat{S}}}\over {S^2}}d_{S}|\nabla S|^2+\kappa {{{\hat{I}}}\over {I^2}}d_{I}|\nabla I|^2\Big )\mathrm {d}x \\&\quad +\int _\varOmega \Big (-b+{{\alpha \beta {\hat{I}}}\over {(1+\alpha S)(1+\alpha {\hat{S}})}}\Big )(S-{\hat{S}})^2\mathrm {d}x, \end{aligned} \end{aligned}$$

where the cross term vanishes because \(\kappa =1+\alpha {\hat{S}}\).

By (23), we have

$$\begin{aligned} \begin{array}{ll} V'(t)&{}\le \int _\varOmega \left( -b+{{\alpha \beta {\hat{I}}}\over {(1+\alpha S)(1+\alpha {\hat{S}})}}\right) (S-{\hat{S}})^2\mathrm {d}x \\ &{}\le \int _\varOmega \left( -b+{{\alpha \beta {\hat{I}}}\over {1+\alpha {\hat{S}}}}\right) (S-{\hat{S}})^2\mathrm {d}x\le 0. \end{array} \end{aligned}$$
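The sign argument can also be verified numerically: under condition (23) the coefficient of \((S-{\hat{S}})^2\) in \(V'(t)\) is nonpositive for every \(S>0\). A small sketch with hypothetical constant parameters:

```python
# Hypothetical constants; check that condition (23) makes the quadratic
# coefficient in V'(t) nonpositive for a wide range of S values
a, b, alpha, beta, mu, eta = 3.0, 1.0, 0.2, 2.0, 1.0, 0.5
S_hat = (mu + eta) / (beta - alpha * (mu + eta))
I_hat = (a - b * S_hat - mu) * (1 + alpha * S_hat) / beta

# condition (23): alpha*beta*I_hat/(1 + alpha*S_hat) <= b
cond23 = alpha * beta * I_hat / (1 + alpha * S_hat) <= b

# coefficient of (S - S_hat)^2 in V'(t), sampled on a grid of S > 0
coeffs = [-b + alpha * beta * I_hat / ((1 + alpha * S) * (1 + alpha * S_hat))
          for S in [0.01 * k for k in range(1, 1001)]]
```

Since \(1/(1+\alpha S)\le 1\), the sampled coefficients are all bounded above by \(-b+\alpha \beta {\hat{I}}/(1+\alpha {\hat{S}})\le 0\).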

Moreover, \(V'(t)=0\) if and only if \(S={\hat{S}}\) and \(|\nabla I|=0\). Denote

$$\begin{aligned} E:=\big \{(S,I)\in [H^{1}(\varOmega )]^2\ \big |\ V'(t)=0\big \}. \end{aligned}$$

Then it is easy to see that the maximal invariant subset of E is \(\{({\hat{S}},{\hat{I}})\}\). By some standard arguments, we see that

$$\begin{aligned} (S,I)(\cdot ,t)\rightarrow ({\hat{S}},{\hat{I}}) \ \ \text{ in } \ \ [H^1(\varOmega )]^2,\ \ \text{ as } \ t\rightarrow \infty . \end{aligned}$$

Moreover, since we have the \(L^\infty \) estimates of S and I in Theorem 1, by some standard arguments, we know

$$\begin{aligned} {\Vert S(x,t)\Vert }_{{C^2}({\bar{\varOmega }})}+{\Vert I(x,t)\Vert }_{C^2({\bar{\varOmega }})}\le C_0,\ \ \text{ for } \text{ all } \text{ large } \ t, \end{aligned}$$

for some positive constant \(C_0\). Therefore, the Sobolev embedding theorem allows one to assert

$$\begin{aligned} (S,I)(\cdot ,t)\rightarrow ({\hat{S}},{\hat{I}})\ \ \text{ in } \ \ [L^\infty (\varOmega )]^2,\ \ \ \text{ as } \ t\rightarrow \infty , \end{aligned}$$

that is, \(({\hat{S}},{\hat{I}})\) attracts all solutions of (22). Furthermore, using a similar process as in [43, Lemma 3.1], we see that the EE is globally asymptotically stable. This completes the proof. \(\square \)

Remark 2

If \(\alpha =0\), condition (23) always holds, and thus \(({\hat{S}},{\hat{I}})\) is globally asymptotically stable as long as it exists. However, due to the appearance of the saturated incidence rate \((\alpha >0)\), \(({\hat{S}},{\hat{I}})\) may be unstable. Let us denote \(f(S,I),\ g(S,I)\) as in the proof of Theorem 3. Then the Jacobian of the system (22) evaluated at \(({\hat{S}},{\hat{I}})\) can be obtained easily as

$$\begin{aligned} \begin{pmatrix} f_{S}&{} f_{I} \\ g_{S}&{} g_{I} \\ \end{pmatrix}=\begin{pmatrix} a-2b{\hat{S}}-{{\beta {\hat{I}}}\over {({1+\alpha {\hat{S}}})^2}}-\mu &{} -{{\beta {\hat{S}}}\over {1+\alpha {\hat{S}}}} \\ {{\beta {\hat{I}}}\over ({1+\alpha {\hat{S}}})^2} &{} 0 \\ \end{pmatrix}. \end{aligned}$$

By checking the conditions of Turing instability [41], \(({\hat{S}},{\hat{I}})\) is unstable if

$$\begin{aligned} d_{I}f_{S}+d_{S}g_{I}>0 \ \ \ \mathrm{and} \ \ \ d_{I}f_{S}+d_{S}g_{I}>2\sqrt{d_{I}d_{S}(f_{S}g_{I}-g_{S}f_{I})}, \end{aligned}$$

which hold provided that

$$\begin{aligned} \begin{aligned}&\ a-2b{\hat{S}}-{{\beta {\hat{I}}}\over {({1+\alpha {\hat{S}}})^2}}-\mu>0,\\&\ d_{I} \Big [ a-2b{\hat{S}}-{{\beta {\hat{I}}}\over {({1+\alpha {\hat{S}}})^2}}-\mu \Big ]>2\sqrt{{{d_{I}d_{S}\beta ^2{\hat{S}}{\hat{I}}}\over {(1+\alpha {\hat{S}})^3}}}. \end{aligned} \end{aligned}$$

However, the above conditions fail when \(\alpha =0\), because we have

$$\begin{aligned} a-2b{\hat{S}}-{{\beta {\hat{I}}}\over {({1+\alpha {\hat{S}}})^2}}-\mu = a-2b{\hat{S}}-\beta {\hat{I}}-\mu =-b{\hat{S}}<0. \end{aligned}$$
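A quick numerical confirmation of this computation, with hypothetical constants and \(\alpha =0\):

```python
# Hypothetical constants; with alpha = 0 the entry f_S of the Jacobian
# collapses to -b*S_hat < 0, so the Turing conditions of Remark 2 fail
a, b, beta, mu, eta = 3.0, 1.0, 2.0, 1.0, 0.5
alpha = 0.0
S_hat = (mu + eta) / (beta - alpha * (mu + eta))
I_hat = (a - b * S_hat - mu) * (1 + alpha * S_hat) / beta

f_S = a - 2 * b * S_hat - beta * I_hat / (1 + alpha * S_hat) ** 2 - mu
```

The computed \(f_S\) equals \(-b{\hat{S}}\) exactly, confirming that the first Turing condition \(d_If_S+d_Sg_I>0\) cannot hold when \(\alpha =0\).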

3 The Asymptotic Behavior of EE

This section is devoted to the asymptotic behavior of the EE of (4) when the mobility of susceptible or infected individuals is small in the spatially heterogeneous environment.

3.1 The Case of \(d_S\rightarrow 0\)

In this subsection, we aim to establish the asymptotic profiles of any positive solution of (5) as \(d_S\rightarrow 0\) while \(d_I>0\) is fixed. Our main result can be stated as follows.

Theorem 4

Fix \(d_I>0\) and assume \({\mathcal {R}}_0^*>1\). Then, as \(d_S\rightarrow 0\), every positive solution (S, I) of (5) satisfies (up to a subsequence of \(d_S\rightarrow 0\))

$$\begin{aligned} (S,I)\rightarrow (S^{**},I^{**})\ \ \ \text{ uniformly } \ \text{ on } \ {\overline{\varOmega }}, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} S^{**}=&G \big (x,I^{**}(x)\big ) :={{1}\over {2\alpha b(x)}}\Big [(\alpha a(x)-b(x)-\alpha \mu (x))\\&+\sqrt{(\alpha a(x)-b(x)-\alpha \mu (x))^2+4\alpha b(x)\big (a(x)-\beta (x) I^{**}(x)-\mu (x)\big )}\Big ], \end{aligned} \end{aligned}$$

where \(I^{**}\) is a positive solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_I\varDelta I^{**}=\frac{\beta (x)G\big (x,I^{**}(x)\big )}{1+\alpha G \big (x,I^{**}(x)\big )}I^{**}(x)-\left( \mu (x)+\eta (x)\right) I^{**}(x), &{} x\in \varOmega ,\\ \displaystyle \frac{\partial I^{**}}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(24)

Proof

We divide our proof into three steps as follows.

Step 1. The estimates of (S(x), I(x)) of (5). Let \(S(x_1)=\max \limits _{{\overline{\varOmega }}}S(x)\). As in [39, Proposition 2.2] (or see [45]), we have \(\varDelta S(x_1)\le 0\). By the first equation of (5), it follows that

$$\begin{aligned} \frac{a^2(x_1)}{4b(x_1)}\ge \frac{\beta (x_1)S(x_1)I(x_1)}{1+\alpha S(x_1)}+\mu (x_1)S(x_1)\ge \mu (x_1)S(x_1). \end{aligned}$$

Thus, it holds that

$$\begin{aligned} S(x)\le S(x_1)\le \frac{(a^*)^2}{4b_*\mu _*}:=C. \end{aligned}$$
(25)

Here and in what follows, the positive constant C, which may vary from place to place, does not depend on \(d_S>0\).
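The bound (25) only uses the elementary fact that the logistic part \(a(x_1)S-b(x_1)S^2\) is maximized at \(S=a(x_1)/(2b(x_1))\) with maximum value \(a^2(x_1)/(4b(x_1))\). A one-line numerical check with hypothetical values of \(a(x_1)\) and \(b(x_1)\):

```python
a1, b1 = 3.0, 1.0   # hypothetical values of a(x1) and b(x1)

# The parabola a1*S - b1*S^2 peaks at S = a1/(2*b1) with value a1^2/(4*b1)
bound = a1 * a1 / (4 * b1)
grid_max = max(a1 * S - b1 * S * S for S in [0.001 * k for k in range(0, 10001)])
```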

We rewrite the second equation of (5) as follows

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \varDelta I+\Big [\frac{\beta (x)S}{d_I(1+\alpha S)}-\frac{\mu (x)+\eta (x)}{d_I}\Big ]I=0, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$

As

$$\begin{aligned} \left\| \frac{\beta (x)S}{d_I(1+\alpha S)}-\frac{\mu (x)+\eta (x)}{d_I}\right\| _{L^\infty (\varOmega )}\le \frac{\beta ^{*}}{\alpha d_I}+\frac{\mu ^*+\eta ^*}{d_I}, \end{aligned}$$

we use the Harnack-type inequality (see, e.g., [38] or [43, Lemma 2.2]) to conclude that

$$\begin{aligned} \max _{{\overline{\varOmega }}}I\le C\min _{{\overline{\varOmega }}}I. \end{aligned}$$
(26)

In view of (5), it follows that

$$\begin{aligned} \begin{aligned} \int _{\varOmega }\left( \mu (x)+\eta (x)\right) I(x)\mathrm {d}x=&\int _{\varOmega }\frac{\beta (x)SI}{1+\alpha S}\mathrm {d}x=\int _{\varOmega }[a(x)S-b(x)S^2-\mu (x)S]\mathrm {d}x\\ \le&\int _{\varOmega }a(x)S(x)\mathrm {d}x, \end{aligned} \end{aligned}$$

which implies that

$$\begin{aligned} \begin{aligned} \int _{\varOmega }I(x)\mathrm {d}x\le&\frac{1}{(\mu _*+\eta _*)} \int _{\varOmega }\left( \mu (x)+\eta (x)\right) I(x)\mathrm {d}x\\ \le&\frac{a^*}{(\mu _*+\eta _*)}\int _{\varOmega }S(x)\mathrm {d}x\le C. \end{aligned} \end{aligned}$$
(27)

Now, we use (27) and integrate the second equation of system (5) over \(\varOmega \) to get

$$\begin{aligned} \begin{aligned} \beta _{*}\int _{\varOmega }{{SI}\over {1+\alpha S}}\mathrm {d}x \le (\eta ^{*}+\mu ^{*}) \int _{\varOmega }I\mathrm {d}x\le C. \end{aligned} \end{aligned}$$
(28)

In view of (26) and (27), it follows that

$$\begin{aligned} I(x)\le C\min _{{\overline{\varOmega }}}I\le \frac{C}{|\varOmega |}\int _{\varOmega }I \mathrm {d}x\le C,\ \ \forall x\in {{\overline{\varOmega }}}. \end{aligned}$$
(29)

Suppose, on the contrary, that I has no positive lower bound. Then we can find a subsequence of \(d_S\rightarrow 0\), say \(d_n:=d_{S,n}\), satisfying \(d_n\rightarrow 0\) as \(n\rightarrow \infty \), and a corresponding positive solution \((S_n,I_n):=(S_{d_{S,n}},I_{d_{S,n}})\) of (5) with \(d_S=d_n\), such that \(\min \limits _{{\overline{\varOmega }}}I_n\rightarrow 0\). Then, we apply (26) to obtain that

$$\begin{aligned} I_n\rightarrow 0\ \ \ \;\text{ uniformly } \text{ on }\ {\overline{\varOmega }},\ \ \;\text{ as }\;n\rightarrow \infty . \end{aligned}$$

We may choose arbitrarily small \(\epsilon >0\) such that

$$\begin{aligned} 0\le I_n(\cdot )\le \epsilon \ \ \text{ for } \text{ all } \text{ large } n. \end{aligned}$$

This fact, together with the first equation of (5), implies that for all large n, \((S_n,I_n)\) satisfies

$$\begin{aligned} \displaystyle -d_n\varDelta S_n\le a(x)S_n-b(x){S^2_n}-\mu (x)S_n,\ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial S_n}{\partial \nu }=0,\ x\in \partial \varOmega \end{aligned}$$

and

$$\begin{aligned} \displaystyle -d_n\varDelta S_n\ge a(x)S_n-b(x){S^2_n}-\epsilon \beta ^{*}S_n-\mu (x)S_n,\ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial S_n}{\partial \nu }=0,\ x\in \partial \varOmega . \end{aligned}$$

Given any large n, consider the following two auxiliary systems:

$$\begin{aligned} \displaystyle -d_n\varDelta w= a(x)w-b(x)w^2-\mu (x)w, \ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial w}{\partial \nu }=0,\ x\in \partial \varOmega \end{aligned}$$
(30)

and

$$\begin{aligned} \displaystyle -d_n\varDelta v= a(x)v-b(x)v^2-\epsilon \beta ^{*}v-\mu (x)v, \ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial v}{\partial \nu }=0,\ x\in \partial \varOmega . \end{aligned}$$
(31)

Denote by \(w_n\) and \(v_n\) the unique positive solutions of (30) and (31), respectively. Using a simple comparison argument, we deduce that

$$\begin{aligned} w_n\le S_n\le v_n\ \ \ \text{ on }\ {\overline{\varOmega }},\ \ \ \text{ for } \text{ all } \text{ large }\ n. \end{aligned}$$
(32)

By arguments similar to those in [13, Lemma 2.2], it is not hard to show that

$$\begin{aligned} w_n\rightarrow \frac{(a(x)-\mu (x))_+}{b(x)}\ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }},\ \ \text{ as }\ n\rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} v_n\rightarrow \frac{(a(x)-\mu (x)-\epsilon \beta ^{*})_+}{b(x)}\ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }},\ \ \text{ as }\ n\rightarrow \infty . \end{aligned}$$

Letting \(n\rightarrow \infty \) in (32) gives

$$\begin{aligned} \begin{aligned} \frac{(a(x)-\mu (x)-\epsilon \beta ^{*})_+}{b(x)}&\le \liminf _{n\rightarrow \infty }S_n(x)\\&\le \limsup _{n\rightarrow \infty }S_n(x) \le \frac{(a(x)-\mu (x))_+}{b(x)}\ \text{ on }\ {\overline{\varOmega }}. \end{aligned} \end{aligned}$$
(33)

Due to the arbitrary choice of small \(\epsilon \), one immediately gets

$$\begin{aligned} S_n\rightarrow \frac{(a(x)-\mu (x))_+}{b(x)}\ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }},\ \ \ \text{ as }\ n\rightarrow \infty . \end{aligned}$$
(34)

From the second equation of (5), \(I_n\) satisfies

$$\begin{aligned} \displaystyle -d_I\varDelta I_n=\frac{\beta (x)S_nI_n}{1+\alpha S_n}-\left( \mu (x)+\eta (x)\right) I_n, \ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial I_n}{\partial \nu }=0,\ x\in \partial \varOmega . \end{aligned}$$
(35)

Define

$$\begin{aligned} \tilde{I}_n:=\frac{I_n}{\Vert I_n\Vert _{L^\infty (\varOmega )}}. \end{aligned}$$

Then \(\Vert \tilde{I}_n\Vert _{L^\infty (\varOmega )}=1\) for all \(n\ge 1\), and \(\tilde{I}_n\) solves

$$\begin{aligned} \displaystyle -d_I\varDelta \tilde{I}_n=\left[ \frac{\beta (x)S_n}{1+\alpha S_n}-\left( \mu (x)+\eta (x)\right) \right] \tilde{I}_n, \ x\in \varOmega ;\ \ \ \displaystyle \frac{\partial \tilde{I}_n}{\partial \nu }=0,\ x\in \partial \varOmega . \end{aligned}$$
(36)

By a standard compactness argument for elliptic equations and passing to a further subsequence (if necessary), we may assume that

$$\begin{aligned} \tilde{I}_n\rightarrow \tilde{I}\ \ \ \text{ in }\ C^1({\overline{\varOmega }}),\ \ \ \text{ as }\ n\rightarrow \infty ,\quad \end{aligned}$$
(37)

where \(\tilde{I}\in C^1({\overline{\varOmega }})\) with \(\tilde{I}\ge 0\ \text{ on }\ {\overline{\varOmega }}\) and \(\Vert \tilde{I}\Vert _{L^\infty (\varOmega )}=1\). In view of (34), (36) and (37), it follows that \(\tilde{I}\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_I\varDelta \tilde{I}=\left[ \frac{\beta (x)(a(x)-\mu (x))_+}{b(x)+\alpha (a(x)-\mu (x))_+}-\left( \mu (x)+\eta (x)\right) \right] \tilde{I}, \ x\in \varOmega ,\\ \displaystyle \frac{\partial \tilde{I}}{\partial \nu }=0,\ x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$

From the Harnack-type inequality (see [38] or [43, Lemma 2.2]), it follows that \(\tilde{I}>0\) on \({\overline{\varOmega }}\). By the uniqueness of the principal eigenvalue, it follows that \(\lambda ^{*}_1=0\). This contradicts our assumption that \(\lambda ^{*}_1<0\). Thus, I has a positive lower bound C, which is independent of \(0<d_S\le 1\).

Step 2. Convergence of I. Obviously, I satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_I\varDelta I+\left( \mu (x)+\eta (x)\right) I=\frac{\beta (x)S}{1+\alpha S}I, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial I}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(38)

In view of (25) and (29), we have

$$\begin{aligned} \left\| \frac{\beta (x) S}{1+\alpha S}I\right\| _{L^p(\varOmega )} \le \left\| \beta (x) SI\right\| _{L^p(\varOmega )}\le C,\quad \forall \,p>1. \end{aligned}$$

By the standard \(L^p\)-estimate for elliptic equations (see, e.g., [17]), we see that

$$\begin{aligned} \left\| I\right\| _{W^{2,p}(\varOmega )}\le C\ \ \text{ for } \text{ any } \text{ given } p>1. \end{aligned}$$

Taking p to be sufficiently large, we see from the embedding theorem (see, e.g., [17]) that

$$\begin{aligned} \left\| I\right\| _{C^{1+\theta }({{\overline{\varOmega }}})}\le C\ \ \text{ for } \text{ some }\ 0<\theta <1. \end{aligned}$$

Therefore, there exists a subsequence of \(d_S\rightarrow 0\), say \(d_n:=d_{S,n}\), satisfying \(d_n\rightarrow 0\) as \(n\rightarrow \infty \), and a corresponding positive solution \((S_n,I_n):=(S_{d_{S,n}},I_{d_{S,n}})\) of (5) with \(d_S=d_n\), such that

$$\begin{aligned} I_n\rightarrow I^{**}\ \ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }},\ \ \ \text{ as }\ n\rightarrow \infty , \end{aligned}$$
(39)

where \(I^{**}\in C^1({\overline{\varOmega }})\) and \(I^{**}> 0\).

Step 3. Convergence of S. From the first equation of (5), \(S_n\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_n\varDelta S_n=a(x)S_n-b(x)S^{2}_n-\frac{\beta (x)S_nI_n}{1+\alpha S_n}-\mu (x)S_n, &{} x\in \varOmega ,\\ \displaystyle \frac{\partial S_n}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$

Due to (39), given any small \(\epsilon >0\), we have for all large n that

$$\begin{aligned}&a(x)S_n-b(x)S^{2}_n-\frac{\beta (x)S_nI_n}{1+\alpha S_n}-\mu (x)S_n\\&\quad \le a(x)S_n-b(x)S^{2}_n-\frac{\beta (x)S_n(I^{**}-\epsilon )}{1+\alpha S_n}-\mu (x)S_n\\&\quad =S_n\frac{(a(x)-b(x)S_n)\left( 1+\alpha S_n\right) -\beta (x)(I^{**}-\epsilon )-\mu (x)\left( 1+\alpha S_n\right) }{1+\alpha S_n}\\&\quad =:S_n\frac{(g_{+}^{1,\epsilon }(x,I^{**}(x))-S_n)(S_n-g_{-}^{1,\epsilon }(x,I^{**}(x)))}{1+\alpha S_n}. \end{aligned}$$

Here, \(g_{+}^{1,\epsilon }(x,I^{**}(x))\) and \(g_{-}^{1,\epsilon }(x,I^{**}(x))\) are the roots of the following equation in the unknown g:

$$\begin{aligned} (a(x)-b(x)g)\left( 1+\alpha g\right) -\beta (x)(I^{**}-\epsilon )-\mu (x)\left( 1+\alpha g\right) =0. \end{aligned}$$

For large enough n, we consider the following auxiliary problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_n\varDelta W=W\frac{(g_{+}^{1,\epsilon }(x,I^{**}(x))-W)(W-g_{-}^{1,\epsilon }(x,I^{**}(x)))}{1+\alpha W}, &{} x\in \varOmega \\ \displaystyle \frac{\partial W}{\partial \nu }=0,&{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(40)

In view of the bounds of S and I obtained in Step 1, we can further assume that \(g_{+}^{1,\epsilon }>g_{-}^{1,\epsilon }\ge 0\) on \({{\overline{\varOmega }}}\). In addition, \(S_{n}\) is a subsolution to (40), and any sufficiently large positive constant C is a supersolution. Hence, (40) has at least one positive solution, denoted by \(W_{n}\), which satisfies \(S_{n}\le W_{n}\le C\).

By a similar argument as in the proof of [13, Lemma 2.2] (or [32, Lemma 5.1]), we find that

$$\begin{aligned} W_{n}\rightarrow g_{+}^{1,\epsilon }\ \ \ \text{ uniformly } \ \text{ on } \ {\overline{\varOmega }},\ \ \text{ as }\ \ n\rightarrow \infty . \end{aligned}$$

Therefore, we have

$$\begin{aligned} \limsup _{n\rightarrow \infty } S_{n}\le g_{+}^{1,\epsilon },\ \ \ \text{ uniformly }\ \text{ on } \ {\overline{\varOmega }}. \end{aligned}$$
(41)

On the other hand, by (39), for all large n we have

$$\begin{aligned}&a(x)S_n-b(x)S^{2}_n-\frac{\beta (x)S_nI_n}{1+\alpha S_n}-\mu (x)S_n\\&\quad \ge a(x)S_n-b(x)S^{2}_n-\frac{\beta (x)S_n(I^{**}+\epsilon )}{1+\alpha S_n}-\mu (x)S_n\\&\quad =S_n\frac{(a(x)-b(x)S_n)\left( 1+\alpha S_n\right) -\beta (x)(I^{**}+\epsilon )-\mu (x)\left( 1+\alpha S_n\right) }{1+\alpha S_n}\\&\quad =:S_n\frac{(g_{+}^{2,\epsilon }(x,I^{**}(x))-S_n)(S_n-g_{-}^{2,\epsilon }(x,I^{**}(x)))}{1+\alpha S_n}. \end{aligned}$$

Here, \(g_{+}^{2,\epsilon }(x,I^{**}(x))\) and \(g_{-}^{2,\epsilon }(x,I^{**}(x))\) are the roots of the following equation in the unknown g:

$$\begin{aligned} (a(x)-b(x)g)\left( 1+\alpha g\right) -\beta (x)(I^{**}+\epsilon )-\mu (x)\left( 1+\alpha g\right) =0. \end{aligned}$$

Consider the following auxiliary problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_n\varDelta W=W\frac{(g_{+}^{2,\epsilon }(x,I^{**}(x))-W)(W-g_{-}^{2,\epsilon }(x,I^{**}(x)))}{1+\alpha W}, &{} x\in \varOmega \\ \displaystyle \frac{\partial W}{\partial \nu }=0, &{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(42)

In the same fashion, we also get \(g_{+}^{2,\epsilon }>g_{-}^{2,\epsilon }\ge 0\). Observe that \(S_{n}\) and 0 are a pair of upper and lower solutions of (42). Hence, one can assert that (42) admits at least one positive solution, and further get

$$\begin{aligned} \liminf _{n\rightarrow \infty }S_n(x)\ge g_{+}^{2,\epsilon }(x,I^{**}(x))\ \ \text{ uniformly } \text{ on }\ {\overline{\varOmega }}. \end{aligned}$$
(43)

As

$$\begin{aligned} g_{+}^{1,0}(x,I^{**}(x))=g_{+}^{2,0}(x,I^{**}(x)):=G(x,I^{**}(x)), \end{aligned}$$

by the arbitrariness of \(\epsilon \), it immediately follows from (41) and (43) that

$$\begin{aligned} S_n(x)\rightarrow G(x,I^{**}(x))\ \ \ \text{ uniformly } \text{ for } x\in {{\overline{\varOmega }}},\ \ \ \text{ as } n\rightarrow \infty . \end{aligned}$$

Furthermore, it is easily seen from (35) that \(I^{**}\) satisfies (24). \(\square \)
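The explicit root \(G(x,I^{**})\) in Theorem 4 can be sanity-checked numerically: for fixed values it must solve \((a-bg)(1+\alpha g)-\beta I^{**}-\mu (1+\alpha g)=0\). A sketch with hypothetical frozen coefficients standing in for \(a(x)\), \(b(x)\), \(\beta (x)\), \(\mu (x)\), and \(I^{**}(x)\):

```python
import math

# Hypothetical fixed values of a(x), b(x), beta(x), mu(x), and I**(x)
a, b, alpha, beta, mu, Istar = 3.0, 1.0, 0.2, 2.0, 1.0, 0.4

# The explicit root G from Theorem 4 (larger root of the quadratic)
disc = (alpha * a - b - alpha * mu) ** 2 + 4 * alpha * b * (a - beta * Istar - mu)
G = ((alpha * a - b - alpha * mu) + math.sqrt(disc)) / (2 * alpha * b)

# G should solve (a - b*g)(1 + alpha*g) - beta*I** - mu*(1 + alpha*g) = 0
residual = (a - b * G) * (1 + alpha * G) - beta * Istar - mu * (1 + alpha * G)
```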

3.2 The Case of \(d_I\rightarrow 0\)

We now fix \(d_{S}>0\) and analyze the asymptotic behavior of positive solutions of (5) as \(d_{I}\rightarrow 0\). Due to mathematical difficulties, we will consider the one-dimensional case by taking \(\varOmega =(0,1)\). By Proposition 1 (a1) and Theorem 2 (ii), to ensure the existence of positive solutions of (5) for all small \(d_{I}\), it is necessary to assume that the set \(\{x\in [0,1]: {\beta (x)\tilde{S}}/({1+\alpha \tilde{S}})>\mu (x)+\eta (x)\}\) is nonempty.

Theorem 5

Assume that the set \(\{x\in [0,1]: {\beta (x)\tilde{S}}/({1+\alpha \tilde{S}})>\mu (x)+\eta (x)\}\) is nonempty and fix \(d_S>0\). Then, as \(d_I\rightarrow 0\), every positive solution (S, I) of (5) satisfies (up to a subsequence of \(d_I\rightarrow 0\))

$$\begin{aligned} S\rightarrow S^{0}\ \ \ \text{ uniformly } \text{ on }\ [0,1], \end{aligned}$$

where \(S^{0}\in C([0,1])\) and \(S^{0}>0\) on [0, 1], and \(\int _{0}^{1}I\mathrm {d}x\rightarrow I^{0}\) for some positive constant \(I^{0}\).

Proof

It is easy to check that (25), (27) and (28) remain true for all \(d_{I}>0\). Note that S satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S S^{''}+b(x)S^2+\mu (x)S=a(x)S-\frac{\beta (x)SI}{1+\alpha S}, &{} x\in (0,1) ,\\ \displaystyle S'(0)=S'(1)=0. \end{array}\right. } \end{aligned}$$
(44)

One can apply the well-known elliptic \(L^{1}\)-theory [3] (see also [42, Lemma 2.2]) to (44) to find that

$$\begin{aligned} \left\| S\right\| _{W^{1,p}(0,1)}\le C\ \ \text{ for } \text{ any } \text{ given } p>1. \end{aligned}$$

By taking p suitably large and using the Sobolev embedding theorem, we have

$$\begin{aligned} \left\| S\right\| _{C^{\theta }([0,1])}\le C\ \ \text{ for } \text{ some }\ 0<\theta <1. \end{aligned}$$

This tells us that there exists a subsequence of \(d_I\rightarrow 0\), say \(d_n:=d_{I,n}\), satisfying \(d_n\rightarrow 0\) as \(n\rightarrow \infty \), and a corresponding positive solution \((S_n,I_n):=(S_{d_{I,n}},I_{d_{I,n}})\) of (5) with \(d_I=d_n\), such that

$$\begin{aligned} S_n\rightarrow S^{0}\ \ \text{ in }\ \ C([0,1])\ \ \text{ as }\ \ n\rightarrow \infty . \end{aligned}$$

On the other hand, by (27), up to a further subsequence of \(d_n\) if necessary, it follows that \(\int _{0}^{1}I_{n}\mathrm {d}x\rightarrow I^{0}\) as \(n\rightarrow \infty \).

In what follows, we are going to show that \(I^{0}>0\) and \(S^{0}>0\). We first prove \(I^{0}>0\). To this end, we use a contradiction argument and suppose that \(I^{0}=0\). By integrating (44) from 0 to x, we have

$$\begin{aligned} S_n'(x)=-{{1}\over {d_{S}}}\int _{0}^{x}\big [a(y)S_n(y)-b(y)S^{2}_n(y)-\frac{\beta (y)S_{n}(y)I_{n}(y)}{1+\alpha S_{n}(y)}-\mu (y)S_{n}(y)\big ]\mathrm {d}y. \end{aligned}$$

Letting \(n\rightarrow \infty \) and using \(\int _{0}^{1}I_{n}\mathrm {d}x\rightarrow 0\), one can infer that

$$\begin{aligned} S_{n}'(x)\rightarrow -{{1}\over {d_{S}}}\int _{0}^{x}\big [a(y)S^0(y)-b(y)(S^0)^2(y)-\mu (y)S^{0}(y)\big ]\mathrm {d}y \ \ \text{ uniformly } \text{ on }\ \ [0,1]. \end{aligned}$$

By means of \(S_{n}(x)-S_{n}(0)=\int _{0}^{x}S_{n}'(y)\mathrm {d}y\) for any \(n\ge 1\), it is easily seen that \(S^{0}\) satisfies

$$\begin{aligned} S^{0}(x)-S^{0}(0)=-{{1}\over {d_{S}}}\int _{0}^{x}\Big \{\int _{0}^{y}\big [a(z)S^0(z)-b(z)(S^0)^2(z) -\mu (z)S^{0}(z)\big ]\mathrm {d}z\Big \}\mathrm {d}y, \end{aligned}$$

which in turn gives

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -d_S (S^{0})^{''}=a(x)S^0(x)-b(x)(S^0)^2(x)-\mu (x)S^{0}, &{} x\in (0,1),\\ \displaystyle (S^0)'(0)=0. \end{array}\right. } \end{aligned}$$
(45)

Again, one can integrate (44) from x to 1 and apply a similar process as before to deduce that \((S^0)'(1)=0\). Hence, by virtue of (45), we can conclude that \(S^{0}=\tilde{S}\), which means that \(S_{n}\rightarrow \tilde{S}\) uniformly on [0, 1] as \(n\rightarrow \infty \).

One can easily observe that \(\lambda _{1}^n=0, \ \forall n\ge 1\), where \(\lambda _{1}^n\) is the principal eigenvalue of the following eigenvalue problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle d_n\varDelta \psi +\frac{\beta (x)S_n(x)}{1+\alpha S_n(x)}\psi -\left( \mu (x)+\eta (x)\right) \psi +\lambda \psi =0, &{} x\in (0,1),\\ \displaystyle \psi '(1)=\psi '(0)=0.\\ \end{array}\right. } \end{aligned}$$

Applying the same analysis as in [1, Lemma 2.3], it follows that

$$\begin{aligned} 0=\lambda _{1}^n\rightarrow \min _{x\in [0,1]}\Big (\mu (x)+\eta (x)-\frac{\beta (x)\tilde{S}}{1+\alpha \tilde{S}}\Big ), \ \ \text{ as }\ n\rightarrow \infty . \end{aligned}$$

Clearly, this leads to a contradiction because the set \(\{x\in [0,1]: {\beta (x)\tilde{S}}/({1+\alpha \tilde{S}})>\mu (x)+\eta (x)\}\) is nonempty by our assumption. Thus, we must have \(I^{0}>0\).

To show \(S^{0}>0\) on [0, 1], we proceed indirectly again and suppose that \(S^{0}(x_0)=0\) for some \(x_0\in [0,1]\). Then, applying the Harnack inequality to the S-equation, one sees immediately that \(S^{0}\equiv 0\) on [0, 1]. As a result, we have \(\int _{0}^{1}S_{n}\mathrm {d}x\rightarrow 0\) as \(n\rightarrow \infty \). Integrating the second equation of (5) from 0 to 1, one gets

$$\begin{aligned} (\eta _{*}+\mu _{*}) \int _{0}^{1}I_{n}\mathrm {d}x\le \beta ^{*}\int _{0}^{1}{{S_{n}I_{n}}\over {1+\alpha S_{n}}}\mathrm {d}x\rightarrow 0, \ \ \text{ as } \ n\rightarrow \infty , \end{aligned}$$

which yields \(\int _{0}^{1}I_{n}\mathrm {d}x\rightarrow 0\), a contradiction with \(\int _{0}^{1}I_{n}\mathrm {d}x\rightarrow I^{0}>0\). The proof is complete. \(\square \)

4 Discussion

In this paper, we have studied the SI epidemic model (4) with logistic source and saturated infection mechanism. For the parabolic problem (4), we have established the uniform boundedness of solutions and the extinction and persistence of the infectious disease in terms of the basic reproduction number \({\mathcal {R}}_0\). We also obtained the global stability of the unique endemic equilibrium when the spatial environment is homogeneous. For the steady-state problem (5), we have investigated the asymptotic behavior of the endemic equilibria in the heterogeneous environment when the movement rate of the susceptible or infected population is small.

In what follows, we first compare the influence of the diffusion rate, logistic sources, and incidence rate on the basic reproduction numbers of models (1)–(4).

Allen et al. [1] introduced the epidemic model (1) with standard incidence rate \(\frac{SI}{S+I}\) and defined the basic reproduction number

$$\begin{aligned} {\mathcal {R}}_0=\sup \limits _{0\ne \varphi \in H^{1}(\varOmega )}\left\{ \frac{\int _\varOmega \beta (x) \varphi ^{2}\mathrm {d}x}{\int _\varOmega (d_I |\nabla \varphi |^{2}+\gamma (x)\varphi ^{2})\mathrm {d}x}\right\} . \end{aligned}$$

It is clear that the \({\mathcal {R}}_0\) defined here depends on the diffusion rate of infected individuals \(d_I\), the transmission rate \(\beta (x)\), and the recovery rate \(\gamma (x)\). Then, Li et al. [32] added the logistic source \(a(x)S-b(x)S^2\) to system (1), but the basic reproduction number is the same as the one without the logistic source. In other words, the logistic source has no influence on the basic reproduction number of the epidemic model with standard incidence rate.
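The variational characterization above can be approximated numerically by discretizing the Rayleigh quotient. The sketch below uses a rough 1-D finite-difference scheme with Neumann conditions and illustrative constant coefficients, for which the supremum is attained at constant \(\varphi \) and hence \({\mathcal {R}}_0=\beta /\gamma \):

```python
import numpy as np

# Rough 1-D Neumann finite-difference discretization of the variational
# formula for R_0; all parameter values are illustrative, not from [1]
n, dI = 200, 0.1
h = 1.0 / n
beta_c, gamma_c = 2.0, 1.0   # constant beta(x) and gamma(x)

# Neumann stiffness matrix for -d^2/dx^2 (constant vectors lie in its kernel)
K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
K[0, 0] = K[-1, -1] = 1.0
K /= h * h

A = dI * K + gamma_c * np.eye(n)   # denominator operator of the quotient
B = beta_c * np.eye(n)             # numerator operator
R0 = np.max(np.linalg.eigvals(np.linalg.solve(A, B)).real)
```

For non-constant coefficients the same discretization applies with `beta_c` and `gamma_c` replaced by diagonal matrices of sampled values; the constant-coefficient case serves as a convenient correctness check.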

When the infection mechanism changes to the saturated incidence rate \(\frac{SI}{1+mI}\), Huo and Cui [22] defined the basic reproduction number of system (3) as

$$\begin{aligned} {\mathcal {R}}_0=\sup \limits _{0\ne \varphi \in H^{1}(\varOmega )}\left\{ \frac{\int _\varOmega \beta (x) {\hat{S}}(x) \varphi ^{2}\mathrm {d}x}{\int _\varOmega (d_I |\nabla \varphi |^{2}+\gamma (x)\varphi ^{2})\mathrm {d}x}\right\} , \end{aligned}$$

where \({\hat{S}}\) is the unique solution of (7) with \(\mu (x)\equiv 0\). It is easily seen that such \({\mathcal {R}}_0\) depends on \({\hat{S}}\), which depends continuously on the logistic source and the parameter \(d_S\), in addition to the coefficients \(d_I\), \(\beta (x)\) and \(\gamma (x)\). However, the basic reproduction number \({\mathcal {R}}_0\) of our model (4) depends not only on the parameters \(d_S\), \(d_I\), a(x), b(x), \(\beta (x)\) and \(\gamma (x)\) but also on the death rate \(\mu (x)\) and the saturation coefficient \(\alpha \).

In the spatially heterogeneous environment, we have obtained threshold dynamics in terms of the basic reproduction number: the uniform persistence property holds if \({\mathcal {R}}_0>1\), and the disease becomes extinct if \({\mathcal {R}}_0<1\). Based on the ultimate uniform boundedness in Theorem 1, we established the uniform persistence property of (4); that is, there exists at least one EE when \({\mathcal {R}}_0>1\). Moreover, the disease dies out if the saturation coefficient \(\alpha \) is large enough. Biologically, the epidemic will become extinct in the long run provided that effective prevention measures (reflected in a large \(\alpha \)) are taken. For example, during the outbreak of COVID-19, people disinfected surfaces, washed their hands, and closed markets selling contaminated food. This shows how important an effective prevention and control strategy is in the absence of sufficient medical treatment and vaccines.

Finally, we discussed the global stability and asymptotic profiles of the endemic equilibrium. When the environment is spatially homogeneous, that is, when all the parameters in (4) are positive constants, the global stability of the endemic equilibrium was shown by constructing a suitable Lyapunov function for \({\mathcal {R}}_0>1\); see Theorem 3. By Remark 2, condition (23) always holds when \(\alpha =0\). Hence, \(({\hat{S}},{\hat{I}})\) is globally asymptotically stable as long as it exists. However, if the saturation coefficient \(\alpha >0\), then \(({\hat{S}},{\hat{I}})\) may be unstable.

Furthermore, in the case \(d_{S}\rightarrow 0\), Theorem 4 shows that the disease persists in the entire habitat. On the other hand, Theorem 5 shows that, as \(d_{I}\rightarrow 0\) in the one-dimensional interval, the susceptible population remains positive while the total infected population tends to a positive constant. These results suggest that the density of the infected population does not vanish when the mobility of the susceptible or infected population goes to zero. The above discussion reveals that more effective measures (a larger \(\alpha \)) should be taken to control the sources of infection and cut off the channels of transmission so as to eradicate the disease.

Indeed, [22, Theorems 4.2 and 4.3], [32, Theorems 4.1 and 4.2], and Theorems 4 and 5 here have shown that the infectious disease does not die out at low diffusion rates of the susceptible or infected individuals, and thus the epidemic cannot be eliminated merely by controlling the mobility of individuals. Combined with the discussion in [22, 32], we conclude that the logistic source enhances the persistence of the disease, making the infectious disease more threatening and harder to control. Our results here, together with those for the two related epidemic models [22, 32], show that the logistic growth, the infection mechanism, and the population movement play important roles in the transmission dynamics of the disease.

In summary, our discussion above shows that, in order to eradicate the disease modeled by (4), one should take more effective prevention and control measures (a larger saturation coefficient \(\alpha \)) and reduce the sources of infection, instead of merely restricting the mobility of the populations.