1 Introduction

A rumor is a statement that arouses public interest and spreads without any official basis to confirm it. Nowadays, with the development of science and technology, online social networks such as Instagram, WeChat, Twitter and Facebook emerge in an endless stream, making it more convenient for people to share information. At the same time, rumors spread faster and in more diverse ways. Because the information carried by a rumor is untrue or even harmful, rumors often lead to economic loss, social unrest and other adverse effects. Particularly during major events, deliberately spreading rumors often makes the situation worse. For example, during the outbreak of the atypical pneumonia caused by a novel coronavirus (COVID-19) in China, many rumors caused public panic and even led to the panic buying of shuanghuanglian, which hindered the government from bringing the disease under control as soon as possible. Thus, understanding the mechanism of rumor spreading, and then controlling it, has become a major concern of scholars.

Because the spread of rumors has many similarities with the spread of infectious diseases, many epidemic models have been applied to the analysis of rumor spreading. In 1965, the DK model, the first classic model of rumor spreading, was put forward by Daley and Kendall [1]. In the DK model, the population is divided into three groups: ignorants (people who have not noticed the rumor), spreaders (people who notice and spread the rumor) and stiflers (people who notice but do not spread the rumor). On this basis, Maki and Thompson assumed that when a spreader contacts another spreader, the initiating spreader stops spreading the rumor, and proposed the MK model [2]. Since then, many researchers have applied or improved classical epidemic-type models, such as the SI (susceptible–infected) and SIR models, in the context of rumor spreading [3,4,5,6,7,8,9,10]. In Ref. [4], Qian improved the traditional SIR model with the concept of independent spreaders in complex networks. In view of the fact that rumors spread in multilingual environments, Wang established an SIR model with a cross-transmission mechanism [5]. Zhao extended the SIR model to an SIHR (susceptible–infected–hibernator–removed) model by adding a connection between ignorants and stiflers, and discussed the influence of the forgetting rate and the average degree on the rumor-spreading scale [6]. Yao improved the SIR model to an SDILR (susceptible–dangerous–infective–latent–recovered) model by considering the filtering function of social media for rumors [10]. In addition, many researchers have established new models based on specific situations in real life [8, 11,12,13,14]. For example, Tian assumed that ignorants have three different attitudes when facing rumors and thus established a new SDILR model [8]. In Ref. [13], Liu divided the population into ignorants, lurkers, spreaders and removals and established the ILSR model.

With the deepening of research and the rapid development of network media, researchers have found that traditional models based on ordinary differential equations cannot reflect the influence of space on rumor spreading. Therefore, many scholars have begun to use partial differential equations to study the spread of rumors [15,16,17,18]. Considering the effect of spatiotemporal diffusion, Zhu proposed a rumor propagation model with uncertainty [15]. Guo discussed a class of reaction–diffusion models with nonlocal delay effects and nonlinear boundary conditions [16]. Similarly, in the field of infectious diseases, more in-depth research is continuing [19,20,21,22,23,24]. However, the above discussions are all carried out in a homogeneous environment; that is, the parameters corresponding to each point in space are the same. Obviously, such a model cannot fully reflect the actual situation of rumor propagation. Therefore, many researchers have tried to extend it to spatially heterogeneous environments. In the field of infectious diseases, there have already been some achievements [25,26,27,28]. In Ref. [25], Lei extended the SIR ordinary differential equation model studied in Ref. [26] to a partial differential equation model in heterogeneous space and discussed the asymptotic behavior of the positive equilibrium point as the diffusion rate tends to zero. Zhang put forward a reaction–diffusion SVIR (susceptible–vaccinated–infected–removed) epidemic model in a heterogeneous environment and found that the diffusion rate of the disease in heterogeneous space is higher than that in homogeneous space [27]. At present, spatiotemporal rumor propagation models in heterogeneous environments are rarely considered.

In addition to the dynamic behavior of such models, their optimal control is also an enduring topic. Scholars in various fields have studied optimal control problems for different models [29,30,31,32,33]. Gashirai applied optimal control theory to a foot-and-mouth disease (FMD) transmission model and analyzed how to control the disease effectively [31]. Based on the MK rumor model, Kandhway discussed how to spread as much information as possible with a limited budget [32]. In Ref. [33], Huo proved the existence of the optimal control for a rumor propagation model with media reports by using Pontryagin's maximum principle. It can be seen that the study of optimal control has high practical value, in particular for spatiotemporal propagation models.

In the past, the population growth rate was often taken as a constant when establishing rumor propagation or epidemic models. However, some researchers have pointed out that the capacity of a rumor-spreading system is not infinite, owing to technological restrictions, and that the capacity of the real world for human beings is limited. Therefore, it is more realistic to describe population growth with the logistic equation, which is governed by an intrinsic growth rate and a finite carrying capacity [34,35,36]. At the same time, many practical factors mean that a time delay should be incorporated into the rumor-spreading system: people think and judge based on their own experience and knowledge when they first encounter a rumor, and they do not necessarily read a piece of news as soon as they receive it [12]. Similarly, there is usually a time interval between people's exposure to a rumor and their spreading of it. Based on the above two aspects, we consider the influence of time delay and logistic growth when establishing our rumor propagation model.

In this paper, we will improve the traditional SIR rumor propagation model to the following form

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }ll} \dfrac{\partial S}{\partial t}-\mathrm{d}_{S}\Delta S=r (x) S\bigg (1-\dfrac{S}{K(x)}\bigg )-\beta (x)S I-\mu _1(x) S+\gamma (x) I, &{}t>0,\;x\in \Omega ,\\ \dfrac{\partial I}{\partial t}-\mathrm{d}_{I}\Delta I=\beta (x)S I-\delta (x) I-\mu _2(x)I -\gamma (x) I,&{} t>0,\;x\in \Omega ,\\ \dfrac{\partial R}{\partial t}-\mathrm{d}_{R}\Delta R=\delta (x)I-\mu _3(x)R,&{} t>0,\;x\in \Omega ,\\ \dfrac{\partial S}{\partial \nu }=\dfrac{\partial I}{\partial \nu }=\dfrac{\partial R}{\partial \nu }=0,&{}t>0,\;x\in \partial \Omega ,\\ S(0,x)=S_0(x)\ge 0,I(0,x)=I_0(x)\ge 0, \not \equiv 0, R(0,x)=R_0(x)\ge 0,&{}x\in \Omega . \end{array}\right. \end{aligned}$$
(1.1)

where S(t, x) denotes the rumor-susceptible individuals who have not been exposed to rumors and may become rumor-infectors, I(t, x) denotes the rumor-infectors who believe and spread rumors with a certain probability, and R(t, x) denotes the removed individuals who already know the rumor is false and pay no further attention to it. We assume that the network exit rates of the three groups are \(\mu _1(x), \mu _2(x)\) and \(\mu _3(x)\). The diffusion coefficients \(\mathrm{d}_S\), \(\mathrm{d}_I\) and \(\mathrm{d}_R\) represent the migration rates of the rumor-susceptible, rumor-infector and removed individuals, respectively. r(x) represents the intrinsic growth rate of the population, and K(x) represents the maximum capacity of the network environment. The probability that a rumor-susceptible individual becomes a rumor-infector due to contact with the rumor is denoted by \(\beta (x)\). \(\gamma (x)\) and \(\delta (x)\) indicate the probabilities that a rumor-infector becomes a rumor-susceptible individual due to forgetting and a removed individual due to identifying the false information, respectively. The habitat \(\Omega \subset {\mathbb {R}}^N\) \((N\ge 1)\) is a bounded domain with smooth boundary \(\partial \Omega \). The Neumann boundary condition means that there is no population flux across the boundary \(\partial \Omega \). We assume that \(\mu _1(x)\), \(\mu _2(x)\), \(\mu _3(x)\), r(x), K(x), \(\beta (x)\), \(\gamma (x)\) and \(\delta (x)\) are positive and Hölder continuous functions on \({\overline{\Omega }}\).

Since the first two equations of system (1.1) do not depend on R(t, x), we can reduce the system to

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial S}{\partial t}-\mathrm{d}_{S}\Delta S=r (x) S\bigg (1-\dfrac{S}{K(x)}\bigg )-\beta (x)S I-\mu _1(x) S+\gamma (x) I, &{}t>0,x\in \Omega ,\\ \dfrac{\partial I}{\partial t}-\mathrm{d}_{I}\Delta I=\beta (x)S I-\delta (x) I-\mu _2(x)I -\gamma (x) I,&{} t>0,x\in \Omega ,\\ \dfrac{\partial S(t,x)}{\partial \nu }=\dfrac{\partial I(t,x)}{\partial \nu }=0, &{}t>0,x\in \partial \Omega ,\\ S(0,x)=S_0(x)\ge 0,I(0,x)=I_0(x)\ge 0, \not \equiv 0,&{}x\in \Omega . \end{array}\right. \end{aligned}$$
(1.2)

In the following, we study the dynamic behavior of system (1.2). According to Ref. [37], by applying the strong maximum principle and the Hopf lemma to the elliptic equations corresponding to system (1.2), we know that \(S(x)\ge 0\) and \(I(x)\ge 0\) for all \(x\in {\bar{\Omega }}\).
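
Although the analysis below is purely theoretical, it may help to see how system (1.2) behaves numerically. The following minimal sketch (in Python) integrates (1.2) on a one-dimensional domain \(\Omega =(0,L)\) with zero-flux (Neumann) boundaries by an explicit finite-difference scheme; every coefficient profile, the initial data and the grid sizes are illustrative assumptions chosen only for this demonstration and are not taken from this paper.

```python
import numpy as np

# Minimal explicit finite-difference sketch of system (1.2) on Omega = (0, L).
# All spatially varying coefficients below are illustrative assumptions.
L_dom, N = 10.0, 201                 # domain length and number of grid points (assumed)
x = np.linspace(0.0, L_dom, N)
dx = x[1] - x[0]

dS, dI = 0.5, 0.1                    # diffusion rates of S and I (assumed)
r     = 1.0 + 0.2 * np.cos(np.pi * x / L_dom)        # intrinsic growth rate r(x)
K     = 5.0 + 1.0 * np.sin(np.pi * x / L_dom)        # carrying capacity K(x)
beta  = 0.3 + 0.1 * np.cos(2 * np.pi * x / L_dom)    # contact rate beta(x)
mu1, mu2 = 0.1, 0.1                  # network exit rates mu_1, mu_2 (constants here)
gamma, delta = 0.2, 0.3              # forgetting rate gamma and identification rate delta

def lap(u):
    """Discrete Laplacian with homogeneous Neumann (zero-flux) boundaries."""
    v = np.empty_like(u)
    v[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    v[0]  = 2 * (u[1] - u[0]) / dx**2        # ghost-point reflection at x = 0
    v[-1] = 2 * (u[-2] - u[-1]) / dx**2      # ghost-point reflection at x = L
    return v

S = 2.0 + 0.1 * np.cos(np.pi * x / L_dom)    # initial S_0(x) >= 0
I = 0.1 * np.exp(-((x - L_dom / 2) ** 2))    # initial I_0(x) >= 0, not identically 0

dt, T_end = 1e-3, 20.0                       # explicit scheme: dt chosen for stability
for _ in range(int(T_end / dt)):
    reactS = r * S * (1 - S / K) - beta * S * I - mu1 * S + gamma * I
    reactI = beta * S * I - delta * I - mu2 * I - gamma * I
    S = S + dt * (dS * lap(S) + reactS)
    I = I + dt * (dI * lap(I) + reactI)

print("final ranges:  S in [%.3f, %.3f],  I in [%.3f, %.3f]"
      % (S.min(), S.max(), I.min(), I.max()))
```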

The innovation of this paper includes the following three points. Firstly, considering that the parameters differ from location to location, we establish a spatially heterogeneous model, which is different from the traditional rumor-spreading process in a homogeneous environment [17]. Secondly, we study the spatially nonhomogeneous bifurcation caused by time delay; to some extent, our results improve on the spatially homogeneous bifurcation phenomenon of [18]. Finally, through numerical simulations, our work describes the propagation process from the perspective of heterogeneous space and reveals the influence of differences in spatial diffusion ability on information propagation in a heterogeneous environment. The numerical results confirm the necessity of studying spatial propagation.

The paper is organized as follows. In Sect. 2, we establish a spatially heterogeneous model with logistic growth and discuss its uniform persistence and the asymptotic behavior of the equilibrium point. In Sect. 3, we reduce the model to a spatially homogeneous model and study local and global stability. In addition, we use the maximum principle to study the optimal control of the model. In Sect. 4, considering the effect of time delay in real life, we investigate the Hopf bifurcation of our model with time delay [38]. In order to verify our conclusions and discuss the effect of the diffusion coefficients, we give some simulation results in Sect. 5. A brief conclusion is given in Sect. 6.

2 Spatially heterogeneous model

In this section, we always assume that \(r(x)>\mu _1(x)\) in \( {\overline{\Omega }}\). For any given nonnegative and continuous initial value \((S_0,I_0)\), system (1.2) has a unique classical solution (S(t,x), I(t,x)) for all \(t>0\) and \(x\in {\overline{\Omega }}\) by the standard theory for parabolic equations. In addition, \((S(t,x),I(t,x))>(0,0)\) for \(t>0\), \(x\in {\overline{\Omega }}\), provided that \(I_0(x)\ge 0, \not \equiv 0\).

2.1 Uniform persistence

In this subsection, we will study the uniform persistence of the solution for system (1.2). For convenience, we define

$$\begin{aligned} f^*=\max _{x\in \Omega }f(x),\;f_*=\min _{x\in \Omega }f(x), \end{aligned}$$

where \(f(x)=\mu _1(x), \mu _2(x), \mu _3(x), r(x), K(x), \beta (x), \gamma (x)\; \text{ and }\; \delta (x)\) for \(x\in \Omega \).

Proposition 2.1

There exists a positive constant \({\hat{C}}\) depending on initial data such that the solution (S, I) of (1.2) satisfies

$$\begin{aligned} ||S(t,\cdot )||_{L^\infty (\Omega )}+||I(t,\cdot )||_{L^\infty (\Omega )}\le {\hat{C}}, \;t\ge 0.\qquad \end{aligned}$$
(2.1)

Furthermore, there exists a positive constant \({\tilde{C}}\) independent of initial data such that

$$\begin{aligned} ||S(t,\cdot )||_{L^\infty (\Omega )}+||I(t,\cdot )||_{L^\infty (\Omega )}\le {\tilde{C}}, \;t\ge T,\qquad \end{aligned}$$
(2.2)

for some large time \(T>0\).

Proof

By the first equation of (1.2), it can be calculated directly that

$$\begin{aligned} \begin{aligned} S_t-\mathrm{d}_S \Delta S&=r(x)S\bigg (1-\dfrac{S}{K(x)}\bigg )-\beta (x)S I-\mu _1(x) S+\gamma (x) I\\&\le \dfrac{r^* K^*}{4} +\gamma (x)I-[ \beta (x)I+ \mu _1(x)]S \\&\le [ \beta (x)I+ \mu _1(x)]\bigg (\max \bigg \{\dfrac{ \gamma ^*}{ \beta _*},\dfrac{r^* K^*}{4(\mu _1)_*}\bigg \} -S\bigg ) \end{aligned} \end{aligned}$$

for all \(x \in \Omega , t>0\).

Denote \(M_1=\max \bigg \{\dfrac{ \gamma ^*}{ \beta _*},\dfrac{r^* K^*}{4(\mu _1)_*}\bigg \}\), then the following parabolic problem

$$\begin{aligned} \left\{ \begin{array}{lll} w_t-\mathrm{d}_{S}\Delta w=[ \beta (x)I+ \mu _1(x)](M_1-w),&{}t>0,\;x\in \Omega ,\\ \dfrac{\partial w}{\partial \nu }=0, &{}t>0,\;x\in \partial \Omega ,\\ w(0,x)=S_0(x), &{}x\in \Omega \end{array}\right. \end{aligned}$$
(2.3)

has a unique solution w. We apply the comparison principle to conclude that

$$\begin{aligned} S(t,x)\le w(t,x)\le \max \{M_1,\max _{x\in {\overline{\Omega }}}S_0(x)\},\;t>0,\;x\in {\overline{\Omega }}. \end{aligned}$$
(2.4)

It is easy to see that \(v(t)=M_1+\max \limits _{x\in {\overline{\Omega }}}S_0(x)e^{-(\mu _1)_*t}\) is a supersolution of (2.3). Thus, we have

$$\begin{aligned} S(t,x)\le w(t,x)\le v(t),\;t>0,\;x\in {\overline{\Omega }}. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \lim \limits _{t \rightarrow \infty } \sup \limits _{x\in {\overline{\Omega }}}S(t,x) \le M_1. \end{aligned}$$

We can find a large \(T_1>0\) such that

$$\begin{aligned} S(t,x)\le 2M_1, \;t>T_1,\;x\in {\overline{\Omega }}. \end{aligned}$$
(2.5)

Set \(\mu =\min \{(\mu _1)_*,(\mu _2)_*+\delta _*\}\), we define

$$\begin{aligned} W(t)=\int _{\Omega }[S(t,x)+I(t,x)] \mathrm{d}x, \end{aligned}$$

then

$$\begin{aligned} \begin{aligned} \dfrac{\mathrm{d}W(t)}{\mathrm{d}t}&=\int _{\Omega }\bigg [r(x)S\bigg (1-\dfrac{S}{K(x)}\bigg )-\mu _2(x)I\\&\quad -\mu _1(x)S-\delta (x) I\bigg ] \mathrm{d}x\\&\le \int _{\Omega }\dfrac{r^* K^*}{4} \mathrm{d}x- \mu W(t)\\&\le \dfrac{r^* K^*|\Omega |}{4}- \mu W(t). \end{aligned} \end{aligned}$$

Thus, we have

$$\begin{aligned} W(t)\le W(0)e^{-\mu t}+\dfrac{r^* K^*|\Omega |}{4\mu }(1-e^{-\mu t}), \forall t\ge 0. \end{aligned}$$

That is,

$$\begin{aligned}&\int _{\Omega }[S(t,x)+I(t,x)] \mathrm{d}x\nonumber \\&\quad \le \int _{\Omega }\left[ S_0(x)+I_0(x)\right] \mathrm{d}xe^{-\mu t}\nonumber \\&\qquad +\dfrac{r^* K^*|\Omega |}{4\mu }\left( 1-e^{-\mu t}\right) , \forall t\ge 0. \end{aligned}$$
(2.6)

In view of (2.4) and the last two equations of system (1.1), we use Theorem 1.1 of [39] (or Lemma 2.1 in [40]) to deduce that there exists a positive constant \(M_2\) depending on the initial data such that

$$\begin{aligned} I(t,x)\le M_2\, \text{ for }\, t \ge 0,\, x \in {\overline{\Omega }}. \end{aligned}$$
(2.7)

By (2.6), we have

$$\begin{aligned} \limsup \limits _{t \rightarrow \infty } \int _{\Omega }I(t,x)\mathrm{d}x\le \limsup \limits _{t \rightarrow \infty } W(t) \le \dfrac{r^* K^*|\Omega |}{4\mu }. \end{aligned}$$

We apply Lemma 2.1 of [40] to conclude that there exists a constant \(M_3>0\) independent of initial data such that

$$\begin{aligned} I(t,x)\le M_3\, \text{ for }\, t\ge T_2,\, x\in {\overline{\Omega }} \end{aligned}$$
(2.8)

for some \(T_2>T_1\).

Set \({\hat{C}}=\max \{\max \{M_1,\max \limits _{{\overline{\Omega }}}S_0(x)\},M_2\}\) and \({\tilde{C}}=\max \{2M_1,M_3\}\). Due to (2.4) and (2.7), we obtain (2.1). It follows from (2.5) and (2.8) that (2.2) holds. \(\square \)

Define \(R_0\) as follows

$$\begin{aligned} R_0=\sup _{0\ne \varphi \in H^1(\Omega )}\left\{ \frac{\int _{\Omega }\beta (x){\tilde{S}}\varphi ^2\,\mathrm{d}x}{\int _{\Omega }\left[ \mathrm{d}_I|\nabla \varphi |^2+(\delta (x)+\mu _2(x)+\gamma (x))\varphi ^2\right] \mathrm{d}x}\right\} , \end{aligned}$$

where \({\tilde{S}}\) is the unique positive solution of

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_{S}\Delta u=r(x)u\bigg (1-\dfrac{u}{K(x)}\bigg )-\mu _1(x) u,&{}x\in \Omega ,\\ \dfrac{\partial u}{\partial \nu }=0, &{}x\in \partial \Omega . \end{array}\right. \end{aligned}$$
(2.9)

Lemma 2.1

The following properties of \(R_0\) hold.

  1. (i)

    \(R_0\) is a monotone decreasing function of \(\mathrm{d}_I\); \(R_0\rightarrow \max \limits _{x\in {\overline{\Omega }}}\frac{\beta (x){\tilde{S}}(x)}{\delta (x)+\mu _2(x)+\gamma (x)}\) as \(\mathrm{d}_I\rightarrow 0\), and \(R_0\rightarrow \frac{\int _{\Omega }\beta (x){\tilde{S}}(x)\mathrm{d}x}{\int _{\Omega }[\delta (x)+\mu _2(x)+\gamma (x)]\mathrm{d}x}\) as \(\mathrm{d}_I\rightarrow \infty \);

  2. (ii)

    \(1-R_0\) has the same sign as \(\lambda _1\), where \(\lambda _1\) is the principal eigenvalue of the following eigenvalue problem

    $$\begin{aligned} \left\{ \begin{array}{lll} \mathrm{d}_{I}\Delta \Phi +[\beta (x){\tilde{S}}(x)-(\delta (x)+\mu _2(x)+\gamma (x))]\Phi +\lambda \Phi =0,&{}x\in \Omega ,\\ \dfrac{\partial \Phi }{\partial \nu }=0, &{}x\in \partial \Omega . \end{array}\right. \end{aligned}$$
    (2.10)
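
Lemma 2.1 can be checked numerically for concrete coefficients. In a spatially discretized setting, the variational definition of \(R_0\) becomes a generalized symmetric eigenvalue problem, and \(\lambda _1\) is the smallest eigenvalue of the discretized operator in (2.10). The sketch below (Python) uses a hypothetical stand-in for \({\tilde{S}}\) and illustrative coefficient choices; it only illustrates the sign relation in item (ii) and does not reproduce any result of this paper.

```python
import numpy as np

# Sketch: discrete versions of lambda_1 from (2.10) and of the variational R_0
# on Omega = (0, L) with Neumann boundaries.  All profiles are assumptions,
# including the stand-in for \tilde S.
L_dom, N = 10.0, 400
x = np.linspace(0.0, L_dom, N)
dx = x[1] - x[0]
dI = 0.1

S_tilde = 3.0 + 0.5 * np.cos(np.pi * x / L_dom)       # hypothetical \tilde S(x)
beta    = 0.3 + 0.1 * np.cos(2.0 * np.pi * x / L_dom)
delta, mu2, gamma = 0.3, 0.1, 0.2                      # constants for simplicity

# Symmetric finite-difference Neumann Laplacian (zero flux at both ends).
Lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))
Lap[0, 0] = Lap[-1, -1] = -1.0
Lap /= dx**2

B = -dI * Lap + np.diag((delta + mu2 + gamma) * np.ones(N))  # symmetric positive definite
M = np.diag(beta * S_tilde)

# lambda_1: principal eigenvalue of (2.10) = smallest eigenvalue of B - M.
lambda_1 = np.linalg.eigvalsh(B - M)[0]

# R_0: largest generalized eigenvalue of M*phi = R_0 * B*phi (discrete Rayleigh quotient).
R0 = np.max(np.linalg.eigvals(np.linalg.solve(B, M)).real)

print("lambda_1 ~ %.4f,   R_0 ~ %.4f" % (lambda_1, R0))
print("sign(1 - R_0) == sign(lambda_1):", np.sign(1.0 - R0) == np.sign(lambda_1))
```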

Theorem 2.1

For the given initial data \((S_0,I_0)\), let (S, I) be the unique solution of (1.2). If \(R_0>1\), then system (1.2) is uniformly persistent: There exists a constant \(\epsilon >0\) such that

$$\begin{aligned} \liminf _{t\rightarrow \infty }S(t,x)\ge \epsilon ,\;\liminf _{t\rightarrow \infty }I(t,x)\ge \epsilon \;\text{ uniformly } \text{ for }\;x\in \Omega . \end{aligned}$$
(2.11)

In particular, this implies that system (1.2) admits at least one positive equilibrium solution.

Proof

Let

$$\begin{aligned} \mathbf {X_+}:=C\left( {\overline{\Omega }}, {\mathbb {R}}_+^2\right) ,\;\mathbf {X^0_+}:=\left\{ \varsigma =(u,v)\in \mathbf {X_+}:v\not \equiv 0\right\} \end{aligned}$$

and \(\partial \mathbf {X^0_+}:=\mathbf {X_+} \backslash \mathbf {X^0_+}=\{\varsigma \in \mathbf {X_+}:v\equiv 0\}\).

For a given \(\varsigma \in \mathbf {X_+}\), system (1.2) generates a semiflow, denoted by \(\Psi (t)\), and

$$\begin{aligned}{}[\Psi (t)\varsigma ](x)=(S(t,x,\varsigma ),I(t,x,\varsigma )), \end{aligned}$$

where \((S(t,x,\varsigma ),I(t,x,\varsigma ))\) is the unique solution to system (1.2) with \((S_0,I_0)=\varsigma \).

Next, we will claim that \(({\tilde{S}},0)\) attracts \(\varsigma =(S_0,0)\in \partial \mathbf {X^0_+}\). As \(I_0\equiv 0\), the unique solution \((S(t,x,\varsigma ),I(t,x,\varsigma ))\) satisfies \(I(t,x,\varsigma )\equiv 0\) for all \(t\ge 0\) and S solves

$$\begin{aligned} \left\{ \begin{array}{lll} S_t-\mathrm{d}_S\Delta S=r(x)S\bigg (1-\dfrac{S}{K(x)}\bigg )-\mu _1(x)S,&{}t>0,\; x\in \Omega ,\\ \dfrac{\partial S}{\partial \nu }=0, &{}t>0,\;x \in \partial \Omega ,\\ S(0,x)=S_0(x),&{}x\in \Omega . \end{array}\right. \end{aligned}$$
(2.12)

It can be easily proved that

$$\begin{aligned} \lim _{t\rightarrow \infty }S(t,x)={\tilde{S}}(x)\;\text{ uniformly } \text{ on }\;{\overline{\Omega }}. \end{aligned}$$

This proves our claim.

As \(R_0>1\), it follows from Lemma 2.1 that \(\lambda _1<0\). Then, we can conclude that there exists a constant \(0<\epsilon _0<-\lambda _1\) such that

$$\begin{aligned} \limsup _{t\rightarrow \infty }||\Psi (t)\varsigma -\varsigma _0||\ge \epsilon _0 \end{aligned}$$

for any \(\varsigma \in \mathbf {X^0_+}\) and \(\varsigma _0=({\tilde{S}},0)\).

Suppose that \(\limsup _{t\rightarrow \infty }||\Psi (t)\varsigma -\varsigma _0||< \epsilon _0\) for some \(\varsigma \in \mathbf {X^0_+}\). For the given \(\epsilon _0\), we can find \(T>0\) such that

$$\begin{aligned} {\tilde{S}}(x)-\epsilon _0< S(t,x)<{\tilde{S}}(x)+\epsilon _0,\;0<I<\epsilon _0 \end{aligned}$$

for \(t\ge T\) and \(x\in {\overline{\Omega }}\). Then, we have

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial I}{\partial t}\ge \mathrm{d}_{I}\Delta I+\big \{\beta (x)[{\tilde{S}}(x)-\epsilon _0]-(\delta (x)+\mu _2(x)+\gamma (x))\big \}I, &{}t>T,\;x\in \Omega ,\\ \dfrac{\partial I}{\partial \nu }=0, &{}t>T,\;x \in \partial \Omega .\\ \end{array}\right. \end{aligned}$$

Hence, I is an upper solution of the following parabolic problem

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial v}{\partial t}= \mathrm{d}_{I}\Delta v+\big \{\beta (x)[{\tilde{S}}(x)-\epsilon _0]-(\delta (x)+\mu _2(x)+\gamma (x))\big \}v, &{}t>T,\;x\in \Omega ,\\ \dfrac{\partial v}{\partial \nu }=0, &{}t>T,\;x \in \partial \Omega ,\\ v(T,x)=I(T,x)>0,&{}x\in \Omega . \end{array}\right. \end{aligned}$$
(2.13)

Denote by \(\Phi _1\) the eigenfunction corresponding to \(\lambda _1\) which is positive on \({\overline{\Omega }}\). Then, we can find a small positive constant \(\sigma \) such that \(\sigma e^{-(\lambda _1+\epsilon _0)T}\Phi _1(x)\le I(T,x)\) on \({\overline{\Omega }}\). In view of the comparison principle, we obtain that \(\sigma e^{-(\lambda _1+\epsilon _0)t}\Phi _1(x)\) is a lower solution of (2.13). Thus,

$$\begin{aligned}&I(t,x)\ge \sigma e^{-(\lambda _1+\epsilon _0)t}\Phi _1(x)\rightarrow \infty \\&\quad \text{ uniformly } \text{ on }\;{\overline{\Omega }}\;\text{ as }\;t\rightarrow \infty . \end{aligned}$$

This contradicts Proposition 2.1.

Similar to [41], we can apply [42, 61] to find that there is a real number \(\delta _0>0\), which is independent of initial data \((S_0, I_0,R_0)\), satisfying

$$\begin{aligned} \liminf \limits _{t \rightarrow \infty } \min \limits _{x\in {\overline{\Omega }}}I(t,x,\varsigma ) \ge \delta _0,\;\forall \varsigma \in \mathbf {X^0_+}. \end{aligned}$$
(2.14)

In other words, there exists a large \(T_3>0\) such that

$$\begin{aligned} I(t,x,\varsigma ) \ge \delta _0/2,\; t\ge T_3,\;x\in {\overline{\Omega }} \end{aligned}$$

for all \(\varsigma \in \mathbf {X^0_+}\).

By (2.5), (2.8) and the above estimate, there is a sufficiently large \(T_4>\max \{T_1, T_2, T_3\}\) such that

$$\begin{aligned} \begin{aligned} S_t-\mathrm{d}_S \Delta S&=r(x) S- \dfrac{r(x)}{K(x)}S^2 -\beta (x)S I -\mu _1(x)S+ \gamma (x)I\\&\ge \frac{\gamma _*\delta _0}{2}-\beta ^* M_3 S-2\dfrac{r^*}{K_*}M_1S-(\mu _1)^*S\\ \end{aligned} \end{aligned}$$

for \(t> T_4\), \(x\in {\overline{\Omega }}.\) By the comparison principle, we can get

$$\begin{aligned} \liminf \limits _{t \rightarrow \infty } S(t,x,\varsigma )\ge \dfrac{\frac{\gamma _*\delta _0}{2}}{\beta ^*M_3+2\dfrac{r^*}{K_*}M_1+(\mu _1)^*}\triangleq \delta _1 \;\text{ uniformly } \text{ for }\; x\in {\overline{\Omega }}. \end{aligned}$$
(2.15)

Let \(\epsilon =\min \{\delta _0,\delta _1\}\). By (2.14) and (2.15), we obtain assertion (2.11). In addition, this implies that system (1.2) admits at least one positive equilibrium solution E(S(x), I(x)). \(\square \)

Remark 2.1

The uniform persistence of system (1.2) means that the rumor persists for a long time. In addition, under the condition \(S_0(x)>0\) and \(I_0(x)>0\), the rumor keeps spreading for \(t\ge 0\) and \(x\in {\overline{\Omega }}\).

2.2 Asymptotic behavior of E

In this subsection, we will discuss the asymptotic behavior of the positive equilibrium E based on Refs. [25, 43, 44].

Corresponding to (1.2), the equilibrium problem satisfies the following elliptic system:

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_{S}\Delta S=r (x) S\bigg (1-\dfrac{S}{K(x)}\bigg )-\beta (x)S I-\mu _1(x) S+\gamma (x) I, &{}x\in \Omega ,\\ -\mathrm{d}_{I}\Delta I=\beta (x)S I-\delta (x) I-\mu _2(x)I -\gamma (x) I,&{}x\in \Omega ,\\ \dfrac{\partial S}{\partial \nu }=\dfrac{\partial I}{\partial \nu }=0, &{}x\in \partial \Omega . \end{array}\right. \end{aligned}$$
(2.16)

If (S, I) is a solution of (2.16) with \(S\ge 0\) and \(I\ge 0, \not \equiv 0\), then we use the strong maximum principle and the Hopf lemma for system (2.16) to conclude that \(S(x)>0\) and \(I(x)>0\) for all \(x\in {\bar{\Omega }}\).

Now, we recall some known facts, see, for instance, [45] and [46].

Lemma 2.2

Assume that \(\omega \in C^2({\overline{\Omega }})\) and \(\partial _\nu \omega =0\) on \(\partial \Omega \), then the following assertions hold.

  1. (1)

    If \(\omega \) has a local maximum at \(x_1\in {\overline{\Omega }}\), then \(\triangledown \omega (x_1)=0\) and \(\Delta \omega (x_1)\le 0\).

  2. (2)

    If \(\omega \) has a local minimum at \(x_2\in {\overline{\Omega }}\), then \(\triangledown \omega (x_2)=0\) and \(\Delta \omega (x_2)\ge 0\).

Lemma 2.3

Let \(\omega \in C^2(\Omega )\cap C^1({\overline{\Omega }})\) be a positive solution to the elliptic equation

$$\begin{aligned} \Delta \omega +c(x)\omega =0,\;\text{ in }\;\Omega ,\;\partial _\nu \omega =0,\;\text{ on }\;\partial \Omega , \end{aligned}$$

where \(c\in C({\overline{\Omega }})\). Then, there exists a positive constant M, depending only on a constant C with \(||c||_{\infty }\le C\), such that

$$\begin{aligned} \max _{{\overline{\Omega }}}\omega \le M\min _{{\overline{\Omega }}}\omega . \end{aligned}$$

First, we will study the dynamic behavior of system (1.2) as \(\mathrm{d}_S\rightarrow 0\). Let \(\mathrm{d}_S\rightarrow 0\); we apply ([47], Lemma 3.2) to deduce that the unique solution \({\tilde{S}}\) of (2.9) satisfies

$$\begin{aligned} {\tilde{S}}(x)\rightarrow \dfrac{K(x)(r(x)-\mu _1(x))}{r(x)}\;\text{ uniformly } \text{ on }\;{\overline{\Omega }}, \end{aligned}$$

then the principal eigenvalue \(\lambda _1\) of problem (2.10) converges to the principal eigenvalue \(\lambda _1^0\) of the following eigenvalue problem

$$\begin{aligned} \left\{ \begin{array}{lll} \mathrm{d}_{I}\Delta \phi +\left[ \dfrac{\beta (x)K(x)(r(x)-\mu _1(x))}{r(x)}-(\delta (x)+\mu _2(x)+\gamma (x))\right] \phi +\lambda \phi =0,&{}x\in \Omega ,\\ \dfrac{\partial \phi }{\partial \nu }=0, &{}x\in \partial \Omega . \end{array}\right. \end{aligned}$$
(2.17)

Theorem 2.2

Fix \(\mathrm{d}_I>0\) and assume that \(\lambda _1^0<0\). Let \(\mathrm{d}_S \rightarrow 0\). Then, any positive solution (S, I) of (2.16) satisfies (up to a subsequence of \(\mathrm{d}_S \rightarrow 0\)) \((S,I)\rightarrow (S^0,I^0)\) uniformly on \({\overline{\Omega }}\), where

$$\begin{aligned} S^0=H(x,I^0):=\dfrac{K(x)(r(x)-\mu _1(x)-\beta (x) I^0(x))+\sqrt{K^2(x)(\beta (x) I^0(x)+\mu _1(x)-r(x))^2+4r(x)\gamma (x) I^0(x)K(x)}}{2r(x)} \end{aligned}$$

and \(I^0\) is a positive solution of

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_I\Delta I^0=\beta (x)H(x,I^0)I^0-\delta (x)I^0-\mu _2(x)I^0-\gamma (x) I^0 ,&{}x\in \Omega ,\\ \dfrac{\partial I^0}{\partial \nu } =0, &{}x\in \partial \Omega ,\\ \end{array}\right. \end{aligned}$$
(2.18)

Proof

Step 1: There exists a constant \(C>0\), independent of \(0<\mathrm{d}_S\le 1\), such that

$$\begin{aligned} \frac{1}{C}\le S(x),\;I(x)\le C. \end{aligned}$$
(2.19)

Let \(S(x_1)=\max \limits _{x\in {\overline{\Omega }}} S(x)\) for some \(x_1\in {\overline{\Omega }}\). By Lemma 2.2, we have \(\Delta S(x_1)\le 0\). From the first equation of system (2.16), we know

$$\begin{aligned}&r(x_1)S(x_1)\left( 1-\dfrac{S(x_1)}{K(x_1)}\right) -\beta \left( x_1\right) S\left( x_1\right) I\left( x_1\right) \\&\quad -\mu _1\left( x_1\right) S\left( x_1\right) +\gamma \left( x_1\right) I\left( x_1\right) \ge 0. \end{aligned}$$

It follows that

$$\begin{aligned} \dfrac{r(x_1)K(x_1)}{4}\ge&\mu _1(x_1)S(x_1)+[\beta (x_1)S(x_1)\\&-\gamma (x_1)]I(x_1). \end{aligned}$$

If \(\beta (x_1)S(x_1)-\gamma (x_1)<0\), we obtain

$$\begin{aligned} S(x)\le \max \limits _{x\in {\overline{\Omega }}} S(x)=S(x_1)< \dfrac{\gamma (x_1)}{\beta (x_1)}\le \dfrac{\gamma ^*}{\beta _*}. \end{aligned}$$

If \(\beta (x_1)S(x_1)-\gamma (x_1)\ge 0\), we obtain

$$\begin{aligned} S(x)\le \max \limits _{x\in {\overline{\Omega }}} S(x)=S(x_1)\le \dfrac{r(x_1)K(x_1)}{4\cdot \mu _1(x_1)}\le \dfrac{{ r^* K^*}}{{4( \mu _1)_*}}. \end{aligned}$$

Therefore, we can get

$$\begin{aligned} S(x)\le \max \bigg \{\dfrac{\gamma ^*}{\beta _*},\dfrac{{ r^* K^*}}{{4( \mu _1)_*}} \bigg \}\triangleq C_{1}. \end{aligned}$$
(2.20)

Define \(W=\mathrm{d}_S S+\mathrm{d}_I I\), then we obtain

$$\begin{aligned} -\Delta W=&-\mathrm{d}_S \Delta S-\mathrm{d}_I\Delta I=r(x)S\left( 1-\dfrac{S}{K(x)}\right) \\&-\mu _1(x)S-\mu _2(x)I-\delta (x)I. \end{aligned}$$

Let \(W(x_2)=\max \limits _{x\in {\overline{\Omega }}} W(x)\) for some \(x_2\in {\overline{\Omega }}\); we use Lemma 2.2 to get \(\Delta W(x_2)\le 0\). Thus,

$$\begin{aligned} \mu \left( S\left( x_2\right) +I\left( x_2\right) \right) \le \mu _1\left( x_2\right) S\left( x_2\right) +\mu _2\left( x_2\right) I\left( x_2\right) +\delta \left( x_2\right) I\left( x_2\right) \le r\left( x_2\right) S\left( x_2\right) \left( 1-\dfrac{S\left( x_2\right) }{K\left( x_2\right) }\right) \end{aligned}$$

for \(\mu =\min \{(\mu _1)_*,(\mu _2)_*+\delta _*\}\). By a further calculation, we can get

$$\begin{aligned} S(x_2)+I(x_2)\le \dfrac{r(x_2)K(x_2)}{4\mu }\le \dfrac{ r^* K^*}{4 \mu }\triangleq {\tilde{C}}. \end{aligned}$$

Furthermore,

$$\begin{aligned} W(x)\le W(x_2)\le&\max \left\{ \mathrm{d}_S, \mathrm{d}_I\right\} \left[ S\left( x_2\right) \right. \\&\left. + I\left( x_2\right) \right] \le \max \left\{ \mathrm{d}_S, \mathrm{d}_I\right\} {\tilde{C}}. \end{aligned}$$

Then,

$$\begin{aligned} \mathrm{d}_I \max \limits _{x\in {\overline{\Omega }}} I(x)\le \max \limits _{x\in {\overline{\Omega }}} W(x)=W(x_2)\le \max \{\mathrm{d}_S, \mathrm{d}_I\}{\tilde{C}}. \end{aligned}$$

That is,

$$\begin{aligned} \max \limits _{x\in {\overline{\Omega }}} I(x)\le \frac{\max \{\mathrm{d}_S, \mathrm{d}_I\}}{\mathrm{d}_I}{\tilde{C}}\triangleq C_2. \end{aligned}$$

Now, we give a positive lower bound for the component I. In view of (2.20), we apply the Harnack inequality (Lemma 2.3) to the equation

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_{I}\Delta I=[\beta (x)S -\delta (x) -\mu _2(x) -\gamma (x)] I,&{}x\in \Omega ,\\ \dfrac{\partial I}{\partial \nu }=0,&{}x\in \partial \Omega \end{array}\right. \end{aligned}$$
(2.21)

to obtain

$$\begin{aligned} \max _{x\in {\overline{\Omega }}}I(x)\le M\min _{{\overline{\Omega }}}I(x), \end{aligned}$$
(2.22)

where the constant \(M>0\) is independent of \(\mathrm{d}_S\).

Suppose, to the contrary, that I has no positive lower bound. Then we can find a sequence \(\{\mathrm{d}_{S_n}\}\) satisfying \(\mathrm{d}_{S_n}\rightarrow 0\) as \(n\rightarrow \infty \) and a corresponding sequence of positive solutions \((S_n, I_n)\) of (2.16) with \(\mathrm{d}_S=\mathrm{d}_{S_n}\), such that \(\min _{x\in {\overline{\Omega }}}I_n(x)\rightarrow 0\) as \(n\rightarrow \infty \). In view of (2.22), we deduce that

$$\begin{aligned} I_n\rightarrow 0\;\text{ uniformly } \text{ on }\;{\overline{\Omega }},\;\text{ as }\;n\rightarrow \infty . \end{aligned}$$

By an analogous consideration (Lemma 3.2 [47]), we have

$$\begin{aligned} S_n(x)\rightarrow \dfrac{K(x)(r(x)-\mu _1(x))}{r(x)}\;\text{ uniformly } \text{ on }\;{\overline{\Omega }},\;\text{ as }\;n\rightarrow \infty . \end{aligned}$$

On the other hand, \(\lambda _1^n\) (for \(n\ge 1\)) is the principal eigenvalue of

$$\begin{aligned} \left\{ \begin{array}{lll} \mathrm{d}_I \Delta \varrho +[\beta (x) S_n-\delta (x)- \mu _2(x)- \gamma (x) ] \varrho + \lambda \varrho =0,&{}x \in \Omega ,\\ \partial _{\nu } \varrho =0, &{}x \in \partial \Omega . \end{array}\right. \end{aligned}$$
(2.23)

By (2.21), we have \(\lambda _1^n=0\). It is easy to see that the sequence \(\{\lambda _1^n\}\) converges to \(\lambda _1^0<0\), which contradicts \(\lambda _1^n=0\). So we can conclude that I has a positive lower bound \(C_3\), which does not depend on \(0<\mathrm{d}_S\le 1\).

Set \(S(x_3)= \min \limits _{x\in {\overline{\Omega }}} S(x)\) for \(x_3\in {\overline{\Omega }}\), then \(\Delta S(x_3)\ge 0\) by Lemma 2.2. We use the first equation of (2.16) to get

$$\begin{aligned} \gamma \left( x_3\right) I\left( x_3\right)\le & {} \dfrac{r\left( x_3\right) S^2\left( x_3\right) }{K\left( x_3\right) }+\beta \left( x_3\right) S\left( x_3\right) I\left( x_3\right) \\&+\mu _1\left( x_3\right) S\left( x_3\right) , \end{aligned}$$

that is to say,

$$\begin{aligned} \dfrac{r^*}{K_*}S^2\left( x_3\right) +\left[ \beta ^*C_2+\left( \mu _1\right) ^*\right] S\left( x_3\right) \ge \gamma \left( x_3\right) I\left( x_3\right) \ge \gamma _* C_{3}. \end{aligned}$$

It is not difficult to find

$$\begin{aligned} S\left( x\right) \ge \min \limits _{x \in {\overline{\Omega }}} S\left( x\right) \ge \frac{-K_*\left[ C_{2}\beta ^*+\left( \mu _1\right) ^*\right] +\sqrt{K_*^2\left[ C_{2}\beta ^*+\left( \mu _1\right) ^*\right] ^2+4K_*\gamma _*r^*C_{3}}}{2r^*} \triangleq C_{4}. \end{aligned}$$
(2.24)

In conclusion, we have proved that (2.19) holds.

Step 2: We study the convergence of I as \(\mathrm{d}_S \rightarrow 0\). Note that I is a solution of the following system

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_I\Delta I+\left[ \delta (x)+\mu _2(x)+\gamma (x)\right] I=\beta (x)S I,&{}x\in \Omega ,\\ \partial _v I=0, &{}x\in \partial \Omega .\\ \end{array}\right. \end{aligned}$$
(2.25)

In view of (2.19) and the standard \(L^p\) theory [48], we obtain that

$$\begin{aligned} \parallel I\parallel _{W^{2,p}(\Omega )} \le C\parallel \beta S I\parallel _{L^p(\Omega )} \le C. \end{aligned}$$

Then, we apply the Sobolev embedding theorem (see, e.g., Refs. [25, 43, 44]) to obtain

$$\begin{aligned} \parallel I\parallel _{C^{1+\alpha }({\overline{\Omega }})} \le C. \end{aligned}$$

Hence, there exists a subsequence of \(\mathrm{d}_S\rightarrow 0\), denoted by \(\mathrm{d}_n:=\mathrm{d}_{S_n}\) with \(\mathrm{d}_n\rightarrow 0\) as \(n\rightarrow \infty \), and corresponding positive solutions \((S_n,I_n)\) of (2.16) with \(\mathrm{d}_S=\mathrm{d}_{S_n}\), such that

$$\begin{aligned} I_n \rightarrow I^0 \;\text{ in }\; C^1\left( {\overline{\Omega }}\right) \;\text{ as }\; n \rightarrow \infty , \end{aligned}$$
(2.26)

where \(I^0>0\).

Step 3: We prove the convergence of S. For any \(\epsilon >0\), we use (2.26) to find an integer \(N>0\) such that

$$\begin{aligned} 0<I^0-\epsilon \le I_n\le I^0+\epsilon \;\text{ on }\;{\overline{\Omega }} \end{aligned}$$

for \(n\ge N\).

For fixed \(n\ge N\), we know that \(S_n\) is a supersolution of

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_n\Delta U=r(x)U\bigg (1-\dfrac{U}{K(x)}\bigg )-\beta (x)(I^0+\epsilon )U -\mu _1(x)U+\gamma (x)(I^0-\epsilon ),&{}x\in \Omega ,\\ \quad \quad \quad =\dfrac{r(x)[F_\epsilon ^+(x,I^0(x))-U][U-F_\epsilon ^-(x,I^0(x))]}{K(x)},&{}x\in \Omega ,\\ \partial _v U=0, &{}x\in \partial \Omega ,\\ \end{array}\right. \end{aligned}$$
(2.27)

and a subsolution of

$$\begin{aligned} \left\{ \begin{array}{lll} -\mathrm{d}_n\Delta V=r(x)V\bigg (1-\dfrac{V}{K(x)}\bigg )-\beta (x)(I^0-\epsilon )V -\mu _1(x)V+\gamma (x)(I^0+\epsilon ),&{}x\in \Omega ,\\ \quad \quad \quad =\dfrac{r(x)[G_\epsilon ^+(x,I^0(x))-V][V-G_\epsilon ^-(x,I^0(x))]}{K(x)},&{}x\in \Omega ,\\ \partial _v V=0, &{}x\in \partial \Omega ,\\ \end{array}\right. \end{aligned}$$
(2.28)

where

$$\begin{aligned} F_\epsilon ^{\pm }\left( x,I^0(x)\right) =\dfrac{K(x)\left[ r(x)-\beta (x)(I^0(x)+\epsilon )-\mu _1(x)\right] \pm \sqrt{K^2(x)[r(x)-\beta (x)(I^0(x)+\epsilon )-\mu _1(x)]^2+4K(x)r(x)\gamma (x)(I^0(x)-\epsilon )}}{2r(x)} \end{aligned}$$

and

$$\begin{aligned} G_\epsilon ^{\pm }\left( x,I^0(x)\right) =\dfrac{K(x)\left[ r(x)-\beta (x)(I^0(x)-\epsilon )-\mu _1(x)\right] \pm \sqrt{K^2(x)[r(x)-\beta (x)(I^0(x)-\epsilon )-\mu _1(x)]^2+4K(x)r(x)\gamma (x)(I^0(x)+\epsilon )}}{2r(x)}. \end{aligned}$$

By the proof of [25, 43], we can conclude that system (2.27) and system (2.28) have a unique positive solution \(U_n\) and \(V_n\), respectively. Then, we use the proof of Lemma 2.4 (in Ref.  [49]) to conclude that

$$\begin{aligned} U_n\rightarrow F_\epsilon ^{+}\left( x,I^0(x)\right) ,\;V_n\rightarrow G_\epsilon ^{+}\left( x,I^0(x)\right) \;\text{ uniformly } \text{ on }\;{\overline{\Omega }},\;\text{ as }\;n\rightarrow \infty . \end{aligned}$$

In fact, \(S_n\) is a supersolution for the problem (2.27) and is a subsolution for the problem (2.28), and we have \( U_n\le S_n\le V_n\) on \({\overline{\Omega }}\) for \(n>N\). Thus,

$$\begin{aligned} F_\epsilon ^{+}\left( x,I^0(x)\right)\le & {} \liminf _{n\rightarrow \infty } S_n(x)\le \limsup _{n\rightarrow \infty } S_n(x)\\\le & {} G_\epsilon ^{+}\left( x,I^0(x)\right) \;\text{ uniformly } \text{ on }\;{\overline{\Omega }}. \end{aligned}$$

As \(\epsilon >0\) can be chosen arbitrarily small, it is easy to show that

$$\begin{aligned} S_n\rightarrow S^0\;\text{ uniformly } \text{ on }\;{\overline{\Omega }},\;\text{ as }\;n \rightarrow \infty , \end{aligned}$$

where

$$\begin{aligned} S^0=H(x,I^0(x))=\dfrac{K(x)[r(x)-\beta (x)I^0(x)-\mu _1(x)]+\sqrt{K^2(x)[r(x)-\beta (x)I^0(x)-\mu _1(x)]^2+4K(x)r(x)\gamma (x)I^0(x)}}{2r(x)}. \end{aligned}$$

This completes the proof. \(\square \)
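
To make the limiting profile in Theorem 2.2 concrete, the short sketch below (Python) evaluates \(S^0=H(x,I^0)\) for illustrative, assumed coefficient profiles and an assumed \(I^0\), and checks pointwise that it annihilates the reaction part of the first equation of (2.16) when the diffusion of S is switched off.

```python
import numpy as np

# Sketch: evaluate S^0 = H(x, I^0) and verify that it solves
# r S (1 - S/K) - beta S I - mu1 S + gamma I = 0 pointwise.
# All coefficient profiles and I^0 below are illustrative assumptions.
x = np.linspace(0.0, 10.0, 201)
r     = 1.0 + 0.2 * np.cos(np.pi * x / 10.0)
K     = 5.0 + 1.0 * np.sin(np.pi * x / 10.0)
beta  = 0.3 + 0.1 * np.cos(2 * np.pi * x / 10.0)
mu1, gamma = 0.1, 0.2
I0 = 0.5 + 0.3 * np.exp(-((x - 5.0) ** 2))      # hypothetical stand-in for I^0(x)

def H(I):
    """Positive root of the limiting quadratic for S when d_S -> 0."""
    a = K * (r - mu1 - beta * I)
    return (a + np.sqrt(a**2 + 4.0 * r * gamma * I * K)) / (2.0 * r)

S0 = H(I0)
residual = r * S0 * (1 - S0 / K) - beta * S0 * I0 - mu1 * S0 + gamma * I0
print("max |residual| of the limiting reaction equation: %.2e" % np.max(np.abs(residual)))
```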

Theorem 2.3

Fix \(\mathrm{d}_S>0\) and assume that the set \(\bigg \{x\in {\overline{\Omega }}\big |\beta (x){\tilde{S}}(x)>\delta (x)+\mu _2(x)+\gamma (x)\bigg \}\) is nonempty. Let \(\mathrm{d}_I \rightarrow 0\). Then, any positive solution (S, I) of (2.16) satisfies (up to a subsequence of \(\mathrm{d}_I \rightarrow 0\)) that \(S\rightarrow S^*\) in \(L^1(\Omega )\) with \(||S^*||_{L^1(\Omega )}>0\), and \(\int _{\Omega }I\mathrm{d}x\rightarrow I^*\), where \(I^*\) is a positive constant.

3 Spatially homogeneous model

In this section, we assume that all the parameters \(\mathrm{d}_S, \mathrm{d}_I, r, K, \delta , \mu _1, \mu _2, \beta , \gamma \) in system (1.2) are positive constants.

3.1 The existence of equilibrium points

Firstly, we study the existence of rumor-eliminating equilibrium point \(E_0\) and rumor-spreading equilibrium point \(E_*\). Through a simple calculation, we can get \(E_0=(S_0, 0)=\bigg (K(1-\dfrac{\mu _1}{r}),0\bigg )\). Apparently, when \(\mu _1<r\), rumor-eliminating equilibrium point \(E_0\) exists.

Next, we use the next generation matrix to calculate the basic reproduction number \(R_0\) [50] and try to find the relationship between \(R_0\) and the existence of rumor-spreading equilibrium point \(E_*\).

Theorem 3.1

If \(R_0>1\), system (1.2) has the rumor-spreading equilibrium point \(E_*=(S_*,I_*)= \bigg (\dfrac{\delta +\mu _2+\gamma }{\beta }, \dfrac{r(\delta +\mu _2+\gamma )^2}{\beta ^2(\delta +\mu _2)K}(R_0-1)\bigg )\).

Proof

Setting the right-hand sides of system (1.2) equal to zero, we get

$$\begin{aligned} E_*= & {} \left( S_*,I_*\right) =\left( \dfrac{\delta +\mu _2+\gamma }{\beta }, \dfrac{\delta +\mu _2+\gamma }{\beta \left( \delta +\mu _2\right) }\left( r-\mu _1\right) \right. \\&\left. -\dfrac{r\left( \delta +\mu _2+\gamma \right) ^2}{K\beta ^2\left( \delta +\mu _2\right) }\right) . \end{aligned}$$

According to system (1.2), we can obtain the following two matrices

$$\begin{aligned} {\mathcal {F}}=\beta S \end{aligned}$$

and

$$\begin{aligned} {\mathcal {V}}=\delta +\mu _2+\gamma . \end{aligned}$$

Through a direct calculation, we can get that

$$\begin{aligned} R_0={\mathcal {F}}{\mathcal {V}}^{-1}(E_0)=\dfrac{\beta K\left( r-\mu _1\right) }{r\left( \delta +\mu _2+\gamma \right) } \end{aligned}$$

and \(E_*=(S_*,I_*)=\bigg (\dfrac{\delta +\mu _2+\gamma }{\beta }, \dfrac{r(\delta +\mu _2+\gamma )^2}{\beta ^2(\delta +\mu _2)K} (R_0-1)\bigg )\). So we can get the conclusion that there exists \(E_*\) when \(R_0>1\). \(\square \)
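
The closed-form expressions in Theorem 3.1 are easy to evaluate. The following small sketch (Python, with purely illustrative parameter values) computes \(R_0\) and, when \(R_0>1\), the rumor-spreading equilibrium \(E_*\).

```python
# Sketch: the threshold R_0 and the equilibrium E_* of Theorem 3.1 for one
# illustrative (assumed) constant-parameter set.
r, K, beta = 1.0, 5.0, 0.3
mu1, mu2, delta, gamma = 0.1, 0.1, 0.3, 0.2

R0 = beta * K * (r - mu1) / (r * (delta + mu2 + gamma))
S_star = (delta + mu2 + gamma) / beta
I_star = r * (delta + mu2 + gamma) ** 2 / (beta ** 2 * (delta + mu2) * K) * (R0 - 1.0)

print("R_0 = %.3f" % R0)
if R0 > 1.0:
    print("E_* = (S_*, I_*) = (%.3f, %.3f)" % (S_star, I_star))
else:
    print("R_0 <= 1: only the rumor-eliminating equilibrium E_0 exists")
```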

Since the stability of the equilibrium point is directly related to the spread and control of rumors, we first discuss the local stability and global stability of the equilibrium points \(E_0\) and \(E_*\) through the linearization technique, Hurwitz theorem and Lyapunov function.

3.2 The local and global stability analysis of \(E_0\) and \(E_*\)

In this section, we will analyze the local stability and global stability of \(E_0\) and \(E_*\).

Theorem 3.2

If \(0<R_0<1\) holds, the rumor-eliminating equilibrium point \(E_0\) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1.2) at \(E_0\) is

$$\begin{aligned} J(E_0)=\left( \begin{array}{lc} \ \mu _1-r &{} -\beta K(1-\dfrac{\mu _1}{r})+\gamma \\ \ \quad 0 &{} \beta K(1-\dfrac{\mu _1}{r})-\delta -\mu _2-\gamma \\ \end{array} \right) , \end{aligned}$$
(3.1)

and the characteristic equation becomes

$$\begin{aligned}&\left( \lambda +\mathrm{d}_S \frac{\pi ^2k^2}{L^2}-\mu _1 +r\right) \left[ \lambda \right. \\&\left. +\mathrm{d}_I \frac{\pi ^2k^2}{L^2}-\beta K\left( 1-\dfrac{\mu _1}{r}\right) +\delta +\mu _2+\gamma \right] =0. \end{aligned}$$

Clearly,

$$\begin{aligned} \lambda _1= & {} -\mathrm{d}_S \frac{\pi ^2k^2}{L^2}+\mu _1 -r,\\ \lambda _2= & {} -\mathrm{d}_I \frac{\pi ^2k^2}{L^2}+\beta K\left( 1-\dfrac{\mu _1}{r}\right) \\&-\delta -\mu _2-\gamma =-\mathrm{d}_I\frac{\pi ^2k^2}{L^2}\\&+\left( \delta +\mu _2+\gamma \right) \left( R_0-1\right) ; \end{aligned}$$

assume \(0<R_0<1\), then \(\lambda _1<0\) and \(\lambda _2<0\). Therefore, \(E_0\) is locally asymptotically stable. \(\square \)

Theorem 3.3

If \(2>R_0>1\) holds, the rumor-spreading equilibrium point \(E_*\) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1.2) at \(E_*\) is

$$\begin{aligned}&J(E_*)\nonumber \\&\quad =\left( \begin{array}{lc} \ r-\dfrac{2rS_*}{K}-\beta I_*-\mu _1 &{} -\beta S_*+\gamma \\ \ \quad \quad \quad \beta I_* &{} \beta S_*-\delta -\mu _2-\gamma \\ \end{array} \right) , \end{aligned}$$
(3.2)

and the characteristic equation is

$$\begin{aligned} \begin{aligned}&\lambda ^2+\left( \mathrm{d}_S\frac{\pi ^2k^2}{L^2}-\beta S_*+\delta +\mu _2+\gamma +\mathrm{d}_I \frac{\pi ^2k^2}{L^2}-r\right. \\&\quad \left. +\dfrac{2rS_*}{K}+\beta I_*+\mu _1\right) \lambda +\mathrm{d}_I\mathrm{d}_S\frac{\pi ^4k^4}{L^4}\\&\quad -\mathrm{d}_S \frac{\pi ^2k^2}{L^2}\beta S_*\\&+\left( \delta +\mu _2+\gamma \right) \mathrm{d}_S \frac{\pi ^2k^2}{L^2}-r\mathrm{d}_I\frac{\pi ^2k^2}{L^2}-r\beta S_*\\&\quad -r\left( \delta +\mu _2+\gamma \right) \\&\quad +\dfrac{2rS_*}{K}\mathrm{d}_I\frac{\pi ^2k^2}{L^2}-\dfrac{2rS_*}{K}\beta S_*\\&\quad +\dfrac{2rS_*}{K}\left( \delta +\mu _2+\gamma \right) +\mathrm{d}_I \frac{\pi ^2k^2}{L^2}\beta I_*\\&\quad +\beta I_*\left( \delta +\mu _2\right) \\&\quad +\mu _1 \mathrm{d}_I \frac{\pi ^2k^2}{L^2}-\mu _1\beta S_*\\&\quad +\mu _1\left( \delta +\mu _2+\gamma \right) =0. \end{aligned} \end{aligned}$$
(3.3)

Assume \(\lambda _1\) and \(\lambda _2\) are the roots of the characteristic equation, then

$$\begin{aligned} \lambda _1+\lambda _2&=-\mathrm{d}_S\frac{\pi ^2k^2}{L^2}-\mathrm{d}_I\frac{\pi ^2k^2}{L^2}+\beta S_*\nonumber \\&\quad -\left( \delta +\mu _2+\gamma \right) +r-\dfrac{2rS_*}{K}-\beta I_*-\mu _1\nonumber \\&=-\mathrm{d}_S\frac{\pi ^2k^2}{L^2}-\mathrm{d}_I\frac{\pi ^2k^2}{L^2}+\left( r-\mu _1\right) \nonumber \\&\quad -\dfrac{2r}{K\beta }\left( \delta +\mu _2+\gamma \right) \nonumber \\&\quad -\dfrac{r\left( \delta +\mu _2+\gamma \right) ^2}{K\beta \left( \delta +\mu _2\right) }\left( R_0-1\right) \nonumber \\&=-\mathrm{d}_S\frac{\pi ^2k^2}{L^2}-\mathrm{d}_I\frac{\pi ^2k^2}{L^2}\nonumber \\&\quad +\dfrac{r\left( \delta +\mu _2+\gamma \right) }{K\beta }\left( R_0-2\right) \nonumber \\&\quad -\dfrac{r\left( \delta +\mu _2+\gamma \right) ^2}{K\beta \left( \delta +\mu _2\right) }\left( R_0-1\right) , \end{aligned}$$
(3.4)
$$\begin{aligned} \lambda _1\lambda _2&=\mathrm{d}_I\frac{\pi ^2k^2}{L^2}\left( \mathrm{d}_S\frac{\pi ^2k^2}{L^2}-r\right. \nonumber \\&\quad \left. +\dfrac{2rS_*}{K}+\beta I_*+\mu _1\right) +\beta I_*\left( \delta +\mu _2\right) \nonumber \\&=\mathrm{d}_I\frac{\pi ^2k^2}{L^2}\left[ \mathrm{d}_S\frac{\pi ^2k^2}{L^2}\right. \nonumber \\&\quad +\dfrac{\beta r}{K\left( \delta +\mu _2\right) }S_*^2\left( R_0-1\right) \nonumber \\&\quad \left. -\dfrac{\left( r-\mu _1\right) K\beta -2r\left( \delta +\mu _2+\gamma \right) }{K\beta }\right] \nonumber \\&\quad +\beta I_*\left( \delta +\mu _2\right) \nonumber \\&=\mathrm{d}_I\frac{\pi ^2k^2}{L^2}\left[ \mathrm{d}_S\frac{\pi ^2k^2}{L^2}\right. \nonumber \\&\quad +\dfrac{\beta r}{K\left( \delta +\mu _2\right) }S_*^2\left( R_0-1\right) \nonumber \\&\quad \left. -\dfrac{r\left( \delta +\mu _2+\gamma \right) }{K\beta }\left( R_0-2\right) \right] \nonumber \\&\quad +\beta I_*\left( \delta +\mu _2\right) . \end{aligned}$$
(3.5)

Thus, if \(1<R_0<2\), we obtain \(\lambda _1+\lambda _2<0\) and \(\lambda _1\lambda _2>0\) for all \(k=0,1,2,\ldots \); hence, \(E_*\) is locally asymptotically stable. \(\square \)
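
The mode-by-mode argument behind Theorems 3.2 and 3.3 can also be checked directly: for each wavenumber k on a one-dimensional domain of length L, the linearization at \(E_*\) is the Jacobian \(J(E_*)\) shifted by \(-\mathrm{diag}(\mathrm{d}_S,\mathrm{d}_I)\,\pi ^2k^2/L^2\), and local stability requires every such matrix to have eigenvalues with negative real parts. The sketch below (Python) performs this check for one illustrative, assumed parameter set with \(1<R_0<2\).

```python
import numpy as np

# Sketch: numerical check of the mode-by-mode stability condition at E_*.
# Parameter values are illustrative assumptions, not taken from the paper.
dS, dI, L_dom = 0.5, 0.1, 10.0
r, K, beta = 1.0, 5.0, 0.3
mu1, mu2, delta, gamma = 0.1, 0.1, 0.3, 0.35

R0 = beta * K * (r - mu1) / (r * (delta + mu2 + gamma))
S_star = (delta + mu2 + gamma) / beta
I_star = r * (delta + mu2 + gamma) ** 2 / (beta ** 2 * (delta + mu2) * K) * (R0 - 1.0)

# Jacobian (3.2) at E_*.
J = np.array([[r - 2 * r * S_star / K - beta * I_star - mu1, -beta * S_star + gamma],
              [beta * I_star, beta * S_star - delta - mu2 - gamma]])

max_real = -np.inf
for k in range(0, 200):                      # check the first 200 Neumann modes
    Jk = J - np.diag([dS, dI]) * (np.pi * k / L_dom) ** 2
    max_real = max(max_real, np.max(np.linalg.eigvals(Jk).real))

print("R_0 = %.3f" % R0)
print("max Re(lambda) over modes k = 0..199: %.4f  (negative => E_* locally stable)" % max_real)
```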

Theorem 3.4

If \(0<R_0<1\), then the rumor-eliminating equilibrium point \(E_0\) is globally asymptotically stable under the condition of \(\mathrm{d}_S=\mathrm{d}_I\).

Proof

Define the Lyapunov function

$$\begin{aligned} {\mathbb {V}}=\int _{\Omega }V(S(t,x),I(t,x))\mathrm{d}x, \forall t>0, \end{aligned}$$

where

$$\begin{aligned} V(S(t,x),I(t,x))=&\dfrac{1}{2}\left[ \left( S-S_0\right) +I\right] ^2\\&+\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{I}{\beta }. \end{aligned}$$

Then,

$$\begin{aligned} \begin{aligned} \dfrac{\partial V}{\partial t}&=\dfrac{\partial V}{\partial S} \dfrac{\partial S}{\partial t}\\&\quad + \dfrac{\partial V}{\partial I}\dfrac{\partial I}{\partial t}\\&=\left[ \left( S-S_0\right) +I\right] \left[ \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I\right. \\&\quad \left. + rS\left( 1-\dfrac{S}{K}\right) \right. \\&\quad \left. -\mu _1 S-\mu _2 I-\delta I\right] \\&\quad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \mathrm{d}_I \Delta I\right. \\&\quad \left. +\beta SI-\delta I-\mu _2 I-\gamma I\right) . \end{aligned} \end{aligned}$$
(3.6)

Firstly, according to \(\mu _1=r(1-\dfrac{S_0}{K})\), we can get

$$\begin{aligned} \begin{aligned}&\left[ \left( S-S_0\right) +I\right] \left[ rS\left( 1-\dfrac{S}{K}\right) -\mu _1 S-\mu _2 I-\delta I\right] \\&\quad =S\left[ rS\left( 1-\dfrac{S}{K}\right) -\mu _1 S-rS_0\left( 1-\dfrac{S}{K}\right) +\mu _1 S_0\right] \\&\qquad -\left( S-S_0\right) \left( \delta +\mu _2\right) I-\left( \delta +\mu _2\right) I^2\\&\qquad +I\left[ rS\left( 1-\dfrac{S}{K}\right) -rS\left( 1-\dfrac{S_0}{K}\right) \right] \\&\quad =S\left[ r\left( S-S_0\right) -\dfrac{r}{K}S\left( S-S_0\right) \right. \\&\qquad \left. -r\left( 1-\dfrac{S_0}{K}\right) \left( S-S_0\right) \right] -\left( S-S_0\right) \left( \delta +\mu _2\right) I\\&\qquad +IS\left( -\dfrac{rS}{K}+\dfrac{rS_0}{K}\right) -\left( \delta +\mu _2\right) I^2\\&\quad =-\dfrac{rS}{K}\left( S-S_0\right) ^2 -\left( S-S_0\right) \left( \delta +\mu _2\right) I\\&\qquad -\dfrac{rSI}{K}\left( S-S_0\right) -\left( \delta +\mu _2\right) I^2, \end{aligned} \end{aligned}$$
(3.7)

and

$$\begin{aligned} \begin{aligned}&\dfrac{1}{\beta }\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \left( \beta SI-\delta I-\mu _2 I-\gamma I\right) \\&\quad =\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\beta SI\\&\qquad -(\dfrac{rS_0}{K}+\delta +\mu _2)\dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) I\\&\quad =\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left[ \beta \left( S-S_0\right) +\beta S_0\right. \\&\qquad \left. -\left( \delta +\mu _2+\gamma \right) \right] I\\&\quad =\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left[ \beta S_0-\left( \delta +\mu _2+\gamma \right) \right] I\\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \left( S-S_0\right) I\\&\quad =\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) \left( R_0-1\right) I\\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \left( S-S_0\right) I. \end{aligned} \end{aligned}$$
(3.8)

Therefore, we obtain

$$\begin{aligned}&\left[ \left( S-S_0\right) +I\right] \left[ \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I+rS\left( 1-\dfrac{S}{K}\right) \right. \nonumber \\&\qquad \left. -\mu _1 S-\mu _2 I-\delta I\right] \nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\nonumber \\&\qquad \times \left( \mathrm{d}_I \Delta I+\beta SI-\delta I-\mu _2 I-\gamma I\right) \nonumber \\&\quad =\left[ \left( S-S_0\right) +I\right] \left( \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I\right) \nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\mathrm{d}_I \Delta I\nonumber \\&\qquad -\dfrac{rS}{K}(S-S_0)^2 -(S-S_0)(\delta +\mu _2)I\nonumber \\&\quad -\dfrac{rSI}{K}\left( S-S_0\right) -\left( \delta +\mu _2\right) I^2\nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) \left( R_0-1\right) I\nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \left( S-S_0\right) I\nonumber \\&\quad =\left[ \left( S-S_0\right) +I\right] \left( \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I\right) \nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\mathrm{d}_I \Delta I\nonumber \\&\qquad -\dfrac{rS}{K}\left( S-S_0\right) ^2-\dfrac{r}{K}I\left( S-S_0\right) ^2\nonumber \\&\qquad -\left( \delta +\mu _2\right) I^2\nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) \left( R_0-1\right) I. \end{aligned}$$
(3.9)

According to the condition \(\mathrm{d}_S=\mathrm{d}_I\) and integration by parts together with the Neumann boundary conditions, we obtain

$$\begin{aligned} \dfrac{\partial {\mathbb {V}}}{\partial t}&=\int _{\Omega }\left\{ \left[ \left( S-S_0\right) +I\right] \left[ \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I\right. \right. \nonumber \\&\quad \left. + rS\left( 1-\dfrac{S}{K}\right) -\mu _1 S-\mu _2 I-\delta I\right] \nonumber \\&\quad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \mathrm{d}_I \Delta I\right. \nonumber \\&\qquad \left. \left. +\beta SI-\delta I-\mu _2 I-\gamma I\right) \right\} \mathrm{d}x\nonumber \\&\quad =\int _{\Omega }\bigg \{\left[ \left( S-S_0\right) +I\right] \left( \mathrm{d}_S \Delta S +\mathrm{d}_I \Delta I\right) \nonumber \\&\qquad +\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\mathrm{d}_I \Delta I\nonumber \\&\qquad -\dfrac{rS}{K}\left( S-S_0\right) ^2\nonumber \\&\qquad -\dfrac{r}{K}I\left( S-S_0\right) ^2-\left( \delta +\mu _2\right) I^2\nonumber \\&\qquad -\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) (1-R_0)I\bigg \} \mathrm{d}x\nonumber \\&\quad =\int _{\Omega } \left[ -\dfrac{rS}{K}\left( S-S_0\right) ^2-\dfrac{r}{K}I\left( S-S_0\right) ^2 -\left( \delta +\mu _2\right) I^2\right. \nonumber \\&\qquad \left. -\left( \dfrac{rS_0}{K}+\delta +\mu _2\right) \dfrac{1}{\beta }\left( \delta +\mu _2+\gamma \right) \left( 1-R_0\right) I\right] \mathrm{d}x\nonumber \\&\qquad -\int _{\Omega }\mathrm{d}_{SI}|\nabla S+\nabla I|^2\mathrm{d}x, \end{aligned}$$
(3.10)

where \( \mathrm{d}_{SI}=\mathrm{d}_S=\mathrm{d}_I.\)

In other words, if \(0<R_0<1\), then \(\dfrac{\partial {\mathbb {V}}}{\partial t}\le 0\), and \(\dfrac{\partial {\mathbb {V}}}{\partial t}= 0\) if and only if \(S=S_0\) and \(I=0\). According to the LaSalle invariance principle, the rumor-eliminating equilibrium point \(E_0\) is globally asymptotically stable. \(\square \)
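
As a numerical illustration of Theorem 3.4, the sketch below (Python) integrates the constant-coefficient version of (1.2) with \(\mathrm{d}_S=\mathrm{d}_I\) and an assumed parameter set for which \(R_0<1\), and monitors the discretized Lyapunov functional \({\mathbb {V}}(t)\); up to discretization error it should be non-increasing along the trajectory. All numbers are illustrative assumptions.

```python
import numpy as np

# Sketch: monitor the Lyapunov functional of Theorem 3.4 along a simulated
# trajectory with dS = dI and R_0 < 1.  All parameters are illustrative.
L_dom, N = 10.0, 201
x = np.linspace(0.0, L_dom, N)
dx = x[1] - x[0]
dS = dI = 0.2
r, K, beta = 1.0, 5.0, 0.1
mu1, mu2, delta, gamma = 0.1, 0.1, 0.3, 0.2

S0_eq = K * (1.0 - mu1 / r)                               # rumor-eliminating state S_0
R0 = beta * K * (r - mu1) / (r * (delta + mu2 + gamma))   # here R_0 = 0.75 < 1

def lap(u):                                               # Neumann (zero-flux) Laplacian
    v = np.empty_like(u)
    v[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    v[0], v[-1] = 2 * (u[1] - u[0]) / dx**2, 2 * (u[-2] - u[-1]) / dx**2
    return v

def lyapunov(S, I):
    V = 0.5 * ((S - S0_eq) + I) ** 2 + (r * S0_eq / K + delta + mu2) * I / beta
    return np.trapz(V, x)

S = 3.0 + 0.5 * np.cos(np.pi * x / L_dom)
I = 0.5 * np.exp(-((x - L_dom / 2) ** 2))
dt, steps = 1e-3, 30000
values = [lyapunov(S, I)]
for _ in range(steps):
    dSdt = dS * lap(S) + r * S * (1 - S / K) - beta * S * I - mu1 * S + gamma * I
    dIdt = dI * lap(I) + beta * S * I - delta * I - mu2 * I - gamma * I
    S, I = S + dt * dSdt, I + dt * dIdt
    values.append(lyapunov(S, I))

print("R_0 = %.2f" % R0)
print("max increase of V over one step: %.3e  (<= 0 expected up to discretization error)"
      % np.max(np.diff(values)))
```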

3.3 Optimal control

In the following subsection, we will discuss the optimal control problem for the spatially homogeneous system (1.1) based on Refs. [51,52,53,54,55,56,57,58].

Firstly, we introduce a control variable u(t, x), which represents the rumor clarification rate at time t and location x. Therefore, system (1.1) can be rewritten as follows:

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial S}{\partial t}-\mathrm{d}_{S}\Delta S=r S\bigg (1-\dfrac{S}{K}\bigg )-\beta S I-\mu _1 S+\gamma I,\\ \dfrac{\partial I}{\partial t}-\mathrm{d}_{I}\Delta I=\beta S I-\delta I-\mu _2 I -\gamma I-uI,\text { a.e on } Q=[0,T]\times \Omega ,\\ \dfrac{\partial R}{\partial t}-\mathrm{d}_{R}\Delta R=\delta I-\mu _3 R +uI,\\ \end{array}\right. \end{aligned}$$
(3.11)

with the homogeneous Neumann boundary conditions

$$\begin{aligned} \dfrac{\partial S}{\partial \nu }=\dfrac{\partial I}{\partial \nu }=\dfrac{\partial R}{\partial \nu }=0 \text { a.e. on } \Sigma =(0,T)\times \partial \Omega \end{aligned}$$

and the corresponding initial conditions

$$\begin{aligned}&S(0,x)=S^0>0, I(0,x)=I^0>0,\\&R(0,x)=R^0>0, x\in \Omega . \end{aligned}$$

Denote the admissible control set by

$$\begin{aligned} {\mathcal {U}}=\left\{ u\in L^2(Q),0\le u(t,x)\le 1 \text { a.e. on Q}\right\} . \end{aligned}$$

Our aim is to maximize the density of R and minimize the density of I on \(\Omega \) at the terminal time with minimal control cost. In other words, we want to minimize the following objective functional

$$\begin{aligned} \Phi (S,I,u)=\int _{\Omega }(I-R)(T,x)\mathrm{d}x+\int _Q u(t,x)\mathrm{d}t\mathrm{d}x. \end{aligned}$$
(3.12)
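
For later numerical work, the objective functional (3.12) can be evaluated on a space-time grid with simple quadrature. The sketch below (Python) shows one such discretization; the fields used there are random placeholders standing in for a discretized solution of the controlled system (3.11) and an admissible control, so only the structure of the computation is meaningful.

```python
import numpy as np

# Sketch: discrete evaluation of the objective functional (3.12),
#   Phi(S, I, u) = int_Omega (I - R)(T, x) dx + int_Q u(t, x) dt dx,
# given space-time grids of the state and control.
Nt, Nx = 101, 201
t = np.linspace(0.0, 20.0, Nt)          # time grid on [0, T] (assumed T = 20)
x = np.linspace(0.0, 10.0, Nx)          # space grid on Omega = (0, L) (assumed L = 10)

rng = np.random.default_rng(0)
I_field = rng.random((Nt, Nx))          # placeholder for I(t, x)
R_field = rng.random((Nt, Nx))          # placeholder for R(t, x)
u_field = np.clip(rng.random((Nt, Nx)), 0.0, 1.0)   # admissible control, 0 <= u <= 1

def objective(I_field, R_field, u_field):
    terminal = np.trapz(I_field[-1] - R_field[-1], x)           # int_Omega (I - R)(T, x) dx
    control_cost = np.trapz(np.trapz(u_field, x, axis=1), t)    # int_Q u dt dx
    return terminal + control_cost

print("Phi(S, I, u) ~ %.3f" % objective(I_field, R_field, u_field))
```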

Consider the working space in this part to be the Hilbert space \(H=L^2(\Omega )^3\), and denote by \(y=(S,I,R)\) the solution of system (3.11) with initial value \(y^0=(S^0,I^0,R^0)\). Further, define the linear operator \(A:D(A)\subset H\rightarrow H\), \(Ay=(\mathrm{d}_S\Delta S, \mathrm{d}_I\Delta I, \mathrm{d}_R\Delta R)\), where \(y\in D(A)=\bigg \{y=(S,I,R)\in (H^2(\Omega ))^3,\dfrac{\partial S}{\partial \nu } =\dfrac{\partial I}{\partial \nu }=\dfrac{\partial R}{\partial \nu }=0\bigg \}\). Then, system (3.11) can be rewritten as

$$\begin{aligned} \left\{ \begin{array}{lll} y'(t)=Ay(t)+f(t,y(t)),t\in [0,T],\\ y(0)=y^0=(S^0,I^0,R^0),\\ \end{array}\right. \end{aligned}$$
(3.13)

where

$$\begin{aligned} \left\{ \begin{array}{lll} f_1(t,y)=r S\bigg (1-\dfrac{S}{K}\bigg )-\beta S I-\mu _1 S+\gamma I,\\ f_2(t,y)=\beta S I-\delta I-\mu _2 I -\gamma I-uI,t\in [0,T], y\in D(f),\\ f_3(t,y)=\delta I-\mu _3 R +uI.\\ \end{array}\right. \end{aligned}$$
(3.14)

For some large enough positive integer N, denote

$$\begin{aligned} f^N=\left\{ \begin{array}{lll} N,\text {if } f(t,y)> N,\\ f(t,y), \text {if } -N\le f(t,y) \le N,\\ -N, \text {if } f(t,y)< -N.\\ \end{array}\right. \end{aligned}$$
(3.15)

It is easy to prove that \(f^N\) is Lipschitz continuous in y uniformly with respect to \(t\in [0,T].\) Thus, the existence of the solution for system (3.11) can be obtained in the following lemma based on Refs. [53, 54, 56].

Lemma 3.1

Suppose that \(y^0\) satisfies the condition

$$\begin{aligned} \left( Q_1\right) \;S^0, I^0, R^0\in H^2(\Omega ),\; \partial _\nu S^0=\partial _\nu I^0=\partial _\nu R^0=0\;\text{ a.e. on }\;\partial \Omega ,\; S^0,I^0,R^0>0\;\text{ in }\;\Omega . \end{aligned}$$

Then, system (3.11) admits a unique strong positive solution \(y=(S, I, R)\in W^{1,2}(0,T;H)\) satisfying \(S(t,x),\;I(t,x)\) and \( R(t,x)\in L^2(0,T;H^2(\Omega ))\cap L^{\infty }(0,T;H^1(\Omega ))\cap L^{\infty }(Q).\) Moreover, there exists a constant \(cc>0\) independent of u and y satisfying

$$\begin{aligned} \begin{aligned}&\bigg \Vert \dfrac{\partial S}{\partial t}\bigg \Vert _{L^2(Q)}+\Vert S\Vert _{L^2\left( 0,T;H^2(\Omega )\right) }+\Vert S\Vert _{L^{\infty }(\Omega )}\le cc,\\&\bigg \Vert \dfrac{\partial I}{\partial t}\bigg \Vert _{L^2(Q)}+\Vert I\Vert _{L^2\left( 0,T;H^2(\Omega )\right) }+\Vert I\Vert _{L^{\infty }(\Omega )}\le cc,\\&\bigg \Vert \dfrac{\partial R}{\partial t}\bigg \Vert _{L^2(Q)}+\Vert R\Vert _{L^2\left( 0,T;H^2(\Omega )\right) }+\Vert R\Vert _{L^{\infty }(\Omega )}\le cc, \end{aligned} \end{aligned}$$
(3.16)

and

$$\begin{aligned} \begin{aligned} \Vert S\Vert _{H^1(\Omega )}\le cc, \Vert I\Vert _{H^1(\Omega )}\le cc, \Vert R\Vert _{H^1(\Omega )}\le cc. \end{aligned} \end{aligned}$$
(3.17)

Theorem 3.5

If \(Q_1\) holds, then there exists an optimal state \(y_\star =(S_\star , I_\star , R_\star )\) for system (3.11) with the corresponding optimal control \(u^*\).

Proof

The proof uses standard methods, so we only give a sketch [54,55,56]. It is not difficult to find that there exist a minimizing sequence \(u_n\in {\mathcal {U}}\) and corresponding solutions \(y_n=(S_n, I_n, R_n)\) of the following system

$$\begin{aligned} \left\{ \begin{array}{lll} \partial _t S_n=\mathrm{d}_{S}\Delta S_n+r S_n\left( 1-\dfrac{S_n}{K}\right) -\beta S_n I_n-\mu _1 S_n+\gamma I_n,\\ \partial _t I_n=\mathrm{d}_{I}\Delta I_n+\beta S_n I_n-\delta I_n-\mu _2 I_n -\gamma I_n-u_nI_n,\text { a.e on } Q,\\ \partial _t R_n=\mathrm{d}_{R}\Delta R_n+\delta I_n-\mu _3 R_n +u_nI_n,\\ \end{array}\right. \end{aligned}$$
(3.18)

with the homogeneous Neumann boundary condition

$$\begin{aligned} \dfrac{\partial S_n}{\partial \nu }=\dfrac{\partial I_n}{\partial \nu }=\dfrac{\partial R_n}{\partial \nu }=0 \text { a.e on } \Sigma , \end{aligned}$$
(3.19)

and the initial conditions

$$\begin{aligned}&S_n(0,x)=S^0, I_n(0,x)=I^0,\nonumber \\&R_n(0,x)=R^0 \text { a.e on } \Omega . \end{aligned}$$
(3.20)

In addition,

$$\begin{aligned} \inf \limits _{u\in {\mathcal {U}}}{\Phi (S,I,u)}\le&{\Phi \left( S_n,I_n,u_n\right) }\le \inf \limits _{u\in {\mathcal {U}}}{\Phi (S,I,u)}\\&+\frac{1}{n}\; \text { for} \;\forall n\ge 1. \end{aligned}$$

Combining the Arzelà–Ascoli theorem [59], there exists a subsequence (still denoted by \(S_n, I_n, R_n\)) such that \(S_n(t)\rightarrow S_\star , I_n(t)\rightarrow I_\star , R_n(t)\rightarrow R_\star \) as \(n\rightarrow \infty .\) Further, by the estimates for \(S_n\), \(I_n\), \(R_n\) from Lemma 3.1 and \(u_n\in {\mathcal {U}}\), we obtain that

$$\begin{aligned}&\Delta S_n\rightharpoonup \Delta S_\star \text { weakly in } L^2(Q),\\&\Delta I_n\rightharpoonup \Delta I_\star \text { weakly in } L^2(Q),\\&\Delta R_n\rightharpoonup \Delta R_\star \text { weakly in } L^2(Q),\\&\dfrac{\partial S_n}{\partial t}\rightharpoonup \dfrac{\partial S_\star }{\partial t} \text { weakly in } L^2(Q),\\&\dfrac{\partial I_n}{\partial t}\rightharpoonup \dfrac{\partial I_\star }{\partial t} \text { weakly in } L^2(Q),\\&\dfrac{\partial R_n}{\partial t}\rightharpoonup \dfrac{\partial R_\star }{\partial t} \text { weakly in } L^2(Q),\\&S_n \rightharpoonup S_\star \text { weakly-star in } L^{\infty }([0,T];H^1(\Omega )),\\&I_n \rightharpoonup I_\star \text { weakly-star in } L^{\infty }([0,T];H^1(\Omega )),\\&R_n \rightharpoonup R_\star \text { weakly-star in } L^{\infty }([0,T];H^1(\Omega )),\\&S_n \rightharpoonup S_\star \text { weakly in } L^2([0,T];H^2(\Omega )),\\&I_n \rightharpoonup I_\star \text { weakly in } L^2([0,T];H^2(\Omega )),\\&R_n \rightharpoonup R_\star \text { weakly in } L^2([0,T];H^2(\Omega )), \end{aligned}$$

and

$$\begin{aligned} u_n\rightharpoonup u^* \text { weakly in } L^2(Q). \end{aligned}$$

A further calculation shows that

$$\begin{aligned}&S_nI_n-S_\star I_\star =\left( S_n-S_\star \right) I_n+\left( I_n-I_\star \right) S_\star ,\\&rS_n\left( 1-\dfrac{S_n}{K}\right) -rS_\star \left( 1-\dfrac{S_\star }{K}\right) \\&\quad =\left[ r-\dfrac{r}{K}\left( S_n+S_\star \right) \right] \left( S_n-S_\star \right) . \end{aligned}$$

Thus, according to the above conclusion, we have

$$\begin{aligned} S_nI_n\rightarrow & {} S_\star I_\star \text { in } L^2(Q),\\ rS_n\left( 1-\dfrac{S_n}{K}\right)\rightarrow & {} rS_\star \left( 1-\dfrac{S_\star }{K}\right) \text { in } L^2(Q), \end{aligned}$$

and

$$\begin{aligned} u_nI_n\rightarrow u^*I_\star \text { in } L^2(Q). \end{aligned}$$

Letting \(n \rightarrow \infty \), it follows that \((u^*, S_\star , I_\star , R_\star )\) is an optimal solution of problem (3.11)–(3.12). \(\square \)

Next, we apply the standard method to derive the first-order necessary condition for the optimal control problem (3.11). Define \(z_1^{\varepsilon }=(S^{\varepsilon }-S_\star )/{\varepsilon }\), \(z_2^{\varepsilon }=(I^{\varepsilon }-I_\star )/{\varepsilon }\), \(z_3^{\varepsilon }=(R^{\varepsilon }-R_\star )/{\varepsilon }\), where \((S_\star ,I_\star ,R_\star )\) is the optimal state with the optimal control \(u^*\), and \((S^{\varepsilon }, I^{\varepsilon }, R^{\varepsilon })\) is the solution of system (3.11) with \(u^{\varepsilon }=u^*+\varepsilon u_0 \in {\mathcal {U}}\), \(u_0\in L^2(0, T)\) and \(\varepsilon >0\). Then, \(z^{\varepsilon }=(z_1^{\varepsilon },z_2^{\varepsilon },z_3^{\varepsilon })\) satisfies

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial z_1^{\varepsilon }}{\partial t}=\mathrm{d}_S\Delta z_1^{\varepsilon } +rz_1^{\varepsilon }-\dfrac{r}{K}(S^{\varepsilon }+S_\star )z_1^{\varepsilon }-\beta z_1^{\varepsilon }I^{\varepsilon }-\beta S_\star z_2^{\varepsilon }-\mu _1 z_1^{\varepsilon }+\gamma z_2^{\varepsilon },\\ \dfrac{\partial z_2^{\varepsilon }}{\partial t}=\mathrm{d}_I\Delta z_2^{\varepsilon }+\beta z_1^{\varepsilon }I^{\varepsilon }+\beta S_\star z_2^{\varepsilon }-\delta z_2^{\varepsilon }-\mu _2 z_2^{\varepsilon }-\gamma z_2^{\varepsilon }-u^*z_2^{\varepsilon }-u_0I^{\varepsilon },\\ \dfrac{\partial z_3^{\varepsilon }}{\partial t}=\mathrm{d}_R\Delta z_3^{\varepsilon }+\delta z_2^{\varepsilon }-\mu _3 z_3^{\varepsilon }+u^* z_2^{\varepsilon }+u_0I^{\varepsilon },\\ \end{array}\right. \end{aligned}$$
(3.21)

a.e on Q, with the homogeneous Neumann boundary condition

$$\begin{aligned} \dfrac{\partial z_1^{\varepsilon }}{\partial \nu }=\dfrac{\partial z_2^{\varepsilon }}{\partial \nu }=\dfrac{\partial z_3^{\varepsilon }}{\partial \nu }=0 \text { a.e on }\Sigma , \end{aligned}$$

and the initial condition

$$\begin{aligned} z_1^{\varepsilon }(0,x)=z_2^{\varepsilon }(0,x)=z_3^{\varepsilon }(0,x)=0 \text { a.e on } \Omega . \end{aligned}$$

Similar to Theorem 3.5, it is easy to prove that \(\lim \limits _{\varepsilon \rightarrow 0}S^{\varepsilon }=S_\star \), \(\lim \limits _{\varepsilon \rightarrow 0}I^{\varepsilon }=I_\star \), \(\lim \limits _{\varepsilon \rightarrow 0}R^{\varepsilon }=R_\star \) and \(\lim \limits _{\varepsilon \rightarrow 0}z_i^{\varepsilon }=z_i\), \((i=1,2,3)\) in \(L^2(Q)\), where \(z=(z_1, z_2, z_3)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial z_1}{\partial t}=\mathrm{d}_S\Delta z_1 +rz_1-\dfrac{2rS_\star }{K}z_1-\beta z_1I_\star -\beta S_\star z_2-\mu _1 z_1+\gamma z_2,\\ \dfrac{\partial z_2}{\partial t}=\mathrm{d}_I\Delta z_2+\beta z_1I_\star +\beta S_\star z_2-\delta z_2-\mu _2 z_2-\gamma z_2-u^*z_2-u_0I_\star ,\\ \dfrac{\partial z_3}{\partial t}=\mathrm{d}_R\Delta z_3+\delta z_2-\mu _3 z_3+u^* z_2+u_0I_\star ,\\ \end{array}\right. \end{aligned}$$
(3.22)

a.e on Q, with the homogeneous Neumann boundary condition

$$\begin{aligned} \dfrac{\partial z_1}{\partial \nu }=\dfrac{\partial z_2}{\partial \nu }=\dfrac{\partial z_3}{\partial \nu }=0 \text { a.e on }\Sigma , \end{aligned}$$

and the initial condition

$$\begin{aligned} z_1(0,x)=z_2(0,x)=z_3(0,x)=0 \text { a.e on } \Omega . \end{aligned}$$

In the following, we will establish the adjoint system corresponding to system (3.11). Let \(p=(p_1, p_2, p_3)\) be the adjoint variable; then the adjoint system can be written as

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial p_1}{\partial t}=-\mathrm{d}_S\Delta p_1-\bigg (r-\dfrac{2rS_\star }{K}-\beta I_\star -\mu _1\bigg )p_1-\beta I_\star p_2,\\ \dfrac{\partial p_2}{\partial t}=-\mathrm{d}_I\Delta p_2+\left( \beta S_\star -\gamma \right) p_1-\left( \beta S_\star -\delta -\mu _2-\gamma -u\right) p_2-(\delta +u)p_3,\\ \dfrac{\partial p_3}{\partial t}=-\mathrm{d}_R\Delta p_3+\mu _3 p_3,\\ \end{array}\right. \end{aligned}$$
(3.23)

a.e on Q, with the homogeneous Neumann boundary condition

$$\begin{aligned} \dfrac{\partial p_1}{\partial \nu }=\dfrac{\partial p_2}{\partial \nu }=\dfrac{\partial p_3}{\partial \nu }=0 \text { a.e on } \Sigma , \end{aligned}$$
(3.24)

and the terminal condition

$$\begin{aligned} p_1(T,x)=0, p_2(T,x)=-1, p_3(T,x)=1 \text { a.e on } \Omega , \end{aligned}$$
(3.25)

where \((S_\star ,I_\star ,R_\star )\) is the optimal state with the optimal control \(u^*\). Similar to Theorem 3.5, it is not difficult to obtain the existence of a strong solution for system (3.23).

Theorem 3.6

Suppose that \((S_\star , I_\star , R_\star , u^*)\) is an optimal solution of problem (3.11)–(3.12). Then the optimal control function \(u^*\) can be written as

$$\begin{aligned} u^*(t)=\left\{ \begin{array}{lll} 1, \text { if } I_\star (p_2-p_3)(t,x)+1<0, \\ 0, \text { if } I_\star (p_2-p_3)(t,x)+1\ge 0, \\ \end{array}\right. \end{aligned}$$
(3.26)

where \(p_2\) and \(p_3\) satisfy system (3.23).

Proof

According to \(\Phi (I_\star , R_\star , u^*)\le \Phi (I^\varepsilon , R^\varepsilon , u^\varepsilon )\), letting \(\varepsilon \rightarrow 0\) we have

$$\begin{aligned} \int _{\Omega }[z_2(T,x)-z_3(T,x)]\mathrm{d}x+\int _Qu_0\mathrm{d}t\mathrm{d}x\ge 0, \end{aligned}$$

Multiplying both sides of the equations in (3.23) and (3.22) by \(z_1, z_2, z_3, p_1, p_2, p_3\), respectively, and after a simple calculation, we obtain

$$\begin{aligned} \begin{aligned}&z_1\dfrac{\partial p_1}{\partial t}+z_2\dfrac{\partial p_2}{\partial t}+z_3\dfrac{\partial p_3}{\partial t}+ p_1\dfrac{\partial z_1}{\partial t}\\&\qquad +p_2\dfrac{\partial z_2}{\partial t}+p_3\dfrac{\partial z_3}{\partial t}\\&\quad =\mathrm{d}_S\left( p_1\Delta z_1-z_1\Delta p_1\right) +\mathrm{d}_I\left( p_2\Delta z_2-z_2\Delta p_2\right) \\&\qquad +\mathrm{d}_R\left( p_3\Delta z_3-z_3\Delta p_3\right) -u_0p_2I_\star +u_0p_3I_\star . \end{aligned} \end{aligned}$$
(3.27)

Integrating both sides of Eq. (3.27) over \(Q\) and applying Green's formula, a straightforward calculation gives

$$\begin{aligned} \begin{aligned}&\int _{\Omega }\int _0^T\left( z_1\dfrac{\partial p_1}{\partial t}+z_2\dfrac{\partial p_2}{\partial t}+z_3\dfrac{\partial p_3}{\partial t}\right. \\&\quad \left. + p_1\dfrac{\partial z_1}{\partial t}+p_2\dfrac{\partial z_2}{\partial t}+p_3\dfrac{\partial z_3}{\partial t}\right) \mathrm{d}t\mathrm{d}x\\&\quad +\int _0^T\int _{\Omega }u_0I_\star (p_2-p_3)\mathrm{d}t\mathrm{d}x=0. \end{aligned} \end{aligned}$$
(3.28)
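The diffusion terms disappear after this integration because of Green's second identity together with the homogeneous Neumann boundary conditions; for the pair \((p_1, z_1)\), for instance,

$$\begin{aligned} \int _{\Omega }\left( p_1\Delta z_1-z_1\Delta p_1\right) \mathrm{d}x=\int _{\partial \Omega }\left( p_1\dfrac{\partial z_1}{\partial \nu }-z_1\dfrac{\partial p_1}{\partial \nu }\right) \mathrm{d}S=0, \end{aligned}$$

and the same holds for the pairs \((p_2, z_2)\) and \((p_3, z_3)\).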

Thus,

$$\begin{aligned}&\int _Qu_0I_\star \left( p_2-p_3\right) \mathrm{d}t\mathrm{d}x+\int _Qu_0\mathrm{d}t\mathrm{d}x\\&\quad = \int _{\Omega }\left( z_2-z_3\right) (T, x)\mathrm{d}x+\int _Qu_0\mathrm{d}t\mathrm{d}x\ge 0. \end{aligned}$$

Taking \(u_0={\tilde{u}}-u^*\) with arbitrary \({\tilde{u}}\in {\mathcal {U}}\), we then have

$$\begin{aligned}&\int _{Q}\left( {\tilde{u}}-u^*\right) \left[ I_\star \left( p_2-p_3\right) (t,x)+1\right] \mathrm{d}t\mathrm{d}x\\&\quad \ge 0, \forall {\tilde{u}}\in {\mathcal {U}}. \end{aligned}$$

Clearly, when \(u^*\) is chosen in the form (3.26), the above inequality holds. This completes the proof. \(\square \)

4 Spatially homogeneous model with time delay

In Sect. 1, we mentioned that, in order to better reflect reality, it is necessary to add a time delay to the model [38]. Considering that the growth of the rumor–susceptible population is delayed due to the influence of the network environment, we establish the following spatially homogeneous model with time delay \(\tau \)

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial S}{\partial t}-\mathrm{d}_{S}\Delta S=r S\bigg (1-\dfrac{S(t-\tau )}{K}\bigg )-\beta S I-\mu _1 S+\gamma I, t>0,x\in \Omega ,\\ \dfrac{\partial I}{\partial t}-\mathrm{d}_{I}\Delta I=\beta S I-\delta I-\mu _2 I -\gamma I, t>0,x\in \Omega ,\\ \dfrac{\partial R}{\partial t}-\mathrm{d}_{R}\Delta R=\delta I-\mu _3 R, t>0,x\in \Omega ,\\ \dfrac{\partial S}{\partial \nu }=\dfrac{\partial I}{\partial \nu }=\dfrac{\partial R}{\partial \nu }=0,t>0,x\in \partial \Omega ,\\ S(0,x)=S(\theta ,x)\ge 0,I(0,x)=I(\theta ,x)\ge 0, \not \equiv 0, R(0,x)=R(\theta ,x)\ge 0, (\theta ,x)\in [-\tau ,0]\times {\overline{\Omega }}. \end{array}\right. \end{aligned}$$
(4.1)

Similarly, system (4.1) can be simplified as follows

$$\begin{aligned} \left\{ \begin{array}{lll} \dfrac{\partial S}{\partial t}-\mathrm{d}_{S}\Delta S=r S\bigg (1-\dfrac{S(t-\tau )}{K }\bigg )-\beta S I-\mu _1 S+\gamma I,t>0,x\in \Omega ,\\ \dfrac{\partial I}{\partial t}-\mathrm{d}_{I}\Delta I=\beta S I-\delta I-\mu _2 I -\gamma I,t>0,x\in \Omega ,\\ \dfrac{\partial S}{\partial \nu }=\dfrac{\partial I}{\partial \nu }=0,t>0,x\in \partial \Omega ,\\ S(0,x)=S(\theta ,x)\ge 0,I(0,x)=I(\theta ,x)\ge 0, \not \equiv 0, (\theta ,x)\in [-\tau ,0]\times {\overline{\Omega }}.\\ \end{array}\right. \end{aligned}$$
(4.2)

In the following, we will study the stability and Hopf bifurcation of the rumor-spreading equilibrium point \(E_*\) for system (4.2). Denote \(U(t)=(u_1(t), u_2(t))^{\top }=(S(t,\cdot ), I(t,\cdot ))^{\top }\); then system (4.2) can be rewritten as the following abstract differential equation in the phase space \({\mathcal {C}}=C([-\tau ,0],X)\)

$$\begin{aligned} \begin{aligned} \dfrac{d U(t)}{d t}={\mathbf {D}}\Delta U(t)+{\mathbf {L}}(U_t)+{\mathbf {G}}(U_t), \end{aligned} \end{aligned}$$
(4.3)

where \({\mathbf {D}}=\mathrm{diag}\{\mathrm{d}_S, \mathrm{d}_I\}\), \(U_t(\theta )=U(t+\theta )\), \(-\tau \le \theta \le 0\), and \({\mathbf {L}}:{\mathcal {C}}\rightarrow X\), \({\mathbf {G}}:{\mathcal {C}}\rightarrow X\) are given, respectively, by

$$\begin{aligned} {\mathbf {L}}(\psi )=\left( \begin{array}{lc} \ \bigg [r\bigg (1-\dfrac{S_*}{K}\bigg )-\beta I_*-\mu _1\bigg ]\psi _1(0)-\dfrac{rS_*}{K}\psi _1(-\tau )-(\delta +\mu _2)\psi _2(0) \\ \ \quad \quad \quad \quad \beta I_*\psi _1(0)-(\beta S_*-\delta -\mu _2 -\gamma )\psi _2(0)\\ \end{array} \right) \end{aligned}$$
(4.4)

and

$$\begin{aligned} {\mathbf {G}}(\psi )=\left( \begin{array}{lc} \ -\dfrac{r}{K}\psi _1(0)\psi _1(-\tau )-\beta \psi _1(0)\psi _2(0) \\ \ \quad \quad \quad \beta \psi _1(0)\psi _2(0)\\ \end{array} \right) , \end{aligned}$$
(4.5)

for \(\psi (\theta )=U_t(\theta )\), \(\psi =(\psi _1, \psi _2)^{\top }\). The characteristic equation for the linearized system of (4.3) at (0, 0) is

$$\begin{aligned} \begin{aligned} \lambda y-{\mathbf {D}}\Delta y-{\mathbf {L}}(e^{\lambda \cdot }y)=0, \end{aligned} \end{aligned}$$
(4.6)

where \(y\in dom(\Delta )\) and \(y\ne 0\), \(dom(\Delta )\subset X\).

The operator \(\Delta \) on X has the eigenvalues \(-\dfrac{k^2\pi ^2}{L^2}\), (\(k=0,1,2...\)), and the corresponding eigenfunctions are

$$\begin{aligned} \alpha _k^1=\left( \begin{array}{lc} \ \zeta _k\\ \ 0 \\ \end{array} \right) , \alpha _k^2=\left( \begin{array}{lc} \ 0 \\ \ \zeta _k \\ \end{array} \right) , \end{aligned}$$
(4.7)

where \(\zeta _k=\cos \bigg (\dfrac{k\pi }{L}x\bigg ).\) Obviously, \(\{(\alpha _k^1, \alpha _k^2)\}_{k=0}^{\infty }\) forms a basis of the space X. Thus, any element y in X can be expanded as follows

$$\begin{aligned} y=\sum _{k=0}^{\infty }Y_k^{\top }\left( \begin{array}{lc} \ \alpha _k^1\\ \ \alpha _k^2 \\ \end{array} \right) , Y_k=\left( \begin{array}{lc} \ \langle y, \alpha _k^1\rangle \\ \ \langle y, \alpha _k^2\rangle \\ \end{array} \right) . \end{aligned}$$
(4.8)

Then, we have

$$\begin{aligned}&L\left( \psi ^{\top }\left( \alpha _k^1, \alpha _k^2\right) ^{\top }\right) \nonumber \\&\quad =L^{\top }(\psi )\left( \alpha _k^1, \alpha _k^2\right) , k=0,1,2\ldots . \end{aligned}$$
(4.9)

Thus, Eq. (4.6) can be written as

$$\begin{aligned}&\sum _{k=0}^{\infty }\left[ \lambda I_2+{\mathbf {D}}\dfrac{k^2\pi ^2}{L^2}- \left( \begin{array}{lc} \ A_k+B_ke^{-\lambda \tau } &{} C_k\\ \ \quad D_k &{} E_k+F_k\\ \end{array} \right) \ \right] \nonumber \\&\quad \left( \begin{array}{lc} \ \alpha _k^1\\ \ \alpha _k^2\\ \end{array} \right) =0, \end{aligned}$$
(4.10)

where

$$\begin{aligned} A_k= & {} \left[ r\left( 1-\dfrac{S_*}{K}\right) -\beta I_*-\mu _1\right] \\&-\mathrm{d}_S \dfrac{k^2\pi ^2}{L^2},\quad B_k=-\dfrac{rS_*}{K},\quad C_k=-\left( \delta +\mu _2\right) ,\\ D_k= & {} \beta I_*,\quad E_k=-\left( \delta +\mu _2 +\gamma \right) -\mathrm{d}_I \dfrac{k^2\pi ^2}{L^2},\quad F_k\\= & {} \beta S_*=\delta +\mu _2 +\gamma , \;k=0,1,2\cdots . \end{aligned}$$

Therefore, the eigenvalue equation is

$$\begin{aligned} \begin{aligned}&\lambda ^2+\left( -A_k-E_k-F_k\right) \lambda +A_kE_k\\&\quad +A_kF_k-C_kD_k\\&\quad +e^{-\lambda \tau }\left( -B_k\lambda +B_kE_k+B_kF_k\right) =0. \end{aligned} \end{aligned}$$
(4.11)

Theorem 3.3 has covered the case \(\tau =0\), so next we only discuss the case \(\tau >0\).

Theorem 4.1

Assume that \((H_{1})\;\dfrac{\mathrm{d}_S}{\mathrm{d}_I}>\dfrac{(\beta I_*+\mu _1-r)^2}{4\beta I_*(\delta +\mu _2)}\) holds. Then we have the following conclusions when \(\tau >0\).

(1) If \(\Delta _0=[2C_kD_k+(A_k^2-B_k^2)+\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}]^2-4[(A_k(E_k+F_k)+C_kD_k)^2-B_k^2(E_k+F_k)^2]<0\) holds for all \(k=0,1,2,\ldots \), then the rumor-spreading equilibrium point \(E_*\) is locally asymptotically stable for all \(\tau >0\).

(2) If \(\Delta _0>0\) and \((H_{2})\;2C_0D_0+(A_0^2-B_0^2)<0\) hold, then the rumor-spreading equilibrium point \(E_*\) is locally asymptotically stable when \(\tau \in (0,\tau ^*)\) and unstable when \(\tau \in (\tau ^*,\tau ^*+\varepsilon )\) for some \(\varepsilon >0\). That is to say, system (4.2) undergoes a Hopf bifurcation at the rumor-spreading equilibrium point \(E_*\) when \(\tau =\tau ^{*}\) for \(k\in {\mathbb {T}}\), where \({\mathbb {T}}\) is defined below.

Proof

Let \(\lambda =i\omega \) \((\omega >0)\) be a root of Eq. (4.11). Separating the real and imaginary parts, we obtain

$$\begin{aligned} \left\{ \begin{array}{lll} \omega ^2-A_kE_k-A_kF_k+C_kD_k=\left( B_kE_k+B_kF_k\right) \cos \omega \tau -B_k\omega \sin \omega \tau , \\ -\left( A_k+E_k+F_k\right) \omega =B_k\omega \cos \omega \tau +\left( B_kE_k+B_kF_k\right) \sin \omega \tau , \end{array} \right. \end{aligned}$$
(4.12)

which leads to

$$\begin{aligned} \begin{aligned}&\omega ^4+\left[ -2A_k\left( E_k+F_k\right) +2C_kD_k\right. \\&\quad \left. +\left( A_k+E_k+F_k\right) ^2-B_k^2\right] \omega ^2 \\&\quad +A_k^2\left( E_k+F_k\right) ^2+C_k^2D_k^2\\&\quad -2A_kC_kD_k\left( E_k+F_k\right) -B_k^2\left( E_k+F_k\right) ^2=0. \end{aligned} \end{aligned}$$
(4.13)
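Here (4.13) follows from squaring the two equations in (4.12) and adding them; the right-hand sides combine into

$$\begin{aligned} \left[ \left( B_kE_k+B_kF_k\right) \cos \omega \tau -B_k\omega \sin \omega \tau \right] ^2+\left[ B_k\omega \cos \omega \tau +\left( B_kE_k+B_kF_k\right) \sin \omega \tau \right] ^2=B_k^2\left( E_k+F_k\right) ^2+B_k^2\omega ^2, \end{aligned}$$

since the cross terms cancel and \(\cos ^2\omega \tau +\sin ^2\omega \tau =1\), so the delay is eliminated.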

Letting \(z=\omega ^2\), it is easy to obtain

$$\begin{aligned} \begin{aligned}&z^2+\left[ -2A_k\left( E_k+F_k\right) +2C_kD_k\right. \\&\quad \left. +\left( A_k+E_k+F_k\right) ^2-B_k^2\right] z \quad \quad \\&\quad +A_k^2\left( E_k+F_k\right) ^2+C_k^2D_k^2\\&\quad -2A_kC_kD_k\left( E_k+F_k\right) -B_k^2\left( E_k+F_k\right) ^2=0. \end{aligned} \end{aligned}$$
(4.14)

Denote

$$\begin{aligned} \begin{aligned} h(z)=&z^2+\left[ -2A_k\left( E_k+F_k\right) +2C_kD_k\right. \\&\left. +\left( A_k+E_k+F_k\right) ^2-B_k^2\right] z \\&\quad +A_k^2\left( E_k+F_k\right) ^2+C_k^2D_k^2\\&\quad -2A_kC_kD_k\left( E_k+F_k\right) -B_k^2\left( E_k+F_k\right) ^2. \end{aligned} \end{aligned}$$
(4.15)

By a direct calculation, we obtain

$$\begin{aligned} \begin{aligned}&A_k^2\left( E_k+F_k\right) ^2+C_k^2D_k^2\\&\quad -2A_kC_kD_k\left( E_k+F_k\right) -B_k^2\left( E_k+F_k\right) ^2\\&\quad =\left( A_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}+C_kD_k\right) ^2-B_k^2\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}\\&\quad =\left( A_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}+C_kD_k-B_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}\right) \\&\qquad \left( A_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}+C_kD_k+B_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}\right) \end{aligned} \end{aligned}$$
(4.16)

and

$$\begin{aligned} \begin{aligned}&-2A_k\left( E_k+F_k\right) +2C_kD_k\\&\qquad +\left( A_k+E_k+F_k\right) ^2-B_k^2\\&\quad =2C_kD_k+A_k^2-B_k^2+\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}. \end{aligned} \end{aligned}$$
(4.17)

Because \(A_k<0, B_k<0, C_k<0\) and \(D_k>0\), we obtain \(A_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}+C_kD_k+B_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}<0\). Therefore, if \(A_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}+C_kD_k-B_k\mathrm{d}_I\dfrac{k^2\pi ^2}{L^2}<0\) for all \(k\), which is ensured by

$$\begin{aligned}&\left( H_{1}\right) \;\dfrac{\mathrm{d}_S}{\mathrm{d}_I}>\dfrac{\left( A_0-B_0\right) ^2}{-4C_0D_0}\\&\quad =\dfrac{\left( \beta I_*+\mu _1-r\right) ^2}{4\beta I_*\left( \delta +\mu _2\right) }, \end{aligned}$$

we can get \(A_k^2(E_k+F_k)^2+C_k^2D_k^2-2A_kC_kD_k(E_k+F_k)-B_k^2(E_k+F_k)^2>0\). When \(k \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} 2C_kD_k+\left( A_k^2-B_k^2\right) +\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}\rightarrow \infty . \end{aligned} \end{aligned}$$
(4.18)

Thus, if \((H_{1})\) and \((H_{2})\;2C_0D_0+(A_0^2-B_0^2)<0\) hold, then there exists a constant \(k_0\in \{0,1,2,\ldots \}\) such that \(2C_kD_k+(A_k^2-B_k^2)+\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}\le 0\) for \(0\le k\le k_0\) and \(2C_kD_k+(A_k^2-B_k^2)+\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}>0\) for \(k> k_0\). In the case of \(0\le k\le k_0\), if

$$\begin{aligned} \Delta _{0}= & {} \left[ 2C_kD_k+\left( A_k^2-B_k^2\right) +\mathrm{d}_I^2\dfrac{k^4\pi ^4}{L^4}\right] ^2\\&-4\left[ \left( A_k\left( E_k+F_k\right) +C_kD_k\right) ^2\right. \\&\left. -B_k^2\left( E_k+F_k\right) ^2\right] <0, \end{aligned}$$

then Eq. (4.14) has no positive root. If \(\Delta _0\ge 0,\) then the roots of Eq. (4.14) are

$$\begin{aligned} \begin{aligned} z_k^{\pm }=\dfrac{-\left[ 2C_kD_k+\left( A_k^2-B_k^2\right) +\left( E_k+F_k\right) ^2\right] \pm \sqrt{\Delta _0}}{2}, \end{aligned} \end{aligned}$$
(4.19)

which leads to roots of (4.13)

$$\begin{aligned} \omega _k^{\pm }=\sqrt{z_k^{\pm }}. \end{aligned}$$

Fig. 1 The asymptotic behavior of the solution of system (1.2) when \(\overline{R_0}>1\)

Fig. 2 The projection drawing of \(S(t,x)\) and \(I(t,x)\) in the \(t\)–\(x\) plane when \(\mathrm{d}_S=\mathrm{d}_I=0.0001, 1, 5\)

Fig. 3 When \(R_0<1\), \(E_0\) is globally asymptotically stable

Fig. 4 The solutions S and \(p_1\) of system (3.11) and adjoint system (3.23)

Fig. 5 The solutions I and \(p_2\) of system (3.11) and adjoint system (3.23)

Fig. 6 The solutions R and \(p_3\) of system (3.11) and adjoint system (3.23)

Fig. 7 The optimal control \(u^*\) at the end of the iterative process

According to (4.12), we obtain

$$\begin{aligned} \cos \omega _k^{\pm }\tau =\dfrac{\left[ \left( \omega _k^{\pm }\right) ^2-A_kE_k-A_kF_k+C_kD_k\right] \left( B_kE_k+B_kF_k\right) -\left( A_k+E_k+F_k\right) \left( \omega _k^{\pm }\right) ^2 B_k}{\left( B_kE_k+B_kF_k\right) ^2+B_k^2\left( \omega _k^{\pm }\right) ^2} =f\left( \omega _k^{\pm }\right) . \end{aligned}$$
(4.20)

Let

$$\begin{aligned} \tau _{k0}^{\pm }=\dfrac{1}{\omega _k^{\pm }}\arccos f\left( \omega _k^{\pm }\right) \end{aligned}$$
(4.21)

and

$$\begin{aligned} \begin{aligned} \tau _{kj}^{\pm }=\tau _{k0}^{\pm }+\dfrac{2j\pi }{\omega _{k}^{\pm }},j=0,1,2...,k \in {\mathbb {T}}, \end{aligned} \end{aligned}$$
(4.22)

where

$$\begin{aligned} {\mathbb {T}}=\left\{ {\overline{k}}\in \{0,1,2,\ldots \}\cap [0,k_0]\;\big |\;\mathrm{Eq.}\;(4.14)\text { has positive roots with } k={\overline{k}}\right\} . \end{aligned}$$

Differentiating the eigenvalue equation with respect to \(\tau \), we can get

$$\begin{aligned} \begin{aligned} \mathrm{Re}\left( \dfrac{\mathrm{d}\lambda }{\mathrm{d}\tau }\right) ^{-1}\bigg |_{\lambda =i\omega _{k}^{\pm }}&=\dfrac{h'\left( \left( \omega _{k}^{\pm }\right) ^2\right) }{\left( B_kE_k+B_kF_k\right) ^2+B_k^2\left( \omega _{k}^{\pm }\right) ^2},\\&=\dfrac{\pm \sqrt{\Delta _0}}{\left( B_kE_k+B_kF_k\right) ^2+B_k^2\left( \omega _{k}^{\pm }\right) ^2}.\\ \end{aligned} \end{aligned}$$
(4.23)

Equation (4.23) implies that \(\mathrm{Re}\bigg (\dfrac{\mathrm{d}\lambda }{\mathrm{d}\tau }\bigg )^{-1}\bigg |_{\lambda =i\omega _{k}^{+}}>0\) when \(\Delta _0>0\). Then, we conclude that system (4.2) undergoes a Hopf bifurcation at the rumor-spreading equilibrium point \(E_*\) when \(\tau =\tau ^*=\min \limits _{k\in {\mathbb {T}}}\{\tau _{k0}^{+}\}\). In other words, for some small enough \(\varepsilon >0\), \(E_*\) is locally asymptotically stable when \(\tau \in [0,\tau ^*)\) and unstable when \(\tau \in (\tau ^*, \tau ^*+\varepsilon )\). \(\square \)

5 Numerical simulation

5.1 Asymptotic behavior of \(E^{*}\)

In the heterogeneous environment, we let parameters \(r=0.3, K=3.5, \mu _1=0.1, \mu _2=0.1, \delta =0.2, \gamma =0.3\) and \(\beta =0.5+0.05\sin x\). According to Ref. [60], we can estimate the basic reproduction number \(\overline{R_0}\) and get \(\overline{R_0}\ge \frac{\min \limits _{x\in {\overline{\Omega }}}\beta (x) K (r-\mu _1)}{r(\delta +\mu _2+\gamma )}=1.75>1\). In order to satisfy the condition \(\mathrm{d}_S\rightarrow 0, \mathrm{d}_I>0\), we take the diffusion coefficients \(\mathrm{d}_S=0.0001\) and \(\mathrm{d}_I=0.1\). As Fig. 1 shows, we can see the change of the rumor–susceptible and rumor–infector over time and their projections on the \(t\)–\(x\) plane. It indicates that as time goes on, the oscillation range of the solution curves gradually decreases; that is to say, system (1.2) eventually tends to a stable state. In practical terms, this means that at any given location, the numbers of rumor–susceptible and rumor–infector eventually converge to positive constants.
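For readers who wish to reproduce experiments of this kind, the following Python sketch integrates the S–I equations with the reaction terms used above, a heterogeneous \(\beta (x)\) and homogeneous Neumann boundary conditions by an explicit finite-difference scheme; the domain length, grid, final time and initial data are illustrative assumptions of ours, not the exact values behind Fig. 1.

import numpy as np

def simulate_rumor(r=0.3, K=3.5, mu1=0.1, mu2=0.1, delta=0.2, gamma=0.3,
                   dS=0.0001, dI=0.1, length=10.0, nx=200, T=100.0, dt=1e-3):
    """Explicit finite-difference sketch of the S-I equations with a
    heterogeneous spreading rate beta(x) and Neumann boundary conditions."""
    x = np.linspace(0.0, length, nx)
    dx = x[1] - x[0]
    beta = 0.5 + 0.05 * np.sin(x)                 # heterogeneous spreading rate
    S = 1.0 + 0.1 * np.cos(np.pi * x / length)    # assumed initial data
    I = 0.1 + 0.05 * np.sin(np.pi * x / length)

    def lap(u):
        # second-order Laplacian with mirrored ghost cells (Neumann condition)
        g = np.concatenate(([u[1]], u, [u[-2]]))
        return (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx**2

    for _ in range(int(T / dt)):                  # forward Euler in time
        dSdt = dS * lap(S) + r * S * (1 - S / K) - beta * S * I - mu1 * S + gamma * I
        dIdt = dI * lap(I) + beta * S * I - (delta + mu2 + gamma) * I
        S, I = S + dt * dSdt, I + dt * dIdt
    return x, S, I

x, S, I = simulate_rumor()
print(S.min(), S.max(), I.min(), I.max())

Storing snapshots of S and I at intermediate times (not shown) yields surface plots of the kind displayed in Fig. 1.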

5.2 The effect of diffusion coefficients

In order to illustrate the influence of the diffusion coefficients, we set the parameters \(r=0.3, K=3.5, \mu _1=0.1, \mu _2=0.1, \delta =0.2, \gamma =0.1, \beta =0.5+0.05\sin x\) and take different diffusion coefficients \(\mathrm{d}_S=\mathrm{d}_I=0.0001, 1, 5\), respectively. It can be seen from Fig. 2 that the spatial fluctuation of \(S(t,x)\) and \(I(t,x)\) decreases gradually as the diffusion coefficients increase under the current parameter setting. This means that the gap between the numbers of rumor–susceptible and rumor–infector at different locations narrows as the diffusion coefficient increases.
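Reusing the simulate_rumor sketch above (our own helper, not part of the original computation), the experiment of this subsection amounts to sweeping the common diffusion coefficient; note that the explicit scheme needs a smaller time step once the diffusion coefficient becomes large.

for d in (0.0001, 1.0, 5.0):
    x, S, I = simulate_rumor(dS=d, dI=d, gamma=0.1, dt=2e-4)
    print(f"d = {d}: spatial spread of I at t = T is {I.max() - I.min():.4f}")

A smaller spread indicates a flatter spatial profile, which is the behavior reported in Fig. 2.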

5.3 Global stability

In the case where all the parameters \(\mathrm{d}_S, \mathrm{d}_I, \mathrm{d}_R, r, K, \beta , \gamma , \delta , \mu _1, \mu _2\) in system (1.1) are positive constants, we let \(r=0.7, K=3, \beta =0.4, \gamma =0.6, \delta =0.2, \mu _1=0.3, \mu _2=0.3\) and \(\mathrm{d}_S=\mathrm{d}_I=\mathrm{d}_R=0.01\), and obtain \(R_0=0.6234<1\) and \(E_0=(1.7143, 0, 0)\), which means that the parameters satisfy the conditions of Theorem 3.4. Choosing the initial value (2.1, 0.4, 0.5), we obtain Fig. 3a–c, which depicts the evolution of S, I and R over time and space. Obviously, system (1.1) is stable at the rumor-eliminating equilibrium point \(E_0\), which indicates that the number of rumor–susceptible tends to a positive constant and the number of rumor–infector tends to zero. Perturbing the initial value, we obtain Fig. 3d. It can be seen that for different initial data, the solution still converges to the rumor-eliminating equilibrium point \(E_0\), which is consistent with the conclusion of Theorem 3.4.
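As a quick arithmetic check of the quoted values, the rumor-free level of S and the basic reproduction number can be reconstructed from the threshold expression used in Sect. 5.1; the closed forms below are our reading of that expression and reproduce both reported numbers.

r, K, beta, gamma, delta, mu1, mu2 = 0.7, 3.0, 0.4, 0.6, 0.2, 0.3, 0.3

S0 = K * (r - mu1) / r                      # rumor-free level of S: 1.7143
R0 = beta * S0 / (delta + mu2 + gamma)      # basic reproduction number: 0.6234
print(round(S0, 4), round(R0, 4))           # -> 1.7143 0.6234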

5.4 Optimal control

The parameters are given by \(r=0.4, L=0.9, \beta =0.2, \mu _1=0.2, \mu _2=0.2, \gamma =0.3, \delta =0.3\) and \(\mathrm{d}_S=0.003, \mathrm{d}_I=0.002, \mathrm{d}_R=0.002\) in system (3.11). The corresponding initial conditions are

$$\begin{aligned} \left\{ \begin{array}{lll} S(0,x)=0.15+0.26(1+\sin (15x)), x\in [0,1],\\ I(0,x)=0.21x(1-x)\cos (1.2+0.2x), x\in [0,1],\\ R(0,x)=0.13+0.3(1+\cos (10x)), x\in [0,1]. \end{array} \right. \end{aligned}$$
(5.1)

The variation curves of the solutions (S, I, R) and \((p_1, p_2, p_3)\) are calculated over the entire domain \(Q=[0,1]\times [0,1]\), as shown in Figs. 4, 5 and 6. It is obvious from Fig. 7 that the optimal control \(u^*\) obtained at the end of the iterative process is of bang-bang form.
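One common way to produce such figures is a forward–backward sweep: integrate the state system (3.11) forward, integrate the adjoint system (3.23) backward from the terminal data (3.25), and update the control by rule (3.26); we do not claim this is the exact scheme used here. A self-contained Python sketch of the control-update step reads

import numpy as np

def bang_bang_control(I_star, p2, p3):
    # Rule (3.26): u* = 1 where I*(p2 - p3) + 1 < 0 and u* = 0 otherwise
    return np.where(I_star * (p2 - p3) + 1.0 < 0.0, 1.0, 0.0)

# Illustrative values only; in practice I_star, p2 and p3 come from the
# forward state solve and the backward adjoint solve on a grid of Q.
I_star = np.array([0.05, 0.40, 0.80])
p2 = np.array([-0.5, -3.0, -2.0])
p3 = np.array([0.1, 0.2, 0.3])
print(bang_bang_control(I_star, p2, p3))   # [0. 1. 1.]

The switching structure of this rule is exactly what produces the bang-bang profile visible in Fig. 7.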

5.5 Local stability and Hopf bifurcation

Considering the parameters \(r=0.2, K=5, \beta =0.4, \mu _1=0.1, \mu _2=0.1, \gamma =0.8, \delta =0.04\) in system (4.2), we have \(R_0=1.0638>1\) and the rumor-spreading equilibrium point \(E_*=(2.35, 0.1007)\). In addition, we can calculate that \(\frac{(\beta I_*+\mu _1-r)^2}{4\beta I_*(\delta +\mu _2)}=0.7295\), \(2C_0D_0+(A_0^2-B_0^2)=-0.0189<0\), \(\Delta _0=2.315\times 10^{-4}>0\) and \(\tau ^*=14.8769\). Therefore, we choose the diffusion coefficients \(\mathrm{d}_S=0.01\) and \(\mathrm{d}_I=0.01\), which satisfy the condition \(\frac{\mathrm{d}_S}{\mathrm{d}_I}=1>0.7295\). According to Theorem 4.1(2), when \(\tau \) exceeds \(\tau ^*\), system (4.2) undergoes a Hopf bifurcation at the rumor-spreading equilibrium point \(E_*\). As shown in Figs. 8 and 9, the rumor-spreading equilibrium point \(E_*\) is locally asymptotically stable when \(\tau =14<\tau ^*\) and unstable when \(\tau =16>\tau ^*\). In other words, \(\tau ^*\) is the threshold that determines whether \(E_*\) is stable, which is consistent with our conclusion.
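The critical delay can be reproduced directly from Eqs. (4.14), (4.20) and (4.21) for the \(k=0\) mode, for which \(E_0+F_0=0\) and the diffusion terms drop out. The short Python script below is a sketch of this computation (the steady-state value \(I_*\) is recovered from the S equation of (4.2)); it returns \(\Delta _0\approx 2.3\times 10^{-4}\) and \(\tau ^*\approx 14.88\), in agreement with the values quoted above.

import numpy as np

# Parameters of this subsection
r, K, beta, mu1, mu2, gamma, delta = 0.2, 5.0, 0.4, 0.1, 0.1, 0.8, 0.04
S = (delta + mu2 + gamma) / beta                       # S* = 2.35
I = S * (r * (1 - S / K) - mu1) / (beta * S - gamma)   # I* = 0.1007

# Coefficients for the k = 0 mode
A0 = r * (1 - S / K) - beta * I - mu1
B0 = -r * S / K
C0 = -(delta + mu2)
D0 = beta * I
EF0 = 0.0                                              # E_0 + F_0 = 0 when k = 0

# Quadratic (4.14) in z = omega^2, then the critical delay from (4.20)-(4.21)
b = 2 * C0 * D0 + A0**2 - B0**2 + EF0**2
c = (A0 * EF0 - C0 * D0)**2 - B0**2 * EF0**2
Delta0 = b**2 - 4 * c                                  # approx 2.315e-4 > 0
omega = np.sqrt((-b + np.sqrt(Delta0)) / 2)
f = (((omega**2 - A0 * EF0 + C0 * D0) * B0 * EF0
      - (A0 + EF0) * omega**2 * B0)
     / ((B0 * EF0)**2 + (B0 * omega)**2))
tau_star = np.arccos(f) / omega                        # approx 14.88
print(Delta0, tau_star)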

Fig. 8 When \(\tau =14<\tau ^*\), \(E_*\) is locally asymptotically stable

Fig. 9 When \(\tau =16>\tau ^*\), \(E_*\) is unstable

6 Conclusions

In this work, we have established a new SIR rumor propagation model with logistic growth and spatial diffusion in a heterogeneous environment. We have discussed the uniform persistence of the system and the asymptotic behavior of \(E^*\) when \(\mathrm{d}_S\rightarrow 0\). In addition, numerical simulations show that when \(\mathrm{d}_S=\mathrm{d}_I\), the fluctuation of \(S(t,x)\) and \(I(t,x)\) decreases as the diffusion coefficients increase. Then, in order to further study the dynamic behavior of the system, we make the parameters at each location x the same and discuss the existence of equilibrium points as well as their local and global stability. We find that the rumor-eliminating equilibrium point \(E_0\) exists when \(\mu _1<r\) and the rumor-spreading equilibrium point \(E_*\) exists when \(R_0>1\). Further, we obtain the conclusion that when \(0<R_0<1\), \(E_0\) is globally stable. In order to make our work more practical, we have analyzed the optimal control problem for the rumor propagation model; in particular, we find through numerical simulation that the optimal control is of bang-bang form. Finally, in order to make our model more realistic, we add a time delay to the model and study the stability and Hopf bifurcation of the delayed model, and we verify our conclusions by several numerical simulations. It is undeniable that there are still imperfections in our work; we will continue to conduct in-depth research and strive for more valuable results.