1 Introduction

Since the foundational contributions to the mathematical modeling of epidemics, such as the susceptible-infected-susceptible (SIS) model and the susceptible-infected-removed (SIR) model, were established in [1, 2], the study of mathematical epidemiology has grown rapidly. A large variety of mathematical models [3–7] have been formulated as dynamical systems of differential equations and applied to analyze the spread speed and the control of infectious diseases, such as the susceptible-infected-removed-susceptible (SIRS) model, the susceptible-exposed-infected (SEI) epidemic model, the susceptible-exposed-infected-susceptible (SEIS) model, the susceptible-exposed-infected-removed (SEIR) model, etc. It is common knowledge that quarantine [8, 9] has been widely used to control infectious diseases and prevent their wide spread, for example, SARS, AIDS, H7N9, H5N6, H1N1, etc. Following the Kermack–McKendrick compartmental framework, we can introduce a compartment for the quarantined individuals, denoted by the state Q, which means that some people are quarantined once they are found to be infected in the exposed state or the infectious state. In fact, epidemic dynamic models with quarantine have attracted the attention of many researchers [10–15]. Hethcote et al. investigated the susceptible-infected-quarantine-susceptible (SIQS) model and the susceptible-infected-quarantine-removed (SIQR) epidemic model by considering the effect of quarantine [10]. Wang et al. [11] gave a class of epidemic models with quarantine and message delivery. In [12, 13], quarantine models with multiple disease stages or with disease transmission were discussed, respectively. Dobay et al. [14] studied an SIR model with quarantine by analyzing an epidemic of syphilis.

In view of the potentially dramatic social impact of epidemic diseases, the investigation of epidemic models with respect to their feasible steady states and stability properties is of great importance. One main purpose is that it enables forecasting the developmental trend of an infection and of the entire outbreak. In recent years, the stability analysis of epidemic models with quarantine has attracted considerable attention in the literature [15–20]. Zhang et al. [15] introduced a class of deterministic and stochastic SIQS models and gave conditions for the global asymptotic stability of the disease-free equilibrium and of the unique endemic equilibrium. Liu et al. [16] discussed the stability of an SIQS model by considering the effects of transport-related infection and exit-entry screenings, and showed that the disease-free equilibrium is globally asymptotically stable when the basic reproduction number is less than unity, while an endemic equilibrium is locally asymptotically stable when the reproduction number is greater than unity. Similar results can be found in the following literature. In [17, 18], the asymptotic dynamics of epidemic models with quarantine and isolation were studied. Sahu and Dhar considered the dynamics of the susceptible-exposed-quarantined-infectious-hospitalized-recovered-susceptible (SEQIHRS) epidemic model and obtained a globally asymptotically stable disease-free equilibrium and a unique locally asymptotically stable endemic equilibrium [19]. Zhao [20] analyzed the global dynamics of an SIQR model with pulse vaccination.

As mentioned in the previous paragraph, the local stability of the endemic equilibrium plays an important role in epidemiological systems. Investigating the local stability of epidemic systems inevitably leads to the problem of how to characterize the domain of attraction (DOA) containing the endemic equilibrium. It is well known that obtaining the exact DOA for nonlinear systems is a difficult problem [21], so estimating the DOA (i.e., computing invariant subsets of the DOA) has become a valuable problem. Generally, the fundamental approach to estimating the domain of attraction is to solve an optimization problem over a sublevel set of a valid Lyapunov function. Many computational techniques have been used to solve such optimization problems, for example, Zubov's method [22], the trajectory reversing method [23], LaSalle's invariance principle [24], LMI optimization methods [25–28], etc. In recent years, the sum of squares (SOS) optimization method [29–31], which is based on sums of squares of polynomials and semidefinite programming (SDP), has been successfully applied to estimate the domain of attraction for nonlinear systems. Chesi et al. [32] were the first to employ the SOS method in conjunction with the LMI method to solve convex optimization problems. Since then, a number of research results on stability and on the estimation of the domain of attraction via SOS optimization have been presented (see, e.g., [31, 33, 34] and references therein). Chesi [31] studied the estimation and control of the DOA using SOS programming, and showed for the first time its application to various nonlinear natural systems. Topcu et al. [33] computed bounds on the DOA of polynomial systems via the SOS method. Franzè et al. [34] discussed the estimation of the DOA of a class of nonlinear polynomial time-delay systems using the SOS approach. Jarvis-Wloszek [30] also presented valuable results on the local stability of polynomial systems based on SOS optimization. Tan used SOS programming to deal with nonlinear control problems in his Ph.D. dissertation [35].

In view of the investigations mentioned above, the issue of how to estimate the DOA of epidemic models by suitable optimization methods has become one of the main research challenges. It is important to fully understand the dynamic characteristics of the infection spread as a function of the initial population distribution. Zhang et al. [36] set up an LMI optimization problem with polynomial constraints to estimate the DOA of a class of SIRS epidemic models. Matallana et al. [37] used a Lyapunov-based approach to study the estimation of the DOA of a class of SIR models with isolation of infected individuals. Li et al. [38] used LMI methods based on moment theory to estimate the DOA of an SIR epidemic model. Jing et al. [39] and Chen et al. [40] addressed the estimation of the DOA of SIRS and SEIR models via the SOS method, respectively, and their results demonstrated that the SOS optimization method is more effective for estimating the DOA of some epidemic models.

Motivation for our research comes from the facts found and presented in the above literature. Some recent solutions to the problem investigated here were given for SIRS and SEIR models in [37, 40], and we extend those results further to the case of SEIQ epidemic models. In this paper, we investigate the stability of a class of susceptible-exposed-infected-quarantine (SEIQ) epidemic models, which is built up from the characteristics of infectious diseases in different stages and the effects of quarantine, and estimate the DOA of the locally stable endemic equilibrium based on the SOS optimization method. We analyze the stability of the equilibrium points of the SEIQ model, including the global stability of the disease-free equilibrium, the local stability of the endemic equilibrium and the global stability of the endemic equilibrium in a positively invariant region, by LaSalle's invariance principle and the geometric approach proposed in [41, 42]. Furthermore, we use convex optimization techniques via the SOS method to obtain the largest estimate of the DOA of the locally stable endemic equilibrium by finding the largest level set of a Lyapunov function.

2 Stability analysis of an SEIQ epidemic model

In this section, we consider the following structural SEIQ epidemic model, which divides the total population into four classes: susceptible (S), exposed (E), infected (I) and quarantined (Q). In our model, some exposed and infected people are quarantined once they are found to be infectious, and some exposed and quarantined people recover. Figure 1 gives a diagram of the interactions between S, E, I and Q of the SEIQ epidemic model. Here, S(t), E(t), I(t) and Q(t) are the numbers of susceptible, exposed, infectious and quarantined individuals in the total population N(t) at time t, and \(\beta SI\) is the standard bilinear incidence rate, where \(\beta \) represents how fast susceptible people enter the exposed class; A is the constant recruitment rate of the population, \(\mu \) is the natural mortality rate of all populations, \(\alpha \) is the mortality rate of infected and quarantined people due to illness, \(\varepsilon \) is the rate at which exposed people become infective, c is the rate at which the infected recover and return to the susceptible class, \(\sigma _1\) and \(\sigma _2\) represent the quarantine rates of the exposed and the infected people, respectively, and \(\gamma _1\), \(\gamma _2\) and \(\gamma _3\) represent the recovery rates of the exposed, infected and quarantined individuals, respectively.

Fig. 1

A diagram of interactions between S, E, I and Q of an SEIQ epidemic model

The diagram leads to the following SEIQ model, which can be formulated as

$$\begin{aligned} \left\{ \begin{array}{l} \dot{S} = A - \beta SI - \mu S + cI ,\\ \dot{E} = \beta SI - \left( {\mu {{ + }}\varepsilon {{ + }}{\sigma _1}{{ + }}{\gamma _1}} \right) E, \\ \dot{I} = \varepsilon E - \left( {\mu {{ + }}\alpha + c{{ + }}{\sigma _2}{{ + }}{\gamma _2}} \right) I, \\ \dot{Q} = {\sigma _1}E + {\sigma _2}I - \left( {\mu {{ + }}\alpha + {\gamma _3}} \right) Q ,\\ \end{array} \right. \end{aligned}$$
(1)

where all the parameters are strictly positive constants.
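For readers who want to explore the model numerically, the following Python sketch integrates system (1) with SciPy. The parameter values are purely illustrative (they coincide with those of Example I in Sect. 3), and nothing in the analysis below depends on this code.

```python
# A minimal numerical simulation of the SEIQ model (1) (illustrative parameters only).
import numpy as np
from scipy.integrate import solve_ivp

p = dict(A=2.0, beta=0.5, mu=0.25, c=0.25, eps=0.5, sigma1=0.125,
         gamma1=0.125, alpha=0.5, sigma2=0.25, gamma2=0.25, gamma3=0.5)

def seiq(t, x, p):
    S, E, I, Q = x
    dS = p["A"] - p["beta"]*S*I - p["mu"]*S + p["c"]*I
    dE = p["beta"]*S*I - (p["mu"] + p["eps"] + p["sigma1"] + p["gamma1"])*E
    dI = p["eps"]*E - (p["mu"] + p["alpha"] + p["c"] + p["sigma2"] + p["gamma2"])*I
    dQ = p["sigma1"]*E + p["sigma2"]*I - (p["mu"] + p["alpha"] + p["gamma3"])*Q
    return [dS, dE, dI, dQ]

sol = solve_ivp(seiq, (0.0, 200.0), [7.0, 0.5, 0.3, 0.1], args=(p,), rtol=1e-8)
print("state at t = 200:", sol.y[:, -1])   # trajectories settle near an equilibrium of (1)
```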

From (1) and \(N(t) = S(t) + E(t) + I(t) + Q(t)\), the derivative of N can be obtained as

$$\begin{aligned} \dot{N}(t)= & {} A - \mu S - \left( {\mu +{\gamma _1}} \right) E - \left( {\mu +\alpha +{\gamma _2}} \right) I \nonumber \\&- \left( {\mu +\alpha + {\gamma _3}} \right) Q . \end{aligned}$$
(2)

From (2), \(\dot{N} \le A - \mu N\), so \(\mathop {\lim }\limits _{t \rightarrow \infty } \sup N(t) \le \frac{A}{\mu }\), and \(\dot{N} < 0\) whenever \(N > A/\mu \); hence the feasible region D of (1) is

$$\begin{aligned} D = \left\{ \begin{array}{l} \left. {\left( {S,E,I,Q} \right) \in {{\mathbb {R}}^4}} \right| 0 < S + E + I + Q \le N \le \frac{A}{\mu }, \\ S \ge 0,E \ge 0,I \ge 0,Q \ge 0 \\ \end{array} \right\} .\nonumber \\ \end{aligned}$$
(3)

Because all solutions of (1) either remain in or tend to D, the feasible region D is a positively invariant set for system (1). We define the basic reproduction number \(R_{0}\) as follows

$$\begin{aligned} {R_0} = \frac{{A\beta \varepsilon }}{{\mu \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}, \end{aligned}$$
(4)

and then, it is simple to get equilibrium points \(P_{0}\left( A/\mu ,0,0,0\right) \) and \( P^* \left( {S^*, E^*, I^*, Q^* } \right) \), where

$$\begin{aligned} {S^*}= & {} \frac{A}{{\mu {R_0}}},{I^*}{\mathrm{= }}\frac{{\varepsilon {E^*}}}{{\mu +\alpha + c+{\sigma _2}+{\gamma _2}}},\\ {E^*}= & {} \frac{{A({R_0} - 1)\left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}{{{R_0}\left[ {\left( {\mu +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) +\varepsilon \left( {\mu +\alpha +{\sigma _2}+{\gamma _2}} \right) } \right] }},\\ {Q^*}= & {} \frac{{\left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) {\sigma _1}+\varepsilon {\sigma _2}}}{{\left( {\mu +\alpha + {\gamma _3}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}{E^*}. \end{aligned}$$

In view of this, we obtain that when \(R_0 \le 1\), there exists a unique equilibrium \(P_{0}\), i.e., the disease-free equilibrium of system (1); when \(R_0 > 1\), there exist the equilibrium \(P_{0}\) and a unique endemic equilibrium \(P^*\in D_s\), where \(D_s\) is a positively invariant subset of D.
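The quantities above are easy to evaluate for concrete parameter values. The following helper (a sketch for numerical exploration, not part of the analysis) computes \(R_0\), \(P_0\) and \(P^*\) from (4) and the closed-form expressions for \(S^*\), \(E^*\), \(I^*\), \(Q^*\); the check at the end uses the parameter values of Example I in Sect. 3.

```python
# Compute R0 and the equilibria of (1) exactly, using rational arithmetic.
from fractions import Fraction as F

def equilibria(A, beta, mu, c, eps, s1, g1, alpha, s2, g2, g3):
    M = mu + eps + s1 + g1              # shorthand also used in Remark 1 below
    N = mu + alpha + c + s2 + g2
    R0 = A*beta*eps / (mu*M*N)          # basic reproduction number (4)
    P0 = (A/mu, 0, 0, 0)                # disease-free equilibrium
    if R0 <= 1:
        return R0, P0, None
    S = A/(mu*R0)
    E = A*(R0 - 1)*N / (R0*((mu + s1 + g1)*N + eps*(mu + alpha + s2 + g2)))
    I = eps*E/N
    Q = (N*s1 + eps*s2)/((mu + alpha + g3)*N) * E
    return R0, P0, (S, E, I, Q)

# parameter values of Example I; expected output: R0 = 4/3, P* = (6, 6/11, 2/11, 1/11)
print(equilibria(F(2), F(1, 2), F(1, 4), F(1, 4), F(1, 2), F(1, 8), F(1, 8),
                 F(1, 2), F(1, 4), F(1, 4), F(1, 2)))
```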

Now we discuss the stability of the equilibria \(P_{0}\) and \(P^* \) of system (1), respectively. The main results can be stated as follows.

Theorem 1

When \(R_0 \le 1\), the unique disease-free equilibrium \(P_{0}\left( A/\mu ,0,0,0\right) \) is globally asymptotically stable in D, and when \(R_{0}>1\), \(P_{0}\) is unstable in D.

Proof

Construct a Lyapunov function as follows

$$\begin{aligned} V = \varepsilon E + \left( {\mu {{ + }}\varepsilon {{ + }}{\sigma _1}{{ + }}{\gamma _1}} \right) I, \end{aligned}$$

when \(R_0 \le 1\), the derivative of V along solutions of (1) satisfies

$$\begin{aligned}\begin{array}{ccl} \dot{V} &{}=&{} \varepsilon \beta SI - \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \\ &{}&{}\times \left( {\mu +\alpha + c{{ + }}{\sigma _2}+{\gamma _2}} \right) I \\ &{}=&{} \varepsilon \beta SI - \frac{{A\beta \varepsilon I}}{{\mu {R_0}}} \\ &{}\le &{} \frac{{\varepsilon \beta IA}}{\mu }\left( {1 - \frac{1}{{{R_0}}}} \right) \le 0, \\ \end{array}\end{aligned}$$

where the inequality uses \(S \le A/\mu \) in D, and \(\dot{V} = 0\) if and only if \(I=0\). By LaSalle's invariance principle, we obtain that \(P_{0}\left( A/\mu ,0,0,0\right) \) is globally asymptotically stable in D.

When \(R_{0}>1\), if \(E\ne 0\), \(I\ne 0\) and \(Q\ne 0\), and \(S \rightarrow \frac{A}{\mu }\) with \( S > \frac{A}{R_{0}\mu }\), one has \(\dot{V} > 0\); hence any solution in D that starts near \(P_{0}\) moves away from \(P_{0}\), and \(P_{0}\) is unstable in D. \(\square \)
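The derivative computation above can be verified symbolically. The following sympy sketch (an optional aid, assuming sympy is available) confirms that \(\dot{V} = \varepsilon \beta SI - \left( \mu +\varepsilon +\sigma _1+\gamma _1\right) \left( \mu +\alpha +c+\sigma _2+\gamma _2\right) I\) along (1).

```python
# Symbolic check of the first step of the proof of Theorem 1 (requires sympy).
import sympy as sp

S, E, I, A, beta, mu, c = sp.symbols('S E I A beta mu c', positive=True)
eps, s1, g1, alpha, s2, g2 = sp.symbols('epsilon sigma1 gamma1 alpha sigma2 gamma2', positive=True)
M = mu + eps + s1 + g1          # coefficient of E in the second equation of (1)
N = mu + alpha + c + s2 + g2    # coefficient of I in the third equation of (1)

dE = beta*S*I - M*E             # right-hand sides of (1) for E and I
dI = eps*E - N*I
V = eps*E + M*I                 # Lyapunov function of Theorem 1
dV = sp.expand(eps*dE + M*dI)   # derivative of V along solutions of (1)
print(sp.simplify(dV - (eps*beta*S*I - M*N*I)))   # prints 0
```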

Remark 1

In fact, the Routh-Hurwitz stability criterion can also be used to prove Theorem 1 as follows. For \(P_{0},\) we have

$$\begin{aligned} J\left| {_{{P_0}}} \right. = \left( {\begin{array}{*{20}{c}} { - \mu } &{} 0 &{} { - \frac{{\beta A}}{\mu } + c} &{} 0 \\ 0 &{} { - M} &{} {\frac{{\beta A}}{\mu }} &{} 0 \\ 0 &{} \varepsilon &{} { - N} &{} 0 \\ 0 &{} {{\sigma _1}} &{} {{\sigma _2}} &{} { - \left( {\mu +\alpha + {\gamma _3}} \right) } \\ \end{array}} \right) , \end{aligned}$$
(5)

where \(M = \mu + \varepsilon + {\sigma _1} + {\gamma _1}\) and \(N = \mu + \alpha + c + {\sigma _2} + {\gamma _2},\) and the eigenvalue equation is obtained as

$$\begin{aligned}&\left( {\lambda +\mu } \right) \left( {\lambda +\left( {\mu +\alpha + {\gamma _3}} \right) } \right) \nonumber \\&\quad \times \left( {\left( {\lambda +M} \right) \left( {\lambda +N} \right) - \frac{{\beta A\varepsilon }}{\mu }} \right) = 0, \end{aligned}$$
(6)

then it is easy to see that when \(R_{0}<1\), all roots of (6) have negative real parts, so \(P_{0}\left( A/\mu ,0,0,0\right) \) is locally asymptotically stable, and when \(R_{0}>1\), (6) has a root with positive real part, so \(P_{0}\) is unstable in D.
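As a numerical companion to Remark 1, the sketch below builds the Jacobian (5) for the illustrative parameter values of Example I (for which \(R_0=4/3>1\)) and inspects its eigenvalues.

```python
# Eigenvalues of the Jacobian (5) at P0 for the illustrative parameters of Example I.
import numpy as np

A, beta, mu, c, eps = 2.0, 0.5, 0.25, 0.25, 0.5
s1, g1, alpha, s2, g2, g3 = 0.125, 0.125, 0.5, 0.25, 0.25, 0.5
M = mu + eps + s1 + g1
N = mu + alpha + c + s2 + g2

J_P0 = np.array([
    [-mu, 0.0, -beta*A/mu + c, 0.0],
    [0.0, -M,   beta*A/mu,     0.0],
    [0.0, eps, -N,             0.0],
    [0.0, s1,   s2,           -(mu + alpha + g3)],
])
print("R0 =", A*beta*eps/(mu*M*N))
print("eigenvalues of J(P0):", np.linalg.eigvals(J_P0))
# Since R0 > 1 here, one eigenvalue has positive real part and P0 is unstable,
# consistent with Theorem 1 and Remark 1.
```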

Theorem 2

When \(R_0 > 1\), the unique endemic equilibrium \( P^* \left( {S^*, E^*, I^*, Q^* } \right) \) is locally asymptotically stable in D for system (1).

Proof

First, the Jacobian matrix of system (1) at \( P^* \left( {S^*, E^*, I^*, Q^* } \right) \) is

$$\begin{aligned}&J\left| {_{{P^* }}} \right. \nonumber \\&\quad = \left( {\begin{array}{*{20}{c}} { - \beta {I^*} - \mu } &{} 0 &{} { - \beta {S^*} + c} &{} 0 \\ {\beta {I^*}} &{} { - M} &{} {\beta {S^*}} &{} 0 \\ 0 &{} \varepsilon &{} { - N} &{} 0 \\ 0 &{} {{\sigma _1}} &{} {{\sigma _2}} &{} { - \left( {\mu +\alpha + {\gamma _3}} \right) } \\ \end{array}} \right) .\nonumber \\ \end{aligned}$$
(7)

Then, the eigenvalue equation \(\det ( {\lambda I - J})= 0\) can be computed as

$$\begin{aligned} \left( {\lambda + \left( {\mu +\alpha + {\gamma _3}} \right) } \right) \left( {{a_0}{\lambda ^3} + {a_1}{\lambda ^2} + {a_2}\lambda + {a_3}} \right) = 0\nonumber \\ \end{aligned}$$
(8)

with

$$\begin{aligned}\left\{ \begin{array}{l} {a_0} = 1,{a_1} = \beta {I^*} + 3\mu + \varepsilon + {\sigma _1} + {\gamma _1} \\ \qquad \quad + \, \alpha + c + {\sigma _2} + {\gamma _2}, \\ {a_2} = \left( {\beta {I^*} + \mu } \right) \left( {M + N} \right) +MN - \beta {S^*}\varepsilon , \\ {a_3} = \left( {\beta {I^*} + \mu } \right) MN - \beta \varepsilon \left( {{I^*}c + \mu {S^*}} \right) . \\ \end{array} \right. \end{aligned}$$

For Eq. (8), since \({\mu +\alpha + {\gamma _3}}>0\), we only need to discuss the following equation,

$$\begin{aligned} {a_0}{\lambda ^3} + {a_1}{\lambda ^2} + {a_2}\lambda + {a_3} = 0. \end{aligned}$$
(9)

Using the following estimate of \(I^*\),

$$\begin{aligned} {I^*}= & {} \frac{{\varepsilon A({R_0} - 1)}}{{{R_0}\left[ {\left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }-c\varepsilon \right] }} \nonumber \\= & {} \frac{{A\beta \varepsilon - \mu \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}{{\beta \left[ {\left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }-c\varepsilon \right] }} \nonumber \\> & {} \frac{{A\beta \varepsilon - \mu \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}{{\beta \left[ {\left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) } \right] }} \nonumber \\= & {} \frac{{A\varepsilon }}{{\left[ {\left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) } \right] }} - \frac{\mu }{\beta }, \nonumber \\ \end{aligned}$$
(10)

one has

$$\begin{aligned}&\beta {I^*} + \mu \nonumber \\&\quad > \frac{{A\beta \varepsilon }}{{\left[ {\left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) } \right] }},\nonumber \\ \end{aligned}$$
(11)

then

$$\begin{aligned} \begin{array}{ccl} {a_1}&{} = &{}\left( {\beta {I^*} + \mu } \right) + M + N> \frac{{A\beta \varepsilon }}{{MN}} + M + N > 0. \\ \end{array} \end{aligned}$$
(12)

Meanwhile, noting that \(\beta \varepsilon {S^*} = \beta \varepsilon \cdot \frac{A}{\mu {R_0}} = MN\), we can obtain that

$$\begin{aligned} \left| {\begin{array}{*{20}{c}} {{a_1}} &{} {{a_0}} \\ {{a_3}} &{} {{a_2}} \\ \end{array}} \right|= & {} {a_1}{a_2} - {a_0}{a_3} \\= & {} \left( {\left( {\beta {I^*} + \mu } \right) + M + N} \right) \left( {\beta {I^*} + \mu } \right) \left( {M + N} \right) \\&- \beta {I^*}MN + \beta \varepsilon {I^*}c \\= & {} {\left( {\beta {I^*} + \mu } \right) ^2}\left( {M + N} \right) + \left( {\beta {I^*} + \mu } \right) \left( {{M^2} + {N^2}} \right) \\&+ \beta {I^*}MN + 2\mu MN + \beta \varepsilon {I^*}c \\> & {} 0. \end{aligned}$$

According to the Routh-Hurwitz stability criterion, since \(a_1 > 0\), \(a_3 = \beta {I^*}\left( MN - c\varepsilon \right) > 0\) and \(a_1a_2 - a_0a_3 > 0\), all roots of (9) have negative real parts, so \( P^* \left( {S^*, E^*, I^*, Q^* } \right) \) is locally asymptotically stable in D for system (1) when \(R_{0}>1\). \(\square \)
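The Routh-Hurwitz conditions of the proof can also be checked numerically; the sketch below evaluates \(a_1\), \(a_2\), \(a_3\) and the Hurwitz determinant \(a_1a_2 - a_0a_3\) at \(P^*\) for the illustrative parameter values of Example I.

```python
# Numerical spot check of the Routh-Hurwitz argument in Theorem 2 (Example I parameters).
import numpy as np

A, beta, mu, c, eps = 2.0, 0.5, 0.25, 0.25, 0.5
s1, g1, alpha, s2, g2 = 0.125, 0.125, 0.5, 0.25, 0.25
M = mu + eps + s1 + g1
N = mu + alpha + c + s2 + g2
R0 = A*beta*eps/(mu*M*N)

S = A/(mu*R0)                                                            # S*
E = A*(R0 - 1)*N/(R0*((mu + s1 + g1)*N + eps*(mu + alpha + s2 + g2)))    # E*
I = eps*E/N                                                              # I*

a1 = (beta*I + mu) + M + N
a2 = (beta*I + mu)*(M + N) + M*N - beta*S*eps
a3 = (beta*I + mu)*M*N - beta*eps*(I*c + mu*S)
print("a1, a2, a3 =", a1, a2, a3)
print("a1*a2 - a3 =", a1*a2 - a3)      # all positive, so the roots of (9) have negative real parts
print("roots of (9):", np.roots([1.0, a1, a2, a3]))
```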

Remark 2

From Theorem 2, if \( P^*\) is locally asymptotically stable in the set D, its basin of attraction is an open subset of the feasible region D containing a neighborhood of \( P^*\); the equilibrium \( P^*\) can then be said to be globally asymptotically stable with respect to an open subset \(D_s (\subset D)\) in which \(P^*\) is the unique equilibrium.

In [41], Li and Muldowney proposed a geometric method for this global stability problem based on the criteria of Bendixson and Dulac; the core result of this geometric method can be introduced as follows. Consider a dynamical system \(\dot{x} =f(x)\), where the map \(x\rightarrow f(x)\) is defined from the region \(\varLambda \) to \({{\mathbb {R}}^n}\), with initial condition \(x(0)=x_0\) and solutions \(x(t,x_0)\). Assume that \(\bar{x}\) is a locally asymptotically stable equilibrium in \(\varLambda \); then a basic criterion under which the global stability of \(\bar{x}\) with respect to \(\varLambda \) is implied by its local stability is given in Lemma 1.

Lemma 1

[41] Assume that the region \(\varLambda \) is simply connected and that there is a compact absorbing set \(\varLambda _0 (\subset \varLambda )\) in which \(\bar{x}\) is the only equilibrium. If the quantity \(\bar{q}_2\) satisfies

$$\begin{aligned} {{\bar{q}}_2} = \mathop {\lim }\limits _{t \rightarrow \infty } \sup \mathop {\sup }\limits _{{x_0} \in {\varLambda _0}} \frac{1}{t}\int _0^t {\rho (B(x(s,{x_0})))} \,ds < 0, \end{aligned}$$
(13)

where \(B = {Z_f}{Z^{ - 1}} + Z\frac{{\partial {f^{[2]}}}}{{\partial x}}{Z^{ - 1}}\), Z(x) is a matrix-valued function, \({Z_f}\) is obtained by differentiating each entry of Z in the direction of f, i.e., \(({Z_f})_{ij} = \left( \frac{{\partial {Z_{ij}}}}{{\partial x}}\right) f\), \(\rho (B) = \mathop {\lim }\limits _{h \rightarrow {0^ + }} \frac{{\left\| {I + hB} \right\| - 1}}{h}\) is the Lozinskiĭ measure with respect to a vector norm \(\left\| \cdot \right\| \) (I denotes the identity matrix), and \(\frac{{\partial {f^{[2]}}}}{{\partial x}}\) is the second additive compound matrix of the Jacobian \(\frac{{\partial {f}}}{{\partial x}}\), then the equilibrium \(\bar{x}\) is globally asymptotically stable in \(\varLambda _0\).

Based on the above, Li and Muldowney [41] used Lemma 1 to prove the global stability of the endemic equilibrium of an SEIR epidemic model. Huan et al. [42] gave sufficient conditions for the global stability of a dynamic model of hepatitis B via the geometric method. Lan [43] also obtained results on the global stability of some epidemic models using Lemma 1 in her dissertation. Zhou and Cui [45] proved the global stability of a susceptible-vaccinated-treated-exposed-infectious (SEIV) model via the Poincaré-Bendixson criterion. Now, we show that the endemic equilibrium \(P^*\) is globally asymptotically stable in \(D_s\) using the above results.
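For readers who want to reproduce the computations of the geometric approach, the following sympy sketch implements the two ingredients of Lemma 1 for a three-dimensional system: the second additive compound of a \(3\times 3\) Jacobian and the Lozinskiĭ measure with respect to the \(l_1\) norm. Applied to the Jacobian of (14) it reproduces the matrix \(J^{[2]}\) used below; it is illustrative code, not the machinery of [41] itself.

```python
# Building blocks of Lemma 1 for a 3-dimensional system (requires sympy).
import sympy as sp

def second_additive_compound_3x3(J):
    # standard formula for n = 3 (see Li and Muldowney [41])
    return sp.Matrix([
        [J[0, 0] + J[1, 1],  J[1, 2],            -J[0, 2]],
        [J[2, 1],            J[0, 0] + J[2, 2],   J[0, 1]],
        [-J[2, 0],           J[1, 0],             J[1, 1] + J[2, 2]],
    ])

def lozinskii_l1(B):
    # Lozinskii measure w.r.t. the l1 vector norm: max_j ( B_jj + sum_{i != j} |B_ij| )
    n = B.shape[0]
    return sp.Max(*[B[j, j] + sum(sp.Abs(B[i, j]) for i in range(n) if i != j)
                    for j in range(n)])

S, E, I, beta, mu, c, eps, M, N = sp.symbols('S E I beta mu c epsilon M N', positive=True)
J = sp.Matrix([[-beta*I - mu, 0, -beta*S + c],
               [beta*I, -M, beta*S],
               [0, eps, -N]])
print(second_additive_compound_3x3(J))                 # the matrix J^{[2]} used below
print(lozinskii_l1(sp.Matrix([[-2, 1], [3, -5]])))     # small numeric example: prints 1
```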

Theorem 3

When \(R_0 > 1\), the endemic equilibrium \(P^*\) of system (1) is globally asymptotically stable in \(D_s (\subset D)\).

Proof

First, it is easy to see that there exists a compact absorbing set \(D_s (\subset D)\) in which \(P^*\) is the only equilibrium; then, according to Lemma 1, we only need to prove that \(\bar{q}_2<0\). Because the first three equations of (1) do not contain the state Q, we consider the following subsystem

$$\begin{aligned} \left\{ \begin{array}{l} \dot{S} = A - \beta SI - \mu S + cI, \\ \dot{E} = \beta SI - \left( {\mu {{ + }}\varepsilon {{ + }}{\sigma _1}{{ + }}{\gamma _1}} \right) E ,\\ \dot{I} = \varepsilon E - \left( {\mu {{ + }}\alpha + c{{ + }}{\sigma _2}{{ + }}{\gamma _2}} \right) I ,\\ \end{array} \right. \end{aligned}$$
(14)

then the Jacobian matrix of system (14), evaluated along a solution \((S,E,I)\) of (14), is

$$\begin{aligned} J\left| {_{{(S,E,I)}}} \right. = \left( {\begin{array}{*{20}{c}} { - \beta I - \mu } &{} 0 &{} { - \beta S + c} \\ {\beta I} &{} { - M} &{} {\beta S} \\ 0 &{} \varepsilon &{} { - N} \\ \end{array}} \right) .\end{aligned}$$

The second additive compound matrix \(J^{[2]}\) of \(J\left| {_{{(S,E,I)}}} \right. \) is

$$\begin{aligned} {J^{[2]}}= \left( {\begin{array}{*{20}{c}} { - \beta I - \mu - M} &{} {\beta S} &{} {\beta S - c} \\ \varepsilon &{} { - \beta I - \mu - N} &{} 0 \\ 0 &{} {\beta I} &{} { - M - N} \\ \end{array}} \right) .\end{aligned}$$

Choosing the matrix-valued function \(Z = \left( {\begin{array}{*{20}{c}} 1 &{} 0 &{} 0 \\ 0 &{} {\frac{E}{I}} &{} 0 \\ 0 &{} 0 &{} {\frac{E}{I}} \\ \end{array}} \right) \), where E and I are evaluated along a solution of (14), we have

$$\begin{aligned} {Z_f}{Z^{ - 1}} = diag\left\{ 0, {\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I}} , {\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I}} \right\} , \end{aligned}$$

and one has

$$\begin{aligned} B = {Z_f Z^{ - 1}} + Z\frac{{\partial {f^{[2]}}}}{{\partial x}}{Z^{{\mathrm{- }}1}} = \left( {\begin{array}{*{20}{c}} {{B_{11}}} &{} {{B_{12}}} \\ {{B_{21}}} &{} {{B_{22}}} \\ \end{array}} \right) , \end{aligned}$$

where

$$\begin{aligned} {B_{11}}= & {} - \beta I - \mu - M, \\ {B_{12}}= & {} \left( {\begin{array}{*{20}{c}} {\frac{{\beta SI}}{E}} &{} {\left( {\beta S - c} \right) \frac{I}{E}} \\ \end{array}} \right) ,\quad {B_{21}} = \left( {\begin{array}{*{20}{c}} {\frac{{\varepsilon E}}{I}} \\ 0 \\ \end{array}} \right) , \\ {B_{22}}= & {} \left( {\begin{array}{*{20}{c}} {\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I} - \beta I - \mu - N} &{} 0 \\ {\beta I} &{} {\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I} - M - N} \\ \end{array}} \right) . \\ \end{aligned}$$

The Lozinskiĭ measure \(\rho (B)\) corresponding to the vector norm \(\left\| \cdot \right\| \) in \({{\mathbb {R}}^3} \cong {{\mathbb {R}}^{^{\left( {\begin{array}{*{20}{c}} 3 \\ 2 \\ \end{array}} \right) }}} \) can be estimated as follows

$$\begin{aligned} \rho (B) \le \sup \left\{ {({B_{11}}) + \left\| {{B_{12}}} \right\| ,\left\| {{B_{21}}} \right\| + {\rho _1}({B_{22}})} \right\} , \end{aligned}$$
(15)

where

$$\begin{aligned}\begin{array}{l} \left\| {{B_{12}}} \right\| = \max \left\{ {\frac{{\beta SI}}{E},\left( {\beta S - c} \right) \frac{I}{E}} \right\} ,\quad \left\| {{B_{21}}} \right\| = \frac{{\varepsilon E}}{I}, \\ {\rho _1}({B_{22}}) = \max \left\{ {\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I} - \mu - N,\frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I} - M - N} \right\} \\ ~~~~~~~~~ = \frac{{\dot{E}}}{E} - \frac{{\dot{I}}}{I} - \mu - N. \\ \end{array} \end{aligned}$$

According to system (14), we know that

$$\begin{aligned} \left\| {{B_{21}}} \right\| = \frac{{\varepsilon E}}{I} = \frac{{\dot{I}}}{I} + \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) , \end{aligned}$$
(16)

and there exists \(t^*\) such that \(\beta S-c>0\) for all \(t > t^*\), so that

$$\begin{aligned} \left\| {{B_{12}}} \right\| = \frac{{\beta SI}}{E} = \frac{{\dot{E}}}{E} + \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) . \end{aligned}$$
(17)

So when \(t > t^*\), one gets

$$\begin{aligned} \rho (B) \le \sup \left\{ {\frac{{\dot{E}}}{E} - \beta I - \mu ,\frac{{\dot{E}}}{E} - \mu } \right\} {\mathrm{= }}\frac{{\dot{E}}}{E} - \mu . \end{aligned}$$
(18)

Substituting (18) into the quantity \(\bar{q}_2\), one has

$$\begin{aligned}&{{\bar{q}}_2} = \mathop {\lim }\limits _{t \rightarrow \infty } \sup \mathop {\sup }\limits _{{x_0} \in {\varLambda _0}} \frac{1}{t}\int _0^t {\rho (B)} ds \\&\quad < \mathop {\lim }\limits _{t \rightarrow \infty } \sup \mathop {\sup }\limits _{{x_0} \in {\varLambda _0}} \left[ {\frac{1}{t}\int _0^{{t^{\mathrm{*}}}} {\rho (B)} ds+\frac{1}{t}\int _{{t^{\mathrm{*}}}}^t {\left( {\frac{{\dot{E}}}{E} - \mu } \right) } ds} \right] \\&\quad = \mathop {\lim }\limits _{t \rightarrow \infty } \sup \mathop {\sup }\limits _{{x_0} \in {\varLambda _0}} \left[ {\frac{1}{t}\int _0^{{t^{\mathrm{*}}}} {\rho (B)} ds+\frac{1}{t}\ln \frac{{E(t)}}{{E({t^{\mathrm{*}}})}} - \frac{{\mu (t - {t^*})}}{t}} \right] \\ \end{aligned}$$

Because the subsystem (14) is uniformly persistent, for all sufficiently large \(t>t^*\) one has \(\frac{1}{t}\ln \frac{E(t)}{E(t^*)} < \frac{\mu }{2}\), and therefore

$$\begin{aligned} {{\bar{q}}_2}< - \frac{\mu }{2} < 0, \end{aligned}$$
(19)

so the equilibrium \((S^*,E^*,I^*)\) is globally asymptotically stable for system (14) in \(D_s \). Then, substituting the limits \(E \rightarrow E^*\) and \(I \rightarrow I^*\) into the fourth equation of (1), we solve the limiting equation

$$\begin{aligned} \begin{array}{ccl} \dot{Q} &{}=&{}{\sigma _1}{E^*} + {\sigma _2}{I^*} - \left( {\mu +\alpha + {\gamma _3}} \right) Q \\ &{}=&{} \left( {{\sigma _1} + \frac{{{\sigma _2}\varepsilon }}{{\mu +\alpha + c+{\sigma _2}+{\gamma _2}}}} \right) {E^*} - \left( {\mu +\alpha + {\gamma _3}} \right) Q \\ \end{array}, \end{aligned}$$
(20)

the solution of (20) with \(Q(0)=0\) is (a nonzero \(Q(0)\) only adds a term \(Q(0)e^{ - \left( {\mu +\alpha + {\gamma _3}} \right) t}\) that vanishes as \(t \rightarrow \infty \))

$$\begin{aligned} Q(t) = {\varUpsilon }{e^{ - \left( {\mu +\alpha + {\gamma _3}} \right) t},} \end{aligned}$$

where

$$\begin{aligned}&{\varUpsilon } = \left( {{\sigma _1} + \frac{{{\sigma _2}\varepsilon }}{{\mu +\alpha + c+{\sigma _2}+{\gamma _2}}}} \right) \\&\quad {E^*}\int _0^t {{e^{\left( {\mu +\alpha + {\gamma _3}} \right) \tau }}} d\tau , \end{aligned}$$

which implies that, as \(t \rightarrow \infty \),

$$\begin{aligned}&Q(t) \rightarrow {Q^*}\nonumber \\&\quad =\frac{{\left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) {\sigma _1}+\varepsilon {\sigma _2}}}{{\left( {\mu +\alpha + {\gamma _3}} \right) \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) }}{E^*}, \end{aligned}$$

so the endemic equilibrium \(P^*\) of system (1) is globally asymptotically stable in \(D_s (\subset D)\) under the condition \(R_0 > 1\). \(\square \)

Remark 3

According to Theorems 2 and 3, the endemic equilibrium \(P^*\) of system (1) is globally asymptotically stable in \(D_s \) and locally asymptotically stable in D, where \(D_s\) is a compact absorbing subset of D and \(P^*\) is the only equilibrium in \(D_s\). It is therefore natural to solve the optimization problem of how to maximally characterize the stable region \(D_s\) containing the unique endemic equilibrium \(P^*\). The estimation of the domain of attraction \(D_s\) of the endemic equilibrium \(P^*\) is discussed in the next section.

3 Estimation of the domain of attraction for the endemic equilibrium

In system (1), the variable Q does not appear in the first three equations, and the state Q depends only on E and I, so we need only analyze the behavior of S, E and I in subsystem (14) to determine Q. According to subsystem (14), the feasible region \(D_0^*\) can be described as

$$\begin{aligned} D_0^* = \left\{ \begin{array}{l} \left. {\left( {S,E,I} \right) \in {{\mathbb {R}}^3}} \right| 0 < S + E + I \le \frac{A}{\mu }, \\ S \ge 0,\,\, E \ge 0, \,\, I \ge 0 \\ \end{array} \right\} , \end{aligned}$$

and when \(R_{0}>1\), the locally stable endemic equilibrium in \(D^*_0\) is \( P^*_0 \left( {S^*, E^*, I^* } \right) \). Let \( x_1 = S - S^*\), \(x_2 = E - E^* \) and \(x_3 = I - I^* \); then system (14) can be rewritten as

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{x}}_1} = - (\beta {I^*} + \mu ){x_1} - (\beta {S^*} - c){x_3} - \beta {x_1}{x_3}, \\ {{\dot{x}}_2} = \beta {I^*}{x_1} + \beta {S^*}{x_3} - \left( {\mu +\varepsilon +{\sigma _1}+{\gamma _1}} \right) {x_2} + \beta {x_1}{x_3}, \\ {{\dot{x}}_3} = \varepsilon {x_2} - \left( {\mu +\alpha + c+{\sigma _2}+{\gamma _2}} \right) {x_3}, \\ \end{array} \right. \end{aligned}$$
(21)

and the endemic equilibrium \( P^*_0 \) is translated to the origin; the feasible region \(D_0\) in the new coordinates is

$$\begin{aligned} {D_0} = \left\{ \begin{array}{l} \left. {X \in {{\mathbb {R}}^3}} \right| 0 < {x_1} + {x_2} + {x_3} \le \frac{A}{\mu } - {S^*} - {E^*} - {I^*}, \\ {x_1} \ge - {S^*},{x_2} \ge - {E^*},{x_3} \ge - {I^*} \\ \end{array} \right\} ,\end{aligned}$$

where \(X=({{x_1},{x_2},{x_3}})\). Let \(V_0(x_1,x_2,x_3)\) be a Lyapunov function defined on \(D_0\), which contains the origin; then any bounded region

$$\begin{aligned} {\varOmega _\varrho } = \left\{ {\left( {{x_1},{x_2},{x_3}} \right) \in {{\mathbb {R}}^3}|V_0 \left( {{x_1},{x_2},{x_3}} \right) \le \varrho ,\varrho > 0} \right\} \end{aligned}$$

is an estimate of the domain of attraction of the origin for (21), provided \(V_0\) is positive definite and \({\dot{V}}_0 < 0\) on \({\varOmega _\varrho }\setminus \{0\}\).

According to the framework of the expanding interior algorithm based on SOS programming [30], we first restrict \(V_0(x)\in \mathfrak {R}_3\) with \(V_0(0)=0\), where \( \mathfrak {R}_3\) is the set of all polynomials in three variables with real coefficients, and we define a variable-sized region

$$\begin{aligned} {P_\varrho } = \left\{ {\left( {{x_1},{x_2},{x_3}} \right) \in {{\mathbb {R}}^3}|P\left( {{x_1},{x_2},{x_3}} \right) \le \varrho } \right\} , \end{aligned}$$

where \(P\left( {{x_1},{x_2},{x_3}} \right) \in {\varSigma _3}\) is a fixed positive definite shape polynomial and \({\varSigma _3}\) is the set of all sum of squares polynomials in three variables. Now we consider the unit sublevel set

$$\begin{aligned} {\varOmega _1} = \left\{ {\left( {{x_1},{x_2},{x_3}} \right) \in {{\mathbb {R}}^3}|V_0\left( {{x_1},{x_2},{x_3}} \right) \le 1} \right\} , \end{aligned}$$

and require \(P_\varrho \subset \varOmega _1\).

Now the optimization problem for the estimation of the DOA based on sum of squares can be introduced. The task is to maximize \(\varrho \) under certain constraints, and it is defined as follows

$$\begin{aligned}&\mathop {\max }\limits _{{V_0}(x) \in {\mathfrak {R}_3},{V_0}(0) = 0} \varrho \nonumber \\&s.t. \nonumber \\&\quad \left\{ \begin{array}{l} \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|{V_0}\left( {{x_1},{x_2},{x_3}} \right) \le 0, \\ {x_1},{x_2},{x_3} \ne 0 \\ \end{array} \right\} = \emptyset , \\ \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|P\left( {{x_1},{x_2},{x_3}} \right) \le \varrho , \\ {V_0}\left( {{x_1},{x_2},{x_3}} \right) > 1 \\ \end{array} \right\} = \emptyset , \\ \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|{V_0}\left( {{x_1},{x_2},{x_3}} \right) \le 1, \\ {{\dot{V}}_0}\left( {{x_1},{x_2},{x_3}} \right) \ge 0,{x_1},{x_2},{x_3} \ne 0 \\ \end{array} \right\} = \emptyset . \\ \end{array} \right. \end{aligned}$$
(22)

Replacing the constraints \({x_1},{x_2},{x_3} \ne 0\) by \(l_1\left( {x_1},{x_2},{x_3}\right) \ne 0\) and \(l_2\left( {{x_1},{x_2},{x_3}} \right) \ne 0\), where \(l_1, l_2 \in \varSigma _3\) are fixed positive definite polynomials, (22) can be rewritten as

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{V_0}(x) \in {\mathfrak {R}_3},{V_0}(0) = 0} \varrho \\ s.t. \\ \left\{ \begin{array}{l} \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|{V_0}\left( {{x_1},{x_2},{x_3}} \right) \le 0, \\ {l_1}({x_1},{x_2},{x_3}) \ne 0 \\ \end{array} \right\} = \emptyset , \\ \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|P\left( {{x_1},{x_2},{x_3}} \right) \le \varrho , \\ {V_0}\left( {{x_1},{x_2},{x_3}} \right) > 1 \\ \end{array} \right\} = \emptyset , \\ \left\{ \begin{array}{l} X \in {{\mathbb {R}}^3}|{V_0}\left( {{x_1},{x_2},{x_3}} \right) \le 1, \\ {{\dot{V}}_0}\left( {{x_1},{x_2},{x_3}} \right) \ge 0,{l_2}({x_1},{x_2},{x_3}) \ne 0 \\ \end{array} \right\} = \emptyset . \\ \end{array} \right. \\ \end{array}\end{aligned}$$
(23)

Under the framework of the Positivstellensatz [29, 30], (23) can be translated into the following optimization problem with equality constraints,

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{V_0} \in {\mathfrak {R}_3},{V_0}(0) = 0,{k_1},{k_2},{k_3} \in {Z_ + },{s_i} \in {\varSigma _3}, i = 1,\ldots ,10} \varrho \\ s.t. \\ \left\{ \begin{array}{l} {s_1} - {V_0}{s_2} + l_1^{2{k_1}} = 0, \\ {s_3} + (\varrho - P){s_4} + ({V_0} - 1){s_5} \\ \quad + (\varrho - P)({V_0} - 1){s_6} + {({V_0} - 1)^{2{k_2}}} = 0, \\ {s_7} + (1 - {V_0}){s_8} + {{\dot{V}}_0}{s_9} + (1 - {V_0}){{\dot{V}}_0}{s_{10}} + l_2^{2{k_3}} = 0, \\ \end{array} \right. \\ \end{array} \end{aligned}$$
(24)

where the s's and l's are sum of squares polynomials of even degree, \( d_{V_0}\) is even, \( d_{V_0}= d_{l_1 }\), \( \deg (Ps_6 ) \ge d_{V_0}, \) \( \deg (V{_0}s_8 ) \ge \deg (\dot{V_0}s_9 )\), and \( \deg ({V_0}s_8 ) \ge d_{l_2 }\).

In order to use the SOSTOOLS toolbox [44] to solve the above optimization problem (24), we choose \(k_1 = k_2 = k_3 = 1\) and \(s_2 = l_1\) and factor \(l_1\) out of \(s_1\); we set \(s_3 = s_4 = 0\) and factor out \((V_0 - 1)\); and we set \(s_{10} = 0\) and factor out \(l_2\). One then gets

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{V_0} \in {\mathfrak {R}_3},V_0(0) = 0,{s_6},{s_8},{s_9} \in {\varSigma _3}} \varrho \\ s.t. \\ \left\{ \begin{array}{l} {V_0} - {l_1} \in {\varSigma _3}, \\ - ((\varrho - P){s_6} + ({V_0} - 1)) \in {\varSigma _3}, \\ - ((1 - {V_0}){s_8} + {{\dot{V}}_0}{s_9} + {l_2}) \in {\varSigma _3}. \\ \end{array} \right. \\ \end{array}\end{aligned}$$
(25)

The expanding interior algorithm [30] is used to solve the above optimization problem (25). We initialize the Lyapunov function \(V^{(i=0)}_0 \in \mathfrak {R}_3\), set \(\varrho ^{(i=0)}=0\), where i is the iteration index, and choose a specified tolerance \(\varpi \) and the degrees of \(V_0,l_1 ,l_2 ,s_6 ,s_8 ,s_9 \). In each iteration, setting \(V_0=V_0^{(i-1)}\), we solve the linesearch (25) on \(\varrho \) for the multipliers \(s_8=s_8^{(i)} \in {\varSigma _{3,{d_{{s_8}}}}}\) and \(s_9=s_9^{(i)} \in {\varSigma _{3,{d_{{s_9}}}}}\); then, fixing \(s_8^{(i)}\) and \(s_9^{(i)}\), we search for \({V_0} \in {\mathfrak {R}_3}\) with \(V_0(0) = 0\) and \({s_6} \in {\varSigma _{3,{d_{{s_6}}}}}\). After finitely many iterations, when \(|\varrho ^{(i)}-\varrho ^{(i - 1)}|\le \varpi \), \( \varrho ^{(i)} = \varrho \) and \(V_0^{(i)} = V_0\) are the optimal solutions of the optimization problem (25), and the set \(D_{\varrho }:= {\{({x_1},{x_2},{x_3}) \in {\mathbb {R}}^3 \mid V_0^{(i)} ({x_1},{x_2},{x_3}) \le 1\}}\) is the optimal estimate of the DOA of the endemic equilibrium \( P^*_0 \) of (21).
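For illustration, the iteration just described can be summarized by the following schematic sketch. The two helpers solve_multiplier_step and solve_lyapunov_step are hypothetical placeholders for the SOS subproblems (26) and (27) of Remark 4 below, which in practice are solved with the SOSTOOLS toolbox [44]; only the alternation and the stopping rule are shown here.

```python
# Schematic outline of the expanding interior iteration (placeholders, not a working SOS solver).
def solve_multiplier_step(V0, P, l2):
    """Hypothetical: fix V0, search s6, s8, s9 and linesearch rho, as in (26)."""
    raise NotImplementedError

def solve_lyapunov_step(s8, s9, P, l1, l2):
    """Hypothetical: fix s8, s9, search V0 and s6 and linesearch rho, as in (27)."""
    raise NotImplementedError

def expanding_interior(V0_init, P, l1, l2, tol=1e-2, max_iter=50):
    V0, rho_prev = V0_init, 0.0
    for _ in range(max_iter):
        rho, s6, s8, s9 = solve_multiplier_step(V0, P, l2)      # step (26)
        rho, V0, s6 = solve_lyapunov_step(s8, s9, P, l1, l2)    # step (27)
        if abs(rho - rho_prev) <= tol:                          # stopping tolerance varpi
            break
        rho_prev = rho
    # the estimate of the DOA is the unit sublevel set {x : V0(x) <= 1}
    return V0, rho
```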

Remark 4

In the solving process of the optimization problem (25), the linesearch on \(\varrho \) can be carried out by alternating between the following two optimization problems,

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{s_6},{s_8},{s_9}} \varrho \\ s.t. \\ \left\{ \begin{array}{l} - ((\varrho - P){s_6} + ({V_0} - 1)) \in {\varSigma _3}, \\ - ((1 - {V_0}){s_8} + {{\dot{V}}_0}{s_9} + {l_2}) \in {\varSigma _3}. \\ \end{array} \right. \\ \end{array}\end{aligned}$$
(26)

and

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{V_0} ,{s_6}} \varrho \\ s.t. \\ \left\{ \begin{array}{l} {V_0} - {l_1} \in {\varSigma _3}, \\ - ((\varrho - P){s_6} + ({V_0} - 1)) \in {\varSigma _3}, \\ - ((1 - {V_0}){s_8} + {{\dot{V}}_0}{s_9} + {l_2}) \in {\varSigma _3}. \\ \end{array} \right. \\ \end{array}\end{aligned}$$
(27)

After finitely many iterations of (26) and (27), if \(|\varrho ^{(i)}-\varrho ^{(i - 1)}|\) is less than \(\varpi \), the optimal results are obtained, including the largest estimate of the DOA and the corresponding Lyapunov function.

Based on the discussion above, we now use two examples to demonstrate the feasibility and effectiveness of the proposed SOS-based method for estimating the DOA.

Example I

Assume that the parameters of system (1) are \(A = 2,\beta = \frac{1}{2},\mu = c = \frac{1}{4}, \varepsilon = \frac{1}{2},{\sigma _1} = {\gamma _1} = \frac{1}{8},\alpha = \frac{1}{2},{\sigma _2} = {\gamma _2} = \frac{1}{4},{\gamma _3} = \frac{1}{2}\); then the system model is

$$\begin{aligned} \left\{ \begin{array}{l} \dot{S} = 2 - \frac{1}{2}SI - \frac{1}{4}S + \frac{1}{4}I, \\ \dot{E} = \frac{1}{2}SI - E, \\ \dot{I} = \frac{1}{2}E - \frac{3}{2}I, \\ \dot{Q} = \frac{1}{8}E + \frac{1}{4}I - \frac{5}{4}Q. \\ \end{array} \right. \end{aligned}$$
(28)

The subsystem containing S, E and I can be written as

$$\begin{aligned} \left\{ \begin{array}{l} \dot{S} = 2 - \frac{1}{2}SI - \frac{1}{4}S + \frac{1}{4}I, \\ \dot{E}= \frac{1}{2}SI - E, \\ \dot{I} = \frac{1}{2}E - \frac{3}{2}I. \\ \end{array} \right. \end{aligned}$$
(29)

The basic reproduction number is \(R_{0}=\frac{4}{3}>1\), and the locally stable endemic equilibrium of (29) is \((6,\frac{6}{11},\frac{2}{11})\); setting \(S=x_1+6\), \(E=x_2+\frac{6}{11}\), \(I=x_3+\frac{2}{11}\), (29) can be converted to (30),

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}_1 = - \frac{{15}}{{44}}x_1 - \frac{11}{4}x_3 - \frac{1}{2}x_1x_3, \\ \dot{x}_2 = \frac{1}{11}x_1 + 3x_3 - x_2 + \frac{1}{2}x_1x_3, \\ \dot{x}_3 = \frac{1}{2}x_2 - \frac{3}{2}x_3. \\ \end{array} \right. \end{aligned}$$
(30)

The feasible region \(D_{01}\) of (30) is

$$\begin{aligned}{D_{01}} = \left\{ \begin{array}{l} \left. {\left( {{x_1},{x_2},{x_3}} \right) \in {{\mathbb {R}}^3}} \right| 0 < {x_1}+ {x_2} + {x_3} \le \frac{{13}}{{11}}, \\ {x_1} \ge - 6,{x_2} \ge - \frac{6}{{11}},{x_3} \ge - \frac{2}{{11}} \\ \end{array} \right\} .\end{aligned}$$

Setting the tolerance to \(\varpi = 0.01 \) (i.e., stopping when \(|\varrho ^{(i)} - \varrho ^{(i - 1)}|\le 0.01 \)) and choosing \( d_{V_0} = 2,d_{s_6 } = d_{s_8 } = 2,d_{s_9 } = 0,d_{l_1 } = 2,d_{l_2 } = 4\), with the optimization problems (22)–(25) we obtain the optimal solutions under the framework of SOS optimization; the Lyapunov function \(V_{01}(x_1,x_2,x_3)\) is

$$\begin{aligned}&{V_{01}}(x,y,z) \nonumber \\&\quad = \left( \begin{array}{l} 0.38327x_1^2 + 0.41562{x_1}{x_2} + 0.070052{x_1}{x_3} + \\ + 2.7785x_2^2 + 10.8934{x_2}{x_3} + 11.8535x_3^2 \\ \end{array} \right) ,\nonumber \\ \end{aligned}$$
(31)

and \(\varrho _{max}=0.0694\); the estimated domain of attraction of (30) obtained by the sum of squares optimization method is then

$$\begin{aligned}{D_{\varrho 0}} = \left\{ \begin{array}{l} \left. {({x_1},{x_2},{x_3})} \right| {V_{01}}({x_1},{x_2},{x_3}){|_{(30)}} \le 1, \\ {x_1} \ge - 6,{x_2} \ge - \frac{6}{{11}},{x_3} \ge - \frac{2}{{11}} \\ \end{array} \right\} .\end{aligned}$$

Figure 2 shows the estimated domain of attraction of (30), where we set \(x_1=x\), \(x_2=y\) and \(x_3=z\). The feasibility of the above results for the estimation of the DOA via the SOS method can also be observed from Fig. 2.

Fig. 2

The largest estimation of the DOA for the local stable equilibrium of (30) using SOS method
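As a complement to Fig. 2, the sketch below performs a rough Monte Carlo spot check of \(D_{\varrho 0}\): it samples the unit sublevel set of \(V_{01}\) in (31) and reports the largest value of \({\dot{V}}_{01}\) along (30) over the sample. This is only a numerical check, not a certificate, and it assumes the coefficients of (30) and (31) exactly as printed above.

```python
# Monte Carlo spot check of the DOA estimate of Example I (assumes (30) and (31) as printed).
import numpy as np

# quadratic form of V01 in (31): V01(x) = x^T Qm x
Qm = np.array([[0.38327,    0.41562/2,  0.070052/2],
               [0.41562/2,  2.7785,     10.8934/2],
               [0.070052/2, 10.8934/2,  11.8535]])

def f30(x):                       # right-hand side of the shifted system (30), x of shape (n, 3)
    x1, x2, x3 = x.T
    return np.stack([-15/44*x1 - 11/4*x3 - 0.5*x1*x3,
                     1/11*x1 + 3*x3 - x2 + 0.5*x1*x3,
                     0.5*x2 - 1.5*x3], axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=(200000, 3))
V = np.einsum('ni,ij,nj->n', x, Qm, x)
xin = x[(V <= 1.0) & (V > 1e-8)]                      # sampled points of {V01 <= 1} away from 0
dV = np.einsum('ni,ij,nj->n', f30(xin), 2*Qm, xin)    # dV01/dt = (grad V01) . f = 2 x^T Qm f
print("points sampled in {V01 <= 1}:", xin.shape[0])
print("largest dV01/dt over the sample:", dV.max())   # negative values support the estimate
```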

Example II

In this example, we assume that \(A = 4, \beta = \frac{1}{2}, \mu = c = \frac{1}{2}, \varepsilon = \frac{1}{2},{\sigma _1} = {\gamma _1} ={0}, \alpha = \frac{1}{6}, {\sigma _2} = {\gamma _2} = \frac{1}{6},{\gamma _3} = \frac{1}{2}\); the corresponding subsystem is

$$\begin{aligned} \left\{ \begin{array}{l} \dot{S} = 4 - \frac{1}{2}SI - \frac{1}{2}S + \frac{1}{2}I, \\ \dot{E}= \frac{1}{2}SI - E, \\ \dot{I} = \frac{1}{2}E - \frac{3}{2}I. \\ \end{array} \right. \end{aligned}$$
(32)

The basic reproduction number satisfies \(R_{0}=\frac{4}{3}>1\), and the locally stable endemic equilibrium of (32) is \(\left( 6, \frac{6}{5},\frac{2}{5}\right) \). Setting \(S=x_1+6\), \(E=x_2+\frac{6}{5}\), \(I=x_3+\frac{2}{5}\), (32) is transformed into

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}_1 = - \frac{{7}}{{10}}x_1 - \frac{5}{2}x_3 - \frac{1}{2}x_1x_3,\\ \dot{x}_2 = \frac{1}{5}x_1 + 3x_3 - x_2 + \frac{1}{2}x_1x_3, \\ \dot{x}_3 = \frac{1}{2}x_2 - \frac{3}{2}x_3. \\ \end{array} \right. \end{aligned}$$
(33)

The feasible region \(D_{02}\) is

$$\begin{aligned}{D_{02}} = \left\{ \begin{array}{l} \left. {\left( {{x_1},{x_2},{x_3}} \right) \in {{\mathbb {R}}^3}} \right| 0 < {x_1} + {x_2} + {x_3} \le \frac{2}{5}, \\ {x_1} \ge - 6,{x_2} \ge - \frac{6}{5},{x_3} \ge - \frac{2}{5} \\ \end{array} \right\} .\end{aligned}$$

Similar to Example I, we set the same tolerance and degrees for the variables; the optimal solution is

$$\begin{aligned}&{V_{02}}({x_1},{x_2},{x_3}) \nonumber \\&\quad = 0.065216x_1^2 + 0.11041{x_1}{x_2} + 0.053477{x_1}{x_3} \nonumber \\&\qquad + 0.30416x_2^2 + 1.1112{x_2}{x_3} + 1.336x_3^2 ,\nonumber \\ \end{aligned}$$
(34)

and \(\varrho _{max} =0.6330\). The largest estimate of the domain of attraction of (33) is

$$\begin{aligned} {D_{\varrho 1}} = \left\{ \begin{array}{l} \left. {({x_1},{x_2},{x_3})} \right| {V_{02}}({x_1},{x_2},{x_3}){|_{(33)}} \le 1, \\ {x_1} \ge - 6,{x_2} \ge - \frac{6}{5},{x_3} \ge - \frac{2}{5} \\ \end{array} \right\} . \end{aligned}$$

Figure 3 gives two estimates of the DOA of (33) obtained by different optimization methods. The smaller ellipsoid is obtained by the optimization method based on moment matrices, as described in [27, 38], and the larger ellipsoid is computed using SOS optimization. The effectiveness of the proposed results is easy to see from Fig. 3.

Remark 5

In fact, the sum of squares method provides an iterative linesearch process to obtain the most appropriate Lyapunov function and the largest estimate of the DOA; it is more effective than some previous optimization methods [27, 28, 38], which use a preselected Lyapunov function.

Fig. 3

The largest estimations of the DOA for the local stable equilibrium of (33) using SOS and moment method

4 Conclusions

This paper has analyzed the stability of a class of SEIQ epidemic models and estimated the domain of attraction of the locally stable endemic equilibrium by using the SOS optimization method. Based on the discussion of the stability of the equilibria, we have shown that the global stability of the locally stable endemic equilibrium in a positively invariant subset of the feasible region can be efficiently established. In addition, we have successfully used the SOS optimization technique to obtain the optimal estimate of the domain of attraction of the endemic equilibrium. With two numerical examples, simulation results have illustrated the feasibility and effectiveness of the results on both the proof theory and the computational method. Currently, some researchers are attempting to estimate the dependence of the domain of attraction on chaos emergence [46] or on the Hamilton energy function, and to discuss the relationship between the Hamilton energy function [47, 48] and the domain of attraction; addressing these much-studied topics with SOS optimization techniques is left for future work.