1 Introduction

Mathematical epidemiology has made significant progress in the understanding of disease transmission. Many epidemic models are described by ordinary differential equations. In the real world, however, epidemic dynamics are always affected by environmental noise, so investigating the influence of noise on epidemic dynamics is of interest to researchers. A stochastic population model is often formulated by constructing a discrete-time Markov chain based on the deterministic model (see [1] for example). In this way, the common SIR model with stochastic perturbations (see [26]) reads

$$ \textstyle\begin{cases} dS = [ {\mu ( {N - S} ) - \beta S I} ]\,dt+ \sigma_{0} S\,dB_{0} ( t ), \\ dI = [ {\beta S - ( {\mu +\varepsilon} )} ]I\,dt + \sigma_{1} I\,dB_{1} ( t ) , \\ dR = ( {\varepsilon I-\gamma R} )\,dt + \sigma_{2} R\,dB_{2} ( t ). \end{cases} $$
(1)

For model (1), Lin and Jiang [2] studied the existence of a global positive solution and then gave sufficient conditions for the survival and extinction of the disease. Lahrouz and Omari [3] proved the existence of a stationary distribution, which indicates that the disease persists. Zhao [4] gave the threshold for this stochastic SIR epidemic model. Recently, assuming that the coefficients are periodic, Liu et al. [5] gave sufficient conditions for the existence of a random periodic solution. For more about model (1) and its extensions, one can refer to [6, 7] and the references therein.

In contrast to the approach above, from the experimental point of view the parameters of a model are usually estimated by regression with certain confidence intervals, which reveals that the parameters exhibit random fluctuations to some extent. From the viewpoint of parameter randomization, many researchers have therefore formulated and studied stochastic epidemic models with randomized parameters (see [8–12] for example). Supposing that the contact rate is perturbed by noise as

$$\beta \to\beta + \sigma\dot{B}, $$

Gray et al. [12] established a classical stochastic SIS epidemic model of the form

$$ \textstyle\begin{cases} dS = [ {\mu ( {N - S} ) - \beta S I + \delta I} ]\,dt - \sigma SI\,dB ( t ), \\ dI = [ {\beta S - ( {\mu + \delta} )} ]I\,dt + \sigma SI\,dB ( t ) , \end{cases} $$
(2)

where \(S(t)\) represents the number of individuals susceptible to the disease at time t, and \(I(t)\) represents the number of infected individuals. N is the total population size, so that μN new members enter the population per unit time; β is the transmission coefficient between compartments S and I; μ is the natural death rate; δ is the recovery rate from the infectious class back to the susceptible class; \(B ( t )\) is a standard Brownian motion on the complete probability space \(( {\Omega,{\mathcal{F}},{{ ( {{\mathcal {F}_{t}}} )}_{t \ge0}},{P}} )\), and \({\sigma ^{2}} > 0\) is the intensity of the noise. The authors proved that this model has a unique global positive solution and derived the existence of a stationary distribution.

Inspired by the work of Capasso and Serio [13], Lin et al. [14] introduced the saturated incidence rate \({\frac{{\beta SI}}{{1 + aI}}}\) into epidemic model (2), where a is a positive constant and \(\frac{{\beta I}}{{1 + aI}}\) measures the infection force of the disease with an inhibition effect due to the crowding of the infectives. The model reads as follows:

$$ \textstyle\begin{cases} dS = [ {\mu ( {N - S} ) - \frac{{\beta SI}}{{1 + aI}} + \delta I} ]\,dt - \frac{{\sigma SI}}{{1 + aI}}\,dB ( t ), \\ dI = [ {\frac{{\beta S}}{{1 + aI}} - ( {\mu + \delta} )} ]I\,dt + \frac{{\sigma SI}}{{1 + aI}}\,dB ( t ). \end{cases} $$
(3)

It is interesting that the authors established a complete threshold classification for noise of any size. Moreover, they proved that the model is ergodic and derived an expression for its invariant density.

However, in the real world, many human infectious diseases, such as measles, mumps, rubella, chickenpox, diphtheria, pertussis, and influenza, fluctuate over time with seasonal variation [15]. This implies that the corresponding mathematical models may have periodic solutions. Therefore, it is important to investigate the periodic dynamics of epidemic models. For more about the periodic properties of epidemic models, one can see [16–19] and the references cited therein. At the same time, some of the diseases mentioned above often have no significant effect on the total population size. Consequently, in this paper, the total population is assumed to be a positive constant, denoted by N.

Motivated by the above, we present a stochastic SIS epidemic model with periodic coefficients as follows:

$$ \textstyle\begin{cases} dS ( t ) = [ {\mu ( t ) ( {N - S ( t )} ) - \frac{{\beta ( t )S ( t )I ( t )}}{{1 + a ( t )I ( t )}} + \delta ( t )I ( t )} ]\,dt - \frac{{\sigma ( t )S ( t )I ( t )}}{{1 + a ( t )I ( t )}}\,dB ( t ), \\ dI(t)= [ {\frac{{\beta ( t )S ( t )}}{{1 + a ( t )I ( t )}} - ( {\mu ( t ) + \delta ( t )} )} ]I ( t )\,dt + \frac{{\sigma ( t )S ( t )I ( t )}}{{1 + a ( t )I ( t )}}\,dB ( t ). \end{cases} $$
(4)

The main concerns of this paper are as follows:

  • What are the conditions for the existence of a random positive periodic solution of this model?

  • Under what conditions will the disease persist, and under what conditions will it die out?

  • Is there a threshold which more or less determines whether the disease survives?

For simplicity, for a function \(x(t)\) defined on \([0,\infty)\) we denote \({x^{*}} = \mathop{\sup} _{t \ge0} \{ {x ( t )} \}\) and \({x_{*}} = \mathop{\inf} _{t \ge 0} \{ {x ( t )} \}\). Define \(R_{0}^{T} = \frac{{\frac{1}{T}\int_{0}^{T} { [ {\beta ( t )N - \frac{{{\sigma^{2}} ( t ){N^{2}}}}{2}} ]\,dt} }}{{\frac{1}{T}\int_{0}^{T} { [ {\mu ( t ) + \delta ( t )} ]\,dt} }}\). One can easily check that \(\Gamma = \{ { ( {S,I} ) \in R_{+} ^{2}:S + I = N} \}\) is a positively invariant set of model (4), a property which is crucial for the proof of the existence of a periodic solution. In view of the biological meaning, we assume that the coefficients of model (4) are continuous, positive, bounded, T-periodic functions on \([0,\infty)\); in particular, \({\mu_{*}} > 0\).

The existence of a unique global positive solution can be proved by following the standard procedure in [12], so we omit it. In the following, we mainly focus on finding suitable conditions for the existence of a random periodic solution and for the persistence and extinction of (4). The main contributions of this paper are as follows.

Theorem 1.1

If \({R_{0}^{T} > 1}\) holds, then model (4) has at least one random positive T-periodic solution in Γ.

The proof is given in Section 2. Here, we illustrate Theorem 1.1 with an example.

Example 1

Consider model (4) and choose \(N = 1\), \(\mu ( t ) = 0.3 + 0.2\cos t\), \(\beta ( t ) = 0.7 + 0.3\sin t\), \(\delta ( t ) = 0.1\), \(a ( t ) = 0.5 + 0.3\sin2t\), and \(\sigma ( t ) = 0.3 + 0.2\sin t\) with the initial value \(( {S ( 0 ),I ( 0 )} )= ( {0.6,0.4} )\). Clearly, the coefficients are all positive 2π-periodic functions. A direct computation gives \(R_{0}^{T} = 1.6125 > 1\). Then Theorem 1.1 implies that model (4) has a 2π-periodic solution which lies in \((0,1)\); see Figure 1.
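The value of \(R_{0}^{T}\) above is easy to reproduce numerically. The Python sketch below uses exactly the coefficients of Example 1; the grid resolution is our own arbitrary choice:

```python
import numpy as np

# Coefficients of Example 1 (N = 1, period T = 2*pi).
N, T = 1.0, 2 * np.pi
beta = lambda t: 0.7 + 0.3 * np.sin(t)
sigma = lambda t: 0.3 + 0.2 * np.sin(t)
mu = lambda t: 0.3 + 0.2 * np.cos(t)
delta = lambda t: np.full_like(t, 0.1)

# For a T-periodic integrand, the mean over a uniform grid on [0, T)
# approximates (1/T) * int_0^T f(t) dt very accurately.
t = np.linspace(0.0, T, 400_000, endpoint=False)
num = (beta(t) * N - sigma(t) ** 2 * N ** 2 / 2).mean()
den = (mu(t) + delta(t)).mean()
R0T = num / den
print(round(R0T, 4))  # 1.6125
```

The exact averages are \(\overline{\beta N} = 0.7\), \(\overline{\sigma^{2}N^{2}/2} = 0.055\), and \(\overline{\mu + \delta} = 0.4\), so \(R_{0}^{T} = 0.645/0.4 = 1.6125\), in agreement with the text.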

Figure 1

Simulations of \(S(t)\) and \(I(t)\) of Example 1. The red (solid) lines are the solutions of the stochastic model, and the blue (dotted) lines are the paths of the corresponding deterministic model
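The experiment behind Figure 1 can be reproduced with a standard Euler–Maruyama discretisation of model (4). The sketch below is one possible implementation (step size, horizon, and random seed are our assumptions, not taken from the paper); it also confirms that the numerical paths stay on the invariant set \(S + I = N\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Periodic coefficients of Example 1.
N = 1.0
mu = lambda t: 0.3 + 0.2 * np.cos(t)
beta = lambda t: 0.7 + 0.3 * np.sin(t)
delta = lambda t: 0.1
a = lambda t: 0.5 + 0.3 * np.sin(2 * t)
sigma = lambda t: 0.3 + 0.2 * np.sin(t)

def simulate(t_end=40.0, dt=1e-3, S0=0.6, I0=0.4, noise=True):
    """Euler-Maruyama scheme for model (4); noise=False gives the deterministic paths."""
    n = int(t_end / dt)
    S, I = np.empty(n + 1), np.empty(n + 1)
    S[0], I[0] = S0, I0
    for k in range(n):
        t = k * dt
        inc = beta(t) * S[k] * I[k] / (1 + a(t) * I[k])    # saturated incidence
        dif = sigma(t) * S[k] * I[k] / (1 + a(t) * I[k])   # diffusion coefficient
        dB = rng.normal(0.0, np.sqrt(dt)) if noise else 0.0
        S[k + 1] = S[k] + (mu(t) * (N - S[k]) - inc + delta(t) * I[k]) * dt - dif * dB
        I[k + 1] = I[k] + (inc - (mu(t) + delta(t)) * I[k]) * dt + dif * dB
    return S, I

S, I = simulate()
print(np.allclose(S + I, N))  # True: the noise terms cancel, so S + I stays at N
```

Plotting \(S\) and \(I\) against time (with `noise=False` for the blue dotted curves) yields paths of the kind shown in Figure 1.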

Remark 1

Khasminskii [20] showed that a Markov process \(X(t)\) is T-periodic if and only if its transition probability function is T-periodic and the function \(P_{0}(t, A) =P(X(t)\in A)\) satisfies the equation

$${P_{0}} ( {s,A} ) = \int_{{R^{n}}} {{P_{0}} ( {s,dz} )P ( {s,z,s + T,A} )} \equiv{P_{0}} ( {s + T,A} ), $$

where \(A \in\Lambda\) and Λ is a σ-algebra. This means that Figure 1 shows the periodic behavior of model (4) in the sense of distribution.

The following theorems concern the persistence in mean and extinction of model (4).

Theorem 1.2

Let \(( {S ( t ),I ( t )} )\) be the solution of (4) with the initial value \(( {S ( 0 ),I ( 0 )} )\in R_{+}^{2}\). If \({R_{0}^{T} > 1}\), then the disease will be persistent in mean, i.e.,

$$ \mathop{\lim\inf} _{t \to\infty} \frac{1}{t} \int_{0}^{t} {I ( s )\,ds} \ge{ \bigl( {{\beta^{*}} + {a^{*}} + {{ \bigl( {{a^{*}}} \bigr)}^{2}}N} \bigr)^{ - 1}} \bigl( {R_{0}^{T} - 1} \bigr) > 0. $$
(5)
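For the coefficients of Example 1, the lower bound in (5) can be evaluated directly; the sketch below does so in Python (the suprema \(\beta^{*} = 1\) and \(a^{*} = 0.8\) can also be read off analytically):

```python
import numpy as np

# Example 1: N = 1, beta(t) = 0.7 + 0.3 sin t, a(t) = 0.5 + 0.3 sin 2t,
# sigma(t) = 0.3 + 0.2 sin t, and mu(t) + delta(t) averages to 0.4.
N, T = 1.0, 2 * np.pi
t = np.linspace(0.0, T, 400_000, endpoint=False)

beta_sup = (0.7 + 0.3 * np.sin(t)).max()   # beta* = 1.0
a_sup = (0.5 + 0.3 * np.sin(2 * t)).max()  # a*   = 0.8
R0T = ((0.7 + 0.3 * np.sin(t)) * N
       - (0.3 + 0.2 * np.sin(t)) ** 2 * N ** 2 / 2).mean() / 0.4

# Lower bound (5) on the long-run average of I(t).
bound = (R0T - 1) / (beta_sup + a_sup + a_sup ** 2 * N)
print(f"{bound:.4f}")  # 0.2510
```

So for Example 1 the disease level is guaranteed to average at least about 0.25 in the long run.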

Theorem 1.3

Let \(( {S ( t ),I ( t )} )\) be the solution of model (4) with the initial value \(( {S ( 0 ),I ( 0 )} )\in R_{+}^{2}\). If one of the two assumptions holds

  1. (A)

    \(\mathop{\sup} _{t \ge0} ( {{\sigma^{2}} ( t )N - \mu ( t )} ) \le0\) and \(R_{0}^{T} < 1\),

  2. (B)

    \(\frac{1}{T}\int_{0}^{T} {{ [ {\frac{{{{ [ {N\beta ( s ) - \theta ( {\mu ( s ) + \delta ( s )} )} ]}^{2}}}}{{2{N^{2}}{\sigma^{2}} ( s )}} - ( {1 - \theta} ) ( {\mu ( s ) + \delta ( s )} )} ]}\,ds< 0}\) holds for some constant \(\theta\in[0,1)\),

then the disease I goes extinct exponentially, namely

$$\begin{aligned} &\mathop{\lim\sup} _{t \to\infty} \frac{1}{t}\log\frac{{I ( t )}}{{I ( 0 )}} \le \frac{1}{T} \int_{0}^{T} { \bigl( {\mu ( t ) + \delta ( t )} \bigr)\,dt} \bigl( {R_{0}^{T} - 1} \bigr)< 0\quad \textit{a.s. if (A) holds}; \\ &\mathop{\lim\sup} _{t \to\infty} \frac{1}{t}\log\frac{{I ( t )}}{{I ( 0 )}} \le \frac{1}{T} \int_{0}^{T} {{ \biggl[ {\frac{{{{ [ {N\beta ( s ) - \theta ( {\mu ( s ) + \delta ( s )} )} ]}^{2}}}}{{2{N^{2}}{\sigma^{2}} ( s )}} - ( {1 - \theta} ) \bigl( {\mu ( s ) + \delta ( s )} \bigr)} \biggr]}\,ds< 0}\\ & \phantom{\mathop{\lim\sup} _{t \to\infty} \frac{1}{t}\log\frac{{I ( t )}}{{I ( 0 )}} \le}\textit{a.s. if (B) holds.} \end{aligned}$$

Moreover, \(( {S ( t ),I ( t )} )\) exponentially tends to \(( {N,0} )\) as \(t\rightarrow\infty\).
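Both conditions in case (A) and the resulting decay rate are easy to check numerically. The sketch below uses hypothetical small-noise parameters chosen only so that assumption (A) holds; they are not taken from the paper:

```python
import numpy as np

# Hypothetical parameters: small noise sigma = 0.2 and a reduced contact
# rate beta(t) = 0.3 + 0.1 sin t, chosen so that assumption (A) is satisfied.
N, T = 1.0, 2 * np.pi
t = np.linspace(0.0, T, 400_000, endpoint=False)
mu = 0.3 + 0.2 * np.cos(t)
beta = 0.3 + 0.1 * np.sin(t)
delta, sigma = 0.1, 0.2

small_noise = (sigma ** 2 * N - mu).max() <= 0          # sup(sigma^2 N - mu) <= 0
R0T = (beta * N - sigma ** 2 * N ** 2 / 2).mean() / (mu + delta).mean()
decay = (mu + delta).mean() * (R0T - 1)                 # bound on limsup (1/t) log(I(t)/I(0))

print(small_noise, round(R0T, 3), round(decay, 3))      # True 0.7 -0.12
```

With these values \(R_{0}^{T} = 0.7 < 1\), and Theorem 1.3(A) gives exponential extinction at rate at least 0.12.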

Remark 2

From Theorems 1.2 and 1.3, under the assumption \(\mathop{\sup} _{t \ge0} ( {{\sigma^{2}} ( t )N - \mu ( t )} ) \le0\), the disease persists if \({R_{0}^{T} > 1}\), while it goes extinct if \({R_{0}^{T} < 1}\). Hence we regard \({R_{0}^{T} }\) as the threshold of the stochastic model (4).

Remark 3

Let the coefficients of model (4) all be constants and take \(\theta=0\); then Theorems 1.2 and 1.3 are consistent with the related results in [12]. Thus some known results are generalized and improved.

2 Proofs

First, we introduce some results concerning the periodic Markov process.

Definition 2.1

(see [20])

A stochastic process \(\xi(t) = \xi(t, \omega ), t\in R\), is said to be periodic with period T if, for every finite sequence of numbers \(t_{1}, t_{2}, \ldots, t_{n}\), the joint distribution of random variables \(\xi(t_{1} + h), \ldots, \xi(t_{n} + h)\) is independent of h, where \(h = kT\ (k = \pm1,\pm2, \ldots)\).

Consider the stochastic differential equation

$$ X ( t ) = X ( {{t_{0}}} ) + \int_{{t_{0}}}^{t} {f \bigl( {s,X ( s )} \bigr)\,ds} + \int_{{t_{0}}}^{t} {g \bigl( {s,X ( s )} \bigr)\,dB ( s )} ,\quad X \in{R^{n}}. $$
(6)

Lemma 2.2

(see [20])

Suppose that the coefficients of (6) are T-periodic in t and satisfy the conditions

$$\bigl\vert {f ( {t,x} ) - f ( {t,y} )} \bigr\vert + \bigl\vert {g ( {t,x} ) - g ( {t,y} )} \bigr\vert \le B \vert {x - y} \vert \quad \textit{and}\quad \bigl\vert {f ( {t,x} )} \bigr\vert + \bigl\vert {g ( {t,x} )} \bigr\vert \le B \bigl( {1 + \vert x \vert } \bigr) $$

for some constant \(B>0\) in every cylinder \(I \times\mathcal{D}\); and suppose further that there exists a function \(V(t, x) \in C^{2}\) in \(R^{n}\) which is T-periodic in t and satisfies the following conditions:

$$\mathop{\inf} _{ \vert x \vert \ge r} \bigl\{ {V ( {t,x} )} \bigr\} \to\infty\quad \textit{as }r \to \infty\quad \textit{and} \quad \mathop{\sup} _{ \vert x \vert > R} \mathcal{L}V ( {t,x} ) = - {A_{R}} \to - \infty\quad \textit{as }R \to\infty, $$

where the operator \(\mathcal{L}\) is given by \(\mathcal{L} = \frac{\partial}{{\partial t}} + \sum_{l = 1}^{n} {{f_{l}}} ( {t,x} )\frac{\partial}{{\partial{x_{l}}}} + \frac{1}{2}\sum_{i,j = 1}^{n} {{g_{i}}} ( {t,x} ){g_{j}} ( {t,x} )\frac{{{\partial^{2}}}}{{\partial{x_{i}}\,\partial{x_{j}}}}\). Then there exists a solution of (6) which is a T-periodic Markov process.

Proof of Theorem 1.1

Set \((S(0),I(0))\in\Gamma\); then \((S(t),I(t))\in\Gamma\) for all \(t>0\). Let \(y = \ln\frac{N}{{N - I}}\); then \(y(t)\in R_{+}\) satisfies

$$\begin{aligned} dy ( t ) = {}& {-} \biggl[ \mu ( t ){e^{y ( t )}} - \mu ( t ) - \frac{{\beta ( t )N ( {{e^{y ( t )}} - 1} )}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} \\ &{}- \frac{{{\sigma^{2}} ( t ){N^{2}}{{ ( {{e^{y ( t )}} - 1} )}^{2}}}}{{2{{ [ { ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )} ]}^{2}}}} \biggr]\,dt \\ &{}- \delta ( t ) \bigl( {{e^{y ( t )}} - 1} \bigr)\,dt+ \frac{{\sigma ( t )N ( {{e^{y ( t )}} - 1} )}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}}\,dB ( t ). \end{aligned}$$
(7)

Since \(S(t)+I(t)\equiv N\), it suffices to prove that (7) admits a periodic solution. Obviously, the coefficients of (7) are all periodic. Denote

$$\lambda ( t ) = \beta ( t )N - \mu ( t ) - \delta ( t ) - \frac{{ ( {p + 1} ){\sigma^{2}} ( t ){N^{2}}}}{2}. $$

Since \({R_{0}^{T} > 1}\), we can choose \(p>0\) such that \(\frac{1}{T}\int_{0}^{T} {\lambda ( t )\,dt} >0\). Define a \(C^{2}\)-function \(V:[0,\infty) \times R_{+}^{2}\rightarrow R\) by

$$V \bigl( {t,y ( t )} \bigr) = \frac{{\gamma ( t )}}{p}{ \biggl[ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}} + \frac {{{e^{y ( t )}}}}{N}, $$

where \(\gamma ( t ) = \frac{{p\frac{1}{T}\int_{0}^{T} {\lambda ( t )\,dt} \int_{t}^{t + T} {{e^{p\int_{s}^{t} {\lambda ( u )\,du} }}\,ds} }}{{1 - {e^{ - p\int_{0}^{T} {\lambda ( s )\,ds} }}}}\) is the uniquely positive T-periodic solution of the equation

$$ \frac{{d\gamma ( t )}}{{dt}} = p\lambda ( t )\gamma ( t ) - p \frac{1}{T} \int_{0}^{T} {\lambda ( t )\,dt}. $$
(8)
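As a numerical sanity check (not part of the proof), one can verify that the stated γ(t) indeed satisfies (8): approximate the integrals by the trapezoidal rule and compare a finite-difference derivative of γ against the right-hand side of (8). The sketch below uses Example 1's coefficients with the assumed choice p = 1:

```python
import numpy as np

# lambda(t) for Example 1's coefficients (N = 1) with the assumed p = 1.
N, T, p = 1.0, 2 * np.pi, 1.0

def lam(s):
    beta, mu, delta = 0.7 + 0.3 * np.sin(s), 0.3 + 0.2 * np.cos(s), 0.1
    sigma = 0.3 + 0.2 * np.sin(s)
    return beta * N - mu - delta - (p + 1) * sigma ** 2 * N ** 2 / 2

n = 20_000
s = np.linspace(0.0, 2 * T, 2 * n + 1)        # grid covering [t, t + T] for t in [0, T]
h = s[1] - s[0]
v = lam(s)
L = np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1]) / 2 * h)])  # L(t) = int_0^t lam(u) du
lam_bar = L[n] / T                            # period average of lambda; positive here

def gamma(i):
    """gamma at s[i] (i <= n) via the closed formula, trapezoidal quadrature."""
    g = np.exp(p * (L[i] - L[i:i + n + 1]))   # e^{p int_s^t lam(u) du} for s in [t, t + T]
    F = h * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)
    return p * lam_bar * F / (1 - np.exp(-p * L[n]))

i = n // 2                                    # check equation (8) at an interior grid point
dgamma = (gamma(i + 1) - gamma(i - 1)) / (2 * h)
print(abs(dgamma - (p * lam(s[i]) * gamma(i) - p * lam_bar)) < 1e-4)  # True
```

The check confirms \(d\gamma/dt = p\lambda(t)\gamma(t) - p\,\frac{1}{T}\int_{0}^{T}\lambda(t)\,dt\) up to discretisation error.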

Applying the Itô formula to model (7), we get

$$\begin{aligned} dV \bigl( {t,y ( t )} \bigr) ={}& \mathcal{L}V \bigl( {t,y ( t )} \bigr)\,dt \\ &{}+ \biggl\{ {\frac{{\sigma ( t )N{{ [ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} ]}^{ - p}}}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} - \frac{{\sigma ( t )N ( {{e^{y ( t )}} - 1} ){e^{y ( t )}}}}{{N [ { ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )} ]}}} \biggr\} \,dB ( t ), \end{aligned}$$

where

$$\begin{aligned} \mathcal{L}V \bigl( {t,y ( t )} \bigr) ={}& - { \biggl[ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}} \biggl( { \frac{{\beta ( t )N}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} - \bigl( {\mu ( t ) + \delta ( t )} \bigr)} \biggr)\gamma ( t ) \\ &{}- \frac{1}{p} \frac{{d\gamma ( t )}}{{dt}} \\ &{}- { \biggl[ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}}\frac{{ ( {p + 1} ){\sigma^{2}} ( t ){N^{2}}}}{{2{{ [ { ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )} ]}^{2}}}} \gamma ( t ) - \frac{{\mu ( t ){e^{2y ( t )}}}}{N} \\ &{} - \frac{{\delta ( t ){e^{y ( t )}} ( {{e^{y ( t )}} - 1} )}}{N} \\ &{}+ \frac{{{e^{y ( t )}}}}{N} \biggl[ {\mu ( t ) + \frac{{\beta ( t )N ( {{e^{y ( t )}} - 1} )}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} + \frac{{{\sigma^{2}} ( t ){N^{2}}{{ ( {{e^{y ( t )}} - 1} )}^{2}}}}{{{{ [ { ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )} ]}^{2}}}}} \biggr] \\ = {}& - { \biggl[ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}} \biggl[ \biggl( { \beta ( t )N - \bigl( {\mu ( t ) + \delta ( t )} \bigr) - \frac{{ ( {p + 1} ){\sigma ^{2}} ( t ){N^{2}}}}{2}} \biggr)\gamma ( t ) \\ &{}- \frac{1}{p}\frac{{d\gamma ( t )}}{{dt}} \biggr]- \frac{{\mu ( t ){e^{2y ( t )}}}}{{2N}} + H ( t ). \end{aligned}$$
(9)

Here,

$$\begin{aligned} H ( t ) \le{}& { \biggl[ {\frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}} \frac{{\beta ( t ){N^{2}}a ( t ) ( {{e^{y ( t )}} - 1} )}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} - \frac{{\mu ( t ){e^{2y ( t )}}}}{{2N}} - \frac{{\delta ( t ){e^{y ( t )}} ( {{e^{y ( t )}} - 1} )}}{N} \\ &{}+ \frac{{{e^{y ( t )}}}}{N} \biggl[ {\mu ( t ) + \frac{{\beta ( t )N ( {{e^{y ( t )}} - 1} )}}{{ ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )}} + \frac{{{\sigma^{2}} ( t ){N^{2}}{{ ( {{e^{y ( t )}} - 1} )}^{2}}}}{{{{ [ { ( {1 + Na ( t )} ){e^{y ( t )}} - Na ( t )} ]}^{2}}}}} \biggr] \\ \le{}& {-} \frac{{\mu ( t ){e^{2y ( t )}}}}{{2N}} + \frac{{{e^{y ( t )}}}}{N} \bigl[ {\mu ( t ) + \beta ( t )N + {\sigma^{2}} ( t ){N^{2}}} \bigr] + \beta ( t ){N^{3 - p}}a ( t ) \\ \le{}& {L_{0}} =: \mathop{\sup} _{x \ge0} \biggl\{ { - \frac{{{\mu _{*}}{x^{2}}}}{{2N}} + \frac{x}{N} \bigl[ {{\mu^{*}} + {\beta^{*}}N + {{ \bigl( {{\sigma^{*}}} \bigr)}^{2}} {N^{2}}} \bigr] + {\beta^{*}} {N^{3 - p}} {a^{*}}} \biggr\} . \end{aligned}$$

In view of (8), we obtain

$$\mathcal{L}V \bigl( {t,y ( t )} \bigr) \le - \frac {1}{T} \int_{0}^{T} {\lambda ( t )\,dt} { \biggl[ { \frac{{N ( {{e^{y ( t )}} - 1} )}}{{{e^{y ( t )}}}}} \biggr]^{ - p}} - \frac{{{\mu_{*}}{e^{2y ( t )}}}}{{2N}} + {L_{0}}. $$

Define a bounded closed set \(\mathcal{D} = [ { \frac{1}{r},r} ]\), where r is a sufficiently large positive number. Then

$$\begin{aligned} &{\mathcal {L}}V \bigl( {t,y ( t )} \bigr)\\ &\quad \le\max \biggl\{ { - \frac{1}{T} \int_{0}^{T} {\lambda ( t )\,dt} {{ \bigl[ {N \bigl( {1 - {e^{ - \frac{1}{r}}}} \bigr)} \bigr]}^{ - p}} + {L_{0}}, - \frac{{{\mu_{*}}{e^{2r}}}}{{2N}} + {L_{0}}} \biggr\} \to - \infty \quad \mbox{as }r \to\infty. \end{aligned}$$

On the other hand,

$$\mathop{\inf} _{y \ge r} \bigl\{ {V ( {t,y} )} \bigr\} \ge \frac{{{e^{r}}}}{N} \to\infty \quad \mbox{and} \quad \mathop{\inf} _{0 < y \le\frac{1}{r}} \bigl\{ {V ( {t,y} )} \bigr\} \ge\frac{{{\gamma_{*}}}}{p}{ \bigl[ {N \bigl( {1 - {e^{-\frac{1}{r}}}} \bigr)} \bigr]^{ - p}} \to\infty \quad \mbox{as }r \to \infty. $$

By Lemma 2.2, (7) has at least one positive periodic solution. Thus, (4) has a periodic solution which lies in Γ. The proof is complete. □

Remark 4

The proofs of Theorem 1.2 and (A) in Theorem 1.3 are similar to those in [12], and hence are omitted.

Proof of (B) in Theorem 1.3

By the Itô formula, from (4) we have

$$\begin{aligned} d\ln I ( t ) ={}& \biggl[ {\frac{{\beta ( t )S ( t )}}{{1 + a ( t )I ( t )}} - \bigl( { \mu ( t ) + \delta ( t )} \bigr) - \frac{{{\sigma^{2}} ( t ){S^{2}} ( t )}}{{2{{ [ {1 + a ( t )I ( t )} ]}^{2}}}}} \biggr]\,dt + \frac{{\sigma ( t )S ( t )}}{{1 + a ( t )I ( t )}}\,dB ( t ) \\ \le{}& \biggl[ {\frac{{ [ {N\beta ( t ) - \theta ( {\mu ( t ) + \delta ( t )} )} ]S ( t )}}{{N [ {1 + a ( t )I ( t )} ]}} - ( {1 - \theta} ) \bigl( {\mu ( t ) + \delta ( t )} \bigr) - \frac {{{\sigma^{2}} ( t ){S^{2}} ( t )}}{{2{{ [ {1 + a ( t )I ( t )} ]}^{2}}}}} \biggr]\,dt \\ &{}+ \frac{{\sigma ( t )S ( t )}}{{1 + a ( t )I ( t )}}\,dB ( t ) \\ \le{}& \biggl[ {\frac{{{{ [ {N\beta ( t ) - \theta ( {\mu ( t ) + \delta ( t )} )} ]}^{2}}}}{{2{N^{2}}{\sigma^{2}} ( t )}} - ( {1 - \theta} ) \bigl( {\mu ( t ) + \delta ( t )} \bigr)} \biggr]\,dt + \frac{{\sigma ( t )S ( t )}}{{1 + a ( t )I ( t )}}\,dB ( t ). \end{aligned}$$

Integrating both sides from 0 to t and dividing by t, we get

$$\frac{1}{t}\ln\frac{{I ( t )}}{{I ( 0 )}} \le\frac{1}{t} \int_{0}^{t} { \biggl[ {\frac{{{{ [ {N\beta ( s ) - \theta ( {\mu ( s ) + \delta ( s )} )} ]}^{2}}}}{{2{N^{2}}{\sigma^{2}} ( s )}} - ( {1 - \theta} ) \bigl( {\mu ( s ) + \delta ( s )} \bigr)} \biggr]} \,ds + \frac{{M(t)}}{t} , $$

where \(M(t) = \int_{0}^{t} {\frac{{\sigma ( s )S ( s )}}{{1 + a ( s )I ( s )}}\,dB ( s )} \) is a continuous local martingale satisfying \(\mathop{\lim} _{t \to\infty} \frac{{M(t)}}{t} = 0\) a.s. by the strong law of large numbers for martingales. Then we conclude

$$\mathop{\lim\sup} _{t \to\infty} \frac{1}{t}\ln\frac{{I ( t )}}{{I ( 0 )}} \le \frac{1}{T} \int_{0}^{T} { \biggl[ {\frac{{{{ [ {N\beta ( s ) - \theta ( {\mu ( s ) + \delta ( s )} )} ]}^{2}}}}{{2{N^{2}}{\sigma^{2}} ( s )}} - ( {1 - \theta} ) \bigl( {\mu ( s ) + \delta ( s )} \bigr)} \biggr]\,ds}< 0. $$

The proof is complete. □

3 Concluding remarks

In this paper, a stochastic SIS epidemic model with periodic coefficients is formulated and studied. First, we define a parameter \({R_{0}^{T}}\). Under the assumption that the total population is fixed at N, we show that if \({R_{0}^{T} > 1}\), the model has at least one random periodic solution which is nontrivial and located in \((0,N)\times(0,N)\). This may give a better understanding of how periodic seasonal variation affects the disease. Then several conditions for persistence in mean and extinction of the disease are established. In detail,

  • When \({R_{0}^{T}}<1\), the disease will go extinct with probability 1 under mild extra conditions.

  • When \({R_{0}^{T}}>1\), the disease will be persistent in mean.

When the noise is small, \({R_{0}^{T}}\) is clearly the threshold of model (4), which can easily be used to determine whether or not the disease will survive.

Letting \(\sigma\equiv0\), we have \(R^{T} = \frac{{\frac{1}{T}\int_{0}^{T} { [ {\beta ( t )N } ]\,dt} }}{{\frac{1}{T}\int _{0}^{T} { [ {\mu ( t ) + \delta ( t )} ]\,dt} }}\), which is the threshold of the corresponding deterministic SIS model. Clearly, \({R_{0}^{T}} < R^{T}\); this means that the disease may go extinct due to the noise even though the deterministic SIS model predicts its survival. Case (B) of Theorem 1.3 shows that large noise can drive the disease to extinction. In general, the noise has a negative effect on the persistence of the disease.
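For the coefficients of Example 1, the gap between the two thresholds is concrete; a short Python check (the grid resolution is our own choice):

```python
import numpy as np

# Example 1's coefficients: the noise term sigma^2 N^2 / 2 lowers the threshold.
N, T = 1.0, 2 * np.pi
t = np.linspace(0.0, T, 400_000, endpoint=False)
beta = 0.7 + 0.3 * np.sin(t)
sigma = 0.3 + 0.2 * np.sin(t)
den = (0.3 + 0.2 * np.cos(t) + 0.1).mean()        # average of mu(t) + delta(t)

RT = (beta * N).mean() / den                      # deterministic threshold
R0T = (beta * N - sigma ** 2 * N ** 2 / 2).mean() / den
print(round(RT, 4), round(R0T, 4), R0T < RT)      # 1.75 1.6125 True
```

Here the noise lowers the threshold from 1.75 to 1.6125; for parameter sets with \(R_{0}^{T} < 1 < R^{T}\), the noise alone decides extinction.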

Compared with the autonomous SIS models [12, 13], the constant threshold there is replaced here by its average over one period, and thus those results are generalized.

Finally, we state a conjecture on the extinction of the disease:

  • When \({R_{0}^{T}}<1\), the disease modeled by (4) will almost surely go extinct without any extra condition.