1 Introduction

The logistic equation is one of the most important models in mathematical ecology. A classical generalized logistic equation (the Gilpin-Ayala model [1]) is described by the ordinary differential equation

$$ dN(t)/dt=N(t)\bigl[r-aN^{\theta}(t)\bigr], $$
(1)

where \(N(t)\) denotes the population size and r, a and θ are positive constants: r represents the growth rate and \((r/a)^{1/\theta}\) is the carrying capacity. It is well known that the positive equilibrium state \(N^{\ast}=(r/a)^{1/\theta}\) is globally asymptotically stable (see, e.g., [1]).
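
As a quick numerical illustration of this equilibrium (not part of the original analysis), the ODE (1) can be integrated with a simple forward Euler step; the parameter values below are hypothetical and chosen only so that \(N^{\ast}=(r/a)^{1/\theta}=4\).

```python
# Forward-Euler check that solutions of (1) approach N* = (r/a)^(1/theta).
# Parameter values are hypothetical, chosen only for illustration.
r, a, theta = 0.4, 0.2, 0.5
dt, T = 0.01, 200.0
N = 0.1                                  # initial population size
for _ in range(int(T / dt)):
    N += dt * N * (r - a * N ** theta)

N_star = (r / a) ** (1.0 / theta)        # positive equilibrium, here 4.0
print(N, N_star)                         # N(T) should be close to N_star
```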

However, the natural growth of species is often subject to various types of environmental noise [2–4]. It is therefore useful to study how these noises affect the population dynamics. To begin with, let us consider white noise. Suppose that r and a are perturbed by white noise, with

$$r\rightarrow r+\sigma_{1}\dot{B}_{1}(t), \qquad a \rightarrow a-\sigma_{2}\dot{B}_{2}(t), $$

it then follows from (1) that

$$ dN(t)=N(t)\bigl[r-aN^{\theta}(t)\bigr]\, {dt}+ \sigma_{1} N(t)\, dB_{1}(t)+\sigma_{2} N^{1+\theta}(t)\, dB_{2}(t), $$
(2)

where \(\{B_{1}(t)\}_{t\geq0}\) and \(\{B_{2}(t)\}_{t\geq0}\) are two independent standard Brownian motions defined on a complete probability space \((\Omega,\mathcal{F},P)\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\), and \(\sigma_{i}^{2}\) stands for the intensity of the white noise \(\dot{B}_{i}(t)\), \(i=1,2\).
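
For readers who wish to experiment with (2) before switching is introduced, a minimal Euler-Maruyama sketch is given below. The scheme, step size, and parameter values are illustrative assumptions and are not the method used for the figures in Section 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama sketch for dN = N(r - a N^theta) dt + s1 N dB1 + s2 N^(1+theta) dB2.
# All parameter values here are hypothetical and chosen only for illustration.
r, a, theta = 0.4, 0.2, 1.0
s1, s2 = 0.3, 0.1
dt, T = 1e-3, 50.0
n = int(T / dt)

N = np.empty(n + 1)
N[0] = 0.5
for k in range(n):
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)     # independent increments
    drift = N[k] * (r - a * N[k] ** theta)
    diff  = s1 * N[k] * dB1 + s2 * N[k] ** (1.0 + theta) * dB2
    # Crude positivity guard: the exact solution of (2) remains positive,
    # but the discretised one need not.
    N[k + 1] = max(N[k] + drift * dt + diff, 1e-12)

print(N[-1])
```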

Let us now take another type of environmental noise into account. It has been noted [5] that population models may experience random changes in their structure and parameters caused by factors such as nutrition or rainfall. These random changes cannot be described by white noise [6, 7]. Several authors [6–9] have suggested modeling these random changes by a process \(\beta(t)\), where \(\beta(t)\) is a right-continuous Markov chain taking values in a finite state space \(S=\{1,\ldots,m\}\). Then model (2) becomes

$$\begin{aligned} dN(t) =&N(t) \bigl[r\bigl(\beta(t)\bigr)-a\bigl(\beta(t) \bigr)N^{\theta(\beta(t))}(t) \bigr]\, {dt} \\ &+\sigma_{1}\bigl(\beta(t)\bigr) N(t)\, dB_{1}(t)+ \sigma_{2}\bigl(\beta(t)\bigr) N^{1+\theta(\beta(t))}(t)\, dB_{2}(t). \end{aligned}$$
(3)

In regime \(i\in S\), the system obeys

$$ dN(t)=N(t) \bigl[r(i)-a(i)N^{\theta(i)}(t) \bigr]\, {dt}+ \sigma_{1}(i) N(t)\, dB_{1}(t)+\sigma_{2}(i) N^{1+\theta(i)}(t)\, dB_{2}(t). $$
(4)

System (3) operates as follows. Suppose that \(\beta(0)=i_{0}\in S\); then (3) obeys

$$dN(t)=N(t) \bigl[r(i_{0})-a(i_{0})N^{\theta(i_{0})}(t) \bigr]\, {dt}+\sigma_{1}(i_{0}) N(t)\, dB_{1}(t)+ \sigma_{2}(i_{0}) N^{1+\theta(i_{0})}(t)\, dB_{2}(t) $$

for a random amount of time until \(\beta(t)\) jumps to another state, say, \(i_{1}\in S\). Then system (3) obeys

$$dN(t)=N(t) \bigl[r(i_{1})-a(i_{1})N^{\theta(i_{1})}(t) \bigr]\, {dt}+\sigma_{1}(i_{1}) N(t)\, dB_{1}(t)+ \sigma_{2}(i_{1}) N^{1+\theta(i_{1})}(t)\, dB_{2}(t) $$

until \(\beta(t)\) jumps to a new state again. Therefore, system (3) can be regarded as a hybrid system switching among the m subsystems (4) according to the law of \(\beta(t)\); a simulation sketch of this switching mechanism is given below.
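
To make this switching mechanism concrete, the regime process \(\beta(t)\) can be simulated by drawing an exponential holding time in the current state and then jumping according to the transition rates; between jumps, \(N(t)\) simply follows the corresponding subsystem (4). The sketch below is a minimal illustration and assumes the generator notation \(Q=(q_{ij})\) introduced in Section 2; the sample generator is the one used in Case 1 of Section 3 (states indexed 0 and 1 in the code).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_regimes(Q, i0, T):
    """Jump times and visited states of the chain beta(t) generated by Q on [0, T].

    In state i the holding time is Exp(-q_ii); the next state j != i is chosen
    with probability q_ij / (-q_ii). Between consecutive jump times, model (3)
    coincides with the corresponding subsystem (4).
    """
    Q = np.asarray(Q, dtype=float)
    t, i = 0.0, i0
    times, states = [0.0], [i0]
    while True:
        exit_rate = -Q[i, i]
        t += rng.exponential(1.0 / exit_rate)        # exponential holding time
        if t >= T:
            return times, states
        p = np.where(np.arange(len(Q)) == i, 0.0, Q[i])
        p /= exit_rate                               # jump distribution q_ij / (-q_ii)
        i = int(rng.choice(len(Q), p=p))
        times.append(t)
        states.append(i)

# Two-state example; this is the Case 1 generator of Section 3 (q12 = 0.3, q21 = 0.7).
Q = [[-0.3, 0.3], [0.7, -0.7]]
print(simulate_regimes(Q, 0, 20.0))
```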

In recent years, population systems under Markovian switching have received much attention. Several interesting properties, for example stochastic boundedness, extinction, stochastic permanence, positive recurrence, and the existence of invariant distributions, have been obtained (see, e.g., [6–20]). In particular, Takeuchi et al. [10] considered a two-dimensional autonomous Lotka-Volterra predator-prey model with Markovian switching and revealed a significant effect of switching on the population dynamics: although each subsystem develops periodically, the switched system is neither permanent nor dissipative.

In the study of deterministic population models, one usually seeks the equilibria and then investigates their stability. For example, model (1) has two equilibria: the trivial equilibrium state 0 and the positive equilibrium state \(N^{\ast}=(r/a)^{1/\theta}\). If \(r<0\), then 0 is globally asymptotically stable; if \(r>0\), then \(N^{\ast}\) is globally asymptotically stable. System (3), however, has no positive equilibrium state, so its solution cannot tend to a positive constant. An interesting question therefore arises naturally: does model (3) still exhibit analogous stability properties? In this letter, we investigate this issue. In Section 2, we show that there is a critical value closely related to the stationary probability distribution of the Markov chain \(\beta(t)\). If the critical value is negative, then the trivial equilibrium state of system (3) is stochastically globally asymptotically stable; if the critical value is positive, then the solution of system (3) is positive recurrent and has a unique ergodic stationary distribution. Numerical simulations and conclusions are given in the last section.

2 Main results

Let the Markov chain \(\beta(t)\) be generated by \(Q=(q_{ij})_{m\times m}\), that is,

$$P\bigl\{ \beta(t+\Delta t)=j|\beta(t)=i\bigr\} =\left \{ \textstyle\begin{array}{l@{\quad}l} q_{ij}\Delta t+o(\Delta t) & \mbox{if } j\neq i, \\ 1+q_{ii}\Delta t+o(\Delta t) & \mbox{if } j=i, \end{array}\displaystyle \right . $$

where \(q_{ij}\geq0\) stands for the transition rate from i to j if \(j\neq i\), and \(q_{ii}=- \sum_{j=1,j\neq i}^{m}q_{ij}\) for \(i=1,2,\ldots,m\). As standing hypotheses we assume in this paper that:

  1. (A1)

    \(\beta(t)\) is independent of the Brownian motion.

  2. (A2)

    \(\beta(t)\) is irreducible, which means that system (3) can switch from any regime to any other regime. Hence \(\beta(t)\) has a unique stationary distribution \(\pi=(\pi_{1},\ldots,\pi_{m})\), which can be obtained by solving the equation \(\pi Q=0\) subject to \(\sum_{i=1}^{m}\pi_{i}=1\) and \(\pi_{i}>0\), \(i=1,\ldots,m\) (a numerical sketch is given after these assumptions).

Moreover, we assume that \(\min_{i\in S}a(i)>0\) and \(0<\min_{i\in S}\theta(i)\leq\max_{i\in S}\theta(i)\leq1\).
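
As noted in (A2), the stationary distribution is obtained by solving \(\pi Q=0\) together with the normalisation \(\sum_{i=1}^{m}\pi_{i}=1\). A minimal numerical sketch follows; the generator shown is the one used in Case 1 of Section 3, for which \(\pi=(0.7,0.3)\).

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 subject to sum(pi) = 1 for an irreducible generator Q."""
    Q = np.asarray(Q, dtype=float)
    m = Q.shape[0]
    # Stack the transposed generator with a row of ones (the normalisation).
    A = np.vstack([Q.T, np.ones(m)])
    rhs = np.zeros(m + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi

# Case 1 generator of Section 3: q12 = 0.3, q21 = 0.7.
Q = [[-0.3, 0.3], [0.7, -0.7]]
print(stationary_distribution(Q))   # approximately [0.7, 0.3]
```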

To begin with, let us recall some important definitions and lemmas. Consider the following stochastic differential equation

$$ dX(t)=f\bigl(X(t),\beta(t)\bigr)\, dt+g\bigl(X(t),\beta(t)\bigr)\, dB(t), $$
(5)

with initial data \((X_{0},\beta(0))\), where \(f(0,i)=g(0,i)\equiv0\),

$$f: R^{n}\times S\rightarrow R^{n},\qquad g: R^{n} \times S\rightarrow R^{n\times m} $$

and \(\{B(t)\}_{t\geq0}\) is an m-dimensional Brownian motion. For each \(i\in S\) and any function \(V(x,i)\) that is twice continuously differentiable in x, define \(LV(x,i)\) as follows:

$$LV(x,i)=V_{x}(x,i)f(x,i)+0.5\operatorname{trace}\bigl[g^{T}(x,i)V_{xx}(x,i)g(x,i) \bigr]+\sum_{j\neq i,j\in S}q_{ij}\bigl(V(x,j)-V(x,i) \bigr). $$
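
For a scalar state, the operator L can be evaluated numerically once V and its first two x-derivatives are supplied. The sketch below is a generic helper, not part of the original analysis; the drift, diffusion, and generator used in the usage example are those of system (13) in Section 3, and \(V(N,i)=N\) is only a toy test function.

```python
import numpy as np

def LV(V, Vx, Vxx, x, i, f, g, Q):
    """Numerically evaluate LV(x, i) for a scalar state x in regime i.

    V, Vx, Vxx : callables giving V(x, i) and its first/second x-derivatives
    f, g       : drift f(x, i) (scalar) and diffusion row g(x, i) (sequence)
    Q          : generator matrix (list of lists) of the Markov chain
    """
    gg = np.atleast_1d(np.asarray(g(x, i), dtype=float))
    diffusion = 0.5 * float(gg @ gg) * Vxx(x, i)      # 0.5 trace[g^T Vxx g]
    switching = sum(Q[i][j] * (V(x, j) - V(x, i))
                    for j in range(len(Q)) if j != i)
    return Vx(x, i) * f(x, i) + diffusion + switching

# Usage with the drift/diffusion of model (3) and the Section 3 parameters;
# V(N, i) = N is a toy test function, not a Lyapunov function from the proofs.
r, a, th = [0.4, 0.4], [0.2, 0.32], [1.0, 1.0]
s1, s2 = [0.96, 0.8], [1.0, 0.9]
Q = [[-0.3, 0.3], [0.7, -0.7]]
f = lambda N, i: N * (r[i] - a[i] * N ** th[i])
g = lambda N, i: [s1[i] * N, s2[i] * N ** (1.0 + th[i])]
print(LV(lambda N, i: N, lambda N, i: 1.0, lambda N, i: 0.0, 0.5, 0, f, g, Q))
```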

Definition 1

([21], p.204)

  1. (I)

    The trivial solution of Eq. (5) is said to be stable in probability if for \(\varepsilon\in(0,1)\) and \(\varsigma>0\), there exists \(\delta>0\) such that if \((X_{0},\beta(0))\in S_{\delta}\times S\), where \(S_{\delta}=\{x\in R^{n}: |x|<\delta\}\), then

    $$P \bigl\{ \bigl\vert X\bigl(t;X_{0},\beta(0)\bigr)\bigr\vert < \varsigma \mbox{ for all } t\geq0 \bigr\} \geq1-\varepsilon. $$
  2. (II)

    The trivial solution of Eq. (5) is said to be stochastically asymptotically stable if it is stable in probability and, moreover, for every \(\varepsilon\in(0, 1)\), there exists \(\delta_{0}>0\) such that if \((X_{0},\beta(0))\in S_{\delta_{0}}\times S\), then

    $$P \Bigl\{ \lim_{t\rightarrow+\infty}X\bigl(t;X_{0},\beta(0)\bigr)=0 \Bigr\} \geq 1-\varepsilon. $$
  3. (III)

    The trivial solution of Eq. (5) is said to be stochastically asymptotically stable in the large or stochastically globally asymptotically stable (SGAS) if it is stochastically asymptotically stable and, moreover,

    $$P \Bigl\{ \lim_{t\rightarrow+\infty}X\bigl(t;X_{0},\beta(0)\bigr)=0 \Bigr\} =1, \quad \forall \bigl(X_{0},\beta(0)\bigr)\in R^{n} \times S. $$

Lemma 1

([21], Theorem 5.37)

If there are functions \(V\in C^{2}(R^{n}\times S;(0,+\infty))\), \(\rho_{1},\rho_{2}\in\mathcal{K}_{\infty}\), \(\rho_{3}\in\mathcal{K}\) such that

$$\rho_{1}\bigl(\vert x\vert \bigr)\leq V(x,i)\leq \rho_{2}\bigl(\vert x\vert \bigr),\qquad LV(x,i)\leq- \rho_{3}\bigl(\vert x\vert \bigr)\quad \textit {for any } (x,i)\in R^{n}\times S, $$

where \(\mathcal{K}\) denotes the family of all continuous increasing functions \(\kappa: [0,+\infty)\rightarrow[0,+\infty)\) such that \(\kappa(0)=0\) while \(\kappa(x)>0\) for \(x>0\), and \(\mathcal{K}_{\infty}\) stands for the family of all functions \(\kappa\in\mathcal{K}\) with the property \(\lim_{x\rightarrow+\infty}\kappa(x)=+\infty\), then the trivial solution of Eq. (5) is SGAS.

Definition 2

([22])

An \(R^{n}\times S\)-valued process \(Y(t;u)=(X(t),\beta(t))\), satisfying \((X_{0},\beta(0))=u\), is said to be recurrent with respect to some bounded set \(\mathcal{U}\subset R^{n}\times S\) if

$$P(\tau_{u}< \infty)=1 \quad \mbox{for any } u\notin\mathcal{U}, $$

where \(\tau_{u}=\inf\{t>0: Y(t;u)\in\mathcal{U}\}\). If

$$E(\tau_{u})< \infty \quad \mbox{for any } u\notin\mathcal{U}, $$

then \(Y(t;u)\) is said to be positive recurrent with respect to \(\mathcal{U}\).

Lemma 2

([22], Theorem 3.13)

A necessary and sufficient condition for positive recurrence of \(Y(t;u)\) with respect to a domain \(\mathcal{U}=D \times\{i\}\subset R^{n}\times S\) is that for each \(i\in S\), there exists a nonnegative, twice continuously differentiable function \(V(\cdot, i): D^{c}\rightarrow R\) such that for some \(\varrho>0\),

$$LV(x,i)\leq-\varrho,\quad (x,i)\in D^{c}\times S, $$

where \(D^{c}\) is the complement of D.

Lemma 3

([22], Theorems 4.3 and 4.4)

The positive recurrent process \(Y(t;u)=(X(t), \beta(t))\) has a unique stationary distribution ψ, which is ergodic; that is, if h is a function integrable with respect to the measure ψ, then

$$P \Biggl(\lim_{T\rightarrow+\infty}T^{-1}\int_{0}^{T}h \bigl(X(t),\beta (t)\bigr)\, dt=\sum_{i=1}^{m} \int_{R^{n}}h(x,i)\, d\psi(x,i) \Biggr)=1. $$

Lemma 4

([23], Lemma 2.3)

Consider the following linear system of equations:

$$ Qc=\eta, $$
(6)

where \(c,\eta\in R^{m}\). If \(Q=(q_{ij})\) is irreducible, then Eq. (6) has a solution if and only if \(\pi\eta=0\).
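
Because Q is singular (its rows sum to zero), Eq. (6) determines c only up to adding a multiple of \((1,\ldots,1)^{T}\); a particular solution can be computed by least squares after checking the solvability condition \(\pi\eta=0\). The sketch below anticipates the use of this lemma in Eq. (7) of Theorem 1 and, purely for illustration, uses the Case 1 data of Section 3.

```python
import numpy as np

def solve_Qc(Q, eta, pi, tol=1e-10):
    """Return a particular solution c of Q c = eta, assuming pi @ eta = 0."""
    Q, eta, pi = (np.asarray(v, dtype=float) for v in (Q, eta, pi))
    if abs(pi @ eta) > tol:
        raise ValueError("solvability condition pi @ eta = 0 is violated")
    c, *_ = np.linalg.lstsq(Q, eta, rcond=None)   # minimum-norm particular solution
    return c

# Illustration with the Case 1 data of Section 3:
Q = np.array([[-0.3, 0.3], [0.7, -0.7]])
pi = np.array([0.7, 0.3])
b = np.array([0.4 - 0.5 * 0.96**2, 0.4 - 0.5 * 0.8**2])   # b_i = r(i) - 0.5*sigma_1^2(i)
eta = -b + np.ones(2) * (pi @ b)                           # right-hand side of Eq. (7)
c = solve_Qc(Q, eta, pi)
print(c, np.allclose(Q @ c, eta))                          # Q c = eta indeed holds
```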

Lemma 5

For any initial value \((N(0),\beta(0))\in (0,+\infty)\times S\), there is a unique global positive solution \(N(t)\) to model (3) almost surely.

Proof

The proof is a slight modification of that in [12], obtained by applying Itô’s formula to \(\sqrt{x}-1-0.5\ln x\), \(x>0\), and is hence omitted. □

We are now in a position to give the main result of this letter.

Theorem 1

Consider system (3) with initial data \((N(0),\beta(0))\in(0,+\infty)\times S\).

  1. (i)

    If \(\bar{b}<0\), where \(\bar{b}=\sum_{i=1}^{m}\pi_{i}b_{i}\), \(b_{i}=r(i)-0.5\sigma_{1}^{2}(i)\), then the trivial solution is SGAS.

  2. (ii)

    If \(\bar{b}>0\), then the solution \(N(t)\) is positive recurrent with respect to the domain \(\mathcal{U}=(0,l)\times S\) and has a unique ergodic asymptotically invariant distribution (UEAID), where l is a positive number to be specified later.

Proof

Let \(b=(b_{1},\ldots,b_{m})^{T}\). Since \(Q=(q_{ij})\) is irreducible and \(\pi (-b+(1,\ldots,1)^{T} \sum_{i=1}^{m}\pi_{i}b_{i} )=0\), Lemma 4 implies that the linear equation

$$ Qc=-b+(1,\ldots,1)^{T}\sum_{i=1}^{m} \pi_{i}b_{i} $$
(7)

has a solution. Let \((c_{1},c_{2},\ldots,c_{m})^{T}\) be a solution of Eq. (7); it then follows from (7) that for \(i=1,\ldots,m\),

$$ \sum_{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i})= \sum_{j=1}^{m}q_{ij}c_{j}=-b_{i}+ \sum_{k=1}^{m}\pi_{k}b_{k}=-b_{i}+ \bar{b}. $$
(8)

Now let us prove (i). Define \(V_{1}(N,i)=(\alpha+c_{i})N^{1/\alpha}\), \(N>0\), where \(\alpha>1\) is sufficiently large so that \(\alpha+\min_{i\in S}c_{i}>0\). Clearly, \((\alpha+\min_{i\in S}c_{i})N^{1/\alpha}\leq V_{1}(N,i)\leq(\alpha+\max_{i\in S}c_{i})N^{1/\alpha}\). Compute that

$$\begin{aligned}& LV_{1}(N,i) \\& \quad =\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \bigl(r(i)-a(i)N^{\theta(i)} \bigr) +\frac{\sigma_{1}^{2}(i)}{2\alpha} \biggl( \frac{1}{\alpha}-1 \biggr) (\alpha +c_{i})N^{1/\alpha} \\& \qquad {}+\frac{\sigma_{2}^{2}(i)}{2\alpha} \biggl(\frac{1}{\alpha }-1 \biggr) ( \alpha+c_{i})N^{1/\alpha+2\theta(i)} +N^{1/\alpha}\sum _{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \\& \quad =\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \biggl[ \bigl(r(i)-a(i)N^{\theta(i)} \bigr) +\frac{\sigma_{1}^{2}(i)}{2} \biggl( \frac{1}{\alpha}-1 \biggr) \\& \qquad {}+\frac{\sigma_{2}^{2}(i)}{2} \biggl(\frac{1}{\alpha }-1 \biggr)N^{2\theta(i)} \biggr] +\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \biggl(1- \frac{c_{i}}{\alpha +c_{i}} \biggr)\sum_{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \\& \quad =\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \Biggl[ \Biggl(r(i)-\frac{\sigma_{1}^{2}(i)}{2}+\sum_{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \Biggr)-a(i)N^{\theta(i)}+\frac{\sigma_{1}^{2}(i)}{2\alpha} \\& \qquad {}+\frac{\sigma_{2}^{2}(i)}{2} \biggl(\frac{1}{\alpha }-1 \biggr)N^{2\theta(i)} - \frac{c_{i}}{\alpha+c_{i}}\sum_{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \Biggr] \\& \quad =\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \biggl[\bar {b}-a(i)N^{\theta(i)}+\frac{\sigma_{1}^{2}(i)}{2\alpha}+\frac{\sigma _{2}^{2}(i)}{2} \biggl( \frac{1}{\alpha}-1 \biggr)N^{2\theta(i)} -\frac{c_{i}}{\alpha+c_{i}}( \bar{b}-b_{i}) \biggr] \\& \quad \leq\frac{1}{\alpha}(\alpha+c_{i})N^{1/\alpha} \biggl[\bar {b}+\frac{\sigma_{1}^{2}(i)}{2\alpha}-\frac{c_{i}}{\alpha+c_{i}}(\bar {b}-b_{i}) \biggr] \\& \quad =:N^{1/\alpha}F(\alpha,i), \end{aligned}$$
(9)

where

$$F(\alpha,i)=\frac{1}{\alpha}(\alpha+c_{i}) \biggl[\bar{b}+ \frac{\sigma _{1}^{2}(i)}{2\alpha}-\frac{c_{i}}{\alpha+c_{i}}(\bar{b}-b_{i}) \biggr]. $$

In the derivation of (9), the fourth identity follows from (8), and the inequality follows from \(\min_{i\in S}a(i)>0\) and \(\alpha>1\). Since \(\bar{b}<0\), we can choose a sufficiently large \(\alpha>1\) such that

$$\alpha+\min_{i\in S}c_{i}>0,\qquad \bar{F}:=\max _{i\in S}F(\alpha,i)< 0. $$

Hence \(LV_{1}(N,i)\leq\bar{F}N^{1/\alpha}\) with \(\bar{F}<0\). The desired assertion (i) then follows from Lemma 1.

We are now in a position to prove (ii). Define \(V_{2}(N,i)=(1-\gamma c_{i})N^{-\gamma}+N\), \(N>0\), where \(\gamma>0\) is sufficiently small so that \(1-\gamma\max_{i\in S}c_{i}>0\). Therefore

$$\begin{aligned}& LV_{2}(N,i) \\& \quad =-\gamma(1-\gamma c_{i})N^{-\gamma} \bigl(r(i)-a(i)N^{\theta(i)} \bigr) +\frac{\sigma_{1}^{2}(i)}{2}\gamma(\gamma+1) (1-\gamma c_{i})N^{-\gamma} \\& \qquad {}+\frac{\sigma_{2}^{2}(i)}{2}\gamma(\gamma+1) (1-\gamma c_{i})N^{-\gamma+2\theta(i)} -\gamma N^{-\gamma}\sum_{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i})+N \bigl(r(i)-a(i)N^{\theta(i)} \bigr) \\& \quad =-\gamma(1-\gamma c_{i})N^{-\gamma} \biggl(r(i)- \frac {1}{2}\sigma_{1}^{2}(i)-\frac{\gamma}{2} \sigma_{1}^{2}(i) \biggr) \\& \qquad {}-\gamma(1-\gamma c_{i})N^{-\gamma} \biggl(1+\frac{\gamma c_{i}}{1-\gamma c_{i}} \biggr)\sum _{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \\& \qquad {}+\gamma(1-\gamma c_{i})N^{\theta(i)-\gamma} \biggl(a(i)+ \frac{\sigma _{2}^{2}(i)}{2}(\gamma+1)N^{\theta(i)} \biggr)+N \bigl(r(i)-a(i)N^{\theta (i)} \bigr) \\& \quad =-\gamma(1-\gamma c_{i})N^{-\gamma} \Biggl(b(i)+\sum _{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i})- \frac{\gamma}{2}\sigma_{1}^{2}(i)+\frac {\gamma c_{i}}{1-\gamma c_{i}}\sum _{j=1,j\neq i}^{m}q_{ij}(c_{j}-c_{i}) \Biggr) \\& \qquad {}+\gamma(1-\gamma c_{i})N^{\theta(i)-\gamma} \biggl(a(i)+ \frac{\sigma _{2}^{2}(i)}{2}(\gamma+1)N^{\theta(i)} \biggr)+N \bigl(r(i)-a(i)N^{\theta (i)} \bigr) \\& \quad =-\gamma(1-\gamma c_{i})N^{-\gamma} \biggl(\bar{b}- \frac {\gamma}{2}\sigma_{1}^{2}(i)+\frac{\gamma c_{i}}{1-\gamma c_{i}}(\bar {b}-b_{i}) \biggr) \\& \qquad {}+\gamma(1-\gamma c_{i})N^{\theta(i)-\gamma} \biggl(a(i)+ \frac{\sigma _{2}^{2}(i)}{2}(\gamma+1)N^{\theta(i)} \biggr)+N \bigl(r(i)-a(i)N^{\theta (i)} \bigr). \end{aligned}$$

Since \(\bar{b}>0\), we can let \(\gamma>0\) be sufficiently small such that

$$ 1-\gamma\max_{i\in S}c_{i}>0,\qquad \bar{b}-\frac {\gamma}{2}\max_{i\in S}\sigma_{1}^{2}(i)+ \gamma\min_{i\in S}\biggl\{ \frac{c_{i}(\bar{b}-b_{i})}{1-\gamma c_{i}}\biggr\} >0. $$
(10)

Since \(\min_{i\in S}\theta(i)>0\), we have

$$ \lim_{N\rightarrow0}\frac{LV_{2}(N,i)}{-\gamma(1-\gamma c_{i})N^{-\gamma} (\bar{b}-\frac{\gamma}{2}\sigma_{1}^{2}(i)+\frac{\gamma c_{i}}{1-\gamma c_{i}}(\bar{b}-b_{i}) )}=1. $$
(11)

On the other hand, it follows from \(\max_{i\in S}\theta(i)\leq1\) that \(2\theta(i)\leq\theta(i)+1\). Hence

$$ \lim_{N\rightarrow+\infty}\frac{LV_{2}(N,i)}{-a(i)N^{\theta(i)+1}}=1. $$
(12)

By (10) and (11), there is \(N_{1}<1\) such that if \(N< N_{1}\), then \(LV_{2}(N,i)\leq-1\). Similarly, by (12), there is \(N_{2}>1\) such that if \(N>N_{2}\), then \(LV_{2}(N,i)\leq -1\). Let \(l=\max\{1/N_{1}, N_{2}\}\); then \(l>1\). Let \(D=(1/l,l)\); then we have

$$LV_{2}(N,i)\leq-1 \quad \mbox{for all } (N,i)\in D^{c}\times S. $$

Then the required assertion (ii) follows from Lemmas 2 and 3. □
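
The existence of the constants chosen in the proof can also be verified numerically for concrete data. For instance, when \(\bar{b}<0\) we have \(F(\alpha,i)\rightarrow\bar{b}<0\) as \(\alpha\rightarrow+\infty\), so a simple search over increasing α exhibits a value with \(\max_{i}F(\alpha,i)<0\). The sketch below is only an illustration of this step and uses the Case 1 data of Section 3, for which \(\bar{b}<0\).

```python
import numpy as np

# Numerical check that max_i F(alpha, i) < 0 for alpha large enough, using the
# Case 1 data of Section 3 (pi = (0.7, 0.3), so bbar = pi_1 b_1 + pi_2 b_2 < 0).
Q  = np.array([[-0.3, 0.3], [0.7, -0.7]])
pi = np.array([0.7, 0.3])
b  = np.array([0.4 - 0.5 * 0.96**2, 0.4 - 0.5 * 0.8**2])
s1 = np.array([0.96, 0.8])
bbar = float(pi @ b)
c, *_ = np.linalg.lstsq(Q, -b + bbar * np.ones(2), rcond=None)   # a solution of Eq. (7)

def F(alpha):
    """F(alpha, i) from the proof of Theorem 1(i), returned as a vector over i."""
    return (alpha + c) / alpha * (bbar + s1**2 / (2 * alpha)
                                  - c / (alpha + c) * (bbar - b))

for alpha in (2.0, 5.0, 20.0, 100.0):
    print(alpha, F(alpha).max())   # becomes negative once alpha is large enough
```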

To finish this section, let us consider the subsystem (4) of system (3).

Corollary 1

For subsystem (4),

  1. (i)

    if \(b_{i}<0\), then its trivial solution is SGAS;

  2. (ii)

    if \(b_{i}>0\), then the solution of subsystem (4) is positive recurrent with respect to the domain \((0,l)\) and has a UEAID.

3 Conclusions and numerical simulations

By Corollary 1, if \(b_{i}<0\) for some \(i\in S\), then the trivial solution of the corresponding subsystem (4) is SGAS. Theorem 1 therefore shows that if the trivial solution of every individual subsystem of system (3) is SGAS, then, under Markovian switching, the trivial solution of system (3) remains SGAS. On the other hand, if \(b_{i}>0\) for some \(i\in S\), then the solution of the corresponding subsystem (4) is positive recurrent and has a UEAID. Thus Theorem 1 also shows that if the solution of every subsystem of system (3) is positive recurrent and has a UEAID, then, under Markovian switching, the solution of system (3) remains positive recurrent and has a UEAID. More interestingly, Theorem 1 covers the mixed case: if the solutions of some subsystems of system (3) are positive recurrent with a UEAID while the trivial solutions of the other subsystems are SGAS, then, under Markovian switching, the solution of system (3) may be positive recurrent with a UEAID or may tend to the trivial solution, depending on the sign of \(\bar{b}=\sum_{i=1}^{m}\pi_{i}b_{i}\). If \(\bar{b}>0\), then the solution of system (3) is positive recurrent and has a UEAID; if \(\bar{b}<0\), then the trivial solution of system (3) is SGAS.

Now let us illustrate these results with some numerical simulations. Consider the following stochastic logistic system under Markovian switching:

$$\begin{aligned} dN(t) =&N(t) \bigl[r\bigl(\beta(t)\bigr)-a\bigl(\beta(t)\bigr)N(t) \bigr]\, {dt} \\ &{}+\sigma_{1}\bigl(\beta(t)\bigr)N(t)\, {dB_{1}(t)}+ \sigma_{2}\bigl(\beta(t)\bigr)N^{2}(t)\, {dB_{2}(t)}, \end{aligned}$$
(13)

where \(\beta(t)\) is a Markov chain on the state space \(S=\{1,2\}\) and \(\min_{i\in S}a(i)>0\). As pointed out in Section 1, system (13) can be regarded as the result of the following two subsystems switching from one to the other according to the movement of \(\beta(t)\):

$$ dN(t)=N(t)\bigl[r(1)-a(1)N(t)\bigr]\, {dt}+\sigma _{1}(1)N(t)\, {dB_{1}(t)}+\sigma_{2}(1)N^{2}(t) \, {dB_{2}(t)}, $$
(14)

where \(r(1)=0.4\), \(a(1)=0.2\), \(\sigma_{1}(1)=0.96\), \(\sigma_{2}(1)=1\), and

$$ dN(t)=N(t)\bigl[r(2)-a(2)N(t)\bigr]\, {dt}+\sigma _{1}(2)N(t)\, {dB_{1}(t)}+\sigma_{2}(2)N^{2}(t) \, {dB_{2}(t)}, $$
(15)

where \(r(2)=0.4\), \(a(2)=0.32\), \(\sigma_{1}(2)=0.8\), \(\sigma_{2}(2)=0.9\). Clearly, \(b_{1}=r(1)-0.5\sigma_{1}^{2}(1)=-0.0608<0\) and \(b_{2}=0.08>0\). Then, by Corollary 1, the trivial solution of subsystem (14) is SGAS and the solution of subsystem (15) is positive recurrent and has a UEAID. Figure 1(a) illustrates that the trivial solution of subsystem (14) is SGAS, and Figure 1(b) shows the density of the solution of (15). Now let us consider two cases to see the effect of Markovian switching on the behavior of system (13).

Figure 1

Characteristics of (13) for \(\pmb{r(1)=0.4}\), \(\pmb{a(1)=0.2}\), \(\pmb{\sigma_{1}(1)=0.96}\), \(\pmb{\sigma_{2}(1)=1}\), \(\pmb{r(2)=0.4}\), \(\pmb{a(2)=0.32}\), \(\pmb{\sigma_{1}(2)=0.8}\), \(\pmb{\sigma_{2}(2)=0.9}\), and initial value \(\pmb{N(0)=0.2}\). Figure 1(a) shows the solution of Eq. (14) computed with the Milstein method of [24]; it illustrates that the trivial solution of subsystem (14) is SGAS. Figure 1(b) shows the density of the solution of Eq. (15) obtained by the Monte Carlo simulation method of [25]. Figure 1(c) shows the solution of Eq. (13) with \(q_{12}=0.3\) and \(q_{21}=0.7\); it demonstrates that the trivial solution of system (13) is SGAS in this case. Figure 1(d) shows the density of the solution of Eq. (13) with \(q_{12}=q_{21}=0.5\).
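
To reproduce a sample path like the one in Figure 1(c), one can couple a simulated regime process with a discretised version of (13). The sketch below uses the parameter values listed in the caption and the Case 1 rates \(q_{12}=0.3\), \(q_{21}=0.7\); for simplicity it uses a plain Euler-Maruyama step and a per-step switching probability \(-q_{ii}\Delta t\), rather than the Milstein scheme of [24] used for the actual figures, so it is only a rough stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Regime-switching Euler-Maruyama sketch for system (13), Case 1 data.
# (The paper's figures use the Milstein scheme of [24]; Euler-Maruyama is used
#  here only as a simpler stand-in, with an approximate per-step regime switch.)
r  = np.array([0.4, 0.4])
a  = np.array([0.2, 0.32])
s1 = np.array([0.96, 0.8])
s2 = np.array([1.0, 0.9])
Q  = np.array([[-0.3, 0.3], [0.7, -0.7]])      # q12 = 0.3, q21 = 0.7

dt, T = 1e-3, 200.0
n = int(T / dt)
N = np.empty(n + 1)
N[0], i = 0.2, 0                               # N(0) = 0.2, start in regime 1

for k in range(n):
    if rng.random() < -Q[i, i] * dt:           # switch with probability -q_ii * dt
        i = 1 - i                              # only two regimes here
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)
    drift = N[k] * (r[i] - a[i] * N[k])
    diff  = s1[i] * N[k] * dB1 + s2[i] * N[k] ** 2 * dB2
    N[k + 1] = max(N[k] + drift * dt + diff, 0.0)   # crude positivity guard

print(N[:: n // 10])   # under Case 1, bbar < 0 and the path should decay toward 0
```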

Case 1: To begin with, we let the generator of \(\beta(t)\) be \(Q=\bigl ( {\scriptsize\begin{matrix}{}-0.3 & 0.3 \cr 0.7 & -0.7 \end{matrix}} \bigr )\). It is easy to see that \(\beta(t)\) has a unique stationary distribution \(\pi=(0.7,0.3)\), and a direct computation gives \(\bar{b}=\pi_{1}b_{1}+\pi _{2}b_{2}\approx-0.0186<0\). By virtue of Theorem 1, under Markovian switching the trivial solution of system (13) is SGAS; see Figure 1(c).

Case 2: Next we choose \(Q=\bigl ( {\scriptsize\begin{matrix}{} -0.5 & 0.5 \cr 0.5 & -0.5\end{matrix}} \bigr )\). Hence \(\pi=(\pi_{1},\pi_{2})=(0.5,0.5)\) and \(\bar{b}=\pi_{1}b_{1}+\pi_{2}b_{2}=0.0096>0\). It then follows from Theorem 1 that, under Markovian switching, the solution of system (13) is positive recurrent and has a UEAID; see Figure 1(d), which shows the density of the solution of (13). The sign checks in both cases reduce to the small computation sketched below.
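
A minimal sketch of this computation, using only the values appearing in (14), (15), and the two generators:

```python
import numpy as np

# Verify the sign of bbar = pi_1 b_1 + pi_2 b_2 for the two generators above.
b = np.array([0.4 - 0.5 * 0.96**2, 0.4 - 0.5 * 0.8**2])    # (b_1, b_2) from (14)-(15)

for name, Q in [("Case 1", np.array([[-0.3, 0.3], [0.7, -0.7]])),
                ("Case 2", np.array([[-0.5, 0.5], [0.5, -0.5]]))]:
    A = np.vstack([Q.T, np.ones(2)])                        # pi Q = 0, sum(pi) = 1
    pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
    print(name, "pi =", np.round(pi, 3), "bbar =", round(float(pi @ b), 4))

# Expected output:
#   Case 1: pi = (0.7, 0.3), bbar = -0.0186  -> trivial solution of (13) is SGAS
#   Case 2: pi = (0.5, 0.5), bbar =  0.0096  -> solution of (13) is positive recurrent (UEAID)
```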