Journal of Applied Mathematics and Computing, Volume 43, Issue 1, pp 387–407

# Asymptotic properties and simulations of a stochastic single-species dispersal model under regime switching

## Authors

• Li Zu (School of Mathematics and Statistics, Northeast Normal University; School of Science, Changchun University)
• Daqing Jiang (School of Mathematics and Statistics, Northeast Normal University)
• Donal O’Regan (School of Mathematics, Statistics and Applied Mathematics, National University of Ireland)

Original Research

DOI: 10.1007/s12190-013-0669-x

Zu, L., Jiang, D. & O’Regan, D. J. Appl. Math. Comput. (2013) 43: 387. doi:10.1007/s12190-013-0669-x

## Abstract

Taking both white noise and colored environmental noise into account, a single-species logistic model with nonlinear diffusion of the population between two patches is proposed and investigated. Sufficient conditions for the existence of positive solutions, stochastic permanence, persistence in mean, and extinction are established. Moreover, we use an example and simulation figures to illustrate our main results.

### Keywords

Stochastic permanence · Persistence in mean · Extinction · Environmental noise

### Mathematics Subject Classification

34F05 · 34E10 · 60H10 · 60H20

## 1 Introduction

Species dispersal is a well-known phenomenon in nature, and there are many papers in the literature on the ecological and evolutionary dynamics of populations in a spatially heterogeneous environment. Single-species dynamics in a patchy environment has also been well studied. Takeuchi [1] showed that a species can survive in all patches at a globally asymptotically stable equilibrium point under any dispersal whenever it can survive in each isolated patch. Wang and Chen [2] proved that the above result holds for a time-dependent logistic model in a two-patch environment. Allen [3] investigated the logistic nonlinear directed diffusion model
\begin{aligned}[c] \dot{x}_1&=x_{1}(a_{1}-b_1 x_1)+d_{12}\bigl(x_2^2- \alpha_{12}x_1^2\bigr), \\ \dot{x}_2&=x_{2}(a_{2}-b_2 x_2)+d_{21}\bigl(x_1^2- \alpha_{21}x_2^2\bigr), \end{aligned}
(1.1)
where $$x_{i}$$ denotes the population density in patch i. The constants $$d_{ij}$$ (i,j=1,2, j≠i) are the dispersal rates from the j-th patch to the i-th patch, and the nonnegative constants $$\alpha_{ij}$$ can be selected to represent different boundary conditions in the continuous diffusion case [4]. Allen proved that the initial value problem for (1.1) has a unique positive solution on a maximal interval (see [5]), that the system is strongly persistent, and that the population size can be unbounded or bounded under reversed conditions (see [3]). The authors in [4] extended Allen’s results and obtained the following necessary and sufficient conditions:
1. (i)

The system (1.1) possesses a globally stable positive equilibrium point $$(x_{1}^{*},x_{2}^{*})$$, if the largest eigenvalue of matrix A is less than 0;

2. (ii)

Every solution of the system is unbounded, if the above condition does not hold.

Here $$A=(a_{ij})_{2\times2}$$, where $$a_{ij}=d_{ij}$$ for $$i\neq j$$, $$a_{11}=-b_{1}-d_{12}\alpha_{12}$$ and $$a_{22}=-b_{2}-d_{21}\alpha_{21}$$. That is to say, if $$(b_{1}+d_{12}\alpha_{12})(b_{2}+d_{21}\alpha_{21})>d_{12}d_{21}$$, then system (1.1) has a globally stable positive equilibrium point.
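The eigenvalue criterion in (i)–(ii) is equivalent to the determinant condition just stated, and this can be checked numerically. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative (hypothetical) parameters for model (1.1)
b1, b2 = 1.0, 1.5
d12, d21 = 0.5, 0.4
alpha12, alpha21 = 1.0, 1.0

# The matrix A from the stability criterion
A = np.array([
    [-b1 - d12 * alpha12, d12],
    [d21, -b2 - d21 * alpha21],
])

largest_eig = max(np.linalg.eigvals(A).real)

# Equivalent determinant form of the criterion
det_condition = (b1 + d12 * alpha12) * (b2 + d21 * alpha21) > d12 * d21

print(largest_eig < 0, det_condition)  # the two tests agree
```

Since the trace of A is always negative, the largest eigenvalue is negative exactly when det A > 0, which is the determinant condition above.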
However, in the real world, population dynamics is inevitably affected by environmental noise, which is important from a biological point of view in discovering the properties of the nondeterministic system. Generally speaking, there are two types of environmental noise. One is white noise, on which there are many papers (see [6–10]) with results such as global stability and stochastic permanence. The other is colored environmental noise. First we consider white noise. We stochastically perturb the intrinsic growth rate $$a_{i}$$: suppose
$$a_{i}\,dt\rightarrow a_{i}\,dt+\sigma_{i}\,d{B}_{i}(t), \quad i=1,2,$$
where $$B_{1}(t)$$, $$B_{2}(t)$$ are mutually independent Brownian motions and $$\sigma_{i}$$ is a positive constant representing the intensity of the white noise. Then the stochastic system takes the following form
\begin{aligned}[c] d{x}_1&=\bigl[x_{1}(a_{1}-b_1 x_1)+d_{12}\bigl(x_2^2- \alpha_{12}x_1^2\bigr)\bigr]\,dt+ \sigma _{1}x_1\,d{B}_{1}(t), \\ d{x}_2&=\bigl[x_{2}(a_{2}-b_2 x_2)+d_{21}\bigl(x_1^2- \alpha_{21}x_2^2\bigr)\bigr]\,dt+\sigma _{2}x_2\,d{B}_{2}(t). \end{aligned}
(1.2)
For convenience, let $$\bar{b}_{1}=b_{1} +d_{12}\alpha_{12}$$, $$\bar{b}_{2}=b_{2} +d_{21}\alpha_{21}$$, so we have
\begin{aligned}[c] d{x}_1&=\bigl[x_{1}(a_{1}- \bar{b}_1 x_1)+d_{12}x_2^2 \bigr]\,dt+ \sigma _{1}x_1\,d{B}_{1}(t), \\ d{x}_2&=\bigl[x_{2}(a_{2}-\bar{b}_2 x_2)+d_{21}x_1^2\bigr]\,dt+\sigma _{2}x_2\,d{B}_{2}(t). \end{aligned}
(1.3)
Now we consider the classical colored noise, say telegraph noise (see e.g. [11, 12]). Telegraph noise can be described as a random switching between two or more environmental regimes, which differ in factors such as nutrition or rainfall [13, 14]. Frequently, the switching is memoryless and the waiting time for the next switch has an exponential distribution, so we can model the regime switching by a finite-state Markov chain. Assume that there are N regimes and that the switching between them is governed by a continuous-time Markov chain r(t), t≥0, on the probability space, taking values in a finite state space S={1,2,…,N}, with generator $$\varGamma=(\gamma_{uv})_{N\times N}$$ given by
$$P\bigl\{r(t+\Delta t)=v \mid r(t)=u\bigr\}= \begin{cases} \gamma_{uv}\Delta t+o(\Delta t), & u\neq v, \\ 1+\gamma_{uu}\Delta t+o(\Delta t), & u=v, \end{cases}$$
where Δt>0. Here $$\gamma_{uv}$$ is the transition rate from u to v and $$\gamma_{uv}\geq0$$ if u≠v, while
$$\gamma_{uu}=-\sum_{u\neq v} \gamma_{uv}.$$
Note that Γ always has an eigenvalue 0. The algebraic interpretation of irreducibility is that rank(Γ)=N−1. Under this condition, the Markov chain has a unique stationary (probability) distribution $$\pi=(\pi_1,\pi_2,\ldots,\pi_N)\in R^{N}$$, which can be determined by solving the following linear equation
$$\pi\varGamma=0$$
subject to
$$\sum_{k=1}^{N}\pi_k =1 \quad \hbox{and} \quad \pi_k>0, \quad \forall k\in S.$$
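In practice, π can be computed by stacking the equations of πΓ=0 together with the normalization constraint and solving the resulting (overdetermined but consistent) linear system. A minimal sketch for a hypothetical two-state generator:

```python
import numpy as np

# Hypothetical generator of a two-state Markov chain (rows sum to zero)
Gamma = np.array([[-2.0, 2.0],
                  [1.0, -1.0]])

N = Gamma.shape[0]
# pi @ Gamma = 0 is equivalent to Gamma.T @ pi = 0 for the column vector pi;
# append the row of ones to impose sum(pi) = 1, then solve by least squares.
A = np.vstack([Gamma.T, np.ones((1, N))])
b = np.append(np.zeros(N), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # pi ≈ (1/3, 2/3) for this generator
```

Because the stacked system is consistent for an irreducible generator, least squares recovers the exact stationary distribution up to floating-point error.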
The population stochastic system (1.3) under regime switching can be described by the following model
\begin{aligned}[c] d{x}_1&= \bigl(x_{1}\bigl[a_{1}\bigl(r(t) \bigr)-\bar{b}_1\bigl(r(t)\bigr) x_1 \bigr]+d_{12} \bigl(r(t)\bigr) x_2^2 \bigr)\,dt+ \sigma_{1}\bigl(r(t)\bigr)x_1\,d{B}_{1}(t), \\ d{x}_2&= \bigl(x_{2}\bigl[a_{2}\bigl(r(t) \bigr)-\bar{b}_2\bigl(r(t)\bigr) x_2\bigr]+d_{21} \bigl(r(t)\bigr)x_1^2 \bigr)\,dt+\sigma_{2} \bigl(r(t)\bigr)x_2\,d{B}_{2}(t). \end{aligned}
(1.4)
We assume that the Markov chain r(⋅) is independent of the Brownian motion B1 and B2.
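Since the waiting times are exponential, a sample path of r(t) can be generated by drawing an exponential holding time with rate −γ_uu in the current state u and then jumping according to the embedded chain. A minimal sketch (hypothetical two-state rates; the helper name `simulate_chain` is ours, not the paper's):

```python
import random

def simulate_chain(gamma, t_end, r0=0, seed=1):
    """Sample path of a continuous-time Markov chain with generator gamma.

    Assumes every state has a positive exit rate -gamma[u][u].
    Returns a list of (switch_time, new_state) pairs, starting at (0, r0).
    """
    rng = random.Random(seed)
    t, r = 0.0, r0
    path = [(0.0, r0)]
    n = len(gamma)
    while True:
        rate = -gamma[r][r]                 # total exit rate of state r
        t += rng.expovariate(rate)          # exponential holding time
        if t >= t_end:
            break
        # embedded jump chain: go to v with probability gamma[r][v] / rate
        weights = [gamma[r][v] if v != r else 0.0 for v in range(n)]
        r = rng.choices(range(n), weights=weights)[0]
        path.append((t, r))
    return path

# hypothetical two-state generator
path = simulate_chain([[-2.0, 2.0], [1.0, -1.0]], t_end=10.0)
```

Over a long horizon the fraction of time spent in each state approaches the stationary distribution π.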

In this paper, in order to study the dynamical properties of the SDE (1.4), we arrange the content as follows. In Sect. 2, we show that, under certain conditions, there exists a global positive solution for any positive initial value. We investigate persistence in two different senses, stochastic permanence and persistence in mean, in Sects. 3 and 4, respectively. Section 5 studies extinction, where we show that a large intensity of white noise can cause the extinction of the population. Finally, we illustrate the main results in Sect. 6.

Throughout this paper, unless otherwise specified, let $$(\varOmega,\mathcal{F},P)$$ be a complete probability space with a filtration $$\{\mathcal{F}_{t}\}_{t\geq0}$$ satisfying the usual conditions (i.e. it is right continuous and contains all P-null sets). Let $$R_{+}^{2}$$ denote the positive cone of R2, namely $$R_{+}^{2}=\{x\in R^{2}: x_{1}>0, x_{2}>0\}$$. For convenience and simplicity in the following discussion, denote x(t)=(x1(t),x2(t)). If A is a vector or matrix, its transpose is denoted by AT. If A is a matrix, its trace norm is denoted by $$|A|=\sqrt{\hbox{trace}(A^{T} A)}$$ whilst its operator norm is denoted by ∥A∥=sup{|Ax|:|x|=1}. Let $$\hat{\bar {b}}_{i}=\min_{k\in S}\{\bar{b}_{i}(k)\}$$ (i=1,2), $$\check{d}_{12}=\max_{k\in S}\{ d_{12}(k)\}$$ and $$\check{d}_{21}=\max_{k\in S}\{d_{21}(k)\}$$ and we impose the following assumptions:

### Assumption 1

$$\hat{\bar{b}}_{1}\hat {\bar {b}}_{2}>\check{d}_{12}\check{d}_{21}$$.

### Assumption 2

For each k∈S, $$a_{i}(k)-\frac{1}{2}\sigma_{i}^{2}(k) >0$$, i=1,2.

## 2 Positive and global solutions

Now x(t) of system (1.4) denotes the population densities at time t, so we are only interested in positive solutions. The coefficients of SDE (1.4) do not satisfy a linear growth condition, though they are locally Lipschitz continuous. However, for a stochastic differential equation to have a unique global (i.e. no explosion in finite time) solution for any given initial value, the coefficients of the equation are generally required to satisfy both a linear growth condition and a local Lipschitz condition (cf. Mao [15]). To overcome this difficulty, we use a method similar to [16, Theorem 2.1] to prove that the solution of (1.4) is positive and global.

### Theorem 2.1

Let Assumption 1 hold. For any given initial value $$x(0)\in R_{+}^{2}$$, there is a unique positive solution x(t) of system (1.4), and the solution will remain in $$R_{+}^{2}$$ with probability 1.

### Proof

Define a C2-function $$V : {R}_{+}^{2} \times S\rightarrow{R}_{+}$$ by
$$V(x,k) =c_1(k) (x_{1}-1-\log x_{1})+c_2(k) (x_{2}-1-\log x_{2}),$$
(2.1)
where c1(k) and c2(k) are positive constants to be determined. The nonnegativity of this function follows from a−1−log a≥0 for a>0, with equality iff a=1. If $$x\in R_{+}^{2}$$, we see that
(2.2)
In fact, in order to ensure LV is bounded, we only need
\begin{aligned}[c] -c_1(k) \bar{b}_{1}(k)+c_2(k) d_{21}(k)&<0, \\ -c_2(k) \bar{b}_{2}(k)+c_1(k) d_{12}(k)&<0,\end{aligned}
(2.3)
which means that
$$\frac{d_{21}(k)}{\bar{b}_1(k)}< \frac{c_1(k)}{c_2(k)}< \frac{\bar {b}_2(k)}{d_{12}(k)},$$
(2.4)
and by Assumption 1 we are able to find positive constants c1(k), c2(k) satisfying the inequality (2.4). The coefficients of the quadratic terms of LV are negative and we have
Making use of the inequality a≤2(a−1−loga)+2 on a>0, we see that
(2.5)
Let $$\check{q}=\max_{k,l\in S}\max \{\frac{c_{1}(l)}{c_{1}(k)}, \frac{c_{2}(l)}{c_{2}(k)} \}$$. By the definition of V(x,k), for any k,l∈S, we have
$$V(x,l)=c_1(l) (x_{1}-1-\log x_{1})+c_2(l) (x_{2}-1-\log x_{2})\leq\check {q}V(x,k).$$
Thus
$$\sum_{l=1}^{N} \gamma_{kl}V(x,l)\leq\check{q}\Biggl(\sum_{l=1}^{N}| \gamma _{kl}|\Biggr)V(x,k).$$
(2.6)
Then it follows from (2.5) and (2.6) that
$$LV(x,k)\leq K_1^* \bigl[1+V(x,k)\bigr],$$
where $$K_{1}^{*}$$ is a positive constant. By a proof similar to [16, Theorem 2.1], we can obtain the desired assertion. □

## 3 Stochastic permanence

From Theorem 2.1 we know that the solution of SDE (1.4) will remain in the positive cone $$R_{+}^{2}$$ with probability 1 if Assumption 1 holds. This nice property provides a great opportunity for us to discuss how the solution varies in $$R_{+}^{2}$$ in detail. We will first give the definitions of stochastically ultimate boundedness and stochastic permanence.

### Definition 3.1

The SDE (1.4) is said to be stochastically ultimately bounded if, for any ϵ∈(0,1), there exist positive constants χ1(=χ1(ϵ)), χ2(=χ2(ϵ)) such that the solution x(t)=(x1(t),x2(t)) of the SDE (1.4) with any initial value $$x(0)\in R_{+}^{2}$$ has the property that
$$\limsup_{t\rightarrow\infty}P\bigl\{x_1(t)>\chi_1 \bigr\}<\epsilon , \qquad \limsup_{t\rightarrow\infty}P\bigl\{x_2(t)> \chi_2\bigr\}<\epsilon.$$

### Definition 3.2

The SDE (1.4) is said to be stochastically permanent, if for any ϵ∈(0,1), there are positive constants χ1(=χ1(ϵ)), χ2(=χ2(ϵ)) and δ1(=δ1(ϵ)), $$\delta'_{1} (=\delta'_{1} (\epsilon))$$ such that
$$\liminf_{t\rightarrow\infty}P\bigl\{x_1(t)\leq \chi_1\bigr\}\geq1-\epsilon , \qquad \liminf_{t\rightarrow\infty}P \bigl\{x_1(t)\geq\delta_1\bigr\}\geq 1-\epsilon,$$
and
$$\liminf_{t\rightarrow\infty}P\bigl\{x_2(t)\leq\chi_2 \bigr\}\geq1-\epsilon , \qquad \liminf_{t\rightarrow\infty}P\bigl \{x_2(t)\geq\delta'_1\bigr\}\geq 1-\epsilon.$$

It is clear that if the system is stochastically permanent, it must be stochastically ultimately bounded.

### Lemma 3.1

Under Assumption 1 and for any p>1, there exists a positive constant ι(p) such that, for any given initial value $$x(0)\in R_{+}^{2}$$, the solution x(t) of SDE (1.4) has the following property:
$$\limsup_{t\rightarrow\infty}E {\bigl[}x_{1}^{p}(t)+ x_{2}^{p}(t) {\bigr]}\leq\iota(p).$$
(3.1)

### Proof

Under Assumption 1, the solution x(t) with initial value $$x(0)\in R_{+}^{2}$$ will remain in $$R_{+}^{2}$$ with probability 1. For any given value $$x(0)\in R_{+}^{2}$$ and any given positive constant p>1 and positive constants c1, c2 to be determined, define
$$V\bigl(x(t)\bigr)=c_1 x_{1}^{p}(t)+c_2 x_{2}^{p}(t).$$
(3.2)
By virtue of the generalized Itô’s formula and Young inequality, we have
and
where, ε1, ε2 are positive constants to be determined. Then we have
(3.3)
We can find ε1, ε2 and c1, c2, such that
and note the inequalities can be turned into
$$\frac{\check{d}_{21}\frac{2}{p+1}(\varepsilon_2)^{-\frac {p-1}{2}}}{\hat {\bar{b}}_{1}-\check{d}_{12}\frac{p-1}{p+1}\varepsilon_1}< \frac {c_1}{c_2}< \frac{\hat{\bar{b}}_{2}-\check{d}_{21}\frac {p-1}{p+1}\varepsilon_2}{\check{d}_{12}\frac{2}{p+1}(\varepsilon _1)^{-\frac{p-1}{2}}},$$
(3.4)
namely
so taking $$\varepsilon_{1}=\frac{\hat{\bar{b}}_{1}}{\check {d}_{12}},~\varepsilon_{2}=\frac{\hat{\bar{b}}_{2}}{\check {d}_{21}}$$, and by Assumption 1, the above inequality holds. Let
It is easy to see that $$\check{\alpha}>0$$ and $$\hat{\beta}>0$$. Hence we get
(3.5)
so then we have
Therefore, letting $$z(t)=E [c_{1} x_{1}^{p}(t)+c_{2} x_{2}^{p}(t) ]$$ yields
Notice that the solution of equation
satisfies
Thus by the comparison argument we get
Then we have
which implies that there is a T>0, such that
In addition, $$E {[} x_{1}^{p}(t)+ x_{2}^{p}(t) {]}$$ is continuous and there exists a C(p)>0 such that
Let $$\iota(p)=\max\{\frac{2L(p)}{\min\{c_{1}, c_{2}\}}, C(p)\}$$, and then
The proof is complete. □

### Theorem 3.1

Under Assumption 1, solutions of SDE (1.4) are stochastically ultimately bounded.

The proof of Theorem 3.1 is a simple application of the Chebyshev inequality and Lemma 3.1.

Since the solution of SDE (1.4) is positive, we have the following lemma.

### Lemma 3.2

Let Assumption 1 hold and let x(t) be the solution of SDE (1.4) with initial value $$x(0)\in R_{+}^{2}$$. Then x(t) has the property that
$$x_{1}(t)\geq\phi_{1}(t), \qquad x_{2}(t)\geq\phi_{2}(t) \quad \mbox{a.s.},$$
(3.6)
where ϕ1(t) and ϕ2(t) are the solutions of the equations
$$d\phi_{1}(t)=\phi_{1}(t)\bigl[a_{1}\bigl(r(t)\bigr)-\bar{b}_{1}\bigl(r(t)\bigr)\phi_{1}(t)\bigr]\,dt+\sigma_{1}\bigl(r(t)\bigr)\phi_{1}(t)\,d{B}_{1}(t), \quad \phi_{1}(0)=x_{1}(0),$$
$$d\phi_{2}(t)=\phi_{2}(t)\bigl[a_{2}\bigl(r(t)\bigr)-\bar{b}_{2}\bigl(r(t)\bigr)\phi_{2}(t)\bigr]\,dt+\sigma_{2}\bigl(r(t)\bigr)\phi_{2}(t)\,d{B}_{2}(t), \quad \phi_{2}(0)=x_{2}(0).$$

### Proof

SDE (1.4) can be reduced to
\begin{aligned}[c] d{x}_1(t)=&x_{1}(t)\biggl[a_{1} \bigl(r(t)\bigr)+d_{12}\bigl(r(t)\bigr)\frac {x_2^2(t)}{x_1(t)}-\bar {b}_1\bigl(r(t)\bigr) x_1(t)\biggr]\,dt\\&{}+ \sigma_{1}\bigl(r(t)\bigr)x_1(t)\,d{B}_{1}(t), \\ d{x}_2(t)=&x_{2}(t)\biggl[a_{2}\bigl(r(t) \bigr)+d_{21}\bigl(r(t)\bigr)\frac {x_1^2(t)}{x_2(t)}-\bar {b}_2 \bigl(r(t)\bigr) x_2(t)\biggr]\,dt\\&{}+\sigma_{2}\bigl(r(t) \bigr)x_2(t)\,d{B}_{2}(t). \end{aligned}
(3.7)
From Theorem 2.1 of [12] and the stochastic comparison theorem, we know that x1(t)≥ϕ1(t) a.s. Similarly we have x2(t)≥ϕ2(t). This completes the proof. □
In view of [12, Lemmas 3.6, 5.2], one sees that, if Assumption 2 holds, there exist a constant θ>0 with $$a_{i}(k)-\frac{\theta+1}{2}\sigma_{i}^{2}(k) >0$$ (i=1,2, k∈S) and positive constants H1, H2 satisfying the following inequalities
$$\limsup_{t\rightarrow\infty}E \biggl[\frac{1}{ {(}\phi_1 (t) {)}^\theta} \biggr]\leq H_1, \qquad \limsup_{t\rightarrow\infty}E \biggl[ \frac{1}{ {(}\phi_2 (t) {)}^\theta} \biggr]\leq H_2,$$
(3.8)
and
$$\liminf_{t\rightarrow\infty}\frac{\log\phi_{1}(t)}{\log t}\geq - \frac {1}{\theta}, \qquad \liminf_{t\rightarrow\infty}\frac{\log\phi _{2}(t)}{\log t}\geq- \frac{1}{\theta} \quad \mbox{a.s.}$$
(3.9)
These, together with Lemma 3.2, yield

### Lemma 3.3

Under Assumptions 1 and 2 the solutionx(t) of SDE (1.4) with any initial value$$x(0)\in R_{+}^{2}$$has the property that
$$\limsup_{t\rightarrow\infty}E \biggl[\frac{1}{ {(}x_1 (t) {)}^\theta} \biggr]\leq H_1, \qquad \limsup_{t\rightarrow\infty}E \biggl[ \frac{1}{ {(}x_2 (t) {)}^\theta} \biggr]\leq H_2,$$
(3.10)
and
$$\liminf_{t\rightarrow\infty}\frac{\log x_{1}(t)}{\log t}\geq- \frac {1}{\theta}, \qquad \liminf_{t\rightarrow\infty}\frac{\log x_{2}(t)}{\log t}\geq- \frac{1}{\theta} \quad \mbox{\textit{a.s}.}$$
(3.11)
where H1, H2 are positive constants and θ>0 is such that $$a_{i}(k)-\frac{\theta+1}{2}\sigma_{i}^{2}(k) >0$$, i=1,2, k∈S.

### Theorem 3.2

Under Assumptions 1 and 2, SDE (1.4) is stochastically permanent.

### Proof

Let x(t) be the solution of SDE (1.4) with any given positive initial value $$x(0)\in R_{+}^{2}$$. By (3.10) of Lemma 3.3, we have
$$\limsup_{t\rightarrow\infty}E \biggl[\frac{1}{ {(}x_1 (t) {)}^\theta} \biggr]\leq H_1, \qquad \limsup_{t\rightarrow\infty}E \biggl[\frac{1}{ {(}x_2 (t) {)}^\theta} \biggr]\leq H_2.$$
For $$x(t)\in R_{+} ^{2}$$ and for any ϵ>0, let $$\delta_{1}=(\frac {\epsilon}{H_{1}})^{\frac{1}{\theta}}$$, $$\delta'_{1}=(\frac{\epsilon }{H_{2}})^{\frac{1}{\theta}}$$ and we have
$$P \bigl\{x_1(t)<\delta_1 \bigr\}=P \biggl\{ \frac{1}{ {(}x_1(t) {)}^\theta}>\frac{1}{\delta_1^\theta} \biggr\}\leq\frac{E {[}\frac{1}{ (x_1(t) )^\theta} ]}{\frac{1}{\delta _1^\theta}} \leq \delta_1^\theta H_1=\epsilon,$$
and
$$P \bigl\{x_2(t)<\delta'_1 \bigr\}=P \biggl\{\frac{1}{ (x_2(t) )^\theta}>\frac{1}{(\delta_1')^\theta} \biggr\}\leq\frac {E [\frac{1}{ (x_2(t) )^\theta} ]}{\frac {1}{(\delta _1')^\theta}} \leq\bigl(\delta_1'\bigr)^\theta H_2=\epsilon,$$
hence,
$$\limsup_{t\rightarrow\infty}P \bigl\{x_1(t)< \delta_1 {\bigr\}}\leq \epsilon, \qquad \limsup_{t\rightarrow\infty}P \bigl\{x_2(t)<\delta'_1 \bigr\}\leq \epsilon,$$
which means
$$\liminf_{t\rightarrow\infty}P\bigl\{x_1(t)\geq \delta_1\bigr\}\geq1-\epsilon , \qquad \liminf_{t\rightarrow\infty}P \bigl\{x_2(t)\geq\delta_1'\bigr\}\geq 1- \epsilon.$$
The other part of Definition 3.2 follows from Theorem 3.1. □

## 4 Persistence in mean

In this section, we will investigate persistence in mean. First we introduce the definition.

### Definition 4.1

SDE (1.4) is said to be persistent in mean, if there exist positive constants mi, Mi (i=1,2) such that the solution x(t) of SDE (1.4) has the following property:
$$\limsup_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\leq M_i \quad \mbox{a.s., }i=1,2,$$
(4.1)
and
$$\liminf_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds \geq m_i \quad \mbox{a.s., }i=1,2.$$
(4.2)

### Lemma 4.1

Let Assumption 1 hold. For any given initial value $$x(0)\in R_{+}^{2}$$, the solution x(t) of SDE (1.4) has the property that
$$\limsup_{t\rightarrow\infty}\frac{\log[ x_{1}(t)+x_{2}(t)]}{\log t}\leq1 \quad \mbox{\textit{a.s}.}$$
(4.3)

### Proof

Define $$V:R_{+}^{2}\rightarrow R_{+}$$ by V(x(t))=x1(t)+x2(t) and applying the generalized Itô’s formula, one can see that
where $$\check{a}=\max_{k\in S} \{a_{1}(k),a_{2}(k) \}$$, $$\check {b}_{0}=\max_{k\in S} \{|d_{21}(k)-\bar{b}_{1}(k)|,|d_{12}(k)-\bar {b}_{2}(k)| \}$$. From (3.1) of Lemma 3.1, we have
$$\limsup_{t\rightarrow\infty}E \bigl[V \bigl(x(t) \bigr) \bigr]=\limsup_{t\rightarrow\infty} E \bigl[x_{1}(t)+x_{2}(t) \bigr]\leq\bigl[\iota(2)\bigr]^{\frac{1}{2}}$$
(4.4)
and
$$\limsup_{t\rightarrow\infty}\int_{t}^{t+1}E \bigl[x_{1}^2(s)+x_{2}^2(s) \bigr]\,ds\leq\iota(2).$$
(4.5)
An application of the Burkholder-Davis-Gundy inequality (see [17, 18]) and the Hölder inequality (see [18]), yields
where $$\check{\sigma}=\max_{k\in S}\{\sigma_{1}(k),\sigma_{2}(k)\}$$. This together with (4.4) and (4.5), yields
$$\limsup_{t\rightarrow\infty}E \Bigl[\sup_{t\leq u \leq t+1}V \bigl(x(u) \bigr) \Bigr]\leq {(}1+\check{a}+3\check{\sigma} {)}\bigl[ \iota(2)\bigr]^{\frac{1}{2}}+\check{b}_0\iota(2).$$
(4.6)
We observe from (4.6) that there is a positive constant $$K_{2}^{*}$$ such that
Let ϵ>0 be arbitrary. Then, by the well-known Chebyshev inequality, we have
Applying the Borel-Cantelli lemma (see [18]), for almost all ω∈Ω, we obtain that
$$\sup_{m\leq u\leq m+1}\bigl[x_{1}(u)+x_{2}(u) \bigr]\leq m^{1+\epsilon}$$
(4.7)
holds for all but finitely many m. Hence there exists m0(ω) such that, for almost all ω∈Ω, (4.7) holds whenever m≥m0. Consequently, for almost all ω∈Ω, if m≥m0 and m≤t≤m+1, we have
Therefore
Letting ϵ→0 we obtain the desired assertion (4.3). □

### Theorem 4.1

Under Assumptions 1 and 2, for any initial value $$x(0)\in R_{+} ^{2}$$, the solution x(t) of SDE (1.4) is persistent in mean.

### Proof

Let $$V:R_{+} ^{2}\rightarrow R_{+}$$ be defined by
$$V\bigl(x(t)\bigr)=c_1x_1(t)+c_2x_2(t),$$
(4.8)
where c1, c2 satisfy the following inequality
$$\frac{\check{d}_{21}}{\hat{\bar{b}}_1}< \frac{c_1}{c_2}< \frac {\hat{\bar {b}}_2}{\check{d}_{12}}.$$
(4.9)
From Assumption 1, we can find positive constants c1, c2 satisfying the inequality (4.9). From the inequality (4.3) of Lemma 4.1 and (3.11) of Lemma 3.3, one can derive that
$$\lim_{t\rightarrow\infty} \frac{\log V {(}x(t) {)}}{t}=0 \quad \mbox{a.s.}$$
(4.10)
By virtue of the generalized Itô’s formula, we have
(4.11)
Let $$\frac{\hat{\sigma}^{2}(k)}{2}=\frac{1}{2(\frac{1}{\sigma _{1}^{2}(k)}+\frac{1}{\sigma_{2}^{2}(k)})}$$, $$\hat{b}=\min \{\frac {\hat {\bar{b}}_{1}}{c_{1}}-\frac{c_{2}}{c_{1}^{2}}\check{d}_{21},\ \frac{\hat{\bar {b}}_{2}}{c_{2}}-\frac{c_{1}}{c_{2}^{2}}\check{d}_{12} \}$$ and $$\check {a}(k)= \max\{a_{1}(k),a_{2}(k)\}$$. It is clear that $$\hat{b}>0$$ by (4.9). Applying Hölder inequality and Assumption 1, we have
(4.12)
and making use of the Cauchy inequality we have
(4.13)
Substituting (4.12), (4.13) into (4.11) leads to
(4.14)
Integrating both sides of the above inequality (4.14) from 0 to t gives
(4.15)
where M(t) is a martingale defined by
$$M(t)= \int_{0}^{t}\frac{c_1 \sigma_{1}(r(s))x_1(s) \,dB_{1}(s)+c_2 \sigma_{2}(r(s))x_2(s)\,dB_{2}(s)}{c_1 x_1(s) +c_2 x_2(s)}$$
with M(0)=0. The quadratic variation of this martingale is
By the strong law of large numbers for martingales (see [17, 18]), we therefore have
We can therefore divide both sides of (4.15) by t and take the superior limit to obtain
which means that
$$\limsup_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_1(s)\,ds\leq \frac {2}{c_1\hat{b}}\sum_{k=1}^{N} \pi_k\biggl[\check{a}(k)-\frac{\hat{\sigma }^2(k)}{2}\biggr] \quad \mbox{a.s.}$$
(4.16)
and
$$\limsup_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_2(s)\,ds\leq \frac {2}{c_2\hat{b}}\sum_{k=1}^{N} \pi_k\biggl[\check{a}(k)-\frac{\hat{\sigma }^2(k)}{2}\biggr] \quad \mbox{a.s.}$$
(4.17)
On the other hand, we observe from Theorem 5.2 of [16] that
and
where $$\check{\bar{b}}_{i}=\max_{k\in S}\{\bar{b}_{i}(k)\}$$, i=1,2. These, together with Lemma 3.2, yield
$$\liminf_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_1(s)\,ds\geq \frac {1}{\check{\bar{b}}_1}\sum_{k=1}^{N} \pi_k\biggl[a_1(k)-\frac{\sigma _1^2(k)}{2}\biggr] \quad \mbox{a.s.}$$
(4.18)
and
$$\liminf_{t\rightarrow\infty}\frac{1}{t}\int _{0}^{t}x_2(s)\,ds\geq \frac {1}{\check{\bar{b}}_2}\sum_{k=1}^{N} \pi_k\biggl[a_2(k)-\frac{\sigma _2^2(k)}{2}\biggr] \quad \mbox{a.s.}$$
(4.19)
Let $$M_{i}=\frac{2}{c_{i}\hat{b}}\sum_{k=1}^{N}\pi_{k}[\check {a}(k)-\frac {\hat{\sigma}^{2}(k)}{2}]>0$$, $$m_{i}=\frac{1}{\check{\bar{b}}_{i}}\sum_{k=1}^{N} \pi_{k}[a_{i}(k)-\frac{\sigma_{i}^{2}(k)}{2}]>0$$, then we have
$$\limsup_{t\rightarrow\infty}\frac{1}{t}\int_{0}^{t}x_{i}(s)\,ds \leq M_i \quad \hbox{and} \quad \liminf_{t\rightarrow\infty} \frac{1}{t}\int_{0}^{t}x_{i}(s)\,ds \geq m_i \quad \mbox{a.s., }i=1,2.$$
This completes the proof of Theorem 4.1. □

## 5 Extinction

In the previous sections we have shown that, under Assumption 1 and the conditions $$a_{i}(k) >\frac{\sigma_{i}^{2}(k)}{2}$$ (i=1,2), that is, when the white noise intensity is relatively small, the species is stochastically permanent and persistent in mean, so the population will not become extinct. However, we will show in this section that if the noise is sufficiently large, the population governed by SDE (1.4) will become extinct with probability 1.

### Theorem 5.1

Let Assumption 1 hold. Let $$\check{a}(k)=\max\{a_{1}(k),a_{2}(k)\}$$, $$\frac{\hat{\sigma}^{2}(k)}{2}=\frac{1}{2(1/\sigma_{1}^{2}(k)+1/\sigma_{2}^{2}(k))}$$ and let c1, c2 be positive constants satisfying inequality (4.9). For any given initial value $$x(0)\in R_{+}^{2}$$, the solution of the SDE (1.4) has the property that
$$\limsup_{t\rightarrow\infty} \frac{\log [c_1 x_1 (t)+c_2 x_2(t) ]}{t}\leq\sum _{k=1}^{N}\pi_k\biggl[\check{a}(k)- \frac{\hat {\sigma}^2(k)}{2}\biggr] \quad \mbox{\textit{a.s}.}$$
In particular, if $$\sum_{k=1}^{N}\pi_{k}[\check{a}(k)-\frac{\hat {\sigma }^{2}(k)}{2}]<0$$, then $$\lim_{t\rightarrow\infty}x(t)=0$$ a.s.

### Proof

Let $$V:R_{+} ^{2}\rightarrow R_{+}$$ be defined as (4.8). From (4.14), we know that
(5.1)
Integrating the inequality (5.1) from 0 to t gives
$$\log V \bigl(x(t) \bigr)\leq\log V \bigl(x(0) \bigr)+\int _{0}^{t}\biggl[\check{a}\bigl(r(s)\bigr)- \frac{\hat{\sigma}^2(r(s))}{2}\biggr]\,ds+M(t),$$
(5.2)
where M(t) is the martingale defined in the proof of Theorem 4.1. By the strong law of large numbers for martingales, dividing both sides of (5.2) by t and then letting t→∞ yields
(5.3)
We obtain the desired assertion. □

## 6 Example and numerical simulations

In this section we introduce an example and some figures to illustrate our main theorems. In the following example, for the sake of convenience, we assume $$\alpha_{ij}=1$$, so that $$\bar{b}_{1}=b_{1} +d_{12}$$, $$\bar{b}_{2}=b_{2} +d_{21}$$. Let the state space be S={1,2}; then the SDE (1.4) with regime switching takes the following form
\begin{aligned}[c] d{x}_1&= \bigl(x_{1}\bigl[a_{1} \bigl(r(t)\bigr)-\bar{b}_1\bigl(r(t)\bigr) x_1 \bigr]+d_{12} \bigl(r(t)\bigr) x_2^2 \bigr)\,dt+ \sigma_{1}\bigl(r(t)\bigr)x_1\,d{B}_{1}(t), \\ d{x}_2&= \bigl(x_{2}\bigl[a_{2}\bigl(r(t) \bigr)-\bar{b}_2\bigl(r(t)\bigr) x_2\bigr]+d_{21} \bigl(r(t)\bigr)x_1^2 \bigr)\,dt+\sigma_{2} \bigl(r(t)\bigr)x_2\,d{B}_{2}(t), \end{aligned}
(6.1)
for t≥0. We numerically simulate the solution of (6.1). By the method mentioned in [19], for each k∈S, we consider the discretized equation
\begin{aligned}[c] x_{1,i+1}=&x_{1,i}+\bigl[x_{1,i} \bigl(a_1(k)-\bar{b}_1(k) x_{1,i} \bigr)+d_{12}(k)x_{2,i}^2\bigr]h + \sigma_{1}(k)x_{1,i}\sqrt{h}\xi_{1,i}\\&{}+ \frac{1}{2}\sigma ^2_{1}(k)x_{1,i}\bigl(h \xi^2_{1,i}-h\bigr), \\ x_{2,i+1}=&x_{2,i}+\bigl[x_{2,i}\bigl(a_2(k)- \bar{b}_2(k) x_{2,i}\bigr)+d_{21}(k)x_{1,i}^2 \bigr]h +\sigma_{2}(k)x_{2,i}\sqrt{h}\xi_{2,i}\\&{}+ \frac{1}{2}\sigma ^2_{2}(k)x_{2,i}\bigl(h \xi^2_{2,i}-h\bigr). \end{aligned}
(6.2)
Using the numerical scheme given above (implemented in Matlab), we choose the initial value (x1(0),x2(0))=(0.8,0.7) and time step h=0.01, and illustrate our main conclusions through the following example and figures.
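For readers without Matlab, scheme (6.2) combined with an approximate simulation of the Markov chain can be sketched in Python. The per-step switching rule (switch with probability −γ_rr·h) is a small-h approximation and an assumption of this sketch, not the paper's code; the parameter-tuple layout is our own convention, filled in below with the Case 1 values:

```python
import random
import math

def simulate(params, Gamma, x0, h=0.01, n_steps=5000, seed=42):
    """Milstein-type scheme (6.2) for SDE (6.1) under regime switching.

    params[k] = (a1, a2, b1bar, b2bar, d12, d21, s1, s2) for regime k.
    Regimes are indexed 0, 1 (corresponding to the paper's states 1, 2).
    """
    rng = random.Random(seed)
    x1, x2 = x0
    r = 0
    traj = [(x1, x2)]
    for _ in range(n_steps):
        # approximate switching: for small h, P(leave state r) ~ -Gamma[r][r]*h
        if rng.random() < -Gamma[r][r] * h:
            weights = [Gamma[r][v] if v != r else 0.0 for v in range(len(Gamma))]
            r = rng.choices(range(len(Gamma)), weights=weights)[0]
        a1, a2, b1, b2, d12, d21, s1, s2 = params[r]
        xi1, xi2 = rng.gauss(0, 1), rng.gauss(0, 1)
        # Milstein step (6.2) for each component
        x1n = (x1 + (x1 * (a1 - b1 * x1) + d12 * x2 ** 2) * h
               + s1 * x1 * math.sqrt(h) * xi1
               + 0.5 * s1 ** 2 * x1 * (h * xi1 ** 2 - h))
        x2n = (x2 + (x2 * (a2 - b2 * x2) + d21 * x1 ** 2) * h
               + s2 * x2 * math.sqrt(h) * xi2
               + 0.5 * s2 ** 2 * x2 * (h * xi2 ** 2 - h))
        x1, x2 = x1n, x2n
        traj.append((x1, x2))
    return traj

# Case 1 data from Example 6.1: (a1, a2, b1bar, b2bar, d12, d21, s1, s2)
params = {0: (6, 7, 4, 6, 2, 3, 0.08, 0.05),
          1: (3, 2, 4, 3, 2, 1, 0.10, 0.20)}
Gamma = [[-2.0, 2.0], [1.0, -1.0]]
traj = simulate(params, Gamma, x0=(0.8, 0.7))
```

With the small noise intensities of Case 1 the discretized path stays positive and bounded, consistent with stochastic permanence.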

### Example 6.1

Assume that the SDE (6.1) switches from one to the other according to the movement of the Markov chain r(t) on the state space S={1,2} with the generator
$$\varGamma=\left ( \begin{array}{c@{\quad}c} -2 & 2 \\ 1 & -1 \\ \end{array} \right ).$$
We shall give the stationary distribution $$\pi=(\pi_{1},\pi_{2})=(\frac {1}{3},\frac{2}{3})$$ of the Markov chain r(t) directly because π can be obtained by solving the simple linear equations
$$\pi\varGamma=0, \qquad \pi_1+\pi_ 2 =1.$$

### Case 1

Assume that
\begin{array}{l@{\qquad}l@{\qquad}l@{\qquad}l} a_{1}(1)=6, & \bar{b}_1(1)=4, & d_{12}(1)=2, & \sigma_{1}(1)=0.08; \\\noalign{\vspace{3pt}} a_{2}(1)=7, & \bar{b}_2(1)=6, & d_{21}(1)=3, & \sigma_{2}(1)=0.05; \\\noalign{\vspace{3pt}} a_{1}(2)=3, & \bar{b}_1(2)=4, & d_{12}(2)=2, & \sigma_{1}(2)=0.1; \\\noalign{\vspace{3pt}} a_{2}(2)=2, & \bar{b}_2(2)=3, & d_{21}(2)=1, & \sigma_{2}(2)=0.2. \end{array}
It is clear that $$\hat{\bar{b}}_{1}\hat{\bar{b}}_{2}=12>\check {d}_{12}\check{d}_{21}=6$$ and $$a_{i}(k)-\frac{1}{2}\sigma_{i}^{2}(k) >0$$ for i=1,2 and k=1,2, so Assumptions 1 and 2 hold. Making use of Theorems 2.1 and 3.2, we know that SDE (6.1) has a unique positive solution x(t) for any positive initial condition and that it is stochastically permanent.
In Theorem 4.1 we need c1 and c2 to satisfy inequality (4.9); since $$\frac{\check{d}_{21}}{\hat{\bar{b}}_{1}}=\frac{3}{4}<1<\frac{\hat{\bar{b}}_{2}}{\check{d}_{12}}=\frac{3}{2}$$, we may simply take c1=c2=1. We compute the constants $$M_{i}$$ and $$m_{i}$$ from (4.16)–(4.19) and find that they are positive. Then it follows from Theorem 4.1 that SDE (6.1) is persistent in mean.
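As a sanity check, the lower bounds $$m_{i}$$ from (4.18)–(4.19) can be evaluated directly for the Case 1 data; the numbers printed here are our own computation, not quoted from the paper:

```python
# Case 1 data: per-regime (a_i, sigma_i) and the worst-case b-bar values
pi = (1 / 3, 2 / 3)                   # stationary distribution of r(t)
a1, s1 = (6, 3), (0.08, 0.10)         # a_1(k), sigma_1(k) for k = 1, 2
a2, s2 = (7, 2), (0.05, 0.20)
b1bar_max, b2bar_max = 4, 6           # max_k of b-bar_i(k)

# m_i = (1 / max_k b-bar_i(k)) * sum_k pi_k * (a_i(k) - sigma_i(k)^2 / 2)
m1 = sum(p * (a - s ** 2 / 2) for p, a, s in zip(pi, a1, s1)) / b1bar_max
m2 = sum(p * (a - s ** 2 / 2) for p, a, s in zip(pi, a2, s2)) / b2bar_max

print(round(m1, 4), round(m2, 4))  # both positive
```

Both bounds are strictly positive (roughly 0.999 and 0.609), confirming persistence in mean for Case 1.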
We use Figs. 1, 2 and 3 to illustrate Case 1. The left pictures in Fig. 1 show that the fluctuations caused by the small intensity of the white noise are relatively mild. The right subgraphs are the normal quantile-quantile plots of the paths x1(t) and x2(t); they lie close to straight lines, which means that the distributions are approximately normal (see the middle histograms). In the left pictures of Fig. 3, the red □ represents the phase portrait of x1(t) and x2(t) when there is only the one state k=1, and the blue ∘ similarly represents the phase portrait for k=2, while the black + describes the switching back and forth between the states k=1 and k=2 according to the movement of r(t). The black area is located between the red region and the blue region, and the red and blue areas resemble the two limiting states of the black region. Corresponding to the left graph, the right picture in Fig. 3 describes the state with no random disturbance. From Fig. 3, we can clearly see the impact of white noise and colored environmental noise on the populations. From Figs. 1–3, we know that, starting from the initial point (0.8,0.7), the process x(t) is positive recurrent with respect to the rectangle {(x1,x2):1.5<x1<2.7, 1.3<x2<2.5}, which verifies the results of Case 1.

### Case 2

Assume that
\begin{array}{l@{\qquad}l@{\qquad}l@{\qquad}l} a_{1}(1)=3, & \bar{b}_1(1)=3, & d_{12}(1)=2, & \sigma_{1}(1)=2.3; \\\noalign{\vspace{3pt}} a_{2}(1)=2, & \bar{b}_2(1)=2, & d_{21}(1)=1, & \sigma_{2}(1)=2.5; \\\noalign{\vspace{3pt}} a_{1}(2)=1, & \bar{b}_1(2)=4, & d_{12}(2)=1, & \sigma_{1}(2)=2.2; \\\noalign{\vspace{3pt}} a_{2}(2)=2, & \bar{b}_2(2)=2, & d_{21}(2)=\frac{3}{2}, & \sigma _{2}(2)=2.0. \end{array}
We take c1, c2 equal to 3 and 4, respectively. The condition $$\hat{\bar{b}}_{1}\hat{\bar{b}}_{2}=6>\check {d}_{12}\check {d}_{21}=3$$ holds, but $$\sum_{k=1}^{2}\pi_{k}[\check{a}(k)-\frac{\hat{\sigma }^{2}(k)}{2}]\doteq -0.0817<0$$, so the conditions of Theorems 3.2 and 4.1 are not satisfied, while the extinction condition of Theorem 5.1 is. Hence, as a result of the Markovian switching, the overall behavior of SDE (6.1) is extinction by Theorem 5.1.
We give Figs. 4, 5 and 6 to illustrate Case 2. Note that x1, x2 suffer large white noise. Comparing the paths with and without random disturbance in the left pictures of Fig. 4 (blue lines and black lines, respectively), we can see that the fluctuations of the random model are more violent and the species dies out, which can also be seen from the histograms, Fig. 5 and the left pictures in Fig. 6. From Fig. 6, we know that large white noise leads to population extinction, even though the corresponding deterministic model is persistent (see the right picture in Fig. 6).