1 Introduction

It is a common phenomenon that two or more species living in proximity share the same basic requirements and compete for resources, habitat, food, or territory. It is therefore important to study competitive models for multiple species. As is well known, the Lotka-Volterra model of ecological population dynamics has received great attention and has been studied extensively owing to its theoretical and practical significance. A deterministic competitive Lotka-Volterra system with n interacting species is described by the n-dimensional differential equation

$$ \frac{dx_{i}(t)}{dt}=x_{i}(t)\Biggl[ b_{i}-\sum _{j=1}^{n}a_{ij}x_{j}(t) \Biggr],\quad i=1,2,\ldots,n, $$
(1.1)

where \(x_{i}(t)\) represents the population size of species i at time t, \(b_{i}\) is the growth rate of species i, and \(a_{ij}\) represents the effect of interspecific (if \(i\ne j\)) or intraspecific (if \(i=j\)) interaction.

On the other hand, from the biological point of view, population systems in the real world are inevitably affected by environmental noise; in practice, the growth rates are often subject to such noise. To obtain a more accurate description of these systems, we usually model the stochastic perturbation of the growth rate \(b_{i}\) as an average value plus an error term. The time-dependent intrinsic growth rate then becomes

$$b_{i}\to b_{i}+\sigma_{i}\dot{B}_{i}(t), $$

where \(\dot{B}_{i}(t)\) is a white noise. As a result, system (1.1) becomes a stochastic Lotka-Volterra competition system with n interacting components as follows:

$$ dx_{i}(t)=x_{i}(t)\Biggl[\Biggl(b_{i}- \sum_{j=1}^{n}a_{ij}x_{j}(t)\Biggr) \,dt+\sigma_{i} \,dB_{i}(t)\Biggr],\quad i=1,2,\ldots,n, $$
(1.2)

where \(b_{i}\), \(a_{ij}\), \(\sigma_{i}\) are non-negative for \(i,j=1,2,\ldots,n\), and \(\sigma_{i}^{2}\) is called the noise intensity. Throughout this paper we always assume that the following hypothesis holds:

$$ b_{i}>0,\qquad a_{ii}>0,\qquad a_{ij}\geqslant0\quad (i\ne j). $$
(1.3)

It is therefore necessary to reveal how the noise affects the population systems. As a matter of fact, stochastic Lotka-Volterra competitive systems have recently been studied by many authors, for example, [1–6].

In the study of population systems, permanence and extinction are two important and interesting topics, meaning respectively that the population system will survive or will die out in the future; they have received much attention (see [7–18]). Luo and Mao [15] revealed that a large white noise will force stochastic Lotka-Volterra systems to become extinct, while the population may be persistent under a relatively small white noise. Li and Mao [9] investigated a non-autonomous stochastic Lotka-Volterra competitive system, and some sufficient conditions on stochastic permanence and extinction were obtained. Li et al. [11] showed that both stochastic permanence and extinction have close relationships with the stationary probability distribution of the Markov chain. Tran and Yin [16] investigated stochastic permanence and extinction for stochastic competitive Lotka-Volterra systems using feedback controls.

However, most of the existing criteria are established for general stochastic Lotka-Volterra systems. Hence, a natural question arises: how can one derive less conservative criteria for stochastic Lotka-Volterra competitive systems? This issue constitutes the first motivation of this paper.

Moreover, most of the existing criteria concern total permanence and total extinction. To the best of our knowledge, partial permanence and partial extinction, which are very important properties, have scarcely been investigated. Is it feasible to obtain partial permanence and extinction conditions for stochastic Lotka-Volterra competitive systems? Solving this interesting problem is the second purpose of this paper.

The rest of the paper is arranged as follows. In Section 2, some preliminaries, definitions, and lemmas are given. The main results are stated in Sections 3 and 4: sufficient conditions on persistence in mean and extinction of a single species are obtained in Section 3, and, based on these single-species conditions, sufficient criteria for partial permanence and extinction of system (1.2) are established in Section 4. Section 5 provides some numerical examples to illustrate the effectiveness of the derived results.

2 Notation

Throughout this paper, unless otherwise specified, let \((\Omega,\mathscr{F},\{\mathscr{F}_{t}\}_{t\geq0},\mathbb{P})\) be a complete probability space with a filtration \(\{\mathscr{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (i.e., it is increasing and right continuous, while \(\mathscr{F}_{0}\) contains all \(\mathbb{P}\)-null sets). Let \({B}(t)=(B_{t}^{1},\ldots,B_{t}^{m})\) be an m-dimensional Brownian motion defined on the probability space. Let \(R_{+}^{n}=\{ x\in R^{n}: x_{i}>0\mbox{ for all }1\leqslant i\leqslant n\}\).

Lemma 2.1

([19])

Assume that condition (1.3) holds. Then, for any given initial value \(x(0)\in R_{+}^{n}\), there is a unique solution \(x(t)\) to system (1.2) and the solution will remain in \(R_{+}^{n}\) with probability 1, namely

$$P\bigl\{ x(t)\in R_{+}^{n}, \forall t\geqslant0\bigr\} =1. $$

Lemma 2.2

([19])

Assume that condition (1.3) holds. Then, for any given initial value \(x(0)\in R_{+}^{n}\) and any \(p>0\), there exists a constant \(K_{p}>0\) such that the solution to system (1.2) obeys

$$\sup_{0\leqslant t< +\infty}\sum_{i=1}^{n}Ex_{i}^{p}(t) \leqslant K_{p}. $$

Definition 2.1

System (1.2) is said to be persistent in mean if there exist positive constants \(\alpha_{i}\), \(\beta_{i}\) such that the solution to system (1.2) has the following property:

$$\begin{aligned}& \limsup_{t\to\infty}\frac{1}{t}\int_{0}^{t} x_{i}(s)\,ds\leqslant\beta_{i} \quad\mbox{a.s. }i=1, \ldots,n, \\ & \liminf_{t\to\infty}\frac{1}{t}\int_{0}^{t} x_{i}(s)\,ds\geqslant\alpha_{i} \quad\mbox{a.s. }i=1, \ldots,n. \end{aligned}$$
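In numerical experiments, the time averages appearing in Definition 2.1 can be estimated directly from a discretized trajectory. The following helper is a minimal sketch in Python/NumPy (an illustration of ours, not part of the original analysis); it assumes that one species has been sampled on a uniform time grid with step dt and approximates \(\frac{1}{t}\int_{0}^{t}x_{i}(s)\,ds\) by the trapezoidal rule.

```python
import numpy as np

def running_time_average(x, dt):
    """Estimate (1/t) * int_0^t x(s) ds along a sampled trajectory.

    x  : 1-D array with the sampled values x(0), x(dt), x(2*dt), ...
    dt : the (uniform) sampling step.
    """
    x = np.asarray(x, dtype=float)
    # cumulative trapezoidal integral of x over [0, k*dt]
    increments = 0.5 * (x[1:] + x[:-1]) * dt
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    t = dt * np.arange(len(x))
    avg = np.empty_like(x)
    avg[0] = x[0]                  # limiting value as t -> 0
    avg[1:] = integral[1:] / t[1:]
    return avg
```

If the running averages of species i settle between two positive constants as t grows, this is consistent with persistence in mean in the sense of Definition 2.1; if they decay to zero, this suggests extinction of that species.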

To proceed with our study, we consider two auxiliary stochastic differential equations

$$\begin{aligned}& \textstyle\begin{cases} dy_{i}(t)=y_{i}(t)[(b_{i}-a_{ii}y_{i}(t))\,dt+\sigma_{i} \,dB_{i}(t)], \\ y_{i}(0)=x_{i}(0),\quad i=1,\ldots,n, \end{cases}\displaystyle \end{aligned}$$
(2.1)
$$\begin{aligned}& \textstyle\begin{cases} dz_{i}(t)=z_{i}(t)[(b_{i}-\sum_{j\ne i}a_{ij}y_{j}(t)- a_{ii}z_{i}(t))\,dt+\sigma_{i} \,dB_{i}(t)], \\ z_{i}(0)=x_{i}(0),\quad i=1,\ldots,n, \end{cases}\displaystyle \end{aligned}$$
(2.2)

where

$$y(t)=\bigl(y_{1}(t),\ldots,y_{n}(t)\bigr)^{T}, \qquad z(t)=\bigl(z_{1}(t),\ldots,z_{n}(t) \bigr)^{T}. $$

Lemma 2.3

Assume that condition (1.3) holds, and let \(x(t)\) be a solution to system (1.2) with \(x(0)\in R_{+}^{n}\). Then we have

$$z(t)\leqslant x(t)\leqslant y(t), $$

i.e.,

$$z_{i}(t)\leqslant x_{i}(t)\leqslant y_{i}(t), \quad i=1,\ldots,n. $$

Proof

By the Itô formula, we derive that

$$\begin{aligned}& \begin{aligned}[b] \frac{1}{x_{i}(t)}={}& \frac{1}{x_{i}(0)}\exp\biggl[\biggl(\frac{\sigma_{i}^{2}}{2} -b_{i}\biggr)t+ \sum_{j\ne i}a_{ij}\int_{0}^{t}x_{j}(s) \,ds-\sigma_{i}B_{i}(t)\biggr] \\ &{}+a_{ii}\int_{0}^{t}\exp\biggl[ \biggl(\frac{\sigma_{i}^{2}}{2}-b_{i}\biggr) (t-s)+\sum _{j\ne i}a_{ij}\int_{s}^{t}x_{j}( \iota)\,d\iota \\ &{}-\sigma_{i}\bigl(B_{i}(t)-B_{i}(s)\bigr) \biggr]\,ds,\quad i=1,\ldots,n, \end{aligned} \end{aligned}$$
(2.3)
$$\begin{aligned}& \begin{aligned}[b] \frac{1}{y_{i}(t)}={}& \frac{1}{x_{i}(0)}\exp\biggl[\biggl(\frac{\sigma_{i}^{2}}{2} -b_{i}\biggr)t- \sigma_{i}B_{i}(t)\biggr]+a_{ii}\int _{0}^{t}\exp\biggl[\biggl(\frac{\sigma _{i}^{2}}{2}-b_{i} \biggr) (t-s) \\ &{}-\sigma_{i}\bigl(B_{i}(t)-B_{i}(s)\bigr) \biggr]\,ds,\quad i=1,\ldots,n, \end{aligned} \end{aligned}$$
(2.4)

which means

$$ x_{i}(t)\leqslant y_{i}(t),\quad i=1, \ldots,n. $$
(2.5)

Applying the Itô formula to equation (2.2) yields

$$ \begin{aligned}[b] \frac{1}{z_{i}(t)}={}& \frac{1}{x_{i}(0)}\exp\biggl[\biggl(\frac{\sigma_{i}^{2}}{2} -b_{i}\biggr)t+ \sum_{j\ne i}a_{ij}\int_{0}^{t}y_{j}(s) \,ds-\sigma_{i}B_{i}(t)\biggr] \\ &{}+a_{ii}\int_{0}^{t}\exp\biggl[ \biggl(\frac{\sigma_{i}^{2}}{2}-b_{i}\biggr) (t-s) +\sum _{j\ne i}a_{ij}\int_{s}^{t}y_{j}( \iota)\,d\iota \\ &{}-\sigma_{i}\bigl(B_{i}(t)-B_{i}(s)\bigr) \biggr]\,ds,\quad i=1,\ldots, n. \end{aligned} $$
(2.6)

Since \(x_{j}(t)\leqslant y_{j}(t)\) by (2.5), comparing (2.3) with (2.6) shows that \(\frac{1}{z_{i}(t)}\geqslant\frac{1}{x_{i}(t)}\), and hence

$$z_{i}(t)\leqslant x_{i}(t),\quad i=1,\ldots,n. $$

 □

3 Persistence in mean and extinction of a single species

3.1 Persistence of a single species

In this section, we investigate persistence in mean and extinction of a single species for system (1.2). Let us first present some lemmas which are essential to the proof of Theorem 3.1.

Lemma 3.1

([20])

Let condition (1.3) hold. The solution \(y_{i}(t)\) to equation (2.1) has the following property:

$$\lim_{t \to\infty}\frac{\log y_{i}(t)}{t}=\biggl(b_{i} - \frac{\sigma_{i}^{2}}{2}\biggr)\land0 \quad\textit{a.s.} $$
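Lemma 3.1 can be checked numerically: simulating the one-dimensional logistic equation (2.1) with the Euler-Maruyama scheme and computing \(\frac{\log y_{i}(t)}{t}\) for a large t should give a value close to \((b_{i}-\frac{\sigma_{i}^{2}}{2})\land0\). The following is a minimal sketch in Python/NumPy with hypothetical parameter values (an illustration only, not the authors' code).

```python
import numpy as np

def logistic_exponent(b, a, sigma, y0=1.0, T=1000.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of dy = y[(b - a*y) dt + sigma dB]
    and the resulting estimate of log y(T) / T (cf. Lemma 3.1)."""
    rng = np.random.default_rng(seed)
    y = y0
    for _ in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt))
        y += y * ((b - a * y) * dt + sigma * dB)
        y = max(y, 1e-12)          # guard against the scheme leaving (0, infinity)
    return np.log(y) / T

# hypothetical parameters with b - sigma^2/2 = -0.5 < 0, so Lemma 3.1
# predicts that log y(t)/t tends to -0.5
print(logistic_exponent(b=0.5, a=1.0, sigma=np.sqrt(2.0)))
```

With \(b_{i}-\frac{\sigma_{i}^{2}}{2}>0\) the same experiment returns a value near zero, in agreement with the lemma.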

With the help of Lemma 3.1, we slightly improve Lemma 3.1 of [8] by weakening hypotheses posed on the coefficients of equation (2.2) as follows.

Lemma 3.2

Let condition (1.3) hold, and assume that \(b_{i}-\frac{\sigma_{i}^{2}}{2}>0\), \(b_{j}-\frac{\sigma _{j}^{2}}{2}\geq 0\) (\(i\neq j\)) and \(b_{i}-\frac{\sigma_{i}^{2}}{2}-\sum_{j\ne i}\frac{a_{ij}}{a_{jj}}((b_{j}-\frac{\sigma_{j}^{2}}{2})\vee 0)>0\). Then the solution to equation (2.2) has the property

$$ \lim_{t\to\infty}\frac{\log z_{i}(t)}{t}=0 \quad \textit{a.s.} $$
(3.1)

Theorem 3.1

Let condition (1.3) and the assumptions of Lemma 3.2 hold. Then the solution to system (1.2) has the following property:

$$\begin{aligned}& \limsup_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\leqslant \frac{1}{a_{ii}}\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr) \quad \textit{a.s.}, \end{aligned}$$
(3.2)
$$\begin{aligned}& \liminf_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\geqslant \frac{1}{a_{ii}}\biggl[\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr)-\sum _{j\ne i}\frac{a_{ij}}{a_{jj}}\biggl(\biggl(b_{j}- \frac{\sigma_{j}^{2}}{2}\biggr)\vee 0\biggr)\biggr] \quad\textit{a.s.}, \end{aligned}$$
(3.3)

which means the species i of system (1.2) is persistent in mean.

Proof

Applying the Itô formula to equation (2.1) yields

$$ \log y_{i}(t)=\log y_{i}(0)+ \biggl(b_{i} -\frac{\sigma_{i}^{2}}{2}\biggr)t-a_{ii}\int _{0}^{t}y_{i}(s)\,ds+ \sigma_{i}B_{i}(t). $$
(3.4)

Then we have

$$ \int_{0}^{t}y_{i}(s) \,ds=\frac{(b_{i}-\frac{\sigma_{i}^{2}}{2})}{a_{ii}}t +\frac{\sigma_{i}}{a_{ii}}B_{i}(t)-\frac{1}{a_{ii}} \bigl(\log y_{i}(t) -\log y_{i}(0)\bigr). $$
(3.5)

Dividing both sides of (3.5) by t yields

$$\begin{aligned} \frac{1}{t}\int_{0}^{t}y_{i}(s) \,ds= \frac{1}{a_{ii}}\biggl[\frac {\log y_{i}(0)}{t}- \frac{\log y_{i}(t)}{t}+ \biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr)+ \frac{1}{t}\int _{0}^{t}\sigma_{i}\,dB_{i}(s) \biggr]. \end{aligned}$$
(3.6)

Note that

$$\lim_{t\to\infty}\frac{\log y_{i}(0)}{t}=0, \qquad \lim_{t\to\infty}\frac{1}{t}\int _{0}^{t}\sigma_{i}\,dB_{i}(s)=0 \quad\mbox{a.s.} $$

This implies

$$\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}y_{i}(s) \,ds=-\frac {1}{a_{ii}}\lim_{t\to\infty}\frac{\log y_{i}(t)}{t}+ \frac{1}{a_{ii}}\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr). $$

By Lemma 3.1, letting \(t\to\infty\) on both sides of (3.6) (written for index j) yields

$$ \lim_{t\to\infty}\frac{1}{t}\int _{0}^{t}y_{j}(s)\,ds=\frac {1}{a_{jj}} \biggl(\biggl(b_{j}-\frac{\sigma_{j}^{2}}{2}\biggr)\vee 0\biggr)\quad \mbox{a.s.} $$
(3.7)

Combining Lemma 2.3 and (3.7), and noting that \(b_{i}-\frac{\sigma_{i}^{2}}{2}>0\), we can claim that

$$ \limsup_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\leqslant \lim _{t\to\infty}\frac{1}{t}\int_{0}^{t}y_{i}(s) \,ds= \frac{1}{a_{ii}}\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr) \quad \mbox{a.s.} $$
(3.8)

Now we proceed to show assertion (3.3). Applying the Itô formula to \(\log z_{i}(t)\) yields

$$ \begin{aligned}[b] \log z_{i}(t)={}&\log z_{i}(0)-\int_{0}^{t}\biggl( \frac{\sigma_{i}^{2}}{2}-b_{i}\biggr)\,ds -a_{ii}\int _{0}^{t}z_{i}(s)\,ds \\ &{}-\sum_{j\ne i}\int_{0}^{t}a_{ij}y_{j}(s) \,ds+ \int_{0}^{t}\sigma_{i} \,dB_{i}(s). \end{aligned} $$
(3.9)

Dividing both sides of (3.9) by t yields

$$ \begin{aligned}[b] \frac{\log z_{i}(t)}{t}={}& \frac{\log z_{i}(0)}{t}- \frac{1}{t}\int_{0}^{t} \biggl(\frac{\sigma_{i}^{2}}{2}-b_{i}\biggr)\,ds- \frac{a_{ii}}{t}\int _{0}^{t}z_{i}(s)\,ds \\ &{}-\frac{1}{t}\sum_{j\ne i}\int _{0}^{t}a_{ij}y_{j}(s)\,ds+ \frac{1}{t}\int_{0}^{t}\sigma_{i} \,dB_{i}(s). \end{aligned} $$
(3.10)

Using Lemma 3.2 and the strong law of large numbers for martingales, we have

$$ \lim_{t\to\infty}\frac{1}{t}\int _{0}^{t}\sigma_{i}\,dB_{i}(s)=0, \qquad \lim_{t\to\infty}\frac{\log z_{i}(t)}{t}=0 \quad\mbox{a.s.} $$
(3.11)

Combining (3.7) and (3.11), and letting \(t\to\infty\) on both sides of (3.10), we obtain

$$ \begin{aligned}[b] \lim_{t\to\infty}\frac{1}{t} \int_{0}^{t}z_{i}(s)\,ds &= \frac{1}{a_{ii}}\biggl[\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr) -\sum _{j\ne i}\lim_{t\to\infty}\frac{a_{ij}}{t} \int_{0}^{t}y_{j}(s)\,ds\biggr] \\ & =\frac{1}{a_{ii}}\biggl[\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr)- \sum_{j\ne i}\frac{a_{ij}}{a_{jj}}\biggl( \biggl(b_{j}-\frac{\sigma_{j}^{2}}{2}\biggr)\vee 0\biggr)\biggr]>0. \end{aligned} $$
(3.12)

Since \(x_{i}(t)\geqslant z_{i}(t)\), assertion (3.3) is true. Therefore this theorem is proved. □

Remark 3.1

Compared with the existing literature [8], the conditions imposed here for the permanence of a single species are weaker.

Applying Theorem 3.1 to each species of system (1.2), we have the following corollary, which coincides with Theorem 3.1 in [8].

Corollary 3.1

Let condition (1.3) hold and assume that \(b_{i}-\frac{\sigma_{i}^{2}}{2}>0\) and \(b_{i}-\frac{\sigma_{i}^{2}}{2}-\sum_{j\ne i}\frac{a_{ij}}{a_{jj}}(b_{j}-\frac{\sigma_{j}^{2}}{2})>0\) for all \(i=1,\ldots,n\). Then system (1.2) is persistent in mean.
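For a concrete model, the hypotheses of Theorem 3.1 and Corollary 3.1 are easy to evaluate numerically. The following helper is a sketch in Python/NumPy (the function name and interface are ours); it returns, for each species, the two quantities whose positivity Corollary 3.1 requires.

```python
import numpy as np

def corollary_3_1_quantities(b, A, sigma):
    """For each i return
        p[i] = b_i - sigma_i^2 / 2,
        q[i] = b_i - sigma_i^2/2 - sum_{j != i} (a_ij / a_jj)(b_j - sigma_j^2/2).
    Corollary 3.1 asks for p[i] > 0 and q[i] > 0 for every i."""
    b = np.asarray(b, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    A = np.asarray(A, dtype=float)
    p = b - sigma**2 / 2.0
    ratios = A / np.diag(A)        # entry (i, j) equals a_ij / a_jj
    q = p - (ratios @ p - p)       # the j = i term of the sum cancels
    return p, q
```

Only the drift and noise parameters enter these conditions, so the check requires no simulation.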

3.2 Extinction of a single species

Theorem 3.2

Let condition (1.3) hold and \(x_{i}(t)\) be the solution to system (1.2) with positive initial value \(x_{i}(0)\). Then we have the following assertions:

  1. (i)

    If \(\sigma_{i}^{2}>2b_{i}\), the solution \(x_{i}(t)\) to system (1.2) has the property that

    $$ \limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant b_{i}-\frac{\sigma_{i}^{2}}{2} \quad\textit{a.s.} $$
    (3.13)

    That is, the species i of system (1.2) will become extinct.

  2. (ii)

    If \(\sigma_{i}^{2}=2b_{i}\), the solution \(x_{i}(t)\) to system (1.2) has the property that

    $$ \lim_{t\to\infty}x_{i}(t)=0 \quad \textit{a.s.} $$
    (3.14)

    That is, the species i of system (1.2) still becomes extinct with probability one.

Proof

The proof is rather technical, so we will divide it into two steps. The first step is to show the exponential extinction of species i when \(\sigma_{i}^{2}>2b_{i}\). The second step is to show the extinction in the case of \(\sigma_{i}^{2}=2b_{i}\).

Step 1. Applying the Itô formula to \(\log x_{i}(t)\) yields

$$ \log x_{i}(t)=\log x_{i}(0)+\int _{0}^{t}\biggl(b_{i} -\frac{\sigma_{i}^{2}}{2} \biggr)\,ds-\sum_{j=1}^{n}a_{ij} \int_{0}^{t}x_{j}(s)\,ds+\int _{0}^{t}\sigma_{i}\,dB_{i}(s). $$
(3.15)

Dividing both sides of (3.15) by t yields

$$ \begin{aligned}[b] \frac{\log x_{i}(t)}{t}={}& \frac{\log x_{i}(0)}{t} +\frac{1}{t}\int_{0}^{t} \biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr)\,ds \\ &{}-\frac{1}{t}\sum_{j=1}^{n}a_{ij} \int_{0}^{t}x_{j}(s)\,ds + \frac{1}{t}\int_{0}^{t}\sigma_{i} \,dB_{i}(s). \end{aligned} $$
(3.16)

Using the strong law of large numbers for martingales, we can claim that

$$\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t} \sigma_{i}\,dB_{i}(s)=0 \quad\mbox{a.s.} $$

Letting \(t\to\infty\) yields

$$\limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant b_{i}- \frac{\sigma_{i}^{2}}{2} \quad\mbox{a.s.} $$

Step 2. Now, let us finally show assertion (3.14). Decompose the sample space into three mutually exclusive events as follows:

$$\begin{aligned}& \Omega_{i1}=\Bigl\{ \omega:\limsup_{t\to\infty}x_{i}(t) \geqslant \liminf_{t\to\infty}x_{i}(t)=\gamma_{i}>0 \Bigr\} ; \\& \Omega_{i2}=\Bigl\{ \omega:\limsup_{t\to\infty}x_{i}(t)> \liminf_{t\to\infty}x_{i}(t)=0\Bigr\} ; \\& \Omega_{i3}=\Bigl\{ \omega:\lim_{t\to\infty}x_{i}(t)=0 \Bigr\} . \end{aligned}$$

When \(\sigma_{i}^{2}=2b_{i}\), equation (3.16) has the following form:

$$ \frac{\log x_{i}(t)}{t}=\frac{\log x_{i}(0)}{t} -\frac{1}{t}\sum _{j=1}^{n}a_{ij}\int _{0}^{t}x_{j}(s)\,ds +\frac{1}{t} \int_{0}^{t}\sigma_{i} \,dB_{i}(s). $$
(3.17)

Furthermore, we decompose the sample space into the following two mutually exclusive events according to the convergence of \(\int_{0}^{\infty}x_{i}(s)\,ds\):

$$ E_{i1}=\biggl\{ \omega:\int_{0}^{\infty}x_{i}(s) \,ds< \infty\biggr\} ,\qquad E_{i2}=\biggl\{ \omega:\int _{0}^{\infty}x_{i}(s)\,ds=\infty\biggr\} . $$
(3.18)

The proof of \(\lim_{t\to\infty}x_{i}(t)=0\) a.s. is equivalent to showing \(E_{i1}\subset\Omega_{i3}\), \(E_{i2}\subset\Omega_{i3}\) a.s. The strategy of the proof is as follows.

⋆ First, by using the techniques proposed in [21], we show that \(E_{i1}\subset\Omega_{i3}\). It is sufficient to show \(P(E_{i1}\cap\Omega_{i1})=0\) and \(P(E_{i1}\cap\Omega_{i2})=0\).

⋆ Second, using some novel techniques, we prove that \(P(E_{i2}\cap\Omega_{i1})=0\) and \(P(E_{i2}\cap\Omega_{i2})=0\), which means \(E_{i2}\subset\Omega_{i3}\) a.s.

Now we realize this strategy as follows.

Case 1. Let us now show \(E_{i1}\subset\Omega_{i3}\). Clearly, \(x_{i}(t)\in C(R_{+},R)\) a.s., and it is straightforward to see from the definition of \(E_{i1}\) that \(\liminf_{t\to\infty}x_{i}(t)=0\) on \(E_{i1}\). Therefore \(P(E_{i1}\cap\Omega_{i1})=0\), and we only need to prove that \(P(E_{i1}\cap\Omega_{i2})=0\). We prove it by contradiction.

If \(P(E_{i1}\cap\Omega_{i2})>0\), there exists a number \(\epsilon>0\) such that

$$ P(J_{1}\cap E_{i1})\geqslant2\epsilon, $$
(3.19)

where \(J_{1}=\{\limsup_{t\to\infty}x_{i}(t)>2\epsilon\}\). Let us now define a sequence of stopping times

$$\begin{aligned}& \tau_{1}=\inf\bigl\{ t\geqslant0 : x_{i}(t)\geqslant2 \epsilon\bigr\} ,\qquad \tau_{2k}=\inf\bigl\{ t\geqslant \tau_{2k-1} : x_{i}(t)\leqslant \epsilon\bigr\} , \\& \tau_{2k+1}=\inf\bigl\{ t\geqslant\tau_{2k} : x_{i}(t)\geqslant 2\epsilon\bigr\} ,\quad k=1,2,\ldots. \end{aligned}$$

From the definition of \(E_{i1}\), we also have \(E(I_{E_{i1}}\int_{0}^{\infty}x_{i}(s)\,ds)<\infty\), and we can compute

$$\begin{aligned} E\biggl(I_{E_{i1}}\int_{0}^{\infty}x_{i}(s) \,ds\biggr) \geqslant&\sum_{k=1}^{\infty}E\biggl[ I_{\{\tau_{2k-1}< \infty,\tau_{2k}< \infty\}\cap E_{i1}} \int_{\tau_{2k-1}}^{\tau_{2k}}x_{i}(s) \,ds\biggr] \\ \geqslant&\epsilon\sum_{k=1}^{\infty} E\bigl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}(\tau_{2k}-\tau_{2k-1})\bigr], \end{aligned}$$

where \(I_{A}\) is the indicator function of a set A. Since \(\tau_{2k}<\infty\) whenever \(\tau_{2k-1}<\infty\), the above formula yields

$$ \epsilon\sum_{k=1}^{\infty} E \bigl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}(\tau_{2k}-\tau_{2k-1})\bigr]< \infty. $$
(3.20)

On the other hand, integrating equation (1.2) from 0 to t yields

$$ x_{i}(t)=x_{i}(0)+\int_{0}^{t}x_{i}(s) \Biggl(b_{i}- \sum_{j=1}^{n}a_{ij}x_{j}(s) \Biggr)\,ds +\int_{0}^{t}\sigma_{i}x_{i}(s) \,dB_{i}(s). $$
(3.21)

A simple computation shows that

$$\begin{aligned}& E\Biggl[ x_{i}^{2}(s)\cdot\Biggl(b_{i} -\sum _{j=1}^{n}a_{ij}x_{j}(s) \Biggr)^{2}\Biggr] \\& \quad \leqslant\frac{1}{2}E\bigl(x_{i}^{4}(s)\bigr) +\frac{1}{2}E\Biggl[\Biggl(b_{i}-\sum _{j=1}^{n}a_{ij}x_{j}(s) \Biggr)^{4}\Biggr] \\& \quad \leqslant\frac{1}{2}E\bigl(x_{i}^{4}(s)\bigr) +\frac{1}{2}E\Biggl(b_{i}^{4}+6b_{i}^{2} \sum_{j=1}^{n}a_{ij}^{2}x_{j}^{2}(s) +\sum_{j=1}^{n}a_{ij}^{4}x_{j}^{4}(s) \Biggr) \\& \quad \leqslant\frac{1}{2}K_{4}+\frac{1}{2} \Biggl(b_{i}^{4}+6b_{i}^{2}\sum _{j=1}^{n}a_{ij}^{2}K_{2} +\sum_{j=1}^{n}a_{ij}^{4}K_{4} \Biggr) \\& \quad =:M_{i}^{2}, \end{aligned}$$

and

$$E\bigl(\sigma_{i}^{2}\cdot x_{i}^{2}(s) \bigr)=\sigma_{i}^{2}\cdot E\bigl(x_{i}^{2}(s) \bigr)\leqslant\sigma_{i}^{2}\cdot K_{2}=:N_{i}^{2}, $$

where \(K_{2}\) and \(K_{4}\) are defined in Lemma 2.2. Using the Hölder inequality and Burkholder-Davis-Gundy inequality (see [19]), we compute

$$\begin{aligned}& E\Bigl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}\sup_{0\leqslant t\leqslant T}\bigl|x_{i}(\tau_{2k-1}+t)-x_{i}(\tau_{2k-1})\bigr|^{2}\Bigr] \\& \quad \leqslant2E\Biggl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}\sup_{0\leqslant t\leqslant T}\Biggl|\int_{\tau_{2k-1}}^{\tau_{2k-1}+t}x_{i}(s)\Biggl(b_{i}-\sum_{j=1}^{n}a_{ij}x_{j}(s)\Biggr)\,ds\Biggr|^{2}\Biggr] \\& \qquad{} +2E\biggl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}\sup_{0\leqslant t\leqslant T}\biggl|\int_{\tau_{2k-1}}^{\tau_{2k-1}+t}\sigma_{i}x_{i}(s)\,dB_{i}(s)\biggr|^{2}\biggr] \\& \quad \leqslant2TE\Biggl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}\int_{\tau_{2k-1}}^{\tau_{2k-1}+T}x_{i}^{2}(s)\Biggl(b_{i}-\sum_{j=1}^{n}a_{ij}x_{j}(s)\Biggr)^{2}\,ds\Biggr] \\ & \qquad{} +8E\biggl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}\int_{\tau_{2k-1}}^{\tau_{2k-1}+T}\sigma_{i}^{2}x_{i}^{2}(s)\,ds\biggr] \\& \quad \leqslant2T\bigl(TM_{i}^{2}+4N_{i}^{2}\bigr). \end{aligned}$$
(3.22)

Furthermore, we choose \(T=T(\epsilon)>0\) sufficiently small that

$$2T\bigl(TM_{i}^{2}+4N_{i}^{2}\bigr) \leqslant\epsilon^{3}. $$

It then follows from (3.22) that

$$ P\bigl(\{\tau_{2k-1}< \infty\}\cap\{ H_{k}\cap E_{i1}\}\bigr) \leqslant\frac{2T(TM_{i}^{2}+4N_{i}^{2})}{\epsilon^{2}}\leqslant \epsilon, $$
(3.23)

where

$$H_{k}=\Bigl\{ \sup_{0\leqslant t\leqslant T}\bigl|x_{i}( \tau_{2k-1}+t)-x_{i}(\tau_{2k-1})\bigr|\geqslant\epsilon \Bigr\} ,\quad k=1,2,\ldots. $$

Recalling the fact that \(\tau_{k}<\infty\), for \(k=1,2,\ldots\) , whenever \(\omega\in J_{1}\), we further compute

$$\begin{aligned}& P\bigl(\{\tau_{2k-1}< \infty\}\cap\bigl\{ H^{c}_{k} \cap E_{i1}\bigr\} \bigr) \\& \quad=P\bigl(\{\tau_{2k-1}< \infty\}\cap E_{i1} \bigr)-P \bigl(\{\tau_{2k-1}< \infty\}\cap \{H_{k}\cap E_{i1}\} \bigr) \\& \quad \geqslant2\epsilon-\epsilon=\epsilon. \end{aligned}$$

If \(\omega\in\{\tau_{2k-1}<\infty\}\cap\{ H^{c}_{k}\cap E_{i1}\}\), note that

$$ \tau_{2k}(\omega)-\tau_{2k-1}(\omega)\geqslant T. $$
(3.24)

We derive from (3.20) and (3.24) that

$$\begin{aligned} \infty&>\epsilon\sum_{k=1}^{\infty}E \bigl[ I_{\{\tau_{2k-1}< \infty\}\cap E_{i1}}(\tau_{2k}-\tau _{2k-1})\bigr] \\ &\geqslant\epsilon\sum_{k=1}^{\infty}E\bigl[ I_{\{\tau_{2k-1}< \infty\}\cap\{ H^{c}_{k}\cap E_{i1}\}} (\tau_{2k}-\tau_{2k-1})\bigr] \\ &\geqslant\epsilon T\sum_{k=1}^{\infty}P\bigl(\{ \tau_{2k-1}< \infty\} \cap\bigl\{ H^{c}_{k}\cap E_{i1}\bigr\} \bigr) \\ &\geqslant\epsilon T\sum_{k=1}^{\infty}\epsilon= \infty, \end{aligned}$$
(3.25)

which is a contradiction. Hence \(P(E_{i1}\cap\Omega_{i2})=0\) holds. Therefore, we obtain that \(E_{i1}\subset\Omega_{i3}\).

Case 2. Now, we turn to prove that \(E_{i2}\subset\Omega_{i3}\) a.s. It is sufficient to show \(P(E_{i2}\cap\Omega_{i1})=0\) and \(P(E_{i2}\cap\Omega_{i2})=0\). We prove it by contradiction.

If \(P(E_{i2}\cap\Omega_{i1})>0\) holds, then for any \(\omega\in E_{i2}\cap\Omega_{i1}\) and any \(\epsilon_{0}\in(0,\frac{\gamma_{i}}{2})\), there exists \(T=T(\epsilon_{0},\omega)\) such that

$$x_{i}(t)>\gamma_{i}-\epsilon_{0}> \frac{\gamma_{i}}{2}\quad \forall t>T\mbox{ a.s.} $$

It then follows that

$$\frac{1}{t}\int_{0}^{t}x_{i}(s) \,ds=\frac{1}{t}\int_{0}^{T}x_{i}(s) \,ds +\frac{1}{t}\int_{T}^{t}x_{i}(s) \,ds\geqslant \frac{1}{t}\int_{0}^{T}x_{i}(s) \,ds+\frac{t-T}{t}\frac{\gamma _{i}}{2} \quad\mbox{a.s.} $$

Letting \(t\to\infty\), we obtain that

$$\liminf_{t\to\infty}\frac{1}{t}\int_{0}^{t}x_{i}(s) \,ds>\frac {\gamma_{i}}{2}>0\quad\mbox{a.s.} $$

Substituting this into (3.17) yields

$$\limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant -a_{ii}\frac{\gamma_{i}}{2}< 0 \quad \mbox{a.s.}, $$

which contradicts the definition of \(E_{i2}\) and \(\Omega_{i1}\). So \(P(E_{i2}\cap\Omega_{i1})=0\) must hold.

Now we proceed to show that \(P(E_{i2}\cap\Omega_{i2})>0\) is impossible. For this purpose, we need the following notation:

$$\begin{aligned}& \Pi_{t}^{\epsilon}(i):=\bigl\{ 0\leqslant s\leqslant t:x_{i}(s)\geqslant\epsilon\bigr\} ,\qquad \delta_{t}^{\epsilon}(i):= \frac{m(\Pi_{t}^{\epsilon}(i))}{t}, \\& \delta^{\epsilon}(i):=\liminf_{t\to\infty}\delta_{t}^{\epsilon}(i), \qquad \Delta^{\epsilon}(i):=\bigl\{ \omega\in E_{i2}\cap \Omega_{i2}:\delta^{\epsilon}(i)>0\bigr\} , \end{aligned}$$

where \(m(\Pi_{t}^{\epsilon}(i))\) indicates the length of \(\Pi_{t}^{\epsilon}(i)\). It is easy to see that \(\Delta^{0}(i)=E_{i2}\cap\Omega_{i2}\). For any \(\epsilon _{1}<\epsilon_{2}\), simple computations show that

$$\begin{aligned} &\Pi_{t}^{\epsilon_{1}}(i)\supset\Pi_{t}^{\epsilon_{2}}(i),\qquad m \bigl(\Pi_{t}^{\epsilon_{1}}(i)\bigr)\geqslant m\bigl( \Pi_{t}^{\epsilon_{2}}(i)\bigr), \\ &\delta_{t}^{\epsilon_{1}}(i)=\frac{m(\Pi_{t}^{\epsilon_{1}}(i))}{t} \geqslant \delta_{t}^{\epsilon_{2}}(i)=\frac{m(\Pi_{t}^{\epsilon_{2}}(i))}{t}, \end{aligned} $$

which implies

$$\delta^{\epsilon_{2}}(i)\leqslant\delta^{\epsilon_{1}}(i),\qquad \Delta^{\epsilon_{2}}(i)\subset\Delta^{\epsilon_{1}}(i)\quad \forall \epsilon_{1}< \epsilon_{2}. $$

It is easy to observe from the continuity of probability that

$$P\bigl(\Delta^{\epsilon}(i)\bigr)\to P\bigl(\Delta^{0}(i) \bigr)=P(E_{i2}\cap\Omega_{i2}) \quad\mbox{as }\epsilon\to0. $$

If \(P(E_{i2}\cap\Omega_{i2})>0\), there exists \(\epsilon>0\) such that \(P(\Delta^{\epsilon}(i))>0\). For any \(\omega\in\Delta^{\epsilon}(i)\), simple computations show that

$$\frac{1}{t}\int_{0}^{t}x_{i}(s) \,ds=\frac{1}{t}\int_{\Pi _{t}^{\epsilon}(i)}x_{i}(s)\,ds + \frac{1}{t}\int_{[0,t]\setminus \Pi_{t}^{\epsilon}}x_{i}(s)\,ds\geqslant \frac{1}{t}\int_{\Pi _{t}^{\epsilon}(i)}x_{i}(s)\,ds \quad \mbox{a.s.} $$

By letting \(t\to\infty\), we have

$$ \liminf_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\geqslant \liminf _{t\to\infty}\frac{1}{t}\int_{\Pi_{t}^{\epsilon }}x_{i}(s) \,ds\geqslant \delta^{\epsilon}(i)\epsilon \quad\mbox{a.s.} $$
(3.26)

Substituting (3.26) into (3.17), we obtain that

$$\limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant -a_{ii}\delta^{\epsilon}(i) \epsilon< 0 \quad\mbox{a.s.} $$

This contradicts the definition of \(E_{i2}\) and \(\Omega_{i2}\), and it yields the desired assertion \(P(E_{i2}\cap\Omega_{i2})=0\). Combining the facts \(E_{i1}\subset\Omega_{i3}\), \(P(E_{i2}\cap\Omega_{i1})=0\) and \(P(E_{i2}\cap\Omega_{i2})=0\), we can claim that

$$\lim_{t\to\infty}x_{i}(t)=0 \quad\mbox{a.s.} $$

The proof is completed. □

Remark 3.2

In comparison with [8] and [11], we point out that species i still becomes extinct in the critical case \(\sigma_{i}^{2}=2b_{i}\); this is shown by using some novel stochastic analysis techniques.

Corollary 3.2

Let condition (1.3) hold and \(x(t)\) be a solution to system (1.2) with positive initial value \(x(0)\). Assume that there exists an integer m, \(1\leqslant m< n\), such that

$$\begin{aligned}& \sigma_{i}^{2}>2b_{i},\quad i=1, \ldots,m, \end{aligned}$$
(3.27)
$$\begin{aligned}& \sigma_{i}^{2}=2b_{i},\quad i=m+1,\ldots,n. \end{aligned}$$
(3.28)

Then we have the following assertions:

  1. (i)

    For all \(i=1,\ldots,m\), the solution \(x_{i}(t)\) to system (1.2) has the property that

    $$ \lim_{t\to\infty}\frac{\log x_{i}(t)}{t}= b_{i}- \frac{\sigma_{i}^{2}}{2} \quad\textit{a.s. } i=1,\ldots,m. $$
    (3.29)
  2. (ii)

    For all \(i=m+1,\ldots,n\), the solution \(x_{i}(t)\) to system (1.2) has the property that

    $$ \lim_{t\to\infty}\frac{\log x_{i}(t)}{t}=0 \quad\textit{a.s. } i=m+1,\ldots,n. $$
    (3.30)

Proof

By virtue of Theorem 3.2 and condition (3.27), we obtain that

$$\limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant b_{i}- \frac{\sigma_{i}^{2}}{2} \quad\mbox{a.s. } i=1,\ldots,m. $$

This shows that for any \(\epsilon_{i}\in(0,\frac{\sigma_{i}^{2}}{2}-b_{i})\) there is a positive random variable \(T(\epsilon_{i})\) such that

$$x_{i}(t)\leqslant\exp\biggl\{ \biggl(b_{i}- \frac{\sigma _{i}^{2}}{2}\biggr)t +\epsilon_{i}t\biggr\} \quad \forall t>T( \epsilon_{i}), \mbox{a.s. } i=1,\ldots,m, $$

which means

$$\int_{0}^{\infty}x_{i}(s)\,ds< \infty \quad \mbox{a.s. } i=1,\ldots,m. $$

Moreover, by Theorem 3.2(ii) and condition (3.28), \(x_{j}(t)\to0\) and hence \(\frac{1}{t}\int_{0}^{t}x_{j}(s)\,ds\to0\) a.s. for \(j=m+1,\ldots,n\). Then letting \(t\to\infty\) on both sides of (3.16) yields

$$\lim_{t\to\infty}\frac{\log x_{i}(t)}{t}=b_{i}- \frac{\sigma_{i}^{2}}{2} \quad\mbox{a.s. } i=1,\ldots,m, $$

which is the required assertion (3.29).

Now we proceed to show assertion (3.30). By Theorem 3.2(ii) and condition (3.28), we derive

$$\lim_{t\to\infty}x_{i}(t)=0 \quad\mbox{a.s. } i=m+1, \ldots,n. $$

This implies

$$ \lim_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds=0 \quad\mbox{a.s. } i=m+1,\ldots,n. $$
(3.31)

By the strong law of large numbers for martingales and (3.31), letting \(t\to\infty\) on both sides of (3.17) yields

$$\lim_{t\to\infty}\frac{\log x_{i}(t)}{t}=0 \quad\mbox{a.s. } i=m+1, \ldots,n. $$

The proof is completed. □

4 Partial permanence and extinction

In this section we present conditions for system (1.2) to be partially permanent and partially extinct. To proceed with our study, we consider the following auxiliary stochastic equation:

$$ \textstyle\begin{cases} d\varPhi_{i}(t)=\varPhi_{i}(t)(b_{i}-\sum_{j=1}^{m}a_{ij}\varPhi_{j}(t))\,dt +\sigma_{i}\varPhi_{i}(t)\,dB_{i}(t), \\ \varPhi_{i}(0)=x_{i}(0),\quad i=1,\ldots,m. \end{cases} $$
(4.1)

Theorem 4.1

Let condition (1.3) hold. Assume that there exists an integer m, \(1\leqslant m< n\), such that

$$\begin{aligned}& \begin{aligned} &b_{i}>\frac{\sigma_{i}^{2}}{2},\qquad a_{ii}-\sum _{j\ne i}^{m}a_{ji}>0, \\ &b_{i}-\frac{\sigma_{i}^{2}}{2}-\sum_{k\ne i}^{m} \frac{a_{ik}}{a_{kk}}\biggl(b_{k}-\frac{\sigma_{k}^{2}}{2}\biggr)>0,\quad i=1, \ldots,m, \end{aligned} \end{aligned}$$
(4.2)
$$\begin{aligned}& b_{i}< \frac{\sigma_{i}^{2}}{2},\quad i=m+1,\ldots,n. \end{aligned}$$
(4.3)

Then we have the following assertions:

  1. (i)

    For all \(i=1,\ldots,m\), the solution \(x(t)\) to system (1.2) has the property that

    $$\begin{aligned}& \limsup_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds\leqslant \frac{1}{a_{ii}}\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr) \quad\textit{a.s. } i=1,\ldots,m. \end{aligned}$$
    (4.4)
    $$\begin{aligned}& \begin{aligned}[b] &\liminf_{t\to\infty}\frac{1}{t}\int _{0}^{t}x_{i}(s)\,ds \\ &\quad\geqslant \frac{1}{a_{ii}}\Biggl[\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr) -\sum _{k\ne i}^{m}\frac{a_{ik}}{a_{kk}} \biggl(b_{k}-\frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad\textit{a.s. } i=1, \ldots,m. \end{aligned} \end{aligned}$$
    (4.5)

    That is, for each \(i=1,\ldots,m\), the species i of system (1.2) is persistent in mean;

  2. (ii)

    For all \(i=m+1,\ldots,n\), the solution \(x(t)\) to system (1.2) has the property that

    $$\begin{aligned} \limsup_{t\to\infty}\frac{\log x_{i}(t)}{t} \leqslant& b_{i}-\frac{\sigma_{i}^{2}}{2} -\sum_{j=1}^{m} \frac{a_{ij}}{a_{jj}} \Biggl[\biggl(b_{j}-\frac{\sigma_{j}^{2}}{2}\biggr) \\ &{} -\sum_{k\ne j}^{m}\frac{a_{jk}}{a_{kk}} \biggl(b_{k}-\frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad\textit{a.s. } i=m+1, \ldots,n. \end{aligned}$$
    (4.6)

    That is, for each \(i=m+1,\ldots,n\), the species i will become extinct.
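Before turning to the proof, we note that hypotheses (4.2)-(4.3) are straightforward to verify numerically for a given model and a given split index m. The following sketch in Python/NumPy is ours (the function name, interface, and 0-based indexing are illustration choices); it simply evaluates each inequality.

```python
import numpy as np

def check_theorem_4_1(b, A, sigma, m):
    """Evaluate conditions (4.2)-(4.3) for the split {1,...,m} / {m+1,...,n}.

    Returns Boolean arrays, one entry per inequality:
      growth    : b_i > sigma_i^2 / 2                          (i <= m)
      dominance : a_ii - sum_{j != i, j <= m} a_ji > 0         (i <= m)
      balance   : b_i - sigma_i^2/2
                  - sum_{k != i, k <= m} (a_ik/a_kk)(b_k - sigma_k^2/2) > 0
                                                               (i <= m)
      extinct   : b_i < sigma_i^2 / 2                          (i > m)
    Indices are 1-based in the paper and 0-based in the code."""
    b = np.asarray(b, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    A = np.asarray(A, dtype=float)
    p = b - sigma**2 / 2.0
    At = A[:m, :m]                           # interactions among species 1..m
    growth = p[:m] > 0
    dominance = 2 * np.diag(At) - At.sum(axis=0) > 0
    ratios = At / np.diag(At)                # entry (i, k) equals a_ik / a_kk
    balance = p[:m] - (ratios @ p[:m] - p[:m]) > 0
    extinct = p[m:] < 0
    return growth, dominance, balance, extinct
```

All four arrays must be entrywise True for Theorem 4.1 to assert persistence in mean of species \(1,\ldots,m\) and extinction of species \(m+1,\ldots,n\).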

Proof

We divide the proof into two steps. The first step is to show the permanence of the first m species of system (1.2); the second step is to show the extinction of the remaining \(n-m\) species.

Step 1. Applying the Itô formula to (4.1) yields

$$ d\log\varPhi_{i}(t)=\Biggl(b_{i}-\frac{\sigma_{i}^{2}}{2} -\sum _{j=1}^{m}a_{ij} \varPhi_{j}(t)\Biggr)\,dt +\sigma_{i}\,dB_{i}(t), \quad i=1,\ldots,m. $$
(4.7)

Simple computations show that

$$\begin{aligned} d\bigl(\log x_{i}(t)-\log\varPhi_{i}(t)\bigr) =&-\sum _{j=1}^{m} a_{ij} \bigl(x_{j}(t)-\varPhi_{j}(t)\bigr)\,dt \\ &{} -\sum_{l=m+1}^{n} a_{il}x_{l}(t) \,dt,\quad i=1,\ldots,m. \end{aligned}$$
(4.8)

Computing the right upper derivative \(D^{+}V(t)\) of \(V(t)=\sum_{i=1}^{m}|\log x_{i}(t)-\log\varPhi_{i}(t)|\) along (4.8) yields

$$\begin{aligned} D^{+}V(t)&=\sum_{i=1}^{m} \operatorname{sgn}\bigl(x_{i}(t)-\varPhi_{i}(t)\bigr)\cdot \bigl[d\log x_{i}(t)-d\log\varPhi_{i}(t)\bigr] \\ &=-\sum_{i=1}^{m}\operatorname{sgn} \bigl(x_{i}(t)-\varPhi_{i}(t)\bigr)\cdot\Biggl[ \sum _{j=1}^{m}a_{ij} \bigl(x_{j}(t)-\varPhi_{j}(t)\bigr) +\sum _{l=m+1}^{n}a_{il}x_{l}(t)\Biggr] \,dt \\ &\leqslant-\sum_{i=1}^{m}a_{ii}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|\,dt +\sum_{i=1}^{m} \sum_{j\ne i}^{m}a_{ij}\bigl|x_{j}(t)- \varPhi_{j}(t)\bigr|\,dt +\sum_{i=1}^{m} \sum_{l=m+1}^{n}a_{il}x_{l}(t) \,dt \\ &=-\sum_{i=1}^{m} a_{ii}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|\,dt+\sum_{j=1}^{m} \sum_{i\ne j}^{m}a_{ji}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|\,dt +\sum_{i=1}^{m} \sum_{l=m+1}^{n}a_{il}x_{l}(t) \,dt \\ &=-\sum_{i=1}^{m} a_{ii}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|\,dt+\sum_{i=1}^{m} \sum_{j\ne i}^{m}a_{ji}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|\,dt +\sum_{i=1}^{m} \sum_{l=m+1}^{n}a_{il}x_{l}(t) \,dt \\ &\leqslant-\sum_{i=1}^{m} \Biggl(a_{ii}-\sum_{j\ne i}^{m}a_{ji} \Biggr)\bigl|x_{i}(t)-\varPhi_{i}(t)\bigr|\,dt +\sum _{i=1}^{m}\sum_{l=m+1}^{n}a_{il}x_{l}(t) \,dt. \end{aligned}$$

Hence we get

$$ D^{+}V(t)\leqslant-\mu\sum _{i=1}^{m}\bigl|x_{i}(t)-\varPhi_{i}(t)\bigr| \,dt +\sum_{l=m+1}^{n}\theta_{l}x_{l}(t) \,dt,\quad i=1,\ldots,m, $$
(4.9)

where \(\mu=\min_{1\leqslant i\leqslant m}(a_{ii}-\sum_{j\ne i}^{m}a_{ji})>0\), \(\theta_{l}=\sum_{i=1}^{m}a_{il}\geqslant0\). We therefore have

$$\begin{aligned}& V(t)+\mu\int_{0}^{t}\sum _{i=1}^{m} \bigl|x_{i}(s)-\varPhi_{i}(s)\bigr| \,ds \\& \quad\leqslant V(0)+\sum_{l=m+1}^{n} \theta_{l}\int_{0}^{t}x_{l}(s) \,ds,\quad i=1,\ldots,m. \end{aligned}$$
(4.10)

Letting \(t\to\infty\) on both sides of (4.10) yields

$$\begin{aligned} \int_{0}^{\infty}\bigl|x_{i}(s)- \varPhi_{i}(s)\bigr|\,ds \leqslant&\int_{0}^{\infty} \sum_{i=1}^{m}\bigl|x_{i}(s)- \varPhi_{i}(s)\bigr|\,ds \\ \leqslant&\frac{1}{\mu}\Biggl[ V(0)+\sum_{l=m+1}^{n} \theta_{l}\int_{0}^{\infty}x_{l}(s) \,ds\Biggr]. \end{aligned}$$
(4.11)

By Theorem 3.2 and condition (4.3), we have

$$ \int_{0}^{\infty}x_{l}(s) \,ds< +\infty \quad\mbox{a.s. } l=m+1,\ldots,n. $$
(4.12)

Substituting (4.12) into (4.11) yields

$$ \int_{0}^{\infty}\bigl|x_{i}(s)- \varPhi_{i}(s)\bigr|\,ds< +\infty \quad\mbox{a.s. } i=1,\ldots,m. $$
(4.13)

By techniques similar to those used in Step 2 of the proof of Theorem 3.2, we have

$$ \lim_{t\to\infty}\bigl|x_{i}(t)- \varPhi_{i}(t)\bigr|=0 \quad\mbox{a.s. } i=1,\ldots,m. $$
(4.14)

When condition (4.2) is satisfied, by applying Corollary 3.1 to system (4.1), we have

$$\begin{aligned}& \limsup_{t\to\infty}\frac{1}{t}\int_{0}^{t} \varPhi_{i}(s)\,ds \leqslant\frac{1}{a_{ii}}\biggl(b_{i}- \frac{\sigma_{i}^{2}}{2}\biggr) \quad\mbox{a.s. } i=1,\ldots,m, \\& \liminf_{t\to\infty}\frac{1}{t}\int_{0}^{t} \varPhi_{i}(s)\,ds \geqslant\frac{1}{a_{ii}}\Biggl[ \biggl(b_{i}-\frac{\sigma _{i}^{2}}{2}\biggr)-\sum _{k\ne i}^{m}\frac{a_{ik}}{a_{kk}}\biggl(b_{k}- \frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad\mbox{a.s. } i=1,\ldots,m. \end{aligned}$$

A simple computation shows that

$$\begin{aligned}& \begin{aligned}[b] \limsup_{t\to\infty} \frac{1}{t}\int_{0}^{t}x_{i}(s) \,ds & \leqslant\limsup_{t\to\infty}\frac{1}{t}\int _{0}^{t}\bigl(x_{i}(s)- \varPhi_{i}(s)\bigr)\,ds +\limsup_{t\to\infty} \frac{1}{t}\int_{0}^{t}\varPhi_{i}(s) \,ds \\ & \leqslant\frac{1}{a_{ii}}\biggl(b_{i}-\frac{\sigma_{i}^{2}}{2} \biggr) \quad\mbox{a.s. } i=1,\ldots,m, \end{aligned} \end{aligned}$$
(4.15)
$$\begin{aligned}& \begin{aligned}[b] &\liminf_{t\to\infty} \frac{1}{t}\int_{0}^{t}x_{i}(s) \,ds \\ &\quad \geqslant \liminf_{t\to\infty}\frac{1}{t}\int _{0}^{t}\bigl(x_{i}(s)- \varPhi_{i}(s)\bigr)\,ds +\liminf_{t\to\infty} \frac{1}{t}\int_{0}^{t}\varPhi_{i}(s) \,ds \\ &\quad\geqslant \frac{1}{a_{ii}}\Biggl[\biggl(b_{i}- \frac{\sigma _{i}^{2}}{2}\biggr)-\sum_{k\ne i}^{m} \frac{a_{ik}}{a_{kk}}\biggl(b_{k}-\frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad \mbox{a.s. } i=1,\ldots,m. \end{aligned} \end{aligned}$$
(4.16)

Therefore, we obtain that \(x_{i}(t)\) is persistent in mean, for all \(i=1,\ldots,m\).

Step 2. For all \(i=m+1,\ldots,n\), applying the Itô formula to \(\log x_{i}(t)\) and dividing by t yields

$$ \begin{aligned}[b] \frac{\log x_{i}(t)}{t}={}& \frac{\log x_{i}(0)}{t}+\frac{1}{t}\int_{0}^{t} \biggl(b_{i}-\frac{\sigma_{i}^{2}}{2}\biggr)\,ds -\sum _{j=1}^{m} \frac{a_{ij}}{t}\int_{0}^{t}x_{j}(s) \,ds \\ &{}-\sum_{l=m+1}^{n} \frac{a_{il}}{t}\int _{0}^{t}x_{l}(s)\,ds +\frac{1}{t} \int_{0}^{t}\sigma_{i} \,dB_{i}(s),\quad i=m+1,\ldots,n. \end{aligned} $$
(4.17)

It follows from (4.16) that

$$\begin{aligned} &\liminf_{t\to\infty}\frac{1}{t}\sum _{j=1}^{m}a_{ij} \int_{0}^{t}x_{j}(s) \,ds \\ &\quad\geqslant \sum_{j=1}^{m}\frac{a_{ij}}{a_{jj}} \Biggl[\biggl(b_{j}-\frac{\sigma_{j}^{2}}{2}\biggr) -\sum _{k\ne j}^{m}\frac{a_{jk}}{a_{kk}}\biggl(b_{k}- \frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad\mbox{a.s. } i=m+1,\ldots,n. \end{aligned} $$

Letting \(t\to\infty\) on both sides of (4.17), we conclude that

$$\begin{aligned} \limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant{}& b_{i}- \frac{\sigma_{i}^{2}}{2}-\sum_{j=1}^{m} \frac{a_{ij}}{a_{jj}} \Biggl[\biggl(b_{j}-\frac{\sigma_{j}^{2}}{2}\biggr) \\ &{} -\sum_{k\ne j}^{m}\frac{a_{jk}}{a_{kk}} \biggl(b_{k}-\frac{\sigma_{k}^{2}}{2}\biggr)\Biggr] \quad\mbox{a.s. } i=m+1, \ldots,n. \end{aligned} $$

Consequently, \(x_{i}(t)\) becomes extinct for all \(i=m+1,\ldots,n\). The proof is completed. □

5 Numerical simulations

We have discussed the persistence in mean and extinction of system (1.2), and sufficient conditions have been established in Theorems 3.1, 3.2 and 4.1. In this section, we present a numerical experiment for the case \(n=2\) to support our results:

$$ \textstyle\begin{cases} dx_{1}(t)=x_{1}(t)[(0.9-0.3x_{1}(t)-1.2x_{2}(t))\,dt+\sigma _{1}\,dB_{1}(t)], \\ dx_{2}(t)=x_{2}(t)[(1.1-0.3x_{1}(t)-0.4x_{2}(t))\,dt+\sigma_{2}\,dB_{2}(t)]. \end{cases} $$
(5.1)

The existence and uniqueness of the solution follow from Lemma 2.1. We consider the solution with initial data \(x_{1}(0)=1.4\), \(x_{2}(0)=2.1\). Using Matlab, we simulate the solution to system (5.1) for different values of \(\sigma_{1}\) and \(\sigma_{2}\).
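Since the Matlab code is not reproduced here, we sketch how such a simulation can be carried out with the Euler-Maruyama scheme. The following Python/NumPy code is only an illustration (step size, horizon, seed, and the positivity guard are our own choices, not the authors'):

```python
import numpy as np

# coefficients of system (5.1)
b = np.array([0.9, 1.1])
A = np.array([[0.3, 1.2],
              [0.3, 0.4]])
x0 = np.array([1.4, 2.1])                    # initial data used above

def simulate(sigma, T=100.0, dt=1e-3, seed=1):
    """Euler-Maruyama approximation of system (5.1) for a noise vector
    sigma = (sigma_1, sigma_2); returns the time grid and the path."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.empty((n_steps + 1, 2))
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=2)
        x[k + 1] = x[k] + x[k] * ((b - A @ x[k]) * dt + sigma * dB)
        x[k + 1] = np.maximum(x[k + 1], 1e-12)   # keep the path positive
    return np.linspace(0.0, T, n_steps + 1), x

# for instance, the noise intensities used in Figure 1 below
t, x = simulate(sigma=np.array([0.03, np.sqrt(2.2)]))
```

The time averages \(\frac{1}{t}\int_{0}^{t}x_{i}(s)\,ds\) of Definition 2.1 can then be estimated from the returned path, for instance with the helper sketched after Definition 2.1.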

In Figure 1, \(\sigma_{1}=0.03\), \(\sigma_{2}=\sqrt{2.2}\). By Theorems 3.1, 3.2 and 4.1, species 1 is persistent in mean and species 2 is extinct with zero exponential extinction rate.

Figure 1

The solution of system (5.1) with \(\sigma_{1}=0.03\), \(\sigma_{2}=\sqrt{2.2}\). The blue line represents species 1, while the green line represents species 2.

In Figure 2, \(\sigma_{1}=0.03\), \(\sigma_{2}=0.04\). By the conditions of Corollary 3.1, all of the species are persistent in mean.

Figure 2

The solution of system (5.1) with \(\sigma_{1}=0.03\), \(\sigma_{2}=0.04\). The blue line represents species 1, while the green line represents species 2.

In Figure 3, \(\sigma_{1}=\sqrt{1.8}\), \(\sigma_{2}=1.7\). By Corollary 3.2, species 1 is extinct with zero exponential extinction rate and species 2 is exponentially extinct.

Figure 3

The solution of system (5.1) with \(\sigma_{1}=\sqrt{1.8}\), \(\sigma_{2}=1.7\). The blue line represents species 1, while the green line represents species 2.

6 Conclusions

This paper has been devoted to partial permanence and extinction for a stochastic Lotka-Volterra competitive model. First, by using some novel techniques, we established weaker sufficient conditions for the persistence in mean and extinction of a single species. Second, based on these single-species conditions and some stochastic analysis techniques, sufficient criteria ensuring the partial permanence and extinction of the n species in the ecosystem were obtained. Finally, a numerical experiment was provided to illustrate the effectiveness of our results.