## 1 Introduction

In many applications, such as finance and biology, one often wishes to replace the time-dependent instantaneous measure by a stationary (or ergodic) measure. Thus, we face the following questions: Does the system possess ergodic properties? Under what conditions does it have the desired ergodicity? There are many criteria for the existence and uniqueness of a stationary distribution for diffusion processes; see, for example, Hasminskii [1], Pinsky [2], Da Prato and Zabczyk [3], Yin and Zhu [4], Zhang [5]. However, most criteria require that the infinitesimal operator satisfy a uniform ellipticity condition. An interesting question is: What happens if the infinitesimal operator is degenerate? Is there a simple and general sufficient condition? In this paper, we give a new sufficient condition for the existence and uniqueness of a stationary distribution for general diffusion processes.

Let $\{X^{x_0}(t), t\ge 0, x_0\in\mathbb{S}\}$ be a family of diffusion processes on some probability space $(\Omega, \mathbb{F}, P)$, where $\mathbb{S}\subseteq\mathbb{R}^n$ stands for the state space. Let $C_0^2(\mathbb{S}) := C_0^2(\mathbb{S},\mathbb{R})$ be the space of continuous functions from $\mathbb{S}$ into ℝ with continuous derivatives up to order two and with compact support in $\mathbb{S}$. Let $\{X(t)\}_{t\ge 0}$ be a diffusion process on $\mathbb{S}$ with transition function $P^t(x,dy) = P(X(t)\in dy \mid X(0)=x)$ and, for $f\in C_0^2(\mathbb{S})$, infinitesimal generator

$Lf\left(x\right)=\frac{1}{2}\sum _{i,j=1}^{n}{a}_{ij}\left(x\right)\frac{{\partial }^{2}f}{\partial {x}_{i}\phantom{\rule{0.2em}{0ex}}\partial {x}_{j}}+\sum _{i=1}^{n}{b}_{i}\left(x\right)\frac{\partial f}{\partial {x}_{i}}$
(1.1)

on a domain $D\subseteq \mathbb{S}$. Processes corresponding to such an infinitesimal generator are termed diffusion processes. In addition, we assume the diffusion processes satisfy the following basic assumptions:

1. (i)

$a=\left\{{a}_{ij}\left(x\right)\right\}$ and $b=\left\{{b}_{i}\left(x\right)\right\}$ are locally bounded and measurable functions on D;

2. (ii)

a is continuous on D;

3. (iii)

there exists a unique strong solution $X(t)$;

4. (iv)

the solution $X\left(t\right)$ does not explode at any finite time.
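
As a concrete numerical companion (not part of the original development), the action of the generator (1.1) on a smooth test function can be approximated by central finite differences. The sketch below is illustrative: the callables `a`, `b`, `f` and the step size `h` are all supplied by the user and are our own naming conventions.

```python
import numpy as np

def generator_apply(f, a, b, x, h=1e-4):
    """Finite-difference approximation of (1.1):
    (Lf)(x) = (1/2) sum_ij a_ij(x) d2f/dx_i dx_j + sum_i b_i(x) df/dx_i.
    Here a(x) returns an (n, n) array and b(x) an (n,) array."""
    n = len(x)
    A, B = a(x), b(x)
    val = 0.0
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        # first-order (drift) term via a central difference
        val += B[i] * (f(x + ei) - f(x - ei)) / (2.0 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            # mixed second partial via a four-point central difference
            d2 = (f(x + ei + ej) - f(x + ei - ej)
                  - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
            val += 0.5 * A[i, j] * d2
    return val

# Sanity check: for f(x) = |x|^2 with a = identity and b = 0, Lf = n.
f = lambda x: float(np.dot(x, x))
Lf = generator_apply(f, lambda x: np.eye(2), lambda x: np.zeros(2),
                     np.array([1.0, 2.0]))
```

Central differences are exact (up to rounding) for quadratic test functions, which makes the sanity check above easy to verify by hand.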

For any $f\in {C}_{b}\left(\mathbb{S}\right)$, the set of bounded continuous functions on $\mathbb{S}$, define

$\left({P}^{t}f\right)\left(x\right)={\int }_{\mathbb{S}}{P}^{t}\left(x,dy\right)f\left(y\right),\phantom{\rule{1em}{0ex}}x\in \mathbb{S}.$
(1.2)
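
To make (1.2) concrete, one can estimate $(P^t f)(x) = Ef(X^x(t))$ by Monte Carlo. The sketch below uses the Ornstein-Uhlenbeck diffusion $dX = -X\,dt + \sqrt{2}\,dW$ (our illustrative choice, not a process from the paper), whose marginal law $X^x(t)\sim N(x e^{-t},\, 1-e^{-2t})$ is known in closed form, so the estimate can be checked exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def semigroup_mc(f, x, t, n_paths=200_000):
    """Monte Carlo estimate of (P^t f)(x) = E f(X^x(t)) for the
    Ornstein-Uhlenbeck diffusion dX = -X dt + sqrt(2) dW (illustrative
    choice), whose marginal X^x(t) ~ N(x e^{-t}, 1 - e^{-2t}) is known."""
    mean = x * np.exp(-t)
    std = np.sqrt(1.0 - np.exp(-2.0 * t))
    samples = mean + std * rng.standard_normal(n_paths)
    return f(samples).mean()

# For f(y) = y, the exact value is (P^t f)(x) = x e^{-t}.
est = semigroup_mc(lambda y: y, x=1.0, t=0.5)
```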

For simplicity, denote $P^t(x_i,dz)$ by $P_{x_i}^t$. Recall that a sequence of probability measures $\{P_{x_i}^t : i=0,1,\dots\}$ on $(\mathbb{S},\mathcal{B}^n(\mathbb{S}))$ is said to converge weakly to a probability measure $P_{y_0}^t$ (on $\mathbb{S}$), for every $t\ge 0$, if

$\underset{{x}_{i}\to {y}_{0}}{lim}\left({P}^{t}f\right)\left({x}_{i}\right):=Ef\left({X}^{{x}_{i}}\left(t\right)\right)=\left({P}^{t}f\right)\left({y}_{0}\right):=Ef\left({X}^{{y}_{0}}\left(t\right)\right)$
(1.3)

holds for all bounded continuous functions f on $\mathbb{S}$. A diffusion process $X(t)$ is said to have the (weak) Feller property if, for any bounded continuous function f, $P^t f$ is again a bounded continuous function. Besides, if $a_{ij}(x)\equiv 0$, $1\le i,j\le n$, then the process degenerates into a deterministic one. In this case, taking f to be the indicator function of the singleton $\{x_0\}$,

$\left({P}^{t}f\right)\left(x\right)=\left\{\begin{array}{ll}0,& x\ne {x}_{0},\\ 1,& x={x}_{0}.\end{array}$
(1.4)

Thus, deterministic processes need not have the weak Feller property. Therefore, for $X(t)$ to enjoy the Feller property, the random variable $X(t)$ should, at any time t, take at least countably many values with positive probability. More precisely, the semigroup of the diffusion $X(t)$ should be irreducible (see, for example, Cerrai [6]). Hence, we impose the following assumption on the diffusion semigroup:

1. (v)

the semigroup ${P}^{t}$ is irreducible.

The aim of the present paper is twofold. First, we give a new criterion for general diffusions, especially when the coefficients are non-Lipschitz or degenerate. This is highly non-trivial because when the diffusion matrix is singular, the corresponding infinitesimal operator L belongs to a class of non-elliptic operators, for which the maximum principle for elliptic operators fails. Our second aim is to give general sufficient conditions for the existence of a stationary distribution for population dynamical systems.

To proceed, we list some notation:

${A}^{T}$: the transpose of any matrix or vector A;

$|A|$: the trace norm of matrix A, i.e., $|A|=\sqrt{trace\left({A}^{T}A\right)}$;

$B(0,r)=\{x\in\mathbb{S}:|x|<r\}$ and $B^c(0,r):=\{x\in\mathbb{S}:|x|\ge r\}$;

K: a generic positive constant whose values may vary at its different appearances.

## 2 Main results

Before we show the main result, we first impose the following assumptions:

(A1) $E|X\left(t\right)|<+\mathrm{\infty }$ for each $t>0$;

(A2) ${sup}_{t\ge 0}E|X\left(t\right)|<+\mathrm{\infty }$.

To begin with, we cite a known result from Bhattacharya and Waymire [7] as a lemma.

Lemma 2.1 [[7], pp.643-645]

Let $P_{x_0}^t, P_{x_1}^t,\dots$ and $P_{y_0}^t$ be probability measures on $\mathbb{S}$ for every $t\ge 0$. The following statements are equivalent.

1. (a)

$\left\{{P}_{{x}_{i}}^{t}:i=0,1,2,\dots \right\}$ converge weakly to ${P}_{{y}_{0}}^{t}$.

2. (b)

Equation (1.3) holds for all infinitely differentiable functions vanishing outside a bounded set.

Lemma 2.2 Under assumptions (i)-(v) and (A1), the diffusion process $X(t)$ has the weak Feller property.

Proof The proof of this lemma is essentially the same as that of Lemma 3.2 of Tong et al. [8]. □

Theorem 2.1 If assumptions (i)-(v) and (A2) hold, then the diffusion process $X\left(t\right)$ has a unique stationary distribution.

Proof The proof of this theorem is divided into two steps as follows.

Step 1: We show that the transition semigroup $P^t$ of the Markov process $X(t)$ admits an invariant measure. To this end, we first prove that, for some $x_0\in\mathbb{S}$, the family $\{P^t(x_0,dy): t\ge 0\}$ is tight, i.e., given $\epsilon>0$, there exists $r<\infty$ such that $P^t(x_0, B^c(0,r))\le\epsilon$ for all $t\ge 0$. It will then follow from the Prohorov theorem that there exist $t_n\to\infty$ and a probability measure π, perhaps depending on $x_0$, such that

$P^{t_n}(x_0, dz)\to\pi(dz)\quad\text{weakly as } n\to\infty.$
(2.1)

By Chebyshev’s inequality, $\mathrm{\forall }p>0$,

${P}^{t}\left({x}_{0},|y|>r\right)=P\left(|X\left(t\right)|>r|X\left(0\right)={x}_{0}\right)\le \frac{E{|X\left(t\right)|}^{p}}{{r}^{p}}.$

Therefore, the tightness of the family $\left\{{P}^{t}\left({x}_{0},dy\right):t\ge 0\right\}$ follows from assumption (A2).
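
The tightness bound above can be sanity-checked numerically. The snippet below is a minimal illustration of the Markov/Chebyshev estimate $P(|X(t)|>r)\le E|X(t)|^p/r^p$ with $p=1$, using standard normal samples as a stand-in for $X(t)$ (an assumption made only for this demo).

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard normal samples stand in for X(t) (illustrative only).
samples = rng.standard_normal(1_000_000)
p, r = 1, 3.0
tail = np.mean(np.abs(samples) > r)           # empirical P(|X| > r)
bound = np.mean(np.abs(samples) ** p) / r**p  # Chebyshev/Markov bound
```

The empirical tail probability is far below the bound here; the bound is crude, but all that tightness requires is that it tend to zero as r grows.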

The next task is to prove that the limit in (2.1) does not depend on ${x}_{0}$. To this end, define

${\mu }_{T}\left(A\right)=\frac{1}{T}{\int }_{0}^{T}{P}^{t}\left({x}_{0},A\right)\phantom{\rule{0.2em}{0ex}}dt.$
(2.2)

By Chebyshev’s inequality,

$\mu_T(B^c(0,r)) = \frac{1}{T}\int_0^T P^t(x_0, B^c(0,r))\,dt$
(2.3)
$\le \frac{1}{rT}\int_0^T E|X(t)|\,dt \le \frac{K}{r},$
(2.4)

and we have, for any $\epsilon>0$, $\mu_T(B(0,r)) > 1-\epsilon$ whenever r is large enough. Therefore, $\{\mu_T, T>0\}$ is tight; namely, there exists a subsequence $t_n\uparrow+\infty$ along which $\mu_{t_n}$ converges weakly to a measure π. By the Krylov-Bogoliubov theorem (see, e.g., Da Prato and Zabczyk [3], Corollary 3.1.2, p.22), it follows that π is an invariant measure for $P^t$, $t\ge 0$. In other words,

${P}^{t}\left({x}_{0},dz\right)\to \pi \left(dz\right)\phantom{\rule{1em}{0ex}}\mathrm{\forall }{x}_{0}\in \mathbb{S}.$
(2.5)
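
The Cesàro averages (2.2) behind this argument can also be observed numerically. The sketch below is a minimal illustration, again with the Ornstein-Uhlenbeck diffusion $dX=-X\,dt+\sqrt{2}\,dW$ as a stand-in (our choice, not a process from the paper); its stationary distribution is $\pi = N(0,1)$, so the time average of $f(x)=x^2$ along one Euler-Maruyama path should approach $\int f\,d\pi = 1$ for large T.

```python
import numpy as np

rng = np.random.default_rng(2)

def cesaro_average(f, x0, T=200.0, dt=1e-3):
    """One Euler-Maruyama path of dX = -X dt + sqrt(2) dW started at x0,
    returning the Cesaro average (1/T) * integral_0^T f(X(t)) dt."""
    n = int(T / dt)
    x, acc = x0, 0.0
    for _ in range(n):
        acc += f(x) * dt
        x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    return acc / T

# With pi = N(0, 1), the time average of f(x) = x^2 should approach 1.
avg = cesaro_average(lambda y: y * y, x0=2.0)
```

Note that the starting point $x_0=2$ introduces only a transient bias of order $1/T$, consistent with the limit in (2.5) not depending on $x_0$.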

Step 2: We show that π is the unique stationary distribution. To this end, note that for every bounded continuous function $f:\mathbb{S}\to\mathbb{R}$, (2.5) implies

$\left({P}^{{t}_{n}}f\right)\left(y\right)\to \overline{f}:={\int }_{\mathbb{S}}f\left(z\right)\pi \left(dz\right).$
(2.6)

By Lebesgue’s dominated convergence theorem, applied to the sequence ${P}^{{t}_{n}}f$, we have

${P}^{t}\left({P}^{{t}_{n}}f\right)\left(x\right)={\int }_{\mathbb{S}}\left({P}^{{t}_{n}}f\right)\left(y\right){P}^{t}\left(x,dy\right)\to {P}^{t}\overline{f}=\overline{f}.$
(2.7)

By Lemma 2.2, if f is bounded and continuous, then ${P}^{t}f$ is a bounded continuous function. Therefore, applying (2.6) to ${P}^{t}f$ yields that

${P}^{{t}_{n}}\left({P}^{t}f\right)\left(x\right)\to {\int }_{\mathbb{S}}\left({P}^{t}f\right)\left(z\right)\pi \left(dz\right).$
(2.8)

Since ${P}^{t}\left({P}^{{t}_{n}}f\right)={P}^{{t}_{n}}\left({P}^{t}f\right)={P}^{t+{t}_{n}}f$, the limits in (2.7) and (2.8) coincide:

${\int }_{\mathbb{S}}\left({P}^{t}f\right)\left(z\right)\pi \left(dz\right)={\int }_{\mathbb{S}}f\left(z\right)\pi \left(dz\right),$

i.e., if $X\left(0\right)$ has a distribution π, then $Ef\left(X\left(t\right)\right)=Ef\left(X\left(0\right)\right)$ for all $t\ge 0$. Namely, π is a stationary distribution.
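
The invariance identity just proved can be checked numerically for a diffusion whose stationary law is known. The example below is our illustrative choice, not taken from the paper: for the Ornstein-Uhlenbeck diffusion $dX=-X\,dt+\sqrt{2}\,dW$ with $\pi = N(0,1)$, starting from $X(0)\sim\pi$ should give $Ef(X(t)) = Ef(X(0))$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ornstein-Uhlenbeck transition: X(t) = X(0) e^{-t} + N(0, 1 - e^{-2t}).
# If X(0) ~ pi = N(0, 1), then X(t) ~ N(0, 1) as well.
n, t = 500_000, 0.7
x0 = rng.standard_normal(n)
xt = x0 * np.exp(-t) + np.sqrt(1.0 - np.exp(-2.0 * t)) * rng.standard_normal(n)

f = np.cos  # a bounded continuous test function
lhs, rhs = f(xt).mean(), f(x0).mean()
```

Both sample means agree with the closed-form value $E\cos(Z) = e^{-1/2}$ for $Z\sim N(0,1)$, up to Monte Carlo error.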

To prove the uniqueness, let ${\pi }^{\prime }$ be any invariant probability measure; then for all bounded continuous $f:\mathbb{S}\to \mathbb{R}$,

${\int }_{\mathbb{S}}\left({P}^{{t}_{n}}f\right)\left(z\right){\pi }^{\prime }\left(dz\right)={\int }_{\mathbb{S}}f\left(z\right){\pi }^{\prime }\left(dz\right).$
(2.9)

But by (2.6), the left-hand side of equality (2.9) converges to $\overline{f}$. Therefore, ${\int }_{\mathbb{S}}f\left(z\right)\pi \left(dz\right)={\int }_{\mathbb{S}}f\left(z\right){\pi }^{\prime }\left(dz\right)$, which implies ${\pi }^{\prime }=\pi$. □

## 3 An example

In this section, we give an example to illustrate our conditions and results.

Example 3.1 Recently, Mao [9] considered the stationary distribution of stochastic population dynamics, assuming that the population sizes follow the stochastic differential equations

$d{X}_{i}\left(t\right)={X}_{i}\left(t\right)\left({b}_{i}+\sum _{j=1}^{n}{a}_{ij}{X}_{j}\left(t\right)\right)\phantom{\rule{0.2em}{0ex}}dt+{X}_{i}\left(t\right)\sum _{j=1}^{n}{\sigma }_{ij}\phantom{\rule{0.2em}{0ex}}d{W}_{j}\left(t\right),$
(3.1)

where $X_i(t)$ stands for the population size of species i at time t, $b_i$ is the intrinsic growth rate of species i, and $a_{ij}$ represents the effect of interspecies (if $i\ne j$) or intraspecies (if $i=j$) interaction. Here $W_1(t),\dots,W_n(t)$ are independent one-dimensional Brownian motions, and $W(t)=(W_1(t),\dots,W_n(t))^T$ is the resulting n-dimensional Brownian motion. Then Eq. (3.1) can be rewritten as

$dX\left(t\right)=diag\left({X}_{1}\left(t\right),\dots ,{X}_{n}\left(t\right)\right)\left[\left(b+AX\left(t\right)\right)\phantom{\rule{0.2em}{0ex}}dt+\sigma \phantom{\rule{0.2em}{0ex}}dW\left(t\right)\right],$
(3.2)

where $X(t)=(X_1(t),\dots,X_n(t))^T$, $b=(b_1,b_2,\dots,b_n)^T$, $A=(a_{ij})_{n\times n}$, and $\sigma=(\sigma_{ij})_{n\times n}$. Besides, we suppose that for $i,j=1,\dots,n$, the $\sigma_{ij}$ are nonnegative constants, with $\sigma_{ii}>0$ for some i. If $a_{ii}<0$, $a_{ij}>0$, $1\le i,j\le n$, $i\ne j$, then model (3.2) is termed the facultative Lotka-Volterra model. If $a_{ii}<0$, $a_{ij}\le 0$, $1\le i,j\le n$, $i\ne j$, then model (3.2) is termed the competitive Lotka-Volterra model. Both Lotka-Volterra models have been extensively studied by many authors (see, e.g., Bao et al. [10, 11]). For the competitive model (3.2) with Poisson jumps, Bao et al. [10, 11] established results such as the existence of a global positive solution, the existence of an invariant measure, and various asymptotic properties. To obtain our results, we impose the following assumptions:
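
As a minimal numerical companion, Eq. (3.2) can be simulated with the Euler-Maruyama scheme. The parameter values below are hypothetical (chosen for illustration, not estimated from any data); under the competitive sign pattern $a_{ii}<0$, $a_{ij}\le 0$, the path fluctuates around the deterministic rest point $-A^{-1}b$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2-species competitive parameters (a_ii < 0, a_ij <= 0),
# chosen for illustration only.
b = np.array([1.0, 0.8])
A = np.array([[-1.0, -0.2],
              [-0.1, -1.0]])
sigma = np.array([[0.2, 0.0],
                  [0.0, 0.2]])

def simulate(x0, T=50.0, dt=1e-3):
    """Euler-Maruyama discretization of Eq. (3.2)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        dW = np.sqrt(dt) * rng.standard_normal(2)
        x = x + x * ((b + A @ x) * dt + sigma @ dW)
        x = np.maximum(x, 1e-12)  # guard: Euler steps may undershoot zero
    return x

xT = simulate([0.5, 0.5])  # fluctuates around -A^{-1} b ≈ (0.86, 0.71)
```

The clipping at `1e-12` is a discretization guard only; the continuous-time model itself keeps positive initial data positive.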

(H1) −A is a nonsingular M-matrix;

(H2) $a_{ii}<0$, $a_{ij}\le 0$, $1\le i,j\le n$, $i\ne j$.
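
Assumption (H1) can be checked numerically. A standard characterization (the one relied on below) is that a matrix with nonpositive off-diagonal entries is a nonsingular M-matrix if and only if all its leading principal minors are positive. The helper and the matrix A are illustrative; A carries the facultative sign pattern $a_{ii}<0$, $a_{ij}>0$, so that −A has nonpositive off-diagonal entries.

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    """True iff M is a nonsingular M-matrix, using the standard
    characterization: off-diagonal entries <= 0 and every leading
    principal minor > 0."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    off = M - np.diag(np.diag(M))
    if np.any(off > tol):
        return False
    return all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))

# Hypothetical facultative (mutualistic) interaction matrix:
# a_ii < 0 and a_ij > 0 for i != j, so -A has the required sign pattern.
A = np.array([[-1.0, 0.2],
              [0.1, -1.0]])
ok = is_nonsingular_M_matrix(-A)
```

For a competitive matrix ($a_{ij}\le 0$), −A has nonnegative off-diagonal entries, so (H1) does not apply and one falls back on (H2).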

Lemma 3.1 If one of assumptions (H1) and (H2) holds, then there is a positive constant K such that for any initial value ${x}_{0}\in {\mathbb{R}}_{+}^{n}$,

$\underset{t\in {\mathbb{R}}_{+}}{sup}E|X\left(t\right)|\le K.$
(3.3)

Proof The proof is essentially the same as those of Theorem 3.1 in Mao [9] and Theorem 3.1 in Bao et al. [11], and is therefore omitted. □

Theorem 3.1 If one of assumptions (H1) and (H2) holds, then the model (3.2) has a unique stationary distribution.

Proof According to the results obtained by Mao [9], Tong et al. [8], and Bao et al. [11], it is not hard to check that model (3.2) satisfies assumptions (i)-(v), while Lemma 3.1 guarantees (A2). Hence, the existence and uniqueness of the stationary distribution follow immediately from Theorem 2.1. □

Remark 3.1 Assumption (H1) relaxes the sufficient conditions obtained by Tong et al. [8]. Assumption (H2) means that if the population dynamics is competitive, then it has a unique stationary distribution, which implies ergodic properties of model (3.2). This provides a new means of estimating parameters for competitive population dynamics.