
Strong stochastic persistence of some Lévy-driven Lotka–Volterra systems


Abstract

We study a class of Lotka–Volterra stochastic differential equations with continuous and pure-jump noise components, and derive conditions that guarantee the strong stochastic persistence (SSP) of the populations engaged in the ecological dynamics. More specifically, we prove that, under certain technical assumptions on the jump sizes and rates, there is convergence of the laws of the stochastic process to a unique stationary distribution supported far away from extinction. We show how the techniques and conditions used in proving SSP for general Kolmogorov systems driven solely by Brownian motion must be adapted and tailored in order to account for the jumps of the driving noise. We provide examples of applications to the case where the underlying food-web is: (a) a 1-predator, 2-prey food-web, and (b) a multi-layer food-chain.
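
In the notation used in the appendix below (cf. the coefficients \(F_i\), \(G_i\), \(H_i\) in the proof of Lemma 3), the systems under study can be written, schematically, as jump-diffusions of the form

$$\begin{aligned} \mathrm {d}X_i(t)= X_i(t)\left( B_i+(\mathbf{A}\mathbf{X}(t))_i\right) \mathrm {d}t + \sigma _i X_i(t)\, \mathrm {d}W_i(t) + \int _{{\mathbb {R}}^n} X_i(t-)\, L_i(\mathbf{X}(t-),\mathbf{z})\, \tilde{N}(\mathrm {d}t, \mathrm {d}\mathbf{z}), \qquad i=1,\ldots , n, \end{aligned}$$

where \(\mathbf{W}\) is a Brownian motion and \(\tilde{N}\) is a compensated Poisson random measure with jump intensity measure \(\nu \).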



Acknowledgements

I would like to thank my Ph.D. adviser Dr. Rolando Rebolledo for his supportive and attentive guidance during the preparation of the first draft of this article.

Funding

The author has been supported by ANID, ex-CONICYT, through Beca de Doctorado Nacional, 21170406, convocatoria 2017. This research was partially funded by project ANID-FONDECYT 1200925. The author declares that he has no financial or personal relationship with other people or organizations that could inappropriately influence or bias the content of this paper.

Author information


Corresponding author

Correspondence to Leonardo Videla.

Ethics declarations

Conflicts of interest

The author declares that he has no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Leonardo Videla has been supported by CONICYT through Beca de Doctorado 21170406, Convocatoria 2017. This research was partially funded by project ANID-FONDECYT 1200925.

Appendix: Deferred proofs

Proof of Lemma 1

Since \((\mathbf{A}, \mathbf{B})\) is feasible, consider the vector \(\mathbf{c}\) given by Definition 1. We will prove that there exist positive constants \(K_1\), \(K_2\) such that:

$$\begin{aligned} \dfrac{\sum _{i}c_i x_i (B_i +(\mathbf{A}\mathbf{x})_i)}{1+\sum _{i}c_i x_i} < K_1 - K_2 (x_1+x_2+\cdots +x_n). \end{aligned}$$
(48)

for every \(\mathbf{x}\in {\mathbb {R}}^n_{+}\). Indeed:

$$\begin{aligned} \sum _{i}c_i x_i (B_i +(\mathbf{A}\mathbf{x})_i)&= \sum _{i}c_iB_i x_i - \sum _{i}c_ia_{ii}x_i^2 - \sum _{i}\sum _{j \in \text {predator}_i}x_i x_j [c_ia_{ij} -c_j a_{ji}]\\&\le -\sum _{i=1}^n c_i a_{ii} x_i^2 + \sum _{i=1}^{n-1}c_i B_i x_i \\&\le -\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} \sum _{i=1}^n x_i^2 + \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} \sum _{i=1}^n x_i \\&\le -\dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n} \left( \sum _{i=1}^n x_i\right) ^2 + \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} \sum _{i=1}^n x_i, \end{aligned}$$

where we have used Jensen’s inequality, in the form \((\sum _{i=1}^n x_i)^2 \le n \sum _{i=1}^n x_i^2\), to obtain the last line. Thus:

$$\begin{aligned} \dfrac{ \sum _{i}c_i x_i (B_i +(\mathbf{A}\mathbf{x})_i) }{1+\sum _{i=1}^n c_i x_i}&\le \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} \dfrac{\sum _{i=1}^n x_i}{1+\sum _{i=1}^n c_i x_i}\\&\quad -\dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n} \dfrac{(\sum _{i=1}^n x_i)^2}{1+\sum _{i=1}^n c_i x_i}\\&\le \dfrac{ \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} }{\min _{i=1, \ldots , n} c_i}\\&\quad -\dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n} \dfrac{(\sum _{i=1}^n x_i)^2}{1+\sum _{i=1}^n c_i x_i}\\&\le \dfrac{ \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} }{\min _{i=1, \ldots , n} c_i} \\&\quad -\dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n} \dfrac{(\sum _{i=1}^n x_i)^2}{1+\max _{i=1,\ldots , n} c_i \sum _{i=1}^n x_i}\\&\le \dfrac{ \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} }{\min _{i=1, \ldots , n} c_i} \\&\quad -\dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n} \dfrac{\sum _{i=1}^n x_i}{ (\sum _{i=1}^n x_i)^{-1}+\max _{i=1,\ldots , n} c_i }. \end{aligned}$$

Of course, there exists \(R> 0\) such that \(\Vert \mathbf{x}\Vert > R\) implies \(\dfrac{1}{\sum _{i=1}^{n}x_i} < 1\). Define \(\tilde{K}_1= \dfrac{ \max _{i=1, \ldots , n} \left\{ c_i b_i \right\} }{\min _{i=1, \ldots , n} c_i}\) and \(K_2= \dfrac{\min _{i=1, \ldots , n} \left\{ c_i a_{ii} \right\} }{n (1+\max _{i=1,\ldots , n} c_i)} \). Thus, for \(\Vert \mathbf{x}\Vert > R\), we have:

$$\begin{aligned} \dfrac{\sum _{i}c_i x_i (B_i +(\mathbf{A}\mathbf{x})_i)}{1+\sum _{i}c_i x_i} <\tilde{K}_1 - K_2 (x_1+x_2+\cdots +x_n). \end{aligned}$$

Observe that the right hand side is a continuous function of \(\mathbf{x}\). Let:

$$\begin{aligned} r_1&= \sup _{\Vert \mathbf{x}\Vert < R} \dfrac{\sum _{i}c_i x_i (B_i +(\mathbf{A}\mathbf{x})_i)}{1+\sum _{i}c_i x_i}\\ r_2&= \sup _{\Vert \mathbf{x}\Vert \le R}K_2(x_1+x_2+\cdots +x_n). \end{aligned}$$

Then, (48) holds with \(K_1:=\max \{\tilde{K}_1, r_1+r_2\}\). \(\square \)

Proof of Lemma 2

Let \(V: {\mathbb {R}}^n_{++} \mapsto {\mathbb {R}}_{+}\) be the function defined by:

$$\begin{aligned} V (\mathbf{x}):= \mathbf{c}^T \mathbf{x}- \sum _{i=1}^n \log (c_i x_i), \end{aligned}$$

where the \(c_i\) are the constants guaranteed by Assumption 1. Then V is a log-Lyapunov function, and for \(\mathbf{x}\in {\mathbb {R}}^n_{++}\):

$$\begin{aligned} {\mathcal {L}}V(\mathbf{x})&= \sum _{i} (c_i x_i - 1 ) (B_i + \sum _j A_{ij} x_j) \nonumber \\&\quad + \dfrac{1}{2}\sum _{i=1}^n \sigma ^2_{i} + \int _{{\mathbb {R}}^n} \left[ \mathbf {1}^{T}\mathbf{L}(\mathbf{x},\mathbf{z}) - \sum _{i=1}^n \log (1+L_{i}(\mathbf{x},\mathbf{z})) \right] \nu (\mathrm {d}\mathbf{z}). \end{aligned}$$
(49)

By Assumption 2, Taylor’s theorem, and Assumption 3, the last term is bounded above. Moreover, since the intraspecific interactions are negative, the above expression is not greater than:

$$\begin{aligned} K_1 \mathbf{1}^T \mathbf{x}- K_2 \mathbf{x}^T \mathbf{x}+ K_3 - \sum _{i} \sum _{j \in \text {predator}_i} x_i x_j (c_i a_{ij}-c_j a_{ji}), \end{aligned}$$

for some positive constants \(K_1, K_2, K_3\). The last sum is non-negative by the definition of feasibility. Altogether, there exist positive constants \(K_1\), \(K_2\), \(K_3\) such that:

$$\begin{aligned} {\mathcal {L}}V(x)\le K_1 \mathbf{1}^T \mathbf{x}- K_2 \mathbf{x}^T \mathbf{x}+ K_3. \end{aligned}$$

We conclude that there exists a constant \(M> 0\) such that on \({\mathbb {R}}^n_{++}\):

$$\begin{aligned} {\mathcal {L}}V < M. \end{aligned}$$
(50)
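
For concreteness, one admissible choice of M follows from the elementary bound \(\mathbf{x}^T \mathbf{x}\ge (\mathbf{1}^T \mathbf{x})^2/n\): writing \(s=\mathbf{1}^T \mathbf{x}\),

$$\begin{aligned} K_1 \mathbf{1}^T \mathbf{x}- K_2 \mathbf{x}^T \mathbf{x}+ K_3 \le K_1 s - \dfrac{K_2}{n} s^2 + K_3 \le K_3 + \dfrac{n K_1^2}{4 K_2}=: M, \end{aligned}$$

where the last inequality holds because the quadratic in s attains its maximum at \(s= n K_1/(2 K_2)\).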

If we set:

$$\begin{aligned} V_k := \left\{ \mathbf{x}\in {\mathbb {R}}^n_{++}: \text { there exists } i \in \{1, 2, \ldots , n\} \text { such that } x_i> \dfrac{k}{c_i}e^k \text { or } x_i < \dfrac{1}{c_i}e^{-k}\right\} , \end{aligned}$$

then

$$\begin{aligned} \inf _{\mathbf{x}\in V_k} V(\mathbf{x})> k. \end{aligned}$$
(51)
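
This can be checked directly: writing \(V(\mathbf{x})=\sum _{i=1}^n \left( c_i x_i - \log (c_i x_i)\right) \) and using that \(u \mapsto u-\log u\) is bounded below by 1 on \((0,\infty )\) and increasing on \((1,\infty )\), we have, for \(k \ge 1\),

$$\begin{aligned} c_i x_i> k e^{k} \ \Longrightarrow \ c_i x_i - \log (c_i x_i)> k e^{k} - k - \log k \ge k, \qquad \qquad c_i x_i< e^{-k} \ \Longrightarrow \ c_i x_i - \log (c_i x_i)> -\log (c_i x_i) > k, \end{aligned}$$

so every \(\mathbf{x}\in V_k\) has at least one summand exceeding k, while the remaining summands are positive.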

The set \(V_k\) is clearly open, and thus

$$\begin{aligned} \tau _k:= \inf \{ t \ge 0: \mathbf{X}_{t} \in V_k \} \end{aligned}$$

is an \({\mathcal {F}}_t\)-stopping time. Dynkin’s formula applied to V gives:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{X}_0}\left( V (\mathbf{X}_{\tau _k \wedge t}) \right) \le V(\mathbf{X}_0) +M{\mathbb {E}}_{\mathbf{X}_0}(\tau _k \wedge t) \le V(\mathbf{X}_0) + Mt. \end{aligned}$$
(52)

Fix \(N \in {\mathbb {N}}\), and consider the set:

$$\begin{aligned} \varOmega _N:= \bigcap _{k\ge 1} \{\omega \in \varOmega : \tau _k < N\}. \end{aligned}$$

Define:

$$\begin{aligned} \epsilon _N: = {\mathbb {P}}_{\mathbf{X}_0} (\varOmega _N) \end{aligned}$$

Observe that for every \(k \ge 1\):

$$\begin{aligned} {\mathbb {E}}_{\mathbf{X}_0} \left( V (\mathbf{X}_{\tau _k \wedge N}) \right)&={\mathbb {E}}_{\mathbf{X}_0}\left( \mathbf {1}_{\varOmega _N} V (\mathbf{X}_{\tau _k \wedge N}) \right) + {\mathbb {E}}_{\mathbf{X}_0}\left( \mathbf {1}_{\varOmega \setminus \varOmega _N} V (\mathbf{X}_{\tau _k \wedge N}) \right) \\&\ge {\mathbb {E}}_{\mathbf{X}_0}\left( \mathbf {1}_{\varOmega _N} V (\mathbf{X}_{\tau _k}) \right) \\&\ge k {\mathbb {E}}_{\mathbf{X}_0} \left( \mathbf {1}_{\varOmega _N} \right) \\&\ge k \epsilon _N, \end{aligned}$$

where the second-to-last line follows from the fact that \(\mathbf{X}\) has a.s. right-continuous paths, together with relation (51). Combining this last inequality with (52) gives:

$$\begin{aligned} k \epsilon _N < V(\mathbf{X}_0) + MN \quad \text { for every } k \ge 1, \end{aligned}$$

and thus necessarily \(\epsilon _N={\mathbb {P}}_{\mathbf{X}_0}(\varOmega _N)=0\) for every \(N \in {\mathbb {N}}\). Consequently,

$$\begin{aligned} {\mathbb {P}}_{\mathbf{X}_0} \left( \lim _{k\rightarrow \infty } \tau _k < \infty \right) ={\mathbb {P}}_{\mathbf{X}_0} \left( \bigcup _{N \in {\mathbb {N}}} \varOmega _N \right) = 0, \end{aligned}$$

and this concludes the proof. \(\square \)

Recall that \((P_t: t\ge 0)\) is the semigroup associated to the process \(\mathbf{X}\), i.e.,

$$\begin{aligned} P_t f(\mathbf{x}):= {\mathbb {E}}_{\mathbf{x}} (f(\mathbf{X}_t)), \end{aligned}$$

for bounded measurable real functions f.
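
As an illustration only (this is not part of the paper), \(P_t f(\mathbf{x})\) can be approximated by direct Monte Carlo simulation of the process. The following Python sketch uses an Euler-type step for the drift and Brownian parts and adds multiplicative jumps at the event times of a compound-Poisson component, i.e., a particular, uncompensated instance of the jump term; all parameters are hypothetical.

import numpy as np

# Hypothetical 1-predator, 1-prey parameters: B = intrinsic rates, A = interaction
# matrix (negative diagonal = intraspecific competition), sigma = Brownian noise
# intensities, lam = jump rate; multiplicative jump factors are uniform on (-0.3, 0.3).
B = np.array([1.0, -0.5])
A = np.array([[-1.0, -0.5],
              [ 0.4, -0.1]])
sigma = np.array([0.2, 0.2])
lam, dt = 1.0, 1e-3
rng = np.random.default_rng(0)

def sample_path(x0, t):
    """One Euler-type trajectory of the jump-diffusion up to time t."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t / dt)):
        drift = x * (B + A @ x)                      # Lotka-Volterra drift x_i(B_i + (Ax)_i)
        x = x + drift * dt + sigma * x * rng.normal(scale=np.sqrt(dt), size=2)
        if rng.random() < lam * dt:                  # event time of the jump component
            x = x * (1.0 + rng.uniform(-0.3, 0.3, size=2))
        x = np.maximum(x, 1e-12)                     # numerical guard: stay in the open orthant
    return x

def P_t(f, x0, t, n_samples=200):
    """Monte Carlo estimate of P_t f(x0) = E_x0[f(X_t)]."""
    return np.mean([f(sample_path(x0, t)) for _ in range(n_samples)])

# Example: estimated mean total biomass at time t = 5, started from (0.5, 0.5).
print(P_t(lambda x: x.sum(), [0.5, 0.5], 5.0))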

Proof of Lemma 3

Let \(F_{i}(\mathbf{x})=x_{i} (B_i+\sum _{j=1}^n A_{ij}x_j ) \), \(G_{i}(\mathbf{x})= x_{i}\sigma _{i}\), and \(H_i(\mathbf{x}, \mathbf{z})= x_i L_{i}(\mathbf{x}, \mathbf{z})\). For two initial conditions \(\mathbf{x}, \mathbf{y}\), let \(\ {^\mathbf{x}}{\tilde{\mathbf{X}}}{}\) and \(\ {^\mathbf{y}}{\tilde{\mathbf{X}}}{}\) be the solutions of (2) in the natural coupling, i.e., driven by the same Lévy noise \((\mathbf{W}, \tilde{N})\). Let \(D_{k}:= \{\mathbf{x}\in {\mathbb {R}}^n_+: \Vert \mathbf{x}\Vert \le k\}\), and set \(\eta _k:=\inf \{ t\ge 0: \ {^\mathbf{x}}{\tilde{\mathbf{X}}}{_t} \in D^C_k \text { or } \ {^\mathbf{y}}{\tilde{\mathbf{X}}}{_t} \in D^C_k \}\). For \(\mathbf{u}=\mathbf{x}\text { or } \mathbf{y}\) define:

$$\begin{aligned} \ {^\mathbf{u}}{\mathbf{X}}{_t}= {\left\{ \begin{array}{ll} \ {^\mathbf{u}}{\tilde{\mathbf{X}}}{_t}, \quad t < \eta _k\\ 0, \qquad \text { otherwise.} \end{array}\right. } \end{aligned}$$

Observe that \(\mathbf{F}=(F_{1}, F_2, \ldots , F_n) \) is locally Lipschitz continuous, and thus for every \(k\in {\mathbb {N}}\) there exists a constant \(M_k\) such that for \(\mathbf{x}, \mathbf{y}\in D_k\):

$$\begin{aligned} \Vert \mathbf{F}(\mathbf{x})- \mathbf{F}(\mathbf{y}) \Vert ^2 < M_k \Vert \mathbf{x}-\mathbf{y}\Vert ^2. \end{aligned}$$
(53)

Fix \(T\ge 0\). For any time \(0 \le t \le T\) define \(t_{k}:= t \wedge \eta _k\) (so that, in particular, \(T_k = T \wedge \eta _k\) and \(s_k = s \wedge \eta _k\) below). For \(t \le T_k\):

$$\begin{aligned} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_{t}} - \ {^\mathbf{y}}{\mathbf{X}}{_{t}}\Vert ^2&\le 4 \left( \Vert \mathbf{x}-\mathbf{y}\Vert ^2 + \left| \int _{0}^{t} \mathbf {F}(\ {^\mathbf{x}}{\mathbf{X}}{_s}) -\mathbf {F}(\ {^\mathbf{y}}{\mathbf{X}}{_s}) ds \right| ^2 \right. \end{aligned}$$
(54)
$$\begin{aligned}&\quad \quad + \left| \int _{0}^{t} \mathbf {G}(\ {^\mathbf{x}}{\mathbf{X}}{_s}) - \mathbf {G}(\ {^\mathbf{y}}{\mathbf{X}}{_s}) d\mathbf{W}_s \right| ^2 \nonumber \\&\quad \quad \left. + \left| \int _{0}^{t} \int _{{\mathbb {R}}^n} \mathbf {H}(\ {^\mathbf{x}}{\mathbf{X}}{_s}, \mathbf{z}) - \mathbf {H}(\ {^\mathbf{y}}{\mathbf{X}}{_s}, \mathbf{z})\tilde{N}(ds, d\mathbf{z}) \right| ^2 \right) . \end{aligned}$$
(55)

The Cauchy–Schwarz inequality yields:

$$\begin{aligned} \left\| \int _{0}^{t} F(\ {^\mathbf{x}}{\mathbf{X}}{_s}) -F(\ {^\mathbf{y}}{\mathbf{X}}{_s} ) ds \right\| ^2 \le T \int _{0}^{t} \Vert F(\ {^\mathbf{x}}{\mathbf{X}}{_s}) - F(\ {^\mathbf{y}}{\mathbf{X}}{_s}) \Vert ^ 2 ds. \end{aligned}$$

For \(s \le T_k\), by (53) we have that:

$$\begin{aligned} \Vert \mathbf{F}(\ {^\mathbf{x}}{\mathbf{X}}{_s}) - \mathbf{F}(\ {^\mathbf{y}}{\mathbf{X}}{_s}) \Vert ^ 2 \le M_k \Vert \ {^\mathbf{x}}{\mathbf{X}}{_s}-\ {^\mathbf{y}}{\mathbf{X}}{_s} \Vert ^ 2 , \end{aligned}$$

and trivially, for \(s \le T_k\):

$$\begin{aligned} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_s}-\ {^\mathbf{y}}{\mathbf{X}}{_s}\Vert ^ 2 \le \sup _{u \le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u}-\ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^ 2. \end{aligned}$$

Let \(\tilde{M}_1\) be a Lipschitz constant for \(\mathbf{G}\). By the Burkholder–Davis–Gundy inequality, the Lipschitz continuity of \(\mathbf{G}\), and Fubini’s theorem:

$$\begin{aligned}&{\mathbb {E}}_{\mathbf{x},\mathbf{y}}\left( \sup _{0 \le t \le T_k}\left| \int _{0}^{t} \mathbf {G}(\ {^\mathbf{x}}{\mathbf{X}}{_s}) - \mathbf {G}(\ {^\mathbf{y}}{\mathbf{X}}{_s}) d\mathbf{W}_s \right| ^2 \right) \le {\mathbb {E}}_{\mathbf{x},\mathbf{y}} \left( \int _{0}^{T_k} \Vert \mathbf {G}(\ {^\mathbf{x}}{\mathbf{X}}{_s}) - \mathbf {G}(\ {^\mathbf{y}}{\mathbf{X}}{_s})\Vert ^2 ds \right) \\&\le \tilde{M}_1 {\mathbb {E}}_{\mathbf{x},\mathbf{y}} \left( \int _{0}^{T_k} \sup _{u\le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2 ds \right) \\&\le \tilde{M}_1 {\mathbb {E}}_{\mathbf{x},\mathbf{y}} \left( \int _{0}^{T} \sup _{u\le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2 ds \right) \\&\le \tilde{M}_1 \int _{0}^{T} {\mathbb {E}}_{\mathbf{x},\mathbf{y}} \left( \sup _{u\le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2 \right) ds. \end{aligned}$$

Analogously, for the jump component, we have:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x},\mathbf{y}}&\left( \sup _{0 \le t \le T_k}\left| \int _{0}^{t} \int _{{\mathbb {R}}^n} \mathbf {H}(\ {^\mathbf{x}}{\mathbf{X}}{_s}, z) - \mathbf {H}(\ {^\mathbf{y}}{\mathbf{X}}{_s}, \mathbf{z}) \tilde{N}(ds, d\mathbf{z}) \right| ^2 \right) \\&\quad \le \tilde{M}_2 \int _{0}^{T} {\mathbb {E}}_{\mathbf{x},\mathbf{y}} \left( \sup _{u\le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2 \right) ds, \end{aligned}$$

where the constant \(\tilde{M}_2\) is guaranteed by Assumption 3(c). Thus, taking the supremum over \(0 \le t \le T_{k}\) and then taking expectations on both sides of inequality (54), we obtain:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x}, \mathbf{y}}\left( \sup _{0 \le t \le T_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_t} - \ {^\mathbf{y}}{\mathbf{X}}{_t}\Vert ^2 \right)&\le C \left( \Vert \mathbf{x}-\mathbf{y}\Vert ^2 \right. \\&\quad \left. + C_k \int _{0}^T {\mathbb {E}}_{\mathbf{x},\mathbf{y}}\left( \sup _{0 \le u \le s_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2\right) ds \right) , \end{aligned}$$

where C is a universal constant and \((C_k: k\ge 0)\) is an increasing positive sequence. We conclude by Gronwall’s Lemma that:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x}, \mathbf{y}}\left( \sup _{0 \le t \le T_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_t} - \ {^\mathbf{y}}{\mathbf{X}}{_t} \Vert ^2 \right) \le C \Vert \mathbf{x}-\mathbf{y}\Vert ^2 e^{C_k T}. \end{aligned}$$
(56)
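
In the form used here, Gronwall’s Lemma states that if a bounded measurable function \(\varphi \) satisfies \(\varphi (t) \le a + b \int _0^t \varphi (s)\, ds\) for \(0 \le t \le T\), then

$$\begin{aligned} \varphi (t) \le a\, e^{b t}, \qquad 0 \le t \le T; \end{aligned}$$

here it is applied with \(\varphi (t)= {\mathbb {E}}_{\mathbf{x}, \mathbf{y}}\left( \sup _{0 \le u \le t_k} \Vert \ {^\mathbf{x}}{\mathbf{X}}{_u} - \ {^\mathbf{y}}{\mathbf{X}}{_u}\Vert ^2 \right) \), \(a= C \Vert \mathbf{x}-\mathbf{y}\Vert ^2\) and \(b= C C_k\), the product \(C C_k\) being absorbed into the constant \(C_k\) appearing in (56).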

Now, let \(f: {\mathbb {R}}^n_{+} \mapsto {\mathbb {R}}\) be a bounded continuous function, and fix \(\mathbf{x}\in {\mathbb {R}}^n_{++}\), \(t \ge 0\), \(\varepsilon > 0\). Let \(r > 0\) be such that \( \bar{B} ( \mathbf{x}, r ) \subset {\mathbb {R}}^n_{++}\). Plainly, for \(\mathbf{y}\in \bar{B} ( \mathbf{x}, r)\):

$$\begin{aligned} \vert {\mathbb {E}}_{\mathbf{x}}(f(\mathbf{X}_t))- {\mathbb {E}}_{\mathbf{y}}(f(\mathbf{X}_{t}))\vert \le {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \vert f(\ {^\mathbf{x}}{\tilde{\mathbf{X}}}{_t}) - f(\ {^\mathbf{y}}{\tilde{\mathbf{X}}}{_t})\vert \right) . \end{aligned}$$

On the other hand, observe that from inequality (52), we deduce that for \(\mathbf{y}\in \bar{B}(\mathbf{x}, r)\):

$$\begin{aligned} {\mathbb {P}}_{\mathbf{y}} (\tau _k < t)k&\le {\mathbb {E}}_{\mathbf{y}} (V(\mathbf{X}_{\tau _k\wedge t}))\le V (\mathbf{y})+Mt \\&\le \sup _{\mathbf{z}\in \bar{B} (\mathbf{x}, r)} V(\mathbf{z}) + Mt, \end{aligned}$$

and thus, we can choose \(k\) large enough such that, uniformly on \(\mathbf{y}\in \bar{B} (\mathbf{x}, r)\), we have:

$$\begin{aligned} {\mathbb {P}}_{\mathbf{x},\mathbf{y}} (\eta _k < t) \le \dfrac{\varepsilon }{6 \Vert f \Vert }. \end{aligned}$$

Since f is continuous, it is uniformly continuous on \(D_{k}\). Let \(\delta : {\mathbb {R}}_{+} \mapsto {\mathbb {R}}_{+}\) be a modulus of continuity of f on \(D_{k}\), i.e., a function such that for every \(\varepsilon > 0\) and every \(\mathbf{y}_0 \in D_k\), \(f(B(\mathbf{y}_0, \delta (\varepsilon )) \cap D_k) \subseteq B (f(\mathbf{y}_0), \varepsilon )\). Let \(\varDelta = \delta (\varepsilon /3)\). Then, again for \(\mathbf{y}\in \bar{B} (\mathbf{x}, r)\):

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \vert f(\ {^\mathbf{x}}{\tilde{\mathbf{X}}}{_t}) - f(\ {^\mathbf{y}}{\tilde{\mathbf{X}}}{_t})\vert \right)&\le {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \mathbf {1}_{\eta _k<t}\vert f(\ {^\mathbf{x}}{\tilde{\mathbf{X}}}{_t}) - f(\ {^\mathbf{y}}{\tilde{\mathbf{X}}}{_t})\vert \right) \end{aligned}$$
(57)
$$\begin{aligned}&\qquad + {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \mathbf {1}_{t<\eta _k}\vert f(\ {^\mathbf{x}}{\tilde{\mathbf{X}}}{_t}) - f(\ {^\mathbf{y}}{\tilde{\mathbf{X}}}{_t})\vert \right) \nonumber \\&\le \dfrac{\varepsilon }{3} + {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \mathbf {1}_{t<\eta _k}\vert f(\ {^\mathbf{x}}{\mathbf{X}}{_t}) -f(\ {^\mathbf{y}}{\mathbf{X}}{_t})\vert \right) \nonumber \\&= \dfrac{\varepsilon }{3} + {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \mathbf {1}_{t<\eta _k}\mathbf {1}_{\vert \ {^\mathbf{x}}{\mathbf{X}}{_t} - \ {^\mathbf{y}}{\mathbf{X}}{_t} \vert< \varDelta }\vert f(\ {^\mathbf{x}}{\mathbf{X}}{_t}) -f(\ {^\mathbf{y}}{\mathbf{X}}{_t})\vert \right) \nonumber \\&\quad + {\mathbb {E}}_{\mathbf{x}, \mathbf{y}} \left( \mathbf {1}_{t<\eta _k}\mathbf {1}_{\vert \ {^\mathbf{x}}{\mathbf{X}}{_t} - \ {^\mathbf{y}}{\mathbf{X}}{_t} \vert \ge \varDelta }\vert f(\ {^\mathbf{x}}{\mathbf{X}}{_t}) -f(\ {^\mathbf{y}}{\mathbf{X}}{_t})\vert \right) \nonumber \\&\le \dfrac{\varepsilon }{3} + \dfrac{\varepsilon }{3} + 2 \Vert f \Vert {\mathbb {P}}_{\mathbf{x},\mathbf{y}} \left( \sup _{0 \le s \le t_k} \vert \ {^\mathbf{x}}{\mathbf{X}}{_s} - \ {^\mathbf{y}}{\mathbf{X}}{_s}\vert \ge \varDelta \right) . \end{aligned}$$
(58)

Using (56) and Markov’s Inequality, the last term is not greater than \( \Vert \mathbf{x}-\mathbf{y}\Vert ^2 \dfrac{2 C \Vert f \Vert e ^{C_k t}}{\varDelta ^2} \) and this is smaller than \(\varepsilon /3\) for every \(\mathbf{y}\) in the open ball \(B \left( \mathbf{x}, r \wedge \dfrac{\sqrt{\varepsilon }\varDelta }{\sqrt{6 C \Vert f \Vert e^{C_{k}t}}} \right) \). The result follows. \(\square \)

Proof of Lemma 9

For fixed \(\alpha > 0\) set \(\varrho (\alpha ; \mathbf{x})= \exp \{\alpha (1+\mathbf{c}^T \mathbf{x}) \}\). An easy computation shows that:

$$\begin{aligned} {\mathcal {L}}\varrho (\alpha ; \mathbf{x})&= \varrho (\alpha ; \mathbf{x})\left( \alpha \sum _{i=1}^n c_ix_i (B_i +(\mathbf{A}\mathbf{x})_i) + \dfrac{\alpha ^2 }{2} \sum _{i=1}^n c_i^2 \sigma _i^2 x_i^2 \right. \\&\quad \left. + \int _{{\mathbb {R}}^n} (\exp \{\alpha \mathbf{c}^ T (\mathbf{x}\circ \mathbf{L}(\mathbf{x},\mathbf{z})) \} -1- \alpha \mathbf{c}^ T (\mathbf{x}\circ \mathbf{L}(\mathbf{x},\mathbf{z})))\nu (d\mathbf{z}) \right) . \end{aligned}$$

Fix \(\alpha _1> 0\) such that for \(i=1, \ldots , n\):

$$\begin{aligned} a_{ii} > \dfrac{1}{2} \alpha _1 c_i \sigma _i^2. \end{aligned}$$

Observe that for every \(0 \le \alpha \le \alpha _1\), our choice of the \(c_i\) together with Assumptions 3 and 5 implies that for every \(\delta > 0\) there exists \(M> 0\) such that:

$$\begin{aligned} {\mathcal {L}}\varrho (\alpha ; \mathbf{x}) \le -\delta \varrho (\alpha ;\mathbf{x})+M, \end{aligned}$$

whenever \(\alpha \) is small enough. Just as in the proof of Lemma 5, Dynkin’s formula yields:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x}} (\varrho (\alpha ; \mathbf{X}_t)) \le \dfrac{M}{\delta } +\varrho (\alpha ; \mathbf{x}) e^{-\delta t}. \end{aligned}$$
(59)

for \(0 \le \alpha \le \alpha _1\). Consider the random variables \(\ {^\mathbf{x}}{U}{_t} = \varrho (\alpha _1, \ {^\mathbf{x}}{\mathbf{X}}{_t})\). Since \(\varrho (\alpha ; \cdot )\) is continuous, De la Vallée Poussin’s Lemma implies that for every compact set \(K \subset {\mathbb {R}}^n_{+}\) and \(T > 0\) the family:

$$\begin{aligned} (\ {^\mathbf{x}}{U}{_t}: \mathbf{x}\in K, 0 \le t \le T), \end{aligned}$$

is a uniformly integrable (UI) family of random variables, written:

$$\begin{aligned} \lim _{N \rightarrow \infty } \sup _{\mathbf{x}\in K, 0 \le t \le T} {\mathbb {E}}_{\mathbf{x}} (U_t \mathbf {1}_{U_t > N})=0. \end{aligned}$$
(60)

Now, by our assumptions on \(\mathbf{L}\):

$$\begin{aligned} \vert \varUpsilon (\mathbf{x})\vert \le C (1+\mathbf{c}^T \mathbf{x}), \end{aligned}$$

for some positive constant C. Consequently, there exists \(\alpha _0\) such that for \(\alpha < \alpha _0\):

$$\begin{aligned} \exp \{\alpha \vert \varUpsilon (\mathbf{x})\vert \} \le \varrho (\alpha _1/2, \mathbf{x}). \end{aligned}$$

Consequently, there exists \(R_1\) such that whenever \(\varrho (\alpha _1, \mathbf{x})> R_1\) and \(\alpha < \alpha _0\):

$$\begin{aligned} \dfrac{ \exp \{\alpha \vert \varUpsilon (\mathbf{x})\vert \} }{\varrho (\alpha _1, \mathbf{x})} < \dfrac{\varepsilon }{4}. \end{aligned}$$

Next, consider a sequence \((\mathbf{x}_{k})\) converging to \(\mathbf{x}\in {\mathbb {R}}^n_{+}\), and set \(\varepsilon > 0\). Fix \(T > 0\). Since \(\mathbf{x}_{k}\) is convergent, there exists a compact set K containing \((\mathbf{x}_k)\) and \(\mathbf{x}\). By the UI property (60), there exists \(R_2 >0\) such that for every \(\mathbf{y}\in K\) and \(0 \le t \le T\):

$$\begin{aligned} {\mathbb {E}}_{\mathbf{y}} ( U_t \mathbf {1}_{U_t> R_2} ) < 1. \end{aligned}$$

Fix \(R_3 = \max \{R_1, R_2\}\), and let \(\eta : {\mathbb {R}}^n_{+}\mapsto [0,1]\) be a smooth function such that \(\eta (\mathbf{x})=1\) for \(\varrho (\alpha _1; \mathbf{x})< R_3\) and \(\eta (\mathbf{x})=0\) for \(\varrho (\alpha _1; \mathbf{x}) > 2 R_3\). Now, for \(\alpha < \alpha _0\), \(0\le t \le T\) and \(k \in {\mathbb {N}}\):

$$\begin{aligned} \vert {\mathbb {E}}_{\mathbf{x}_k} (e^{\alpha \varUpsilon (\mathbf{X}_t)}) - {\mathbb {E}}_{\mathbf{x}} (e^{\alpha \varUpsilon (X_t)}) \vert&\le \vert {\mathbb {E}}_{\mathbf{x}_k} (\eta (\mathbf{X}_t)e^{\alpha \varUpsilon (\mathbf{X}_t)} ) - {\mathbb {E}}_{\mathbf{x}} (\eta (\mathbf{X}_t)e^{\alpha \varUpsilon (\mathbf{X}_t)}) \vert \\&\quad + \vert {\mathbb {E}}_{\mathbf{x}_k} ((1-\eta (\mathbf{X}_t))e^{\alpha \varUpsilon (\mathbf{X}_t)} )-{\mathbb {E}}_{\mathbf{x}} ((1-\eta (\mathbf{X}_t))e^{\alpha \varUpsilon (\mathbf{X}_t)})\vert . \end{aligned}$$

The \({\mathcal {C}}_b\)-Feller property (Lemma 3) applies to the first term on the right, and thus there exists a \(k_0 > 0\) such that for every \(k \ge k_0\) it is smaller than \(\varepsilon /2\). As for the second term, observe that:

$$\begin{aligned} {\mathbb {E}}_{\mathbf{x}} ((1-\eta (\mathbf{X}_t)) e^{\alpha \varUpsilon (\mathbf{X}_t)} )&\le {\mathbb {E}}_{\mathbf{x}} ((1-\eta (\mathbf{X}_t)) e^{\alpha \vert \varUpsilon (\mathbf{X}_t)\vert })\\&\le {\mathbb {E}}_{\mathbf{x}} \left( (1-\eta (\mathbf{X}_t)) U_t \dfrac{e^{\alpha \vert \varUpsilon (\mathbf{X}_t)\vert } }{U_t} \right) \\&\le \dfrac{\varepsilon }{4} {\mathbb {E}}_{\mathbf{x}} \left( (1-\eta (\mathbf{X}_t)) U_t \right) \\&\le \dfrac{\varepsilon }{4} {\mathbb {E}}_{\mathbf{x}} \left( U_t \mathbf {1}_{U_t > R_3} \right) \\&\le \dfrac{\varepsilon }{4}. \end{aligned}$$

Of course, an analogous inequality holds for the terms \({\mathbb {E}}_{\mathbf{x}_k}(\cdot )\). This concludes the proof. \(\square \)


About this article


Cite this article

Videla, L. Strong stochastic persistence of some Lévy-driven Lotka–Volterra systems. J. Math. Biol. 84, 11 (2022). https://doi.org/10.1007/s00285-022-01714-6

