Persistence in Stochastic Lotka–Volterra Food Chains with Intraspecific Competition


Abstract

This paper is devoted to the analysis of a simple Lotka–Volterra food chain evolving in a stochastic environment. It can be seen as the companion paper of Hening and Nguyen (J Math Biol 77:135–163, 2018b), where we characterized the persistence and extinction of such a food chain under the assumption that there is no intraspecific competition among predators. In the current paper, we focus on the case when all the species experience intraspecific competition. The food chain we analyze consists of one prey and \(n-1\) predators. The jth predator eats the \(j-1\)st species and is eaten by the \(j+1\)st predator; this way each species only interacts with at most two other species, namely the ones that are immediately above or below it in the trophic chain. We show that one can classify, based on the invasion rates of the predators (which we can determine from the interaction coefficients of the system via an algorithm), which species go extinct and which converge to their unique invariant probability measure. We obtain stronger results than in the case with no intraspecific competition because in this setting we can make use of the general results of Hening and Nguyen (Ann Appl Probab 28:1893–1942, 2018a). Unlike most of the results available in the literature, we provide an in-depth analysis for both non-degenerate and degenerate noise. We exhibit our general results by analyzing trophic cascades in a plant–herbivore–predator system and providing persistence/extinction criteria for food chains of length \(n\le 4\).
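
As a quick illustration of the nearest-neighbour structure described above, the sketch below assembles the tridiagonal interaction pattern of such a chain. It only records which species interact with which; the coefficient names follow the \(a_{ij}\) notation used in the appendix, but the magnitudes are hypothetical and the signs with which the entries enter the dynamics are left to the model's conventions.

```python
import numpy as np

def interaction_matrix(intra, gain, loss):
    """Tridiagonal interaction pattern of a Lotka-Volterra food chain.

    intra : |a_{11}|, ..., |a_{nn}|      intraspecific competition rates
    gain  : |a_{21}|, ..., |a_{n,n-1}|   benefit of predator j+1 from eating species j
    loss  : |a_{12}|, ..., |a_{n-1,n}|   harm to species j from predator j+1

    Species j interacts only with species j-1 and j+1, so every entry
    outside the three central diagonals is zero.
    """
    A = np.diag(np.asarray(intra, dtype=float))
    A += np.diag(np.asarray(gain, dtype=float), k=-1)   # sub-diagonal
    A += np.diag(np.asarray(loss, dtype=float), k=1)    # super-diagonal
    return A

# Hypothetical chain of length 4 (one prey, three predators).
print(interaction_matrix([1.0, 0.8, 0.6, 0.5], [0.6, 0.5, 0.4], [0.9, 0.7, 0.5]))
```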


References

  • Baxendale PH (1991) Invariant measures for nonlinear stochastic differential equations. In: Lyapunov exponents (Oberwolfach, 1990). Lecture Notes in Mathematics, vol 1486. Springer, Berlin, pp 123–140
  • Blath J, Etheridge A, Meredith M (2007) Coexistence in locally regulated competing populations and survival of branching annihilating random walk. Ann Appl Probab 17(5–6):1474–1507
  • Benaïm M (2018) Stochastic persistence. arXiv:1806.08450
  • Benaïm M, Hofbauer J, Sandholm WH (2008) Robust permanence and impermanence for stochastic replicator dynamics. J Biol Dyn 2(2):180–195
  • Benaïm M, Lobry C (2016) Lotka–Volterra in fluctuating environment or “how switching between beneficial environments can make survival harder”. Ann Appl Probab 26:3754–3785
  • Benaïm M, Schreiber SJ (2009) Persistence of structured populations in random environments. Theor Popul Biol 76(1):19–34
  • Braumann CA (2002) Variable effort harvesting models in random environments: generalization to density-dependent noise intensities. Math Biosci 177/178:229–245 (Deterministic and stochastic modeling of biointeraction, West Lafayette, IN, 2000)
  • Cattiaux P, Collet P, Lambert A, Martínez S, Méléard S, San Martín J (2009) Quasi-stationary distributions and diffusion models in population dynamics. Ann Probab 37(5):1926–1969
  • Chesson PL, Ellner S (1989) Invasibility and stochastic boundedness in monotonic competition models. J Math Biol 27(2):117–138
  • Chesson P (2000) General theory of competitive coexistence in spatially-varying environments. Theor Popul Biol 58(3):211–237
  • Cattiaux P, Méléard S (2010) Competitive or weak cooperative stochastic Lotka–Volterra systems conditioned on non-extinction. J Math Biol 60(6):797–829
  • Dieu NT, Nguyen DH, Du NH, Yin G (2016) Classification of asymptotic behavior in a stochastic SIR model. SIAM J Appl Dyn Syst 15(2):1062–1084
  • Dow M (2008) Explicit inverses of Toeplitz and associated matrices. ANZIAM J 44:185–215
  • Du NH, Sam VH (2006) Dynamics of a stochastic Lotka–Volterra model perturbed by white noise. J Math Anal Appl 324(1):82–97
  • Evans SN, Hening A, Schreiber SJ (2015) Protected polymorphisms and evolutionary stability of patch-selection strategies in stochastic environments. J Math Biol 71(2):325–359
  • Evans SN, Ralph PL, Schreiber SJ, Sen A (2013) Stochastic population growth in spatially heterogeneous environments. J Math Biol 66(3):423–476
  • Freedman HI, So JWH (1985) Global stability and persistence of simple food chains. Math Biosci 76(1):69–86
  • Gard TC (1980) Persistence in food chains with general interactions. Math Biosci 51(1–2):165–174
  • Gard TC (1984) Persistence in stochastic food web models. Bull Math Biol 46(3):357–370
  • Gard TC (1988) Introduction to stochastic differential equations. Marcel Dekker, New York
  • Gard TC, Hallam TG (1979) Persistence in food webs. I. Lotka–Volterra food chains. Bull Math Biol 41(6):877–891
  • Hansson L-A (1992) The role of food chain composition and nutrient availability in shaping algal biomass development. Ecology 73(1):241–247
  • Harrison GW (1979) Global stability of food chains. Am Nat 114(3):455–457
  • Hening A, Nguyen DH (2018a) Coexistence and extinction for stochastic Kolmogorov systems. Ann Appl Probab 28:1893–1942
  • Hening A, Nguyen DH (2018b) Stochastic Lotka–Volterra food chains. J Math Biol 77:135–163
  • Hening A, Nguyen DH, Yin G (2018) Stochastic population growth in spatially heterogeneous environments: the density-dependent case. J Math Biol 76(3):697–754
  • Hofbauer J (1981) A general cooperation theorem for hypercycles. Monatshefte für Mathematik 91(3):233–240
  • Hastings A, Powell T (1991) Chaos in a three-species food chain. Ecology 72(3):896–903
  • Hofbauer J, So JW-H (1989) Uniform persistence and repellors for maps. Proc Am Math Soc 107(4):1137–1142
  • Hening A, Strickler E (2017) On a predator-prey system with random switching that never converges to its equilibrium. arXiv:1710.01220
  • Hutson V (1984) A theorem on average Liapunov functions. Monatshefte für Mathematik 98(4):267–275
  • Kendall BE, Bjørnstad ON, Bascompte J, Keitt TH, Fagan WF (2000) Dispersal, environmental correlation, and spatial synchrony in population dynamics. Am Nat 155(5):628–636
  • Klebanoff A, Hastings A (1994) Chaos in three species food chains. J Math Biol 32(5):427–451
  • Liu M, Bai C (2016) Analysis of a stochastic tri-trophic food-chain model with harvesting. J Math Biol 73(3):597–625
  • Lande R, Engen S, Saether BE (2003) Stochastic population dynamics in ecology and conservation. Oxford University Press, Oxford
  • Liebhold A, Koenig WD, Bjørnstad ON (2004) Spatial synchrony in population dynamics. Ann Rev Ecol Evol Syst 35:467–490
  • Mallik RK (2001) The inverse of a tridiagonal matrix. Linear Algebra Appl 325(1–3):109–139
  • Moore JC, de Ruiter PC (2012) Energetic food webs: an analysis of real and model ecosystems. Oxford University Press, Oxford
  • Odum EP, Barrett GW (1971) Fundamentals of ecology, vol 3. Saunders, Philadelphia
  • Oksanen T, Power ME, Oksanen L (1995) Ideal free habitat selection and consumer-resource dynamics. Am Nat 146(4):565–585
  • Paine RT (1988) Road maps of interactions or grist for theoretical development? Ecology 69(6):1648–1654
  • Post DM, Conners ME, Goldberg DS (2000) Prey preference by a top predator and the stability of linked food chains. Ecology 81(1):8–14
  • Persson L, Diehl S, Johansson L, Andersson G, Hamrin SF (1992) Trophic interactions in temperate lake ecosystems: a test of food chain theory. Am Nat 140(1):59–84
  • Polansky P (1979) Invariant distributions for multi-population models in random environments. Theor Popul Biol 16(1):25–34
  • Polis GA (1991) Complex trophic interactions in deserts: an empirical critique of food-web theory. Am Nat 138(1):123–155
  • Rudnicki R (2003) Long-time behaviour of a stochastic prey-predator model. Stoch Process Appl 108(1):93–107
  • Schreiber SJ, Benaïm M, Atchadé KAS (2011) Persistence in fluctuating environments. J Math Biol 62(5):655–683
  • Schreiber SJ (2012) Persistence for stochastic difference equations: a mini-review. J Differ Equ Appl 18(8):1381–1403
  • Schreiber SJ, Lloyd-Smith JO (2009) Invasion dynamics in spatially heterogeneous environments. Am Nat 174(4):490–505
  • So JWH (1979) A note on the global stability and bifurcation phenomenon of a Lotka–Volterra food chain. J Theor Biol 80(2):185–187
  • Terborgh J, Holt RD, Estes JA (2010) Trophic cascades: what they are, how they work, and why they matter. In: Terborgh J, Estes JA (eds) Trophic cascades: predators, prey and the changing dynamics of nature, pp 1–18
  • Turelli M (1977) Random environments and stochastic calculus. Theor Popul Biol 12(2):140–178
  • Vander Zanden MJ, Shuter BJ, Lester N, Rasmussen JB (1999) Patterns of food chain length in lakes: a stable isotope study. Am Nat 154(4):406–416
  • Wootton JT, Power ME (1993) Productivity, consumers, and the structure of a river food chain. Proc Natl Acad Sci USA 90(4):1384–1387


Acknowledgements

We thank an anonymous referee for comments which helped improve this manuscript and Sebastian Schreiber for helpful discussions and suggestions. Dang H. Nguyen has been partially supported by Nafosted No. 101.03-2017.23 and by the National Science Foundation under Grant DMS-1207667.

Author information

Correspondence to Alexandru Hening.

Appendix A: Proofs

The following result tells us that there is no ergodic invariant probability measure \(\mu \) that has a gap in the chain of predators.

Lemma A.1

Suppose \(\mu \in {\mathcal {M}}\) such that \(I_\mu = \{n_1,\ldots ,n_k\}\). Then \(I_\mu \) must be of the form \(\{1,2,\ldots ,l\}\) for some \(l\ge 1\).

Proof

We argue by contradiction. First, suppose that \(n_1>1\). By (11)

$$\begin{aligned} \begin{aligned} \lambda _{n_1}(\mu )= 0&=-{\tilde{a}}_{n_1,0} + a_{n_1,n_1-1}\int _{ {\mathbb {R}}_+^n} x_{n_1-1}\mathrm{d}\mu - a_{n_1,n_1}\int _{ {\mathbb {R}}_+^n} x_{n_1}\mathrm{d}\mu \\&= -{\tilde{a}}_{n_1,0} - a_{n_1,n_1}\int _{ {\mathbb {R}}_+^n} x_{n_1}\mathrm{d}\mu \\&<0 \end{aligned} \end{aligned}$$

which is a contradiction.

Alternatively, suppose that there exists \(\mu \in {\mathcal {M}}\) such that \(I_\mu = \{1,\ldots , u^*, v^*,\ldots , n_k\}\) with \(1\le u^*<v^*-1\le n_k\le n\). As a result one can see that \(v^*-1\notin I_\mu \). Then by (11)

$$\begin{aligned} \begin{aligned} \lambda _{v^*}(\mu )= 0&=-{\tilde{a}}_{v^*,0} + a_{v^*,v^*-1}\int _{ {\mathbb {R}}_+^n} x_{v^*-1}\mathrm{d}\mu - a_{v^*,v^*}\int _{ {\mathbb {R}}_+^n} x_{v^*}\mathrm{d}\mu - a_{v^*,v^*+1}\int _{ {\mathbb {R}}_+^n} x_{v^*+1}\mathrm{d}\mu \\&= -{\tilde{a}}_{v^*,0} - a_{v^*,v^*}\int _{ {\mathbb {R}}_+^n} x_{v^*}\mathrm{d}\mu - a_{v^*,v^*+1}\int _{ {\mathbb {R}}_+^n} x_{v^*+1}\mathrm{d}\mu \\&<0 \end{aligned} \end{aligned}$$

which is a contradiction. \(\square \)
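
As a concrete illustration of the second case, take \(n=3\) and suppose an ergodic measure \(\mu \) had support \(I_\mu =\{1,3\}\), so that the top predator persists while its only food source, species 2, is absent (here \(u^*=1\), \(v^*=3\)). Since species 2 is absent under \(\mu \), \(\int _{{\mathbb {R}}_+^n}x_2\mathrm{d}\mu =0\), and (11) would give

$$\begin{aligned} \lambda _{3}(\mu )=0=-{\tilde{a}}_{3,0} + a_{3,2}\int _{ {\mathbb {R}}_+^n} x_{2}\mathrm{d}\mu - a_{3,3}\int _{ {\mathbb {R}}_+^n} x_{3}\mathrm{d}\mu = -{\tilde{a}}_{3,0} - a_{3,3}\int _{ {\mathbb {R}}_+^n} x_{3}\mathrm{d}\mu <0, \end{aligned}$$

a contradiction: a predator whose prey is absent cannot be part of the support of an ergodic invariant probability measure.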

For \(i=1,\ldots ,n\), denote by \({\mathcal {M}}_i\) the set of all invariant probability measures \(\mu \) of \({\mathbf {X}}\) satisfying \(\mu ({\mathbb {R}}^{(i),\circ }_+)=1\). For \(i=0\), define \({\mathcal {M}}_0=\{\varvec{\delta }^*\}\). By Lemma A.1, we have \({{\mathrm{Conv}}}({\mathcal {M}})={{\mathrm{Conv}}}(\cup _{i=0}^{n-1}{\mathcal {M}}_i)\) and \({{\mathrm{Conv}}}(\cup _{i=0}^{n}{\mathcal {M}}_i)\) is the set of all invariant probability measures of \({\mathbf {X}}\) on \({\mathbb {R}}^n_+\).

Lemma A.2

We have the following claims.

  (i) If \({\mathcal {I}}_k\le 0\), then \({\mathcal {I}}_{k+1}<0\).

  (ii) If \({\mathcal {I}}_n\le 0\), then \({\mathbf {X}}\) has no invariant probability measure on \({\mathbb {R}}^{n,\circ }_+\).

Proof

If \({\mathcal {I}}_{k+1}= -{\tilde{a}}_{k+1,0} + a_{k+1,k} x^{(k)}_k\ge 0\), then \(x^{(k)}_k>0\). It is shown in Sect. 4 that \(x^{(k)}_k\) has the same sign as \({\mathcal {I}}_k\). Thus, if \({\mathcal {I}}_{k+1}\ge 0\) then \({\mathcal {I}}_k>0\), which proves the first claim.

If \({\mathbf {X}}\) has an invariant probability measure \(\mu \) on \({\mathbb {R}}^{n,\circ }_+\), then we must have \(\int _{{\mathbb {R}}^n_+}x_n\mu (d{\mathbf {x}})=x^{(n)}_n\). As a result \(x^{(n)}_n>0\), which leads to \({\mathcal {I}}_n>0\) since they have the same sign. The second claim is therefore proved. \(\square \)
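
The quantities \(x^{(k)}\) and \({\mathcal {I}}_k\) can be computed directly from the coefficients; this is the algorithmic classification referred to in the abstract. The sketch below is one possible implementation. It assumes that \(x^{(k)}\) solves the \(k\times k\) tridiagonal linear system obtained from the zero-average relations behind (11) (with the prey row \({\tilde{a}}_{1,0}=a_{11}x_1+a_{12}x_2\) and \(x_{k+1}=0\)) and that \({\mathcal {I}}_{k+1}=-{\tilde{a}}_{k+1,0}+a_{k+1,k}x^{(k)}_k\) as used above; the coefficient layout and the sample values are hypothetical.

```python
import numpy as np

def invasion_rates(a_tilde0, a):
    """Compute the invasion rates I_1,...,I_n of the food chain.

    a_tilde0 : length-n array of the noise-adjusted intrinsic rates \tilde a_{i,0}
    a        : n x n array with a[i-1][j-1] = a_{ij}; only the tridiagonal part is used

    Assumed conventions (matching the relations used in this appendix):
      I_1     = \tilde a_{1,0},
      I_{k+1} = -\tilde a_{k+1,0} + a_{k+1,k} * x^{(k)}_k,
    where x^{(k)} solves
      \tilde a_{1,0} = a_{11} x_1 + a_{12} x_2,
      \tilde a_{i,0} = a_{i,i-1} x_{i-1} - a_{ii} x_i - a_{i,i+1} x_{i+1},  2 <= i <= k,
    with x_{k+1} = 0.
    """
    a_tilde0 = np.asarray(a_tilde0, dtype=float)
    a = np.asarray(a, dtype=float)
    n = len(a_tilde0)
    I = np.empty(n)
    I[0] = a_tilde0[0]
    for k in range(1, n):
        A = np.zeros((k, k))
        b = a_tilde0[:k].copy()
        A[0, 0] = a[0, 0]
        if k > 1:
            A[0, 1] = a[0, 1]
        for i in range(2, k + 1):
            A[i - 1, i - 2] = a[i - 1, i - 2]    #  a_{i,i-1}
            A[i - 1, i - 1] = -a[i - 1, i - 1]   # -a_{ii}
            if i < k:
                A[i - 1, i] = -a[i - 1, i]       # -a_{i,i+1}
        x_k = np.linalg.solve(A, b)              # means x^{(k)} of the subcommunity {1,...,k}
        I[k] = -a_tilde0[k] + a[k, k - 1] * x_k[k - 1]
    return I

# Hypothetical three-species example.
print(invasion_rates([3.0, 0.4, 0.3],
                     [[1.0, 1.2, 0.0],
                      [0.8, 0.5, 0.9],
                      [0.0, 0.6, 0.4]]))
# By Lemma A.2, once some I_k <= 0 all subsequent rates are negative; Theorem 1.1
# then says species 1,...,j* persist and the rest go extinct, where j* is the
# largest index with I_1,...,I_{j*} all positive.
```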

Lemma A.3

We have the following claims.

  (i) For any initial condition \({\mathbf {X}}(0)={\mathbf {x}}\in {\mathbb {R}}^n_+\), the family \(\{\widetilde{\Pi }_t(\cdot ), t\ge 1\}\) is tight in \({\mathbb {R}}^n_+\), and its \(\hbox {weak}^*\)-limit set, denoted by \({\mathcal {U}}={\mathcal {U}}(\omega )\), is a family of invariant probability measures of \({\mathbf {X}}\) with probability 1.

  (ii) Suppose that there is a sequence \((T_k)_{k\in {\mathbb {N}}}\) such that \(\lim _{k\rightarrow \infty }T_k=\infty \) and \((\widetilde{\Pi }_{T_k}(\cdot ))_{k\in {\mathbb {N}}}\) converges weakly to an invariant probability measure \(\pi \) of \({\mathbf {X}}\) as \(k\rightarrow \infty \). Then, for this sample path, we have \(\int _{{\mathbb {R}}^n_+}h({\mathbf {x}})\widetilde{\Pi }_{T_k}(\mathrm{d}{\mathbf {x}})\rightarrow \int _{{\mathbb {R}}^n_+}h({\mathbf {x}})\pi (\mathrm{d}{\mathbf {x}})\) for any continuous function \(h:{\mathbb {R}}^n_+\rightarrow {\mathbb {R}}\) satisfying \(|h({\mathbf {x}})|\le K_h(1+\Vert {\mathbf {x}}\Vert ^\delta )\), \({\mathbf {x}}\in {\mathbb {R}}^n_+\), with \(K_h\) a positive constant and \(\delta \in [0,\delta _1)\).

  (iii) For any \({\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+\)

    $$\begin{aligned} {\mathbb {P}}_{\mathbf {x}}\left\{ \lim _{t\rightarrow \infty }\left( \dfrac{\ln X_i(t)}{t}-\lambda _i\left( {\tilde{\Pi }}_t\right) \right) =0,\, i=1,\ldots ,n\right\} =1 \end{aligned}$$
    (28)

    and

    $$\begin{aligned} {\mathbb {P}}_{\mathbf {x}}\left\{ \limsup _{t\rightarrow \infty } \dfrac{\ln X_i(t)}{t}\le 0,\, i=1,\ldots ,n\right\} =1. \end{aligned}$$
    (29)

Proof

Let \({\tilde{c}}_1=1\) and \({\tilde{c}}_i:=\prod _{j=2}^i\dfrac{a_{j-1,j}}{2a_{j,j-1}}={\tilde{c}}_{i-1}\dfrac{a_{i-1,i}}{2a_{i,i-1}}\) for \(i\ge 2\), and put

$$\begin{aligned} {\tilde{\gamma }}=\min _{i=1,\ldots ,n}\left\{ {\tilde{c}}_i\frac{a_{ii}}{2}\right\} . \end{aligned}$$

Then one can easily verify that

$$\begin{aligned} \sum _{i=1}^n{\tilde{c}}_i f_i({\mathbf {x}})\le {\tilde{C}} -{\tilde{\gamma }}\sum _{i=1}^n x_i\,\text { for some positive constant }\, {\tilde{C}}. \end{aligned}$$

Thus, when \(\Vert x\Vert \) is sufficiently large, \(|\sum _{i=1}^n{\tilde{c}}_i f_i({\mathbf {x}})|\ge {\tilde{\gamma }}\sum _{i=1}^n x_i\), which implies

$$\begin{aligned} \liminf _{\Vert x\Vert \rightarrow \infty } \dfrac{\sum _{i=1}^n{\tilde{c}}_i |f_i({\mathbf {x}})|}{\sum _{i=1}^n x_i} \ge \liminf _{\Vert x\Vert \rightarrow \infty } \dfrac{\left| \sum _{i=1}^n{\tilde{c}}_i f_i({\mathbf {x}})\right| }{\sum _{i=1}^n x_i}\ge {\tilde{\gamma }}. \end{aligned}$$

As a result,

$$\begin{aligned} \liminf _{\Vert x\Vert \rightarrow \infty } \dfrac{\Vert {\mathbf {x}}\Vert ^\delta }{\sum _{i=1}^n |f_i({\mathbf {x}})|}=0\quad \text { for any }\,\delta \in (0,1). \end{aligned}$$

In other words, Assumption 1.4 of Hening and Nguyen (2018a) is satisfied by our model. Thus, the first and second claims of this lemma follow from (Hening and Nguyen 2018a, Lemmas 4.6, 4.7). By Itô’s formula and the definition of \({\tilde{\Pi }}_t\), we have

$$\begin{aligned} \left( \dfrac{\ln X_i(t)}{t}-\lambda _i\left( {\tilde{\Pi }}_t\right) \right) = \dfrac{\ln X_i(0)}{t}+\frac{E_i(t)}{ t}. \end{aligned}$$

By the strong law of large numbers for martingales,

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{\ln X_i(0)}{t}+\frac{E_i(t)}{ t}=0 \,\text { a.s.} \end{aligned}$$

which leads to (28).

(29) can be derived by using Eq. (4.22) of Hening and Nguyen (2018a) or by mimicking the proof of (Du and Sam 2006, Theorem 2.4). \(\square \)
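
The drift bound above can be sanity-checked numerically for a concrete parameter set. The sketch below does this for a hypothetical chain of length four, under the assumption that \(f_i({\mathbf {x}})\) stands for the full drift of \(X_i\), that is \(x_i\) times the per-capita growth rate of species \(i\); with the constants \({\tilde{c}}_i\) chosen as above, each predation gain term is dominated by half of the corresponding loss term, which is what drives the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for a chain of length n = 4.
n = 4
a0 = np.array([2.0, 0.4, 0.3, 0.2])      # a_{1,0} (prey growth), a_{i,0} (predator death)
a_diag = np.array([1.0, 0.8, 0.6, 0.5])  # a_{ii}: intraspecific competition
a_up = np.array([0.9, 0.7, 0.5])         # a_{i,i+1}: loss of species i to predator i+1
a_down = np.array([0.6, 0.5, 0.4])       # a_{i+1,i}: gain of predator i+1 from species i

# \tilde c_1 = 1, \tilde c_i = \tilde c_{i-1} a_{i-1,i} / (2 a_{i,i-1})
c = np.ones(n)
for i in range(1, n):
    c[i] = c[i - 1] * a_up[i - 1] / (2.0 * a_down[i - 1])
gamma = np.min(c * a_diag) / 2.0          # \tilde gamma

def weighted_drift(x):
    """Sum_i \tilde c_i * x_i * (per-capita growth rate of species i)."""
    s = 0.0
    for i in range(n):
        rate = (a0[0] if i == 0 else -a0[i]) - a_diag[i] * x[i]
        if i > 0:
            rate += a_down[i - 1] * x[i - 1]
        if i < n - 1:
            rate -= a_up[i] * x[i + 1]
        s += c[i] * x[i] * rate
    return s

# The claimed inequality says weighted_drift(x) + gamma * sum(x) is bounded above
# by some constant \tilde C, uniformly in x.
samples = rng.uniform(0.0, 50.0, size=(20000, n))
print("max over sample:", max(weighted_drift(x) + gamma * x.sum() for x in samples))
```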

Proof of Theorem 1.1 (i)

Since \({\mathcal {I}}_n>0\), it follows from Lemma A.2 that \({\mathcal {I}}_k>0\) for any \(k=1,\ldots ,n\). By Lemma A.1, for any \(\mu \in {{\mathrm{Conv}}}({\mathcal {M}})={{\mathrm{Conv}}}(\cup _{i=0}^{n-1}{\mathcal {M}}_i)\), we can decompose \(\mu =\rho _1\mu _{i_1}+\cdots +\rho _k\mu _{i_k}\) where \(0\le i_1<\cdots <i_k\le n-1\) and \(\mu _{i_j}\in {\mathcal {M}}_{i_j}\), \(\rho _j> 0\) for \(j=1,\ldots ,k\) and \(\sum \rho _j=1\). Since \(i_1< i_j\) for \(j=2,\ldots ,k\), we deduce from (11) that \(\lambda _{i_1+1}(\mu _{i_j})=0\) for \(j=2,\ldots ,k\). On the other hand, (4) and (12) imply

$$\begin{aligned} \lambda _{i_1+1}(\mu _{i_1})= -{\tilde{a}}_{i_1+1,0} + a_{i_1+1,i_1} x^{(i_1)}_{i_1}={\mathcal {I}}_{i_1+1}>0. \end{aligned}$$

As a result,

$$\begin{aligned} \lambda _{i_1+1}(\mu )=\rho _1\lambda _{i_1+1}(\mu _{i_1})>0. \end{aligned}$$

Thus,

$$\begin{aligned} \max _{i=1,\ldots ,n}\lambda _i(\mu )>0,\quad \text { for any }\,\mu \in {{\mathrm{Conv}}}({\mathcal {M}}). \end{aligned}$$
(30)

In other words, Assumption 2.1 is satisfied. By Theorem 3.1 of Hening and Nguyen (2018a), there exist positive constants \(p_1,\ldots , p_n, T, K\) and \(\theta , \kappa \in (0,1)\) such that

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}( T))\le \kappa V^\theta ({\mathbf {x}})+ K \end{aligned}$$
(31)

where

$$\begin{aligned} V({\mathbf {x}}):=\dfrac{1+{\mathbf {c}}^\top {\mathbf {x}}}{\Pi _{i=1}^nx_i^{p_i}}\,\text { for }\, {\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+,\,\text { with } {\mathbf {c}}\,\text { defined in }\, (2.3),\,\text { and }\, \sum _{i=1}^np_i<1. \end{aligned}$$

Equation (31) and the Markov property of \({\mathbf {X}}\) lead to

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(m T))\le \kappa ^m V^\theta ({\mathbf {x}})+K\sum _{j=0}^{m-1}\kappa ^j. \end{aligned}$$

Thus,

$$\begin{aligned} \limsup _{m\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(m T))\le \dfrac{ K}{1-\kappa },\quad {\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+. \end{aligned}$$
(32)

By (Hening and Nguyen 2018a, Lemma 2.1), there exists \({\widehat{K}}>0\) such that

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(t))\le \exp ({\widehat{K}}t)V^\theta ({\mathbf {x}}),\quad {\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+, \end{aligned}$$

which together with the Markov property implies

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(t))\le \exp ({\widehat{K}} T){\mathbb {E}}_{\mathbf {x}}V^\theta ({\mathbf {X}}(m T)) \text { for } t\in [m T,(m+1) T]. \end{aligned}$$
(33)

In view of (32) and (33), we have

$$\begin{aligned} \limsup _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(t))\le \exp ({\widehat{K}} T)\dfrac{ K}{1-\kappa }. \end{aligned}$$

For any fixed \(\varepsilon >0\), define \({\mathcal {K}}:=\left\{ {\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+: V^\theta ({\mathbf {x}})\le \dfrac{1}{\varepsilon }\exp ({\widehat{K}} T)\dfrac{ K}{1-\kappa }\right\} \); then \({\mathcal {K}}\) is a compact subset of \({\mathbb {R}}^{n,\circ }_+\). The definition of \({\mathcal {K}}\) together with the last inequality yields

$$\begin{aligned} \limsup _{t\rightarrow \infty }{\mathbb {P}}_{{\mathbf {x}}} \{{\mathbf {X}}(t)\notin {\mathcal {K}}\}\le \left( \varepsilon \exp (-\,{\widehat{K}} T)\dfrac{1-\kappa }{ K}\right) \limsup _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}} V^\theta ({\mathbf {X}}(t))\le \varepsilon . \end{aligned}$$
(34)

The stochastic persistence in probability is therefore proved.

To prove (5), we need to show that for any initial value \({\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+\), the weak-limit points of \({\tilde{\Pi }}_{t}\) are a subset of \({\mathcal {M}}_n\) with probability 1.

Suppose the claim is false. Then, by part (i) of Lemma A.3, we can find \({\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+\) and \({\tilde{\Omega }}_{\mathbf {x}}\subset \Omega \) with \({\mathbb {P}}_{\mathbf {x}}({\tilde{\Omega }}_{\mathbf {x}})>0\) such that, for every \(\omega \in {\tilde{\Omega }}_{\mathbf {x}}\), there exists a sequence \(t_k=t_k(\omega )\) with \(\lim _{k\rightarrow \infty }t_k=\infty \) along which \({\tilde{\Pi }}_{t_k}(\omega )\) converges weakly to \(\mu (\omega )=\rho _1\mu _1+\rho _2\mu _2\), where \(\mu _1\in {{\mathrm{Conv}}}({\mathcal {M}})\), \(\mu _2\in {\mathcal {M}}_n\) and \(\rho _1>0\). By Lemma A.2, \(\lambda _n(\mu _1)>0\). In view of (11), \(\lambda _n(\mu _2)=0\). Thus, for almost all \(\omega \in {\tilde{\Omega }}_{\mathbf {x}}\), we have from part (ii) of Lemma A.3 that

$$\begin{aligned} \lim _{k\rightarrow \infty }\dfrac{\ln X_n(t_k)}{t_k} =\lim _{k\rightarrow \infty }\lambda _n\left( {\tilde{\Pi }}_{t_k}\right) =\lambda _n(\mu )>0, \end{aligned}$$

which contradicts (29). Thus, with probability 1, the weak-limit points of \({\tilde{\Pi }}_{t}\) as \(t\rightarrow \infty \) must be contained in \({\mathcal {M}}_n\). Then, (5) follows from (12).

When \(\Sigma \) is positive definite, it follows from (Hening and Nguyen 2018a, Theorem 3.1) that the food chain \({\mathbf {X}}\) is strongly stochastically persistent and its transition probability converges to its unique invariant probability measure \(\pi ^{(n)}\) on \({\mathbb {R}}_+^{n,\circ }\) exponentially fast in total variation. \(\square \)
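
The persistence asserted in part (i) can also be explored by simulation. The Euler–Maruyama sketch below uses a three-species chain; the drift and noise conventions (per-capita rates perturbed by independent Brownian motions with intensities \(\sigma _i\)) and all coefficient values are assumptions made for illustration, not the paper's numerical experiments. With these values the invasion rates computed as in the sketch after Lemma A.2 are all positive, so the time averages are expected to settle at positive values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plant-herbivore-predator coefficients (n = 3).
a10, a11, a12 = 3.0, 1.0, 1.2            # prey: growth, self-competition, loss to herbivore
a20, a21, a22, a23 = 0.4, 0.8, 0.5, 0.9  # herbivore: death, gain, self-competition, loss
a30, a32, a33 = 0.3, 0.6, 0.4            # predator: death, gain, self-competition
sigma = np.array([0.3, 0.25, 0.2])       # noise intensities

def percapita(x):
    x1, x2, x3 = x
    return np.array([
        a10 - a11 * x1 - a12 * x2,
        -a20 + a21 * x1 - a22 * x2 - a23 * x3,
        -a30 + a32 * x2 - a33 * x3,
    ])

def simulate(x0, T=200.0, dt=1e-3):
    """Euler-Maruyama for dX_i = X_i f_i(X) dt + sigma_i X_i dB_i."""
    x = np.array(x0, dtype=float)
    steps = int(T / dt)
    running_sum = np.zeros_like(x)
    for _ in range(steps):
        dB = rng.normal(scale=np.sqrt(dt), size=3)
        x = x + x * percapita(x) * dt + sigma * x * dB
        x = np.maximum(x, 1e-12)          # crude guard against numerical negativity
        running_sum += x * dt
    return running_sum / T                # (1/T) * int_0^T X_i(s) ds

print("time-averaged abundances:", simulate([1.0, 0.5, 0.5]))
# Under Theorem 1.1 (i) these averages should stabilise near x^{(3)}; if instead the
# top invasion rate were negative, the third component would average out to 0.
```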

Proof of Theorem 1.1 (ii)

We suppose there exists \(j^*<n\) such that \({\mathcal {I}}_{j^*}>0\) and \({\mathcal {I}}_{j^*+1}<0\). By Lemma A.2 part (ii), there are no invariant probability measures on \({\mathbb {R}}^{(j),\circ }_+\) for \(j=j^*+1,\ldots ,n\). Using Lemma A.1, we see that the set of invariant probability measures on \({\mathbb {R}}^n_+\) of \({\mathbf {X}}\) is \({{\mathrm{Conv}}}(\cup _{i=0}^{j^*}{\mathcal {M}}_i)\).

Note that \(\lambda _{j^*+1}(\mu )=-{\tilde{a}}_{j^*+1,0}<0\) if \(\mu \in {\mathcal {M}}_i\) for \(i<j^*\) and \(\lambda _{j^*+1}(\mu )={\mathcal {I}}_{j^*+1}<0\) if \(\mu \in {\mathcal {M}}_{j^*}\). As a result, \(\lambda _{j^*+1}(\mu )<0\) for any \(\mu \in {{\mathrm{Conv}}}(\cup _{i=0}^{j^*}{\mathcal {M}}_i)\). Similarly, \(\lambda _{j}(\mu )<0\) for any \(j>j^*+1\) and \(\mu \in {{\mathrm{Conv}}}(\cup _{i=0}^{j^*}{\mathcal {M}}_i)\). By (28) we have that

$$\begin{aligned} \lim _{t\rightarrow \infty }X_j(t)=0, j=j^*+1,\ldots ,n \,{\mathbb {P}}_{\mathbf {x}}-\text {a.s.} \end{aligned}$$

Since

$$\begin{aligned} \int _{{\mathbb {R}}^n_+}x_i'\mu (\mathrm{d}{\mathbf {x}}')= {\left\{ \begin{array}{ll} x^{(j^*)}_i &{}\quad \text { if}\; i=1,\ldots ,j^*,\\ 0 &{}\quad \text {if}\, i=j^*+1,\ldots , n. \end{array}\right. } \quad \text {for }\; \mu \in {\mathcal {M}}_{j^*}, \end{aligned}$$
(35)

we have

$$\begin{aligned} \lambda _{i}(\mu )= {\left\{ \begin{array}{ll} {\mathcal {I}}_{j^*+1}&{}\quad \text { if}\; i=j^*+1\\ -\,{\tilde{a}}_{i0}&{}\quad \text {if}\; i>j^*+1. \end{array}\right. } \quad \text {for}\;\mu \in {\mathcal {M}}_{j^*}. \end{aligned}$$
(36)

Using (29) and a contradiction argument similar to that in the proof of part (i), we can show that with probability 1, the weak-limit points of \({\tilde{\Pi }}_{t}\) as \(t\rightarrow \infty \) must be contained in \({\mathcal {M}}_{j^*}\). Thus, for \({\mathbf {x}}\in {\mathbb {R}}^{n,\circ }_+\), we have from (35), (36), and Lemma A.2 that

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{1}{t}\int _0^t X_i(s)\mathrm{d}s= {\left\{ \begin{array}{ll} x^{(j^*)}_i&{}\quad \text {if}\; i=1,\ldots ,j^*,\\ 0&{}\quad \text {if}\; i=j^*+1,\ldots , n \end{array}\right. }\quad {\mathbb {P}}_{\mathbf {x}}-\text {a.s}. \end{aligned}$$

and

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{\ln X_i(t)}{t}= {\left\{ \begin{array}{ll} {\mathcal {I}}_{j^*+1}&{}\quad \text {if}\; i=j^*+1\\ -\,{\tilde{a}}_{i0}&{}\quad \text {if}\; i>j^*+1. \end{array}\right. } \quad {\mathbb {P}}_{\mathbf {x}}-\text {a.s}. \end{aligned}$$

To prove the persistence in probability of \((X_1,\ldots ,X_{j^*})\), we define

$$\begin{aligned} {\mathbb {R}}^{(j^*),\diamond }=\left\{ {\mathbf {x}}=(x_1,\ldots ,x_n)\in {\mathbb {R}}^n_+: x_j>0 \,\text { for }\, j=1,\ldots ,j^*\right\} \quad \text {and}\quad \partial {\mathbb {R}}^{(j^*),\diamond }={\mathbb {R}}^n_+{\setminus }{\mathbb {R}}^{(j^*),\diamond }. \end{aligned}$$

We have proved that \({{\mathrm{Conv}}}(\bigcup _{j=0}^{j^*}{\mathcal {M}}_j)\) is the set of invariant probability measures of \({\mathbf {X}}\) on \({\mathbb {R}}^n_+\). Note that \({{\mathrm{Conv}}}(\bigcup _{j=0}^{j^*-1}{\mathcal {M}}_j)\) is the set of invariant probability measures of \({\mathbf {X}}\) on \(\partial {\mathbb {R}}^{(j^*),\diamond }\). Since \({\mathcal {I}}_{j^*}>0\), applying (30) with n replaced by \(j^*\) we obtain

$$\begin{aligned} \max _{i=1,\ldots ,j^*}\lambda _i(\mu )>0,\quad \text {for any}\;\mu \in {{\mathrm{Conv}}}\left( \cup _{j=0}^{j^*-1}{\mathcal {M}}_j\right) . \end{aligned}$$
(37)

Using this condition, we can imitate the proofs in (Hening and Nguyen 2018a, Sect. 3) to construct a Lyapunov function \(U:{\mathbb {R}}^{(j^*),\diamond }_+\mapsto {\mathbb {R}}_+\) of the form

$$\begin{aligned} U({\mathbf {x}})=\dfrac{1+{\mathbf {c}}^\top {\mathbf {x}}}{\Pi _{i=1}^{j^*}x_i^{{\tilde{p}}_i}},\quad {\tilde{p}}_i>0, i=1,\ldots ,j^* \end{aligned}$$

satisfying

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} U^{{\tilde{\theta }}}({\mathbf {X}}({\tilde{T}}))\le {\tilde{\kappa }} U^{{\tilde{\theta }}}({\mathbf {x}})+{\tilde{K}},\quad \text {for}\;{\mathbf {x}}\in {\mathbb {R}}^{(j^*),\diamond }_+ \end{aligned}$$
(38)

and

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}} U^{{\tilde{\theta }}}(X( t))\le \exp ({\overline{K}} t) U^{{\tilde{\theta }}}(x)\quad \text {for}\;{\mathbf {x}}\in {\mathbb {R}}^{(j^*),\diamond }_+ , \end{aligned}$$
(39)

where \({\tilde{p}}_i>0\) for \(i=1,\ldots ,j^*\), \(\sum _{i=1}^{j^*}{\tilde{p}}_i<1\), \({\tilde{\theta }}, {\tilde{\kappa }}\) are some constants in (0, 1), and \({\tilde{T}}, {\tilde{K}}, {\overline{K}}\) are positive constants. Using (38) and (39), we can obtain the persistence in probability of \((X_1,\ldots ,X_{j^*})\) in the same manner as (34). The proof is complete. \(\square \)

Proof of Theorem 1.1 (iii)

Let \(f:{\mathbb {R}}^n_+\mapsto {\mathbb {R}}\) be a continuous function with \(\sup _{{\mathbf {x}}\in {\mathbb {R}}^n_+}|f({\mathbf {x}})|\le 1\). Fix \({\mathbf {x}}_0\in {\mathbb {R}}^{n,\circ }_+\). We must show that

$$\begin{aligned} \lim _{t\rightarrow \infty }\left| \int _{{\mathbb {R}}^n_+} f({\mathbf {x}}')\pi _{j^*}(\mathrm{d}{\mathbf {x}}')-\int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(t, {\mathbf {x}}_0, \mathrm{d}{\mathbf {x}}')\right| =0. \end{aligned}$$
(40)

In part (ii), we have proved that \((X_1,\ldots , X_{j^*})\) is persistent in probability. Thus, for any \(\varepsilon >0\), there exist \(T_1>0\) and \(H>1\) such that

$$\begin{aligned} {\mathbb {P}}_{{\mathbf {x}}_0}\left\{ H^{-1}\le X_j(t)\le H, j=1,\ldots , j^*\right\} >1-\varepsilon \quad \text {for any}\;t\ge T_1.\end{aligned}$$
(41)

For \(\delta \ge 0\) define

$$\begin{aligned} K_\delta =\left\{ {\mathbf {x}}=(x_1,\ldots ,x_n)\in {\mathbb {R}}^n_+: H^{-1}\le x_j\le H \text { for } j=1,\ldots , j^*,\; x_j\le \delta \text { for } j=j^*+1,\ldots ,n\right\} . \end{aligned}$$

Let \({\overline{f}}=\int _{{\mathbb {R}}^n_+} f({\mathbf {x}}')\pi _{j^*}(\mathrm{d}{\mathbf {x}}')\). In view of (6), there exists \(T_2>0\) such that

$$\begin{aligned} \left| \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right| <\varepsilon \quad \text {for any}\; {\mathbf {x}}\in K_0 \end{aligned}$$
(42)

Since \({\mathbf {X}}\) is a Markov–Feller process on \({\mathbb {R}}^n_+\), we can find a sufficiently small \(\delta =\delta (\varepsilon )>0\) such that

$$\begin{aligned}&\left| \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}_1, \mathrm{d}{\mathbf {x}}')-\int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}_2, \mathrm{d}{\mathbf {x}}')\right| <\varepsilon \nonumber \\&\qquad \text {given that}\; \Vert {\mathbf {x}}_1-{\mathbf {x}}_2\Vert \le \delta . \end{aligned}$$
(43)

Thus, (42) and (43) imply

$$\begin{aligned} \left| \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right| <2\varepsilon \quad \text {for any} \;{\mathbf {x}}\in K_\delta . \end{aligned}$$
(44)

Since \(X_{j^*+1},\ldots , X_n\) converge to 0 almost surely, there exists \(T_3>T_1\) such that

$$\begin{aligned} {\mathbb {P}}_{{\mathbf {x}}_0}\left\{ X_j(t)\le \delta , j=j^*+1,\ldots ,n\right\} >1-\varepsilon \quad \text {for any}\;t\ge T_3.\end{aligned}$$
(45)

We deduce from (41) and (45) that

$$\begin{aligned} P(t, {\mathbf {x}}_0,K_\delta )={\mathbb {P}}_{{\mathbf {x}}_0}\left\{ {\mathbf {X}}(t)\in K_\delta \right\} >1-2\varepsilon \quad \text {for any }\;t\ge T_3. \end{aligned}$$
(46)

For any \(t\ge T_3+T_2\), we have from the Chapman–Kolmogorov equation, (44), (46) and \(|f({\mathbf {x}})|\le 1\) that

$$\begin{aligned} \begin{aligned} \left| \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(t, {\mathbf {x}}_0, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right|&= \left| \int _{{\mathbb {R}}^n_+}\left( \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right) P(t-T_2,{\mathbf {x}}_0,\mathrm{d}{\mathbf {x}})\right| \\&\le \left| \int _{K_\delta }\left( \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right) P(t-T_2,{\mathbf {x}}_0,\mathrm{d}{\mathbf {x}})\right| \\&\quad +\, \left| \int _{{\mathbb {R}}^n_+\setminus K_\delta }\left( \int _{{\mathbb {R}}^n_+}f({\mathbf {x}}')P(T_2, {\mathbf {x}}, \mathrm{d}{\mathbf {x}}')-{\overline{f}}\right) P(t-T_2,{\mathbf {x}}_0,\mathrm{d}{\mathbf {x}})\right| \\&\le 2\varepsilon (1-\varepsilon )+2(2\varepsilon )\le 6\varepsilon , \end{aligned} \end{aligned}$$

which leads to (40). The proof is complete. \(\square \)

Proof of Theorem 1.1 (iv)

If \(\Sigma _{j^*}\) is positive definite, then by Theorem 1.1 part (i) for \({\mathbf {x}}\in {\mathbb {R}}^{(j^*),\circ }_+\) one has that as \(t\rightarrow \infty \) the transition probability \(P(t, x, \cdot )\) converges in total variation to a unique invariant probability measure \(\pi _{j^*}\). Moreover, the convergence is uniform in each compact set of \({\mathbb {R}}^{(j^*),\circ }_+\) (due to the property of the Lyapunov function constructed in the proof). As a result (6) is satisfied and the conclusion follows by part (iii) of Theorem 1.1. \(\square \)


Cite this article

Hening, A., Nguyen, D.H. Persistence in Stochastic Lotka–Volterra Food Chains with Intraspecific Competition. Bull Math Biol 80, 2527–2560 (2018). https://doi.org/10.1007/s11538-018-0468-5

