Abstract
A general theory is developed to study individual based models which are discrete in time. We begin by constructing a Markov chain model that converges to a one-dimensional map in the infinite population limit. Stochastic fluctuations are hence intrinsic to the system and can induce qualitative changes to the dynamics predicted from the deterministic map. From the Chapman–Kolmogorov equation for the discrete-time Markov process, we derive the analogues of the Fokker–Planck equation and the Langevin equation, which are routinely employed for continuous time processes. In particular, a stochastic difference equation is derived which accurately reproduces the results found from the Markov chain model. Stochastic corrections to the deterministic map can be quantified by linearizing the fluctuations around the attractor of the map. The proposed scheme is tested on stochastic models which have the logistic and Ricker maps as their deterministic limits.
References
Biancalani, T., Fanelli, D., Di Patti, F.: Stochastic Turing patterns in the Brusselator model. Phys. Rev. E 81, 046215 (2010)
Black, A.J., McKane, A.J.: Stochastic formulation of ecological models and their applications. Trends Ecol. Evol. 27, 337–345 (2012)
Butler, T., Goldenfeld, N.: Robust ecological pattern formation induced by demographic noise. Phys. Rev. E 80, 030902(R) (2009)
Cencini, M., Vulpiani, A.: Finite size Lyapunov exponent: review on applications. J. Phys. A 46, 254019 (2013)
Challenger, J.D., Fanelli, D., McKane, A.J.: Intrinsic noise and discrete-time processes. Phys. Rev. E 88, 040102(R) (2013)
Crutchfield, J.P., Farmer, J.D., Huberman, B.A.: Fluctuations and simple chaotic dynamics. Phys. Rep. 92, 45–82 (1982)
Ewens, W.J.: Mathematical Population Genetics. I. Theoretical Introduction. Springer-Verlag, New York (2004)
Fisher, R.A.: On the dominance ratio. Proc. R. Soc. Edinb. 42, 321–341 (1922)
Gao, J., Zheng, Z.: Direct dynamical test for deterministic chaos. Europhys. Lett. 25, 485–490 (1994)
Gao, J., Zheng, Z.: Direct dynamical test for deterministic chaos and optimal embedding of a chaotic time series. Phys. Rev. E 49, 3807–3814 (1994)
Gao, J.B., Hwang, S.K., Liu, J.M.: When can noise induce chaos? Phys. Rev. Lett. 82, 1132 (1999)
Gardiner, C.W.: Handbook of Stochastic Methods, 4th edn. Springer-Verlag, Berlin (2009)
Gillespie, D.T.: A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comp. Phys. 22, 403–434 (1976)
Gillespie, D.T.: Markov Processes: An Introduction for Physical Scientists. Academic Press, San Diego (1992)
Godfray, H.C.J., Hassell, M.P.: Discrete and continuous insect populations in tropical environments. J. Anim. Ecol. 58, 153–174 (1989)
Goutsias, J., Jenkinson, G.: Markovian dynamics on complex reaction networks. Phys. Rep. 529, 199–264 (2013)
Hassell, M.P.: Density-dependence in single-species populations. J. Anim. Ecol. 44, 283–295 (1975)
Hassell, M.P., Comins, H.N.: Discrete time models for two-species competition. Theor. Popul. Biol. 9, 202–221 (1976)
May, R.M.: Biological populations with nonoverlapping generations: stable points, stable cycles and chaos. Science 186, 645–647 (1974)
May, R.M.: Biological populations obeying difference equations: stable points, stable cycles, and chaos. J. Theor. Biol. 51, 511–524 (1975)
May, R.M., Oster, G.F.: Bifurcations and dynamic complexity in simple ecological models. Am. Nat. 110, 573–599 (1976)
Mayer-Kress, G., Haken, H.: The influence of noise on the logistic model. J. Stat. Phys. 26, 149–171 (1981)
McKane, A.J., Biancalani, T., Rogers, T.: Stochastic pattern formation and spontaneous polarisation: the linear noise approximation and beyond. Bull. Math. Biol. (2014). doi:10.1007/s11538-013-9827-4
McKane, A.J., Newman, T.J.: Predator-prey cycles from resonant amplification of demographic stochasticity. Phys. Rev. Lett. 94, 218102 (2005)
Moll, J.D., Brown, J.S.: Competition and coexistence with multiple life-history stages. Am. Nat. 171, 839–843 (2008)
Neubert, M.G., Kot, M.: The subcritical collapse of predator populations in discrete-time predator-prey models. Math. Biosci. 110, 45–66 (1992)
Ott, E.: Chaos in Dynamical Systems. Cambridge University Press, Cambridge (1993)
Packard, N.H., Crutchfield, J.P., Farmer, J.D., Shaw, R.S.: Geometry from a time series. Phys. Rev. Lett. 45, 712–716 (1980)
Reichl, L.E.: A Modern Course in Statistical Physics, 2nd edn. Wiley, New York (1998)
Ricker, W.: Stock and recruitment. J. Fish. Res. Board Can. 11, 559–623 (1954)
Risken, H.: The Fokker-Planck Equation, 2nd edn. Springer-Verlag, Berlin (1989)
Strogatz, S.H.: Nonlinear Dynamics and Chaos. Perseus Publishing, Cambridge, Mass (1994)
Takens, F.: Detecting strange attractors in turbulence. In: Rand, D.A., Young, L.S. (eds.) Dynamical Systems and Turbulence. Lecture Notes in Mathematics, vol. 898, pp. 366–381. Springer-Verlag, Berlin (1981)
van Kampen, N.G.: Stochastic Processes in Physics and Chemistry, 3rd edn. Elsevier, Amsterdam (2007)
Zunino, L., Soriano, M.C., Rosso, O.A.: Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach. Phys. Rev. E 86, 046210 (2012)
Acknowledgments
AJM wishes to thank the mathematical biology group at the University of Oxford for hospitality during the period when a significant portion of this work was carried out. We also thank Amos Maritan for useful correspondence. JDC and DF acknowledge support from Programs Prin2009 and Prin2012 financed by the Italian MIUR.
Appendices
Appendix A: Derivation of the Kramers–Moyal expansion
In this appendix we discuss the derivation of the Kramers–Moyal expansion for discrete-time stochastic processes of the kind that we are interested in and which are described in Sect. 2 of the main text.
The form that we ultimately use, given by Eq. (22), differs from that found in textbooks [12, 31]. However, let us begin by briefly reviewing the derivation of the conventional form, given by Eq. (8). To derive this equation, we start from the Chapman–Kolmogorov equation which defines a Markov process. We shall suppress the dependence on the initial conditions \(z_0\) at \(t_0\), since they play no part in the derivation. Letting \(z' = z - \Delta z\), the integrand of the integral appearing in Eq. (7) may be written as
Taylor expanding this, the integrand equals
In the Chapman–Kolmogorov equation we had to integrate over \(z'\). For fixed \(z\), this is equivalent to integrating over \(\Delta z\). We integrate \(\Delta z\) from minus infinity to plus infinity, even though it is “small”. This assumes that the integrand is sharply peaked, so that it contributes to the integral only at very small \(\Delta z\).
Introducing the jump moments defined by
we find that the Chapman–Kolmogorov equation reduces to Eq. (8). Another form for the \(M_{\ell }(z)\) is
from which the form (9) given in the main text can be found.
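For orientation, the conventional expansion produced by these steps has the standard textbook structure sketched below; the normalization here is ours, and the conventions in the paper's Eqs. (8)–(9) may differ:

```latex
P_{t+1}(z) \;=\; \sum_{\ell=0}^{\infty} \frac{(-1)^{\ell}}{\ell!}
\,\frac{\partial^{\ell}}{\partial z^{\ell}}
\Bigl[\, M_{\ell}(z)\, P_{t}(z) \,\Bigr],
\qquad
M_{\ell}(z) \;=\; \int_{-\infty}^{\infty} (\Delta z)^{\ell}\,
Q\bigl(z + \Delta z \mid z\bigr)\, d\Delta z ,
```

where \(Q\) is the one-step transition density. Truncating at \(\ell = 2\) would yield a Fokker–Planck-like equation; as discussed next, it is exactly this truncation that fails here.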
However, as described in the main text, the Kramers–Moyal expansion (8) is not so useful. Essentially this stems from the fact that \(\Delta z\) is not “small”, and so the expansion cannot usefully be truncated after a finite number of terms (here, two). We showed, in Eqs. (16) and (17), that the first term in the expansion could be considerably simplified by taking the Fourier transform. We can also show this for the next term (the diffusion-like term). Its Fourier transform is
Taking the inverse Fourier transform of this expression gives the diffusion-like term in Eq. (19).
In order to be relatively concise, there are several aspects of the above derivation that we have treated in a rather informal manner. In the first three lines of the derivation \(p\) is shorthand for \(p(z)\). On the fourth line the integral takes the form \(\int ^{\infty }_{-\infty } f(p)\,P_{t}(z)\,dz\), where again \(p=p(z)\), and \(f(p)\) in this case is \(p(1-p) e^{ikp}\). The change of variable used to obtain the next line may not be one-to-one, and one may have to split up the range of integration and carry out different transformations in different ranges. For example, in the case of the logistic map, the range is split up so that for \(0 \le z \le 1/2\) the transformation is \(z=z_{-} \equiv [1 - \sqrt{1 - (4p/\lambda )}]/2\), whereas for \(1/2 \le z \le 1\) it is \(z=z_{+} \equiv [1 + \sqrt{1 - (4p/\lambda )}]/2\). The key point is that this has no effect on \(f(p)\), since \(p=\lambda z(1-z)\) takes the same value whether \(z=z_{-}\) or \(z=z_{+}\). Thus the only relevant part of the transformation is \(P_t(z)\, dz = \mathcal {P}_t(p)\, dp\), where \(\mathcal {P}_t(p)\) denotes the density of \(p\) induced by \(P_t(z)\).
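Concretely, for a two-branch transformation such as the logistic-map example above, the induced density collects the contributions of both branches; a sketch (the paper's displayed formula may be arranged differently):

```latex
\mathcal{P}_t(p) \;=\; P_t(z_-)\left|\frac{dz_-}{dp}\right|
 \;+\; P_t(z_+)\left|\frac{dz_+}{dp}\right|,
\qquad
\left|\frac{dz_\pm}{dp}\right| \;=\; \frac{1}{\lambda\sqrt{1 - 4p/\lambda}} .
```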
Having checked that the simplification occurs for \(r=2\) (the diffusion-like term), we now show that it occurs to all orders in \(r\). To do this, let us eliminate the jump moments \(M_{\ell }\) in favor of the jump moments \(J_r\) by substituting Eq. (11) into Eq. (8) to find
Now, for any double series the order of summation can be interchanged, \(\sum _{\ell =0}^{\infty } \sum ^{\ell }_{r=0}\,a_{r \ell } = \sum ^{\infty }_{r=0}\,\sum _{\ell = r}^{\infty } a_{r \ell }\), and so
We now take the Fourier transform of Eq. (49) to find:
Taking the inverse Fourier transform of this equation we find Eq. (22).
Appendix B: The form of the jump moments \(J_r\) for large \(N\)
The truncation of the Kramers–Moyal expansion (22) at the \(r=2\) term relies on showing that the \(J_r\) fall off like \(N^{-2}\) for \(r > 2\). In Sect. 2, we have already shown that \(J_1 = 0\) (and that \(J_0 = 1\)). For orientation, let us determine \(J_2\):
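With binomial sampling, \(n \sim \mathrm{Bin}(N, p)\) and new state \(z' = n/N\), the second jump moment is just the scaled variance of the binomial; a sketch, in the notation of this appendix:

```latex
J_2(p) \;=\; \bigl\langle (z' - p)^2 \bigr\rangle
 \;=\; \frac{1}{N^2}\,\bigl\langle (n - Np)^2 \bigr\rangle
 \;=\; \frac{p(1-p)}{N} .
```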
More generally, in this appendix we want to investigate the form of \(J_r\) for large \(N\) and any value of \(r\). Now,
If we denote the \(s\)th moment of the binomial distribution by \(\mu _{s}(p)\), then Eq. (52) becomes
So to calculate the \(J_{r}(p)\), the moments of the binomial distribution need to be known. We have seen that \(J_1(p)=0\) and \(J_{2}(p)=N^{-1}p(1-p)\). We can find \(J_{3}(p)\) from
which goes like \(N^{-2}\) and so does not contribute to the generalization of the Fokker–Planck equation. We now show that all \(J_{r}(p)\) for \(r \ge 3\) fall off this fast. We use properties of the factorial moments to prove this result. Recall that the factorial moments are defined by
and can be shown to be given by
The proof is straightforward. Define
Then differentiating \(r\) times with respect to \(w\) and setting \(w=1\) gives \(\nu _r(p)\), and so the result.
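For the binomial distribution the generating function just described can be written out explicitly, giving the standard computation sketched here:

```latex
G(w) \;=\; \langle w^{n} \rangle
 \;=\; \sum_{n=0}^{N} \binom{N}{n} (pw)^{n} (1-p)^{N-n}
 \;=\; (1 - p + pw)^{N},
\qquad
\nu_r(p) \;=\; \left.\frac{d^{r}G}{dw^{r}}\right|_{w=1}
 \;=\; \frac{N!}{(N-r)!}\, p^{r} .
```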
Now,
where \(A = 1 + \cdots + (r-1) = r(r-1)/2\) and where \(C_1\) (like the constants \(C_2\) and \(C_3\) appearing below) is a constant which need not concern us. Here we assume that \(r \ge 3\). So we have,
These results imply that
We can now use the result (56), \(\nu _r(p) = (Np)^r - Ap (Np)^{r-1} + \mathcal {O}(N^{r-2})\), to see that
for \(r \ge 3\). Since \(A=r(r-1)/2\) and \(\mu _2(p) = (Np)^2 + Np(1-p)\), the result is also true for \(r=0,1,2\) with the correction term being exactly zero.
Substituting Eq. (57) into Eq. (53) gives
But
and if \(r > 2\)
Therefore Eq. (58) proves that \(J_r(p)=\mathcal {O}(N^{-2})\) for \(r > 2\).
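As a quick numerical sanity check (not part of the paper; the function names are ours), the scaling of the jump moments can be verified directly from the exact moments of the binomial distribution:

```python
from math import comb

def central_moment(N, p, r):
    """Exact r-th central moment of n ~ Binomial(N, p)."""
    mean = N * p
    return sum(comb(N, n) * p**n * (1 - p)**(N - n) * (n - mean)**r
               for n in range(N + 1))

def J(N, p, r):
    """Jump moment J_r(p) = N^{-r} <(n - Np)^r> for z' = n/N."""
    return central_moment(N, p, r) / N**r

p = 0.3
for N in (100, 200, 400):
    # J_2 falls off like 1/N, matching J_2(p) = p(1-p)/N ...
    assert abs(J(N, p, 2) - p * (1 - p) / N) < 1e-12
    # ... while J_3 and J_4 are O(N^{-2}): N^2 * J_r stays bounded as N grows
    print(N, N**2 * J(N, p, 3), N**2 * J(N, p, 4))
```

For \(r=3\) the exact third central moment of the binomial is \(Np(1-p)(1-2p)\), so \(N^2 J_3(p)\) converges to \(p(1-p)(1-2p)\), in agreement with the \(N^{-2}\) falloff proved above.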
Cite this article
Challenger, J.D., Fanelli, D. & McKane, A.J. The Theory of Individual Based Discrete-Time Processes. J Stat Phys 156, 131–155 (2014). https://doi.org/10.1007/s10955-014-0990-2