
Non-explosivity of Stochastically Modeled Reaction Networks that are Complex Balanced


Abstract

We consider stochastically modeled reaction networks and prove that if a constant solution to the Kolmogorov forward equation decays fast enough relative to the transition rates, then the model is non-explosive. In particular, complex-balanced reaction networks are non-explosive.


References

  • Anderson DF, Cotter SL (2016) Product-form stationary distributions for deficiency zero networks with non-mass action kinetics. Bull Math Biol 78:2390–2407

  • Anderson DF, Kurtz TG (2011) Continuous time Markov chain models for chemical reaction networks. In: Koeppl H, Densmore D, Setti G, di Bernardo M (eds) Design and analysis of biomolecular circuits: engineering approaches to systems and synthetic biology, chapter 1. Springer, Berlin

  • Anderson DF, Kurtz TG (2015) Stochastic analysis of biochemical systems. Springer, Berlin

  • Anderson RM, May RM (1992) Infectious diseases of humans: dynamics and control, vol 28. Wiley Online Library

  • Anderson DF, Craciun G, Kurtz TG (2010) Product-form stationary distributions for deficiency zero chemical reaction networks. Bull Math Biol 72(8):1947–1970

  • Cappelletti D, Wiuf C (2014) Product-form Poisson-like distributions and complex balanced reaction systems. SIAM J Appl Math 76(1):411–432

  • Chertock A, Kurganov A, Wang X, Yaping W (2012) On a chemotaxis model with saturated chemotactic flux. Kinet Relat Models 5(1):51–95

  • Childress S, Percus JK (1981) Nonlinear aspects of chemotaxis. Math Biosci 56(3):217–237

  • Cornish-Bowden A (2004) Fundamentals of enzyme kinetics, 3rd edn. Portland Press, London

  • Cummings R, Doty D, Soloveichik D (2014) Probability 1 computation with chemical reaction networks. In: DNA computing and molecular programming. Springer, Berlin

  • Doty D, Lutz JH, Patitz MJ, Schweller RT, Summers SM, Woods D (2012) The tile assembly model is intrinsically universal. In: 2012 IEEE 53rd annual symposium on foundations of computer science (FOCS)

  • Echeverría P (1982) A criterion for invariant measures of Markov processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 61(1):1–16

  • Ethier SN, Kurtz TG (1986) Markov processes: characterization and convergence. Wiley, New York

  • Feinberg M (1987) Chemical reaction network structure and the stability of complex isothermal reactors-I. The deficiency zero and deficiency one theorems. Chem Eng Sci 42(10):2229–2268

  • Herrero MA, Velázquez JJL (1997) A blow-up mechanism for a chemotaxis model. Ann Scuola Norm Sup Pisa Cl Sci (4) 24(4):633–683

  • Horn F, Jackson R (1972) General mass action kinetics. Arch Ration Mech Anal 47(2):81–116

  • Ingram PJ, Stumpf MPH, Stark J (2008) Nonidentifiability of the source of intrinsic noise in gene expression from single-burst data. PLoS Comput Biol 4(10):e1000192

  • Kallenberg O (2006) Foundations of modern probability. Springer, Berlin

  • Karlebach G, Shamir R (2008) Modelling and analysis of gene regulatory networks. Nat Rev Mol Cell Biol 9(10):770

  • Kurtz TG (2011) Equivalence of stochastic equations and martingale problems. In: Crisan D (ed) Stochastic analysis 2010. Springer, Heidelberg, pp 113–130

  • Kurtz TG, Stockbridge RH (1998) Existence of Markov controls and characterization of optimal Markov controls. SIAM J Control Optim 36(2):609–653 (electronic)

  • Lawler GF (1995) Introduction to stochastic processes. Probability series. Chapman & Hall/CRC, Boca Raton

  • May RM (2001) Stability and complexity in model ecosystems, vol 6. Princeton University Press, Princeton

  • Norris JR (1998) Markov chains. Cambridge University Press, Cambridge

  • Paulsson J (2004) Summing up the noise in gene networks. Nature 427:415–418

  • Peschel M, Breitenecker F (1984) Socio-economic consequences of the Volterra approach. In: Trappl R (ed) Cybernetics and systems research, vol 2. North Holland, Amsterdam

  • Peschel M, Mende W (1986) The predator–prey model: do we live in a Volterra world? Springer, Berlin

  • Rothemund PWK, Winfree E (2000) The program-size complexity of self-assembled squares. In: Proceedings of the thirty-second annual ACM symposium on theory of computing. ACM, New York

  • Soloveichik D, Seelig G, Winfree E (2010) DNA as a universal substrate for chemical kinetics. Proc Natl Acad Sci 107(12):5393–5398

  • Sontag ED, Zeilberger D (2010) A symbolic computation approach to a problem involving multivariate Poisson distributions. Adv Appl Math 44:359–377

  • Weidlich W, Haag G (2012) Concepts and models of a quantitative sociology: the dynamics of interacting populations, vol 14. Springer, Berlin

Author information

Corresponding author

Correspondence to Daniele Cappelletti.

Additional information

David F. Anderson: Supported by NSF-DMS-1318832 and Army Research Office Grant W911NF-14-1-0401.

Appendix

Let \(E\subset {{\mathbb {Z}}}^d\) and for \(x,y\in E\), \(x\ne y\), let \(q(x,y)\ge 0\) and assume Condition 1 is satisfied. As already discussed in the Introduction, a continuous-time Markov chain on E with transition intensities \(\{q(x,y)\}\) is intuitively a stochastic process \((X_t)_{t\ge 0}\) in E such that

$$\begin{aligned} P(X_{t+h}=y|{{\mathcal {F}}}_t^X)=q(X_t,y)h+o(h),\quad y\ne X_t,\ t,h\ge 0, \end{aligned}$$

where \(\{{{\mathcal {F}}}_t^X\}\) is the filtration generated by \((X_t)_{t\ge 0}\). There are several ways of making this intuition precise.
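Before making this precise, note that the intuition already suggests a simulation scheme: from state \(x\) the chain waits an exponential time with parameter \(\sum _{y\ne x}q(x,y)\) and then jumps to \(y\) with probability proportional to \(q(x,y)\). The following minimal Python sketch implements this jump-chain construction; the rate function and the toy model (stochastic mass-action \(A\rightleftharpoons B\) with unit rate constants) are assumptions made only for illustration and are not taken from the main text.

```python
import random

def simulate_ctmc(q, x0, t_max, max_jumps=10**6):
    """Simulate a path of the chain with outgoing intensities q(x)[y] up to time t_max."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    for _ in range(max_jumps):
        rates = q(x)                      # outgoing intensities q(x, y) from the current state
        total = sum(rates.values())
        if total == 0.0:                  # absorbing state: no further jumps
            break
        t += random.expovariate(total)    # exponential holding time with parameter sum_y q(x, y)
        if t > t_max:
            break
        r, acc = random.random() * total, 0.0
        for y, rate in rates.items():     # next state chosen with probability q(x, y) / total
            acc += rate
            if r <= acc:
                x = y
                break
        times.append(t)
        states.append(x)
    return times, states

# Toy intensities (an assumption for illustration): stochastic mass-action A <-> B with unit
# rate constants, state = (#A, #B), so q((a,b),(a-1,b+1)) = a and q((a,b),(a+1,b-1)) = b.
def q(state):
    a, b = state
    out = {}
    if a > 0:
        out[(a - 1, b + 1)] = float(a)
    if b > 0:
        out[(a + 1, b - 1)] = float(b)
    return out

times, states = simulate_ctmc(q, x0=(10, 0), t_max=5.0)
print(len(times) - 1, "jumps; final state", states[-1])
```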

Definition 5

  1. (a)

    The one-dimensional distributions of \((X_t)_{t\ge 0}\) satisfy the forward or master equation if for every \(x\in E\)

    $$\begin{aligned} \frac{\text {d}}{\text {d}t}P_t(x)=\sum _{y\in E\setminus \{x\}} P_t(y)q(y,x)-\sum _{y\in E\setminus \{x\}} P_t(x)q(x,y), \end{aligned}$$
    (18)

    where the derivative at \(t=0\) is interpreted as a right derivative.

  2. (b)

    Consider

    $$\begin{aligned} {{\mathcal {D}}}(E)=\{f:E\rightarrow {{\mathbb {R}}}: \#\{x\in E:f(x)\ne 0\}<\infty \} \end{aligned}$$

    and define the linear operator A on \({{\mathcal {D}}}(E)\) by

    $$\begin{aligned} Af(x)=\sum _{y\in E\setminus \{x\}}q(x,y)(f(y)-f(x)). \end{aligned}$$

    Then \((X_t)_{t\ge 0}\) is a solution of the martingale problem for A if

    $$\begin{aligned} f(X_t)-f(X_0)-\int _0^tAf(X_s)\hbox {d}s \end{aligned}$$
    (19)

    is a \(\{{{\mathcal {F}}}_t^X\}\)-martingale for each \(f\in {{\mathcal {D}}}(E)\).

  3. (c)

For \(x,y\in E\), \(x\ne y\), let \((N^{xy}_t)_{t\ge 0}\) be a Poisson process with intensity \(q(x,y)\). The stochastic equation for \((X_t)_{t\ge 0}\) is given by

    $$\begin{aligned} f(X_t)=f(X_0)+\sum _{x,y\in E :x\ne y}\int _0^t(f(y)-f(X_{s-}))\mathbf{1}_{\{X_{s-}=x\}}\hbox {d}N^{xy}_s. \end{aligned}$$
    (20)
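The stochastic equation (20) can be realized quite literally when \(E\) is finite: simulate one Poisson process \(N^{xy}\) per ordered pair with \(q(x,y)>0\), and let the chain jump from \(x\) to \(y\) exactly at those jump times of \(N^{xy}\) at which \(X_{s-}=x\). The Python sketch below does this; the three-state space and the constant intensities are assumptions chosen only for illustration.

```python
import random

random.seed(1)                       # for a reproducible illustration
E = [0, 1, 2]                        # toy finite state space (assumption)
q = {(x, y): 1.0 for x in E for y in E if x != y}   # toy intensities q(x, y) (assumption)
t_max = 10.0

# Jump times of each driving Poisson process N^{xy} with intensity q(x, y) on [0, t_max].
jumps = []
for (x, y), rate in q.items():
    t = random.expovariate(rate)
    while t <= t_max:
        jumps.append((t, x, y))
        t += random.expovariate(rate)
jumps.sort()

# Build the path of (20): a jump of N^{xy} at time s moves the chain only if X_{s-} = x.
X, path = E[0], [(0.0, E[0])]
for s, x, y in jumps:
    if X == x:
        X = y
        path.append((s, X))

print("number of jumps of X:", len(path) - 1, "final state:", X)
```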

Remark 2

Note that by Condition 1, for \(f\in {{\mathcal {D}}}(E)\), \(\sup _{x\in E}|Af(x)|<\infty \).

Furthermore, we introduce an additional state \(\partial \) as done in the main text. Solutions of (18) need to satisfy \(P_t(x)\ge 0\), \(\sum _{x\in E}P_t(x)\le 1\) and \(P_t(\partial )=1-\sum _{x\in E}P_t(x)\). Moreover, for solutions of the martingale problem and the stochastic equation, we extend \(f\in {{\mathcal {D}}}(E)\) to \(E\cup \{\partial \}\) by defining \(f(\partial )=0\), and we set \(X_t=\partial \) if \(X_t\notin E\).

With this understanding of solutions, it follows that any solution of the stochastic equation (20) is a solution of the martingale problem. Indeed, we can write

$$\begin{aligned} f(X_t)-f(X_0)-\int _0^tAf(X_s)\hbox {d}s=\sum _{x,y\in E :x\ne y}\int _0^t(f(y)-f(X_{s-}))\mathbf{1}_{\{X_{s-}=x\}}\hbox {d}\tilde{N}^{xy}_s, \end{aligned}$$

where

$$\begin{aligned} \tilde{N}^{xy}_s = N^{xy}_s-q(x,y)s \end{aligned}$$

is a martingale. Moreover, for any solution of the martingale problem, \(P_t(x)=P(X_t=x)\) is a solution of (18): this follows by considering \(f=\mathbf{1}_{\{x\}}\) for \(x\in E\) and taking expectations (Anderson and Kurtz 2015); the computation is written out below. The converses of these observations also hold.
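In detail, taking \(f=\mathbf{1}_{\{x\}}\) in (19) gives a martingale started at zero, and Remark 2 justifies exchanging the expectation with the time integral, so that

$$\begin{aligned} 0={{\mathbb {E}}}\left[ f(X_t)-f(X_0)-\int _0^tAf(X_s)\,\hbox {d}s\right] =P_t(x)-P_0(x)-\int _0^t\left( \sum _{y\in E\setminus \{x\}}P_s(y)q(y,x)-\sum _{y\in E\setminus \{x\}}P_s(x)q(x,y)\right) \hbox {d}s, \end{aligned}$$

and differentiating in \(t\) yields (18).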

Theorem 4

If \((P_t)_{t\ge 0}\) is a solution of (18), then there exists a solution of the martingale problem such that for \(x\in E\cup \{\partial \}\), \(P(X_t=x)=P_t(x)\). If \((X_t)_{t\ge 0}\) is a solution of the martingale problem, then there exists a solution \((\tilde{X}_t)_{t\ge 0}\) of the stochastic equation such that \((X_t)_{t\ge 0}\) and \((\tilde{X}_t)_{t\ge 0}\) have the same distribution.

Proof

The first statement follows from Corollary 3.2 of Kurtz and Stockbridge (1998). The second statement can be proved by arguments similar to the proof of (17) in Kurtz (2011). \(\square \)

It follows from Theorem 4 that weak uniqueness for any of the three characterizations implies weak uniqueness for the other two, for a fixed initial distribution. Moreover, the law of a solution to the stochastic equation is uniquely determined up to time \(T_{\infty }\), where \(T_{\infty }\) is defined as in Definition 1. This observation provides the proof of the next result.

Theorem 5

If for a fixed initial distribution \(P_0\) we have \(P(T_{\infty }=\infty )=1\), then there exists a unique process \((X_t)_{t\ge 0}\) with initial distribution \(P_0\) satisfying any of the three characterizations of Definition 5.

We conclude with the following result, which can be read from Ethier and Kurtz (1986, Theorem 9.17 of Chapter 4) and is due to Echeverría (1982).

Theorem 6

If \(\pi \) is a constant solution to the forward Kolmogorov equation, namely if for every \(x\in E\)

$$\begin{aligned} \sum _{y\in E\setminus \{x\}} \pi (x)q(x,y)=\sum _{y\in E\setminus \{x\}} \pi (y)q(y,x), \end{aligned}$$

then there exists a solution of the martingale problem which is a stationary process with stationary distribution \(\pi \).

Note that under the hypotheses of Theorem 6, Theorem 4 implies the existence of a solution to the martingale problem with \(P_t=\pi \) for all \(t\ge 0\), and a stationary solution to the stochastic equation (20).
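As a concrete instance of Theorem 6, consider the mass-action network \(\emptyset \rightleftharpoons A\) (birth at constant rate \(\kappa _1\), death at rate \(\kappa _2 n\)), which is complex balanced; its product-form Poisson distribution \(\pi (n)=e^{-c}c^n/n!\) with \(c=\kappa _1/\kappa _2\) is stationary (Anderson et al. 2010). The following Python sketch, with arbitrarily chosen rate constants, checks numerically that this \(\pi \) satisfies the constant-solution condition of Theorem 6 on a truncation of the state space.

```python
import math

k1, k2 = 2.0, 0.5            # rate constants, chosen arbitrarily for the illustration
c = k1 / k2                  # the stationary distribution is Poisson(c)

def pi(n):
    return math.exp(-c) * c ** n / math.factorial(n)

def q(x, y):
    if y == x + 1:
        return k1            # birth: 0 -> A, constant intensity
    if y == x - 1 and x > 0:
        return k2 * x        # death: A -> 0, mass-action intensity
    return 0.0

# Check  sum_y pi(x) q(x, y) = sum_y pi(y) q(y, x)  for states x = 0, ..., 29.
for n in range(30):
    outflow = pi(n) * (q(n, n + 1) + q(n, n - 1))
    inflow = (pi(n - 1) * q(n - 1, n) if n > 0 else 0.0) + pi(n + 1) * q(n + 1, n)
    assert abs(outflow - inflow) < 1e-12, (n, outflow, inflow)
print("constant-solution condition verified on the truncation")
```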


Cite this article

Anderson, D.F., Cappelletti, D., Koyama, M. et al. Non-explosivity of Stochastically Modeled Reaction Networks that are Complex Balanced. Bull Math Biol 80, 2561–2579 (2018). https://doi.org/10.1007/s11538-018-0473-8
