Time-dependent product-form Poisson distributions for reaction networks with higher order complexes

Abstract

It is well known that stochastically modeled reaction networks that are complex balanced admit a stationary distribution that is a product of Poisson distributions. In this paper, we consider the following related question: supposing that the initial distribution of a stochastically modeled reaction network is a product of Poissons, under what conditions will the distribution remain a product of Poissons for all time? By drawing inspiration from Crispin Gardiner’s “Poisson representation” for the solution to the chemical master equation, we provide a necessary and sufficient condition for such a product-form distribution to hold for all time. Interestingly, the condition is a dynamical “complex-balancing” for only those complexes that have multiplicity greater than or equal to two (i.e. the higher order complexes that contribute non-linear terms to the dynamics). We term this new condition the “dynamical and restricted complex balance” condition (DR for short).


Fig. 1
Fig. 2

References

  1. Anderson DF, Cotter SL (2016) Product-form stationary distributions for deficiency zero networks with non-mass action kinetics. Bull Math Biol 78:2390–2407


  2. Anderson DF, Kurtz TG (2011) Continuous time Markov chain models for chemical reaction networks. In: Koeppl H et al (eds) Design and analysis of biomolecular circuits: engineering approaches to systems and synthetic biology. Springer, Berlin, pp 3–42


  3. Anderson DF, Kurtz TG (2015) Stochastic analysis of biochemical systems, volume 1.2 of Stochastics in biological systems, 1st edn. Springer, Cham


  4. Anderson DF, Craciun G, Kurtz TG (2010) Product-form stationary distributions for deficiency zero chemical reaction networks. Bull Math Biol 72(8):1947–1970


  5. Anderson DF, Cappelletti D, Koyama M, Kurtz TG (2018) Non-explosivity of stochastically modeled reaction networks that are complex balanced. Bull Math Biol 80(10):2561–2579


  6. Cao Z, Grima R (2018) Linear mapping approximation of gene regulatory networks with stochastic dynamics. Nat Commun 9(1):3305


  7. Cappelletti D, Wiuf C (2016) Product-form Poisson-like distributions and complex balanced reaction systems. SIAM J Appl Math 76(1):411–432


  8. Ethier SN, Kurtz TG (1986) Markov processes: characterization and convergence. Wiley, New York


  9. Feinberg M (1972) Complex balancing in general kinetic systems. Arch Ration Mech Anal 49:187–194


  10. Gardiner C (1985) Stochastic methods. Springer series in synergetics. Springer, Berlin


  11. Gardiner C, Chaturvedi S (1977) The Poisson representation. I. A new technique for chemical master equations. J Stat Phys 17(6):429–468


  12. Gillespie DT (1992) A rigorous derivation of the chemical master equation. Physica A 188(1–3):404–425


  13. Gillespie DT (2001) Approximate accelerated stochastic simulation of chemically reacting systems. J Chem Phys 115(4):1716–1733


  14. Horn F (1972) Necessary and sufficient conditions for complex balancing in chemical kinetics. Arch Ration Mech Anal 49(3):172–186


  15. Horn F, Jackson R (1972) General mass action kinetics. Arch Ration Mech Anal 47:187–194


  16. Horn RA, Johnson CR (2012) Matrix analysis. Cambridge University Press, Cambridge


  17. Jahnke T, Huisinga W (2007) Solving the chemical master equation for monomolecular reaction systems analytically. J Math Biol 54(1):1–26


  18. Munsky B, Li G, Fox ZR, Shepherd DP, Neuert G (2018) Distribution shapes govern the discovery of predictive models for gene regulation. Proc Natl Acad Sci 115(29):7533–7538


  19. Neuert G, Munsky B, Tan RZ, Teytelman L, Khammash M, van Oudenaarden A (2013) Systematic identification of signal-activated stochastic gene regulation. Science 339(6119):584–587


  20. Peccoud J, Ycart B (1995) Markovian modeling of gene-product synthesis. Theor Popul Biol 48(2):222–234


  21. Ramos AF, Innocentini GCP, Hornos JEM (2011) Exact time-dependent solutions for a self-regulating gene. Phys Rev E 83(6):062902


  22. Schnoerr D, Sanguinetti G, Grima R (2017) Approximation and inference methods for stochastic biochemical kinetics—a tutorial review. J Phys A Math Theor 50(9):093001


  23. Shahrezaei V, Swain PS (2008) Analytical distributions for stochastic gene expression. Proc Natl Acad Sci 105(45):17256–17261


  24. Shivakumar PN, Chew KH (1974) A sufficient condition for nonvanishing of determinants. Proc Am Math Soc 43:63–66


  25. Smadbeck P, Kaznessis YN (2012) Efficient moment matrix generation for arbitrary chemical networks. Chem Eng Sci 84:612–618


  26. Wilkinson DJ (2006) Stochastic modelling for systems biology. Chapman and Hall/CRC, New York


  27. Zechner C, Ruess J, Krenn P, Pelet S, Peter M, Lygeros J, Koeppl H (2012) Moment-based inference predicts bimodality in transient gene expression. Proc Natl Acad Sci 109(21):8340–8345



Acknowledgements

We would like to thank the Isaac Newton Institute for hosting a 6 month program entitled “Stochastic Dynamical Systems in Biology: Numerical Methods and Applications” where this collaboration initiated. Anderson and Yuan are currently supported by Army Research Office Grant W911NF-18-1-0324. Schnoerr is currently supported by Biotechnology and Biological Sciences Research Council Grant BB/P028306/1.

Author information

Correspondence to David F. Anderson.


Appendices

Proof of Lemma 2.1

The proof will proceed in a manner similar to that of Example 4.6: we will show that, under the assumption that the DR condition holds, the non-linear monomials can be written as linear combinations of the linear terms. In order to make this precise, we require a number of definitions.

The ith row of matrix A is said to be strictly diagonally dominant (SDD) if \(|a_{ii}| > \sum _{j\ne i} |a_{ij}|\). We then say that the matrix A is strictly diagonally dominant if all its rows are SDD. Similarly, the ith row of matrix A is said to be weakly diagonally dominant (WDD) if \(|a_{ii}| \ge \sum _{j\ne i} |a_{ij}|\) and we say that the matrix A is weakly diagonally dominant if all its rows are WDD.
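These definitions are straightforward to check numerically. The following sketch (a hypothetical helper, plain Python) classifies each row of a square matrix; note that a row labeled "SDD" is of course also weakly diagonally dominant:

```python
def row_dominance(A):
    """Classify each row of a square matrix as 'SDD', 'WDD', or 'neither'.

    Row i is SDD if |a_ii| > sum_{j != i} |a_ij|; here 'WDD' marks rows
    where equality holds (weakly but not strictly dominant).
    """
    labels = []
    for i, row in enumerate(A):
        off_diagonal = sum(abs(a) for j, a in enumerate(row) if j != i)
        if abs(row[i]) > off_diagonal:
            labels.append("SDD")
        elif abs(row[i]) == off_diagonal:
            labels.append("WDD")
        else:
            labels.append("neither")
    return labels

# Each diagonal entry equals the absolute off-diagonal row sum, so every
# row is weakly, but not strictly, diagonally dominant.
print(row_dominance([[1.0, -1.0], [-1.0, 1.0]]))  # ['WDD', 'WDD']
```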

There is a directed graph associated to any \(m\times m\) square matrix. Its vertices are given by \(\displaystyle \{1,2,\ldots ,m\}\) and its edges are defined as follows: for \(i \ne j\), there exists an edge \(i\rightarrow j\) if and only if \(a_{ij} \ne 0\).

SDD matrices are always invertible (Horn and Johnson 2012). A WDD matrix, however, may be singular; the following lemma (Shivakumar and Chew 1974) gives a sufficient condition for a WDD matrix to be invertible.

Lemma A.1

Suppose that A is WDD and that for each row \(i_1\) that is not SDD, there exists a walk \( i_1 \rightarrow i_2 \rightarrow \cdots \rightarrow i_k\) in the directed graph of A ending at row \(i_k\), which is SDD. Then A is non-singular.
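To see why the walk condition matters, compare two WDD matrices in a small numerical sketch (hypothetical example, using numpy):

```python
import numpy as np

# Both matrices are WDD. The first has no SDD row at all, so the walk
# condition of Lemma A.1 cannot hold, and indeed the matrix is singular.
A_singular = np.array([[1.0, -1.0],
                       [-1.0, 1.0]])

# In the second, row 0 is SDD (2 > 1), and row 1, which is only weakly
# dominant, has the edge 1 -> 0 (since a_{10} != 0): a walk ending at an
# SDD row. Lemma A.1 then guarantees non-singularity.
A_invertible = np.array([[2.0, -1.0],
                         [-1.0, 1.0]])

print(np.linalg.det(A_singular))    # ~0
print(np.linalg.det(A_invertible))  # ~1
```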

We restate Lemma 2.1 for the sake of reference.

Lemma 2.1

Consider a reaction network endowed with deterministic mass action kinetics. Let c(t) be the solution to the system (8). If for \({\tilde{c}} = c(0)\in {\mathbb {R}}^d_{>0}\) we have that c(t) satisfies the DR condition of Definition 2.3, then, for this particular choice of initial condition, the right-hand side of (8) is linear and \(c(t) \in {\mathbb {R}}^d_{>0}\) for all \(t \ge 0\).

Proof

We begin by noting that some deterministic models may blow up in finite time. We therefore define

$$\begin{aligned} T^*&=\inf \{ s> 0 : \text {for any } m>0, \text { there exists } \varepsilon > 0 \text { such that } \Vert c(t) \Vert _1 \ge m \\&\quad \text { whenever } 0 \le s- t \le \varepsilon \}. \end{aligned}$$

Note that if the set is empty, then we take \(T^*\) to be infinity. Our first goal will be to show that \(c(t) \in {\mathbb {R}}^d_{>0}\) for any \(t < T^*\).

We therefore let \(t<T^*\). Then there exists an \(m>0\) such that \(\Vert c(s)\Vert _1\le m\) for all \(s\in [0,t]\). Consider the \(i\)th component of the differential equation for \(s \in [0,t]\). Every reaction that consumes species \(i\) has a source complex \(y_k\) with \(y_{ki} \ge 1\), so each negative term on the right-hand side contains a factor of \(c_i(s)\), while the remaining factors are bounded on \([0,t]\). Hence there is an \(M = M(m) > 0\) such that

$$\begin{aligned} \frac{d}{ds} c_i(s) \ge - \sum _{k :\, y_{ki} \ge 1} \kappa _k\, y_{ki}\, c(s)^{y_k} = - c_i(s) \sum _{k :\, y_{ki} \ge 1} \kappa _k\, y_{ki}\, c(s)^{y_k - e_i} \ge -M c_i(s), \end{aligned}$$

so that \(c_i(s) \ge c_i(0) e^{-Ms} > 0\) for \(s \in [0,t]\), which implies \(c_i(t) >0\) for any \(t< T^*\).

We will now show that the dynamics of c(t) are linear for \(t < T^*\). Denote the linkage classes of \({\mathcal {C}}\) by \({\mathcal {L}}_1, {\mathcal {L}}_2, \dots , {\mathcal {L}}_n\). We have

$$\begin{aligned} \frac{d}{dt}{c}(t)&= \sum _{k=1}^K \kappa _k c(t)^{y_{k}}(y_k' - y_k) = \sum _{z\in {\mathcal {C}}} z \left( \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} - \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} \right) \\&= \sum _{\ell } \sum _{z\in {\mathcal {L}}_\ell } z \left( \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} - \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} \right) . \end{aligned}$$

Our goal is to show that for any linkage class \({\mathcal {L}}_\ell \), the summation

$$\begin{aligned} \sum _{z\in {\mathcal {L}}_\ell } z \left( \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} - \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} \right) \end{aligned}$$
(45)

only contributes linear terms to the dynamics of the process, and hence the overall dynamics of the deterministic model (8) is linear.

We now restrict ourselves to the summation (45). There are three cases that we consider.

Case 1:

Suppose the linkage class \({\mathcal {L}}_\ell \) contains only higher order complexes. Then every term in the summation (45) is zero by the DR condition (9). Thus,

$$\begin{aligned} \sum _{z\in {\mathcal {L}}_\ell } z \left( \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} - \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} \right) = \sum _{z\in {\mathcal {L}}_\ell } 0 = 0. \end{aligned}$$
Case 2:

Suppose the linkage class \({\mathcal {L}}_\ell \) contains only zeroth-order and first-order complexes. Then

$$\begin{aligned} \sum _{z\in {\mathcal {L}}_\ell } z \left( \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} - \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} \right) \end{aligned}$$

contributes only constant and linear terms, since each monomial \(c(t)^{y_k}\) with \(\Vert y_k\Vert _1 \le 1\) is at most first order.

Case 3:

We now suppose the linkage class \({\mathcal {L}}_\ell \) contains both higher-order and lower-order complexes. Suppose \(z_1,z_2,\dots , z_m\) are the higher order complexes and that \(z_{m+1},\dots , z_{|{\mathcal {L}}_\ell |}\) are zeroth-order and first-order complexes. We will follow the idea in Example 4.6 by moving all the nonlinear monomials to one side of the equation and solving for them in terms of the linear terms. To do so, we change notation slightly by indexing the rate constants by the reactions themselves: for \(y \rightarrow y'\in {\mathcal {R}}\), we write \(\kappa _{y \rightarrow y'}\). We stress that this change is isolated to this portion of the proof.

After making this change in notation, we can write the DR condition for complex \(z_i\), \(i=1,\ldots , m\), as

$$\begin{aligned} \sum _{j =1}^{|{\mathcal {L}}_\ell |} \kappa _{z_i \rightarrow z_j} c(t)^{z_i} - \sum _{j=1}^m \kappa _{z_j \rightarrow z_i} c(t)^{z_j} =\sum _{j=m+1}^{|{\mathcal {L}}_\ell |} \kappa _{z_j \rightarrow z_i} c(t)^{z_j}, \end{aligned}$$

where we take \(\kappa _{y \rightarrow y'} = 0\) if \(y\rightarrow y' \notin {\mathcal {R}}\).

We have m such conditions, and so we can rewrite the DR condition (9) as a vector equation \(A {\tilde{x}} = b\), where

  1. (1)

    \({\tilde{x}}\) is an \(m\times 1\) column vector whose \(i^{th}\) component is given by \({\tilde{x}}_i = c(t)^{z_i}\) for \(i=1,\ldots , m\). That is, the vector \({\tilde{x}}\) contains all the higher order monomials in the linkage class \({\mathcal {L}}_\ell \).

  2. (2)

    b is an \(m \times 1\) column vector whose \(i^{th}\) component is given by

    $$\begin{aligned} b_i =\sum _{j=m+1}^{|{\mathcal {L}}_\ell |} \kappa _{z_j \rightarrow z_i} c(t)^{z_j}, \end{aligned}$$

    which are all linear.

  3. (3)

    A is an \(m\times m\) matrix whose entries are defined as

    $$\begin{aligned} A_{ii} = \sum _{j =1}^{|{\mathcal {L}}_\ell |} \kappa _{z_i \rightarrow z_j} \ge 0 \quad \text {and for } j\ne i,\quad A_{ij} = - \kappa _{z_j \rightarrow z_i} \le 0. \end{aligned}$$
    (46)

Hence \(A_{ij} < 0\) if and only if \(z_j\rightarrow z_i \in {\mathcal {R}}\). Notice that if we can show A is invertible, then we can write \({\tilde{x}}= A^{-1}b\). In this situation, all the higher order monomials can be expressed using first order monomials and hence (45) can be written as linear combinations of first-order monomials and the dynamics will be linear.
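This solve step can be illustrated on a small hypothetical linkage class with complexes \(z_1 = 2A\), \(z_2 = A+B\) (higher order) and \(z_3 = A\), and reactions \(2A \rightarrow A+B\), \(A+B \rightarrow 2A\), \(A+B \rightarrow A\), \(A \rightarrow 2A\) with rate constants \(\kappa _1,\ldots ,\kappa _4\) (all names and values below are assumptions for the sketch):

```python
import numpy as np

# Hypothetical linkage class: z1 = 2A, z2 = A+B (higher order), z3 = A.
# Reactions: 2A -> A+B (k1), A+B -> 2A (k2), A+B -> A (k3), A -> 2A (k4).
k1, k2, k3, k4 = 1.0, 2.0, 0.5, 3.0
cA = 1.7  # current value of c_A(t); any positive number works

# A and b as in (46): A_ii sums the rates out of z_i, A_ij = -rate(z_j -> z_i),
# and b collects the inflow from the zeroth/first-order complexes.
A = np.array([[k1, -k2],
              [-k1, k2 + k3]])
b = np.array([k4 * cA, 0.0])

# x holds the higher order monomial values (c_A^2, c_A * c_B). Since b is
# linear in c_A, so is x: doubling c_A doubles both entries.
x = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, np.array([k4 * 2 * cA, 0.0]))
print(np.allclose(x2, 2 * x))  # True
```

Here \(\det A = \kappa _1 \kappa _3 > 0\), so the system is solvable, matching the invertibility argument that follows.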

It will be more convenient to work with the transpose matrix \(A^T\). The row sums of \(A^{T}\) correspond to the column sums of A; hence, for the \(i\)th row,

$$\begin{aligned} \sum _{j=1}^m (A^T)_{ij}&= A_{ii} + \sum _{j\ne i}^m A_{ji} = \sum _{j =1}^{|{\mathcal {L}}_\ell |} \kappa _{z_i \rightarrow z_j} - \sum _{j=1}^m \kappa _{z_i \rightarrow z_j} = \sum _{j = m+1}^{|{\mathcal {L}}_\ell |} \kappa _{z_i \rightarrow z_j} \ge 0, \end{aligned}$$
(47)

which implies that \(A^T\) is a weakly diagonally dominant matrix. Moreover, row i is not SDD if and only if \( \kappa _{z_i \rightarrow z_j} = 0\) for all \(j = m+1,\ldots ,|{\mathcal {L}}_\ell |\), i.e., there is no reaction from \(z_i\) to a lower order complex. To finish the proof that A is invertible, we will prove the following claim.

Claim If c(t) satisfies the DR condition of Definition 2.3, then the path condition in Lemma A.1 holds for \(A^T\).

Proof of the Claim

First, we consider the directed graph associated with the matrix \(A^T\). Notice that by (46), \((A^T)_{ij} \ne 0\) if and only if \(\kappa _{z_i\rightarrow z_j } > 0\), i.e., \(z_i \rightarrow z_j \in {\mathcal {R}}\). Hence the associated directed graph is equivalent to our reaction graph, where row i corresponds to complex \(z_i\) in the reaction graph. Then by (47), row i is not SDD if and only if \( \kappa _{z_i \rightarrow z_j} = 0\) for all \(j = m+1,\ldots ,|{\mathcal {L}}_\ell |\), i.e., there is no reaction from \(z_i\) to a lower order complex.

Suppose, for the sake of contradiction, that the path condition does not hold for \(A^T\). That is, assume there exists a row \(i_1\) that cannot reach an SDD row in the associated directed graph. Consider the following set of complexes

$$\begin{aligned} \tilde{C} = \{ z\in {\mathcal {L}}_\ell : \text { there is a path from } z_{i_1} \text { to } z \} \subset {\mathcal {L}}_\ell . \end{aligned}$$

Then \(z\notin {\tilde{C}}\) for any z with \(\Vert z\Vert _1\le 1\): otherwise, some reaction along the path from \(z_{i_1}\) to z would take a higher order complex to a lower order complex, making the corresponding row SDD and contradicting the assumption that \(z_{i_1}\) cannot reach an SDD row. Consequently, \(\Vert z \Vert _1\ge 2\) for every \(z\in {\tilde{C}}\). Therefore, summing the DR condition over all complexes \(z\in \tilde{C}\), we have

$$\begin{aligned} \sum _{z\in \tilde{C} } \sum _{k: y_k^\prime =z} \kappa _k c(t)^{y_{k}} = \sum _{z\in \tilde{C} } \sum _{k: y_k=z} \kappa _k c(t)^{y_{k}} , \end{aligned}$$

which immediately leads to the equation,

$$\begin{aligned} \sum _{k: y_k^\prime \in \tilde{C} } \kappa _k c(t)^{y_{k}} = \sum _{k: y_k \in \tilde{C} } \kappa _k c(t)^{y_{k}}. \end{aligned}$$
(48)

If \(y_k\in {\tilde{C}}\), then \(y_k^\prime \in {\tilde{C}}\) since there exists a path connecting \(z_{i_1}\) and \(y_k^\prime \) via \(y_k\). That is,

$$\begin{aligned} \{k: y_k^\prime \in \tilde{C} \} \supseteq \{k: y_k \in \tilde{C} \}. \end{aligned}$$

Since \(c(t) > 0\) for \(t< T^*\), every summand in (48) is strictly positive, and combined with the containment above the index sets must be equal: \(\{ k: y_k^\prime \in \tilde{C} \} = \{ k: y_k \in \tilde{C} \}\). However, this would imply that \(\tilde{C}\) is a linkage class by itself, since for any complexes \(z \in {\tilde{C}}\) and \(z^\prime \notin {\tilde{C}}\) we would have \(z\rightarrow z^\prime \notin {\mathcal {R}}\) and \(z^\prime \rightarrow z\notin {\mathcal {R}}\). Since \(\tilde{C}\) is strictly contained in \({\mathcal {L}}_\ell \) (first-order complexes are not in \({\tilde{C}}\)), we have a contradiction. Hence the path condition in Lemma A.1 holds for \(A^T\). \(\square \)

Given the claim, and by Lemma A.1, we get A is invertible, and hence (45) can be written as linear combinations of first order monomials.

In conclusion, for each linkage class \({\mathcal {L}}_\ell \), the summation (45) contributes at most linear monomials to the dynamics. Hence the right-hand side of (8) is linear.

This analysis held under the assumption that \(t < T^*\). However, because we can now conclude that the dynamics are linear for \(t < T^*\), we must have that \(T^* = \infty \), and the proof is now complete. \(\square \)

Proofs of Lemmas 3.2 and 3.3

We restate Lemma 3.2 for the sake of reference.

Lemma 3.2

Suppose \(P_\mu (x,t)\) is given by (22) with \(c(t) \in {\mathbb {R}}_{>0}^d\) for all \(t \ge 0\). Then \(P_\mu (x,t)\) is the solution to the Kolmogorov forward equation (7) if and only if c(t) satisfies the deterministic equation (8) and

$$\begin{aligned} \sum _{k} \kappa _k c(t)^{y_k} \left[ g_{x,c(t)}(y_k^\prime ) -g_{x,c(t)}(y_k) \right] = 0 \end{aligned}$$
(20)

where for each \(x \in {\mathbb {Z}}^d_{\ge 0}\) and \(c \in {\mathbb {R}}^d_{ > 0}\),

$$\begin{aligned} g_{x,c}(y_k) = \sum _{j=1}^d \left( \frac{x_j}{c_j}-1\right) y_{kj} - \frac{x !}{(x -y_{k})!} c^{-y_k} +1. \end{aligned}$$
(21)

Moreover, if \(\Vert y_k\Vert _1 \le 1\), then \(g_{x,c} (y_k) = 0\).

Proof

We will first assume that \(P_\mu (x,t)\) is as in (22) and that it is the solution to the Kolmogorov forward equation (7). Our goal is to show that (20) holds.

By Proposition 3.1, c(t) satisfies (8). In particular, it is differentiable. Because \(P_\mu (x,t)\) is as in (22), the left-hand side of (7) satisfies

$$\begin{aligned} \frac{d}{dt} P_\mu (x,t)&= \frac{d}{dt} \left( \prod _{i=1}^d e^{-c_i(t)}\frac{c_i(t)^{x_i}}{x_i ! } \right) \nonumber \\&= \sum _{j=1}^d \prod _{i\ne j} e^{-c_i(t)}\frac{c_i(t)^{x_i}}{x_i ! } \left( -c_j^\prime (t)e^{-c_j(t)}\frac{c_j(t)^{x_j}}{x_j ! } + x_je^{-c_j(t)}\frac{c_j(t)^{x_j-1}}{x_j ! } c_j^\prime (t) \right) \nonumber \\&= \prod _{i=1}^d e^{-c_i(t)}\frac{c_i(t)^{x_i}}{x_i ! } \sum _{j=1}^d \left( -c_j^\prime (t) + x_j \frac{c_j^\prime (t) }{c_j(t) } \right) \nonumber \\&= e^{-c(t)}\frac{c(t)^{x}}{x ! } \sum _{j=1}^d c_j^\prime (t) \left( \frac{x_j}{c_j(t) }-1\right) \nonumber \\&= e^{-c(t)}\frac{c(t)^{x}}{x ! } \sum _{j=1}^d \sum _{k=1}^K \kappa _k c(t)^{y_k} (y_{kj}^{\prime } - y_{kj}) \left( \frac{x_j}{c_j(t) }-1\right) \nonumber \\&= \left( e^{-c(t)}\frac{c(t)^{x}}{x ! } \right) \sum _{k=1}^K \kappa _k c(t)^{y_k} \sum _{j=1}^d \left( \frac{x_j}{c_j(t) }-1\right) (y_{kj}^{\prime } - y_{kj}). \end{aligned}$$
(49)

The right hand side of (7) is

$$\begin{aligned}&\sum _{k=1}^K \lambda _{k}(x-\zeta _k) P_\mu (x-\zeta _k ,t) - \sum _{k=1}^K \lambda _k(x) P_\mu (x,t)\nonumber \\&\quad = \sum _{k=1}^K \kappa _k \left( \frac{(x-\zeta _k)!}{(x-\zeta _k-y_k)!} e^{-c(t)}\frac{c(t)^{x-\zeta _{k}}}{(x-\zeta _{k} )! }\right) - \sum _{k=1}^K \kappa _k \left( \frac{x !}{(x -y_{k})!} e^{-c(t)}\frac{c(t)^{x}}{x! } \right) \nonumber \\&\quad = \left( e^{-c(t)}\frac{c(t)^{x}}{x ! } \right) \sum _{k=1}^K \kappa _k \left( \frac{ x!}{(x -\zeta _{k} - y_{k})!} c(t)^{-\zeta _{k}} - \frac{x !}{(x -y_{k})!} \right) \nonumber \\&\quad = \left( e^{-c(t)}\frac{c(t)^{x}}{x ! } \right) \sum _{k=1}^K \kappa _k c(t)^{y_k} \left( \frac{ x!}{(x -y_{k}^\prime )!} c(t)^{-y_k^\prime } - \frac{x !}{(x -y_{k})!} c(t)^{-y_k} \right) . \end{aligned}$$
(50)

Since \(P_\mu (x,t)\) is the solution to (7), we must have that (49) and (50) are equal. That is,

$$\begin{aligned}&\sum _{k=1}^K \kappa _k c(t)^{y_k} \left( \sum _{j=1}^d \bigg [\left( \frac{x_j}{c_j(t) }-1\right) (y_{kj}^{\prime } - y_{kj}) \right. \nonumber \\&\left. \quad - \left( \frac{ x!}{(x -y_{k}^\prime )!} c(t)^{-y_k^\prime } - \frac{x !}{(x -y_{k})!} c(t)^{-y_k} \right) \bigg ] \right) = 0. \end{aligned}$$
(51)

Define the following function

$$\begin{aligned} f_{x,c}(y_k) = \sum _{j=1}^d \left( \frac{x_j}{c_j }-1\right) y_{kj} - \frac{x !}{(x -y_{k})!} c^{-y_k} \end{aligned}$$

and let \(\displaystyle g_{x,c}(y_k) = f_{x,c} (y_k) +1\). Then we can rewrite Eq. (51) above as

$$\begin{aligned} \sum _{k=1}^K \kappa _k c(t)^{y_k} \left[ g_{x,c(t)}(y_k^\prime ) - g_{x,c(t)}(y_k) \right] = 0, \end{aligned}$$

which shows (20) holds.

To show the other direction, suppose c(t) is the solution to the deterministic equation (8) and that (20) is satisfied. We must show that \(P_\mu (x,t)\) as in (22) is the solution to the Kolmogorov forward equation (7). However, this follows by reversing the steps above.

All that remains is to demonstrate that if \(\Vert y_k\Vert _1 \le 1\), then \(g_{x,c}(y_k) = 0\). There are only two cases that need consideration.

Case 1 If \(y_k = \mathbf {0} \), then

$$\begin{aligned} g_{x,c}(y_k) = \sum _{j=1}^d \left( \frac{x_j}{c_j }-1\right) y_{kj} - \frac{x !}{(x -y_{k})!} c^{-y_k} +1 = 0-1 +1= 0. \end{aligned}$$

Case 2 If \(y_k = e_\ell \), the vector whose \(\ell ^{th}\) entry is 1 and all other entries are zero, then

$$\begin{aligned} g_{x,c}(y_k) = \sum _{j=1}^d \left( \frac{x_j}{c_j }-1\right) y_{kj} - \frac{x !}{(x -y_{k})!} c^{-y_k} +1 = \frac{x_\ell }{c_\ell } - 1 - \frac{x_\ell }{c_\ell } +1= 0. \end{aligned}$$

Hence, the proof is complete. \(\square \)
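The vanishing of \(g_{x,c}\) on zeroth- and first-order complexes in the two cases above is also easy to confirm numerically. The following is a minimal sketch, with hypothetical helper names and the falling factorial \(x!/(x-y)!\) computed componentwise:

```python
from math import prod

def g(x, c, y):
    """g_{x,c}(y) from (21): sum_j (x_j/c_j - 1) y_j - x!/(x-y)! c^{-y} + 1.

    x: nonnegative integer state, c: positive reals, y: complex (int vector).
    """
    linear_part = sum((xj / cj - 1) * yj for xj, cj, yj in zip(x, c, y))
    # x!/(x-y)! is the componentwise falling factorial
    # x_j (x_j - 1) ... (x_j - y_j + 1); it is 0 whenever some x_j < y_j.
    falling = prod(prod(xj - i for i in range(yj)) for xj, yj in zip(x, y))
    c_power = prod(cj ** yj for cj, yj in zip(c, y))
    return linear_part - falling / c_power + 1

x, c = (3, 5), (1.2, 0.7)
print(g(x, c, (0, 0)))  # zeroth-order complex: 0
print(g(x, c, (1, 0)))  # first-order complex: 0
print(g(x, c, (2, 0)))  # higher order complex: generally nonzero
```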

We restate Lemma 3.3 for the sake of reference.

Lemma 3.3

Let \(\{z_1,z_2, \ldots , z_m\} \subset {\mathcal {C}}\) be the collection of complexes that are at least binary (i.e. \(\Vert z_i\Vert _1 \ge 2\)). Fix a value \(c \in {\mathbb {R}}^d_{>0}\). For each \(i \in \{1,\dots , m\}\) let \(f_i: {\mathbb {Z}}_{\ge 0}^d \rightarrow {\mathbb {R}}\) be defined as

$$\begin{aligned} f_i(x) = g_{x,c}(z_i), \end{aligned}$$

where the functions \(g_{x,c}\) are defined in the proof of Lemma 3.2. Then \(\{ f_i\}_{i=1}^m\) are linearly independent.

The main idea of the proof rests on noticing that this collection of functions consists of polynomials with distinct leading monomials. An example will be helpful to illustrate. Consider the binary case with two species and \({\mathcal {C}} = \{2e_1, 2e_2, e_1+e_2\} \). Then the relevant functions are

$$\begin{aligned} f_1(x)&= 2 \left( \frac{x_1}{c_1 }-1\right) - \frac{x_1(x_1-1)}{c_1^2 } +1 = - \frac{x_1^2}{c_1^2} + \left( 2+\frac{1}{c_1} \right) \frac{x_1}{c_1} -1\\ f_2(x)&= 2 \left( \frac{x_2}{c_2 }-1\right) - \frac{x_2(x_2-1)}{c_2^2 } +1 = - \frac{x_2^2}{c_2^2} + \left( 2+\frac{1}{c_2} \right) \frac{x_2}{c_2} -1\\ f_3(x)&= \left( \frac{x_1}{c_1 }-1\right) +\left( \frac{x_2}{c_2}-1\right) - \frac{x_1 x_2}{c_1c_2 } +1 = -\frac{x_1 x_2}{c_1c_2 } + \frac{x_1}{c_1} + \frac{x_2}{c_2} -1. \end{aligned}$$

To see why they are linearly independent, let \(\alpha _i\) be such that \(\alpha _1 f_1(x) + \alpha _2 f_2(x) + \alpha _3 f_3(x) = 0\) for all x. The leading monomials \(x_1^2\), \(x_2^2\), and \(x_1x_2\) are distinct, so their coefficients \(-\alpha _1/c_1^2\), \(-\alpha _2/c_2^2\), and \(-\alpha _3/(c_1c_2)\) must each vanish, forcing \(\alpha _1 = \alpha _2 = \alpha _3 = 0\).
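One can confirm this independence numerically by evaluating \(f_1, f_2, f_3\) on a grid of lattice points and checking that the resulting matrix has full column rank. A hypothetical sketch with numpy, taking \(c_1 = c_2 = 1\) for readability:

```python
import numpy as np

c1 = c2 = 1.0  # any positive values work

def f1(x1, x2):
    return -(x1 / c1) ** 2 + (2 + 1 / c1) * x1 / c1 - 1

def f2(x1, x2):
    return -(x2 / c2) ** 2 + (2 + 1 / c2) * x2 / c2 - 1

def f3(x1, x2):
    return -(x1 * x2) / (c1 * c2) + x1 / c1 + x2 / c2 - 1

# Evaluate on a small grid; a nontrivial alpha with sum_i alpha_i f_i = 0
# would make this 16x3 matrix rank-deficient.
points = [(x1, x2) for x1 in range(4) for x2 in range(4)]
M = np.array([[f1(*p), f2(*p), f3(*p)] for p in points])
print(np.linalg.matrix_rank(M))  # 3, so f1, f2, f3 are linearly independent
```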

Proof of Lemma 3.3

Suppose there exist \(\alpha _i\), \(i=1,2,\ldots , m\), such that

$$\begin{aligned} \alpha _1 f_1(x) +\cdots + \alpha _m f_m(x) = 0, \end{aligned}$$

for all \(x \in {\mathbb {Z}}^d_{\ge 0}\).

Let \(s=\max _{i=1,2,\ldots ,m} \Vert z_i\Vert _1 \) and denote \(\mathcal {\tilde{C}} = \{ z_i: \Vert z_i \Vert _1 = s\}\). Notice that for each \(z_i \in \tilde{{\mathcal {C}}}\), \(f_i(x)\) is a polynomial in x whose leading term is \(-\frac{1}{c^{z_i}}x^{z_i}\). For \(i\ne j\) we have \(z_i \ne z_j\), and hence \(x^{z_i} \ne x^{z_j}\). Since these top-degree monomials are distinct, we may conclude that \(\alpha _i = 0\) for every \(z_i \in {\tilde{{\mathcal {C}}}}\).

The proof is then concluded by noting that the above procedure can be applied iteratively as the 1-norm of the remaining complexes is decreased. \(\square \)


Cite this article

Anderson, D.F., Schnoerr, D. & Yuan, C. Time-dependent product-form Poisson distributions for reaction networks with higher order complexes. J. Math. Biol. 80, 1919–1951 (2020). https://doi.org/10.1007/s00285-020-01485-y


Keywords

  • Reaction networks
  • Stochastic processes
  • Poisson distribution
  • Complex balancing
  • Deficiency

Mathematics Subject Classification

  • 60J28
  • 60J27
  • 92C42
  • 37N25