
Algebraic expressions of conditional expectations in gene regulatory networks

Abstract

Gene Regulatory Networks are powerful models for describing the mechanisms and dynamics inside a cell. These networks are generally large in dimension and seldom yield analytical formulations. It was shown that studying the conditional expectations between dimensions (interactions or species) of a network could lead to drastic dimension reduction. These conditional expectations were classically obtained by solving equations of motion derived from the Chemical Master Equation. In this paper we deviate from this convention and take an algebraic approach instead. That is, we explore the consequences of conditional expectations being described by a polynomial function. There are two main results in this work. Firstly, if the conditional expectation can be described by a polynomial function, then the coefficients of this polynomial function can be reconstructed using the classical moments. Secondly, there are dimensions in Gene Regulatory Networks which inherently have conditional expectations with algebraic forms. We demonstrate through examples that the theory derived in this work can be used to develop new and effective numerical schemes for forward simulation and parameter inference. The algebraic line of investigation of conditional expectations has considerable scope to be applied to many different aspects of Gene Regulatory Networks; this paper serves as a preliminary commentary in this direction.


Notes

  1. The terms species, dimensions, and vertices originate from different fields of study but refer to the same concept; we therefore use the terms interchangeably to match the context.

  2. In our context the results can be reformulated in terms of raw moments, factorial moments, or central moments. For this reason we write classical moments to encompass all three.

  3. The PyME implementation of the OFSP method and the MCM were used in this work (Sunkara 2017; Sunkara and Hegland 2010). It must be noted that the MCM module in PyME is not optimised for speed. All code was run on an Intel i7 2.5 GHz with 16 GB of RAM.

  4. A Gaussian reconstruction in this context involves computing the Gaussian distribution over the discrete state space and then normalising to make the total mass one.
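For concreteness, a minimal Python sketch of this reconstruction; the grid size, mean, and covariance below are illustrative placeholders, not values from the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_reconstruction(mean, cov, shape):
    """Evaluate a Gaussian density on the integer grid
    {0,...,shape[i]-1}^d and renormalise so the total mass is one."""
    grid = np.indices(shape).reshape(len(shape), -1).T  # all lattice points
    w = multivariate_normal(mean=mean, cov=cov).pdf(grid)
    w /= w.sum()
    return w.reshape(shape)

# e.g. a 2-D reconstruction on a 50 x 50 state space (values illustrative)
w = gaussian_reconstruction(mean=[20.0, 10.0],
                            cov=[[9.0, 3.0], [3.0, 6.0]], shape=(50, 50))
```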

References

  1. Anderson D (2007) A modified next reaction method for simulating chemical systems with time dependent propensities and delays. J Chem Phys 127(21):214107. https://doi.org/10.1063/1.2799998

  2. Andreychenko A, Mikeev L, Wolf V (2015) Reconstruction of multimodal distributions for hybrid moment-based chemical kinetics. J Coupled Syst Multiscale Dyn 3(2):156–163. https://doi.org/10.1166/jcsmd.2015.1073

  3. Andreychenko A, Bortolussi L, Grima R, Thomas P, Wolf V (2017) Distribution approximations for the chemical master equation: comparison of the method of moments and the system size expansion. In: Graw F, Matthäus F, Pahle J (eds) Modeling cellular systems. Contributions in mathematical and computational sciences. Springer, Cham, pp 39–66. https://doi.org/10.1007/978-3-319-45833-5_2

  4. Ball K, Kurtz TG, Popovic L, Rempala G (2006) Asymptotic analysis of multiscale approximations to reaction networks. Ann Appl Probab 16(4):1925–1961

  5. Banasiak J (2014) Positive semigroups with applications. PhD thesis, University of KwaZulu-Natal, Durban, South Africa

  6. Barkai N, Leibler S (2000) Biological rhythms: circadian clocks limited by noise. Nature 403(6767):267–268. https://doi.org/10.1038/35002258

  7. Blake WJ, Kærn M, Cantor CR, Collins JJ (2003) Noise in eukaryotic gene expression. Nature 422(6932):633–637. https://doi.org/10.1038/nature01546

  8. Bokes P, King JR, Wood ATA, Loose M (2012) Exact and approximate distributions of protein and mRNA levels in the low-copy regime of gene expression. J Math Biol 64(5):829–854. https://doi.org/10.1007/s00285-011-0433-5

  9. Burrage K, MacNamara S, Tian TH (2006) Accelerated leap methods for simulating discrete stochastic chemical kinetics. Posit Syst Proc 341:359–366. https://doi.org/10.1007/3-540-34774-7_46

  10. Cao Z, Grima R (2018) Linear mapping approximation of gene regulatory networks with stochastic dynamics. Nat Commun. https://doi.org/10.1038/s41467-018-05822-0

  11. Cardelli L, Kwiatkowska M, Laurenti L (2016) Stochastic analysis of chemical reaction networks using linear noise approximation. BioSystems 149:26–33. https://doi.org/10.1016/j.biosystems.2016.09.004

  12. Choudhary K, Oehler S, Narang A (2014) Protein distributions from a stochastic model of the lac operon of E. coli with DNA looping: analytical solution and comparison with experiments. PLoS ONE. https://doi.org/10.1371/journal.pone.0102580

  13. Engblom S (2006) Computing the moments of high dimensional solutions of the master equation. Appl Math Comput 180(2):498–515. https://doi.org/10.1016/j.amc.2005.12.032

  14. Gardner TS, Cantor CR, Collins JJ (2000) Construction of a genetic toggle switch in Escherichia coli. Nature 403(6767):339–342. https://doi.org/10.1038/35002131

  15. Gillespie DT (1977) Exact stochastic simulation of coupled chemical reactions. J Phys Chem 81(25):2340–2361. https://doi.org/10.1021/j100540a008

  16. Goutsias J (2005) Quasiequilibrium approximation of fast reaction kinetics in stochastic biochemical systems. J Chem Phys 122(18):184102. https://doi.org/10.1063/1.1889434

  17. Grima R, Schmidt DR, Newman TJ (2012) Steady-state fluctuations of a genetic feedback loop: an exact solution. J Chem Phys. https://doi.org/10.1063/1.4736721

  18. Haseltine EL, Rawlings JB (2002) Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics. J Chem Phys 117(15):6959–6969. https://doi.org/10.1063/1.1505860

  19. Hasenauer J, Wolf V, Kazeroonian A, Theis FJ (2013) Method of conditional moments (MCM) for the Chemical Master Equation. J Math Biol. https://doi.org/10.1007/s00285-013-0711-5

  20. Hellander A, Lötstedt P (2007) Hybrid method for the chemical master equation. J Comput Phys 227(1):100–122. https://doi.org/10.1016/j.jcp.2007.07.020

  21. Henzinger TA, Mikeev L, Mateescu M, Wolf V (2010) Hybrid numerical solution of the chemical master equation. In: Proceedings of the 8th international conference on computational methods in systems biology. ACM, Trento, pp 55–65. https://doi.org/10.1145/1839764.1839772

  22. Higham DJ (2008) Modeling and simulating chemical reactions. SIAM Rev 50(2):347–368. https://doi.org/10.1137/060666457

  23. Jahnke T (2011) On reduced models for the chemical master equation. Multiscale Model Simul 9(4):1646–1676. https://doi.org/10.1137/110821500

  24. Jahnke T, Huisinga W (2007) Solving the chemical master equation for monomolecular reaction systems analytically. J Math Biol 54:1–26

  25. Jahnke T, Kreim M (2012) Error bound for piecewise deterministic processes modeling stochastic reaction systems. SIAM Multiscale Model Simul 10(4):1119–1147. https://doi.org/10.1137/120871894

  26. Jahnke T, Sunkara V (2014) Error bound for hybrid models of two-scaled stochastic reaction systems. In: Dahlke S, Dahmen W, Griebel M, Hackbusch W, Ritter K, Schneider R, Schwab C, Yserentant H (eds) Extraction of quantifiable information from complex systems: lecture notes in computational science and engineering, vol 102. Springer, Berlin, pp 303–319. https://doi.org/10.1007/978-3-319-08159-5_15

  27. Karlebach G, Shamir R (2008) Modelling and analysis of gene regulatory networks. Nat Rev Mol Cell Biol 9(10):770–780. https://doi.org/10.1038/nrm2503

  28. Khammash M, Munsky B (2006) The finite state projection algorithm for the solution of the chemical master equation. J Chem Phys 124(044104):1–12. https://doi.org/10.1063/1.2145882

  29. Kurtz TG (1972) Relationship between stochastic and deterministic models for chemical reactions. J Chem Phys 57(7):2976–2978. https://doi.org/10.1063/1.1678692

  30. MacArthur BD, Ma’ayan A, Lemischka IR (2009) Systems biology of stem cell fate and cellular reprogramming. Nat Rev Mol Cell Biol 10(10):672–681. https://doi.org/10.1038/nrm2766

  31. MacNamara S, Bersani AM, Burrage K, Sidje RB (2008) Stochastic chemical kinetics and the total quasi-steady-state assumption: application to the stochastic simulation algorithm and chemical master equation. J Chem Phys 129(095105):1–13. https://doi.org/10.1063/1.2971036

  32. Menz S, Latorre J, Schütte C, Huisinga W (2012) Hybrid stochastic-deterministic solution of the chemical master equation. Multiscale Model Simul 10(4):1232–1262. https://doi.org/10.1137/110825716

  33. Nagel W, Steyer R (2017) Probability and conditional expectation. Wiley series in probability and statistics. Wiley, Oxford. https://doi.org/10.1002/9781119243496

  34. Pájaro M, Alonso AA, Otero-Muras I, Vázquez C (2017) Stochastic modeling and numerical simulation of gene regulatory networks with protein bursting. J Theor Biol 421:51–70. https://doi.org/10.1016/j.jtbi.2017.03.017

  35. Rao CV, Arkin AP (2003) Stochastic chemical kinetics and the quasi-steady-state assumption: application to the Gillespie algorithm. J Chem Phys 118(11):4999–5010. https://doi.org/10.1063/1.1545446

  36. Ruess J (2015) Minimal moment equations for stochastic models of biochemical reaction networks with partially finite state space. J Chem Phys 143(24):244103. https://doi.org/10.1063/1.4937937

  37. Seber GAF, Lee AJ (2003) Linear regression analysis. Wiley, Hoboken. https://doi.org/10.1002/9780471722199

  38. Singh A, Hespanha JP (2005) Models for multi-specie chemical reactions using polynomial stochastic hybrid systems. In: IEEE conference on decision and control, pp 2969–2974. https://doi.org/10.1109/CDC.2005.1582616

  39. Smadbeck P, Kaznessis YN (2012) Efficient moment matrix generation for arbitrary chemical networks. Chem Eng Sci 84:612–618. https://doi.org/10.1016/j.ces.2012.08.031

  40. Smadbeck P, Kaznessis YN (2013) A closure scheme for chemical master equations. Proc Natl Acad Sci 110(35):14261–14265. https://doi.org/10.1073/pnas.1306481110

  41. Srivastava R, You L, Summers J, Yin J (2002) Stochastic vs. deterministic modeling of intracellular viral kinetics. J Theor Biol 218(3):309–321. https://doi.org/10.1006/jtbi.2002.3078

  42. Sunkara V (2013) Analysis and numerics of the chemical master equation. PhD thesis, Australian National University

  43. Sunkara V (2017) PyME (Python solver for the chemical master equation). https://github.com/vikramsunkara/PyME. Accessed 1 Aug 2019

  44. Sunkara V, Hegland M (2010) An optimal finite state projection method. Procedia Comput Sci 1(1):1579–1586. https://doi.org/10.1016/j.procs.2010.04.177

  45. Thomas P, Popović N, Grima R (2014) Phenotypic switching in gene regulatory networks. Proc Natl Acad Sci 111(19):6994–6999. https://doi.org/10.1073/pnas.1400049111

  46. Van Kampen NG (2007) Stochastic processes in physics and chemistry, 3rd edn. North Holland, Amsterdam

  47. Vilar JMG, Kueh HY, Barkai N, Leibler S (2002) Mechanisms of noise-resistance in genetic oscillators. Proc Natl Acad Sci 99(9):5988–5992. https://doi.org/10.1073/pnas.092133899

  48. Wilkinson DJ (2006) Stochastic modelling for systems biology. Mathematical and computational biology series. Chapman & Hall, CRC, Boca Raton


Author information

Correspondence to Vikram Sunkara.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proofs

Proof of Lemma 3.1-3

We substitute the conditional expectation form into Eve’s law (Law of Total Variance) and then reduce.

Eve’s Law states that

$$\begin{aligned} {\mathrm {cov}}(Y,Y) = {\mathbb {E}}[{\mathrm {cov}}(Y_x,Y_x)] + {\mathrm {cov}}({\mathbb {E}}[Y_x],{\mathbb {E}}[Y_x]). \end{aligned}$$

In words, the total variance of Y is the sum of the expectation of the conditional variances and the variance of the conditional expectation. We begin by reducing the covariance of the conditional expectations:

$$\begin{aligned} {\mathrm {cov}}({\mathbb {E}}[Y_x],{\mathbb {E}}[Y_x])&:= \sum _{x \in \Omega _X} \left[ \left( {\mathbb {E}}[Y_x] - {\mathbb {E}}[Y]\right) \left( {\mathbb {E}}[Y_x] - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x), \end{aligned}$$

substituting the linear conditional expectation form and then expanding gives us

$$\begin{aligned}&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x + \beta - {\mathbb {E}}[Y]\right) \left( \alpha \, x + \beta - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x + {\mathbb {E}}[Y] - \alpha \, {\mathbb {E}}[X] - {\mathbb {E}}[Y]\right) \left( \alpha \, x + {\mathbb {E}}[Y] - \alpha \, {\mathbb {E}}[X] - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x - \alpha \, {\mathbb {E}}[X] \right) \left( \alpha \, x - \alpha \, {\mathbb {E}}[X]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \alpha \, \left[ \left( x - {\mathbb {E}}[X] \right) \left( x - {\mathbb {E}}[X]\right) ^T \right] \, \alpha ^T {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \alpha \, \left[ \sum _{x \in \Omega _X} \left( x - {\mathbb {E}}[X] \right) \left( x - {\mathbb {E}}[X]\right) ^T{{\,\mathrm{\textit{p}}\,}}(X=x) \right] \, \alpha ^T, \end{aligned}$$

substituting the definition of a covariance gives

$$\begin{aligned}&= \alpha \, {\mathrm {cov}}(X,X)\, \alpha ^T. \end{aligned}$$

Substituting this term above into Eve’s law gives us that,

$$\begin{aligned} {\mathbb {E}}[{\mathrm {cov}}(Y_x,Y_x)] = {\mathrm {cov}}(Y,Y) - \alpha \, {\mathrm {cov}}(X,X) \, \alpha ^{T} . \end{aligned}$$

\(\square \)
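As an illustrative sanity check of the scalar case, the sketch below draws samples with a linear conditional mean and unit conditional variance (all parameter values are our own placeholders, not from the paper), then recovers both the slope and the expected conditional variance from the classical moments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.poisson(8.0, size=n).astype(float)
Y = 0.7 * X + 2.0 + rng.normal(0.0, 1.0, size=n)   # E[Y_x] = 0.7 x + 2

alpha = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)     # slope from moments
explained = alpha**2 * np.var(X, ddof=1)           # cov(E[Y_x], E[Y_x])
residual = np.var(Y, ddof=1) - explained           # E[cov(Y_x, Y_x)]
print(alpha, residual)   # approximately 0.7 and 1.0
```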

Parameters of the three models

See Tables 9, 10 and 11.

Table 9 Model 1 system parameters. \(T_{final}=5.0\)
Table 10 Model 2 system parameters. \(T_{final}=3.0\)
Table 11 Model 3 system parameters. \(T_{final}=0.6\)

Proof that the simple mRNA translation model has a linear conditional expectation structure

The idea and outline for this proof were given by one of the anonymous reviewers of this paper. The author is grateful to the reviewer and the peer-review process for this contribution.

We prove that the simple mRNA translation model has a linear conditional expectation structure by using the notion of generating functions. We begin by deriving the definition of the conditional expectation in terms of the generating function.

Conditional expectation in terms of the generating function

Let X and Y be two coupled random variables whose state spaces are the natural numbers including zero. The generating function of the joint distribution \({{\,\mathrm{\textit{p}}\,}}(X=\cdot ,Y=\cdot )\) is given by,

$$\begin{aligned} \phi (t,s) := \sum _{\tilde{x} \in \Omega _X ,y \in \Omega _Y} t^{\tilde{x}} \, s^y \, {{\,\mathrm{\textit{p}}\,}}(X=\tilde{x},Y=y), \text { for } t,s \in {\mathbb {C}}. \end{aligned}$$
(C.1)

It is well known that taking the nth derivative of \(\phi \) with respect to t or s and evaluating at \(t=1\) or \(s=1\) gives the nth factorial moment of X or Y, respectively. We aim to similarly formulate the conditional expectation in terms of derivatives of the generating function.

For \(x\in \Omega _X,\) we define

$$\begin{aligned} g_x(s):= & {} \frac{\partial ^x \phi (t,s)}{\partial t^x} \Big \vert _{t=0}, \nonumber \\= & {} x! \, \sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y). \end{aligned}$$
(C.2)

In words, the function \(g_x(s)\) is the xth derivative of \(\phi \) with respect to t,  evaluated at \(t=0.\) We take the natural logarithm of \(g_x(s)\) to get,

$$\begin{aligned} \log (g_x(s)) = \sum _{n=1}^{x} \log n \, + \log \left( \sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) \right) . \end{aligned}$$

Taking the derivative of the expression above with respect to s gives us,

$$\begin{aligned} \frac{d \log (g_x(s)) }{ds} = \frac{ \sum _{y\in \Omega _Y} y \, s^{y-1} \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) }{\sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y)}. \end{aligned}$$
(C.3)

Then evaluating the function at \(s=1\) gives us,

$$\begin{aligned} \frac{d \log (g_x(s)) }{ds} \Big \vert _{s=1}= & {} \frac{ \sum _{y\in \Omega _Y} y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) }{\sum _{y\in \Omega _Y} \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y)}, \nonumber \\= & {} \sum _{y\in \Omega _Y} y {{\,\mathrm{\textit{p}}\,}}(Y=y\, | X=x), \nonumber \\:= & {} {\mathbb {E}}[Y_x]. \end{aligned}$$
(C.4)

We have derived the definition of the conditional expectation as a function of the derivatives of the generating function. Naturally, if the generating function is known, one can evaluate the terms in (C.4) and determine the corresponding conditional expectation structure.
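As a quick numerical illustration of (C.4), the sketch below builds an arbitrary joint pmf (a placeholder, not a model from the paper), evaluates \(d \log g_x(s)/ds\) at \(s=1\) via a polynomial in s, and compares the result with the conditional mean computed directly:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
p = rng.random((6, 8)); p /= p.sum()   # hypothetical joint pmf p(X=x, Y=y)

x = 3
g = Polynomial(p[x])                   # g_x(s)/x! = sum_y s^y p(X=x, Y=y)
lhs = g.deriv()(1.0) / g(1.0)          # d/ds log g_x(s) at s = 1 (x! cancels)
rhs = np.arange(p.shape[1]) @ p[x] / p[x].sum()   # E[Y | X = x] directly
assert np.isclose(lhs, rhs)
```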

Linear conditional expectation form of the simple mRNA translation model

We prove that the simple mRNA translation model has a linear conditional expectation form by using the generating function given by Bokes et al. (2012) and substituting it into (C.4). We begin by establishing some notation in order to align with the work of Bokes et al.

Let M and N be the random variables corresponding to the mRNA population and the protein population, respectively. Let the reaction channels be given as follows:

$$\begin{aligned} R_1:\, \emptyset \xrightarrow {k_1} M, \, R_2:\, M \xrightarrow {\gamma _1} \emptyset , \,R_3:\, M \xrightarrow {k_2} M + N , \, R_4:\, N \xrightarrow {\gamma _2} \emptyset . \end{aligned}$$

We are investigating the dynamics of the stationary distribution, hence we omit the time component. It was shown by Bokes et al. that the stationary moments of the simple mRNA translation model are as follows:

$$\begin{aligned} {\mathbb {E}}[M]= & {} \frac{k_1}{\gamma _1},\, {\mathbb {E}}[N] = \frac{k_1\,k_2}{\gamma _1\,\gamma _2} , \end{aligned}$$
(C.5)
$$\begin{aligned} {\mathbb {V}}[M]= & {} \frac{k_1}{\gamma _1} , \, {\mathbb {V}}[N] = \frac{k_1\,k_2}{\gamma _1\,\gamma _2}\,\left( 1 + \frac{k_2}{\gamma _1 + \gamma _2}\right) , {\mathrm {cov}}(M,N) = \frac{k_1\,k_2}{\gamma _1\,( \gamma _1+ \gamma _2)} .\qquad \end{aligned}$$
(C.6)

Then the generating function of the stationary distribution is given by,

$$\begin{aligned} \phi (t,s) = e^{a(s) + (t-1)\,b(s)}, \end{aligned}$$
(C.7)

where

$$\begin{aligned} a(s) := \alpha \,\beta \,\int _0^s K(1, 1+ \lambda ,\beta (r-1))dr \text { and } b(s) := \alpha \,K(1,1+\lambda , \beta (s-1)), \end{aligned}$$

with \(K(\cdot ,\cdot ,\cdot )\) being Kummer's function and

$$\begin{aligned} \lambda =\frac{\gamma _1}{\gamma _2}, \, \alpha = \frac{k_1}{\gamma _1}, \, \beta = \frac{k_2}{\gamma _2}. \end{aligned}$$
(C.8)

To find the conditional expectation of the simple mRNA translation model, we substitute its generating function (C.7) into (C.2) and reduce.

$$\begin{aligned} g_m(s):= & {} \frac{\partial ^m \phi (t,s)}{\partial t^m} \Big \vert _{t=0}, \\= & {} e^{a(s) + (t-1)\,b(s)}\,b(s)^{m} \Big \vert _{t=0}, \\= & {} e^{a(s) - b(s)}\,b(s)^{m}. \end{aligned}$$

Taking the natural log gives us,

$$\begin{aligned} \log g_m(s) = a(s) - b(s) + m\,\log (b(s)). \end{aligned}$$

Then taking the derivative with respect to s gives us,

$$\begin{aligned} \frac{d \log g_m(s)}{ds} = \frac{da(s)}{ds} - \frac{db(s)}{ds} + \frac{m}{b(s)}\, \frac{db(s)}{ds}. \end{aligned}$$
(C.9)

By the fundamental theorem of calculus we have that

$$\begin{aligned} \frac{da(s)}{ds} = \alpha \,\beta \,K(1,1+\lambda ,\beta (s-1)), \end{aligned}$$

and by the property of the derivative of Kummer's function, \(\frac{d}{dc} K(a,b,f(c)) = \frac{a\,f'(c)}{b} K(a+1,b+1,f(c)),\) we have that

$$\begin{aligned} \frac{db(s)}{ds} = \frac{\alpha \,\beta }{1+\lambda }\,K(2,2+\lambda ,\beta (s-1)). \end{aligned}$$

Substituting these terms into (C.9), then evaluating at \(s=1\) and applying the property that \(K(\cdot ,\cdot ,0) = 1\) gives us,

$$\begin{aligned} \frac{d \log g_m(s)}{ds} \big \vert _{s=1} = \alpha \,\beta - \frac{\alpha \,\beta }{1+\lambda } + m \,\frac{1}{\alpha }\, \frac{\alpha \,\beta }{1+\lambda }. \end{aligned}$$

By the definition given in (C.4), we have that

$$\begin{aligned} {\mathbb {E}}[N_m] = \alpha \,\beta - \frac{\alpha \,\beta }{1+\lambda } + m \,\frac{1}{\alpha }\, \frac{\alpha \,\beta }{1+\lambda }. \end{aligned}$$

After substituting in the terms from (C.8), the conditional expectation in terms of the reaction rates is given by,

$$\begin{aligned} {\mathbb {E}}[N_m] = \frac{k_1\, k_2}{\gamma _1\, \gamma _2} - \frac{k_1\, k_2}{\gamma _1(\gamma _1+\gamma _2)} + m\,\frac{k_2}{\gamma _1+\gamma _2}. \end{aligned}$$
(C.10)

Hence, the conditional expectation of the simple mRNA translation model has a linear form. We now cross-validate the coefficients by linking the terms above to the raw moments using Lemma 3.1.

Cross-validation

Using (3.2), we know that the linear conditional expectation of the protein conditioned on the mRNA should have the form:

$$\begin{aligned} {\mathbb {E}}[N_m]&= \frac{{\mathrm {cov}}(M,N)}{{\mathbb {V}}[M]}\,\left( m - {\mathbb {E}}[M]\right) + {\mathbb {E}}[N]. \end{aligned}$$

Substituting in (C.5) and (C.6) for the moments gives us,

$$\begin{aligned} = \frac{k_2}{\gamma _1 + \gamma _2}\,\left( m - \frac{k_1}{\gamma _1} \right) +\frac{k_1\,k_2}{\gamma _1\,\gamma _2}, \end{aligned}$$

expanding the terms gives us,

$$\begin{aligned} = m\,\frac{k_2}{\gamma _1+\gamma _2} - \frac{k_1\, k_2}{\gamma _1(\gamma _1+\gamma _2)} + \frac{k_1\, k_2}{\gamma _1\, \gamma _2}. \end{aligned}$$
(C.11)

The terms in (C.10) and (C.11) match.
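This match can also be checked symbolically; the short sympy sketch below encodes the moments (C.5)–(C.6) together with the two forms (C.10) and (C.11), and verifies that their difference simplifies to zero:

```python
import sympy as sp

k1, k2, g1, g2, m = sp.symbols('k_1 k_2 gamma_1 gamma_2 m', positive=True)
E_N, E_M, V_M = k1*k2/(g1*g2), k1/g1, k1/g1        # (C.5), (C.6)
cov_MN = k1*k2/(g1*(g1 + g2))                      # (C.6)

lemma_form = cov_MN/V_M * (m - E_M) + E_N          # (C.11) via Lemma 3.1
direct_form = E_N - k1*k2/(g1*(g1 + g2)) + m*k2/(g1 + g2)   # (C.10)
assert sp.simplify(lemma_form - direct_form) == 0
```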

Model 3: Conditional expectation through time

In this section we evaluate Model 3 at different time points to observe whether the conditional expectation's quadratic structure is present through time. Since no analytical solution for the model is known to date, we use an OFSP approximation as the reference solution and see how close this approximation's conditional expectation is to the conditional expectation ansatz. The OFSP approximation was set to have a global \(\ell _1\) error of \(10^{-7}.\)

In Fig. 10a–c, the joint distribution is rendered as a contour plot, evaluated at time points \(T=0.15,\ 0.3, \text { and } 1.2.\) Below the joint distributions, in Fig. 10d–f, the corresponding conditional expectation and the quadratic ACE ansatz are given. We see that the conditional expectation and the ansatz are fairly similar. There are some mismatches at the boundary, but this is to be expected since the OFSP produces artefacts at the boundary due to its truncation criteria.

To further investigate the resolution at which the conditional expectation and the ACE ansatz differ, we study the differences between them through time using three different metrics: the \(\ell _{\infty }\) norm, to study the maximum error at a particular time point; the \(\ell _{2}\) norm, to study the difference over the entire state space; and lastly, the relative error in \(\ell _{2},\) to see how the error changes with respect to the change in the conditional expectation. In Fig. 10g, we see that the \(\ell _{\infty }\) norm is of the order \(10^{-2}\) in the interval of interest and that the error increases with time. Then in Fig. 10h, we notice that the \(\ell _2\) norm has a similar trend to the \(\ell _{\infty }\) norm. Interestingly, however, the total \(\ell _2\) error over the state space is only about twice the \(\ell _{\infty }\) error, implying that only a few states contribute most of the error. Lastly, in Fig. 10i, we study the relative error over time. We notice that this error falls to roughly \(10^{-4},\) implying that the error between the ACE ansatz and the conditional expectation is roughly ten thousand times smaller than the conditional expectation itself. This suggests that the model likely does exhibit a quadratic conditional expectation structure.
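For reference, the three metrics can be computed as follows, assuming both conditional expectations are available as arrays over the conditioning states (a sketch, with hypothetical variable names):

```python
import numpy as np

def ansatz_errors(ce_ref, ce_ansatz):
    """ell_inf, ell_2, and relative ell_2 error between a reference
    conditional expectation and the ACE ansatz."""
    diff = ce_ref - ce_ansatz
    return {"linf":   np.max(np.abs(diff)),
            "l2":     np.linalg.norm(diff),
            "rel_l2": np.linalg.norm(diff) / np.linalg.norm(ce_ref)}
```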

Fig. 10

Model 3 evaluated at time points \(T=0.15,\ 0.3,\text { and } 1.2\) (\(T=0.6\) is given in Fig. 1c). a–c Contour plots describing the joint probability distributions generated using the OFSP method with a global error of \(10^{-7}.\) The distributions corresponding to time points \(T=0.15,\ 0.3,\text { and } 1.2\) are given from left to right, respectively. d–f The conditional expectation of the joint probability distribution is marked with red crosses; the ACE polynomial fit of order two is drawn as a solid blue line. The conditional expectations evaluated at time points \(T=0.15,\ 0.3,\text { and } 1.2\) are given from left to right, respectively. g, h The \(\ell _{\infty }\) and \(\ell _{2}\) norms of the difference between the OFSP conditional expectation and the ACE quadratic ansatz through time, respectively. i Relative error with respect to the \(\ell _2\) norm, showing how the error in the conditional expectation evolves relative to the conditional expectation (color figure online)

Simple gene switch derivations

Chemical master equation

$$\begin{aligned}&\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a;t) \\&\qquad +\ \gamma _1\, (m+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m+1,A=a;t) \\&\qquad +\ \kappa _2 \, m{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + (\gamma _1+ \kappa _2)\, m +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t).\\&\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t) \\&\qquad +\ \kappa _1 {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m-1,A=a;t) \\&\qquad +\ \gamma _1 \, (m+1) {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m+1,A=a;t)\\&\qquad +\ \kappa _2 \, m {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a-1;t)\\&\qquad +\ \gamma _2 \, (a+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1) {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a+1;t) \\&\qquad -\ \left\{ \tau _{\mathbf{off }} + \kappa _1 + (\gamma _1 + \kappa _2)\, m + \gamma _2\, a \right\} {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a;t) \end{aligned}$$
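As an illustration, this generator can be assembled as a sparse matrix on a truncated state space, so that the CME reads \(dp/dt = Ap.\) The sketch below is a minimal Python version; the truncation bounds M_max and A_max, the reflecting treatment of the boundary (a truncated reaction is simply dropped), and the state ordering are implementation choices of ours, not part of the paper:

```python
import numpy as np
from scipy.sparse import coo_matrix

def gene_switch_generator(M_max, A_max, tau_on, tau_off, tau_on_hat,
                          kappa1, kappa2, gamma1, gamma2):
    """Truncated CME generator for the simple gene switch; states are
    (g, m, a) with g in {0 (off), 1 (on)}, 0 <= m <= M_max, 0 <= a <= A_max."""
    def idx(g, m, a):
        return (g * (M_max + 1) + m) * (A_max + 1) + a

    rows, cols, vals = [], [], []
    def add(src, dst, rate):        # one reaction channel out of state src
        if rate > 0.0:
            rows.append(dst); cols.append(src); vals.append(rate)   # inflow
            rows.append(src); cols.append(src); vals.append(-rate)  # outflow

    for g in (0, 1):
        for m in range(M_max + 1):
            for a in range(A_max + 1):
                s = idx(g, m, a)
                if g == 0:
                    add(s, idx(1, m, a), tau_on)                     # off -> on
                    if a > 0:                                        # off + A -> on
                        add(s, idx(1, m, a - 1), tau_on_hat * a)
                else:
                    add(s, idx(0, m, a), tau_off)                    # on -> off
                    if m < M_max:
                        add(s, idx(1, m + 1, a), kappa1)             # 0 -> M
                if m > 0:
                    add(s, idx(g, m - 1, a), gamma1 * m)             # M -> 0
                    if a < A_max:
                        add(s, idx(g, m, a + 1), kappa2 * m)         # M -> M + A
                if a > 0:
                    add(s, idx(g, m, a - 1), gamma2 * a)             # A -> 0

    n = 2 * (M_max + 1) * (A_max + 1)
    return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
```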

Marginal distributions

We follow the same steps as in the generalised form (see Sect. 2.2). Deriving the CME for the marginal distribution of the gene and the proteins involves the following two steps:

  • substituting \( {{\,\mathrm{\textit{p}}\,}}( G=\cdot ,M=\cdot ,A=\cdot ;t) = {{\,\mathrm{\textit{p}}\,}}(M=\cdot \,|\,G=\cdot , A=\cdot ;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\cdot ,A=\cdot ;t),\)

  • summing over all \(m \in \Omega _M\) and then collating all conditional probability terms.

Step 1

$$\begin{aligned}&\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \ \\&\qquad +\ \gamma _1\, (m+1){{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{off }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _2 \, m{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a-1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + (\gamma _1+ \kappa _2)\, m +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t)\\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t).\\&\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _1 {{\,\mathrm{\textit{p}}\,}}(M=m-1\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \\&\qquad +\ \gamma _1 \, (m+1) {{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t)\\&\qquad +\ \kappa _2 \, m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a-1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a-1;t) \\&\qquad +\ \gamma _2 \, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad -\ \left\{ \tau _{\mathbf{off }} + \kappa _1 + (\gamma _1 + \kappa _2)\, m + \gamma _2\, a \right\} {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\\&\qquad \times \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \end{aligned}$$

Step 2

$$\begin{aligned}&\sum _m\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \ \\&\qquad +\ \gamma _1\, \left( \sum _m (m+1){{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{off }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _2 \, \left( \sum _m m{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a-1;t)\ \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + \left( \sum _m (\gamma _1+ \kappa _2)\, m\, {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t) \right) +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] \\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t).\\&\sum _m\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }\left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _1 \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m-1\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \\&\qquad +\ \gamma _1 \, \left( \sum _m (m+1) {{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t)\\&\qquad +\ \kappa _2 \, \left( \sum _m m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a-1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a-1;t)\\&\qquad +\ \gamma _2 \, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad -\ \left[ \tau _{\mathbf{off }} + \kappa _1 + \left( \sum _m (\gamma _1 + \kappa _2)\, m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\ \right) + \gamma _2\, a \right] \\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \end{aligned}$$

Formal ACE-Ansatz approximation derivation

Before we begin the derivation, it is important to discuss Assumption 2.1-3, which states that the joint distribution must have non-zero probability over all of the state space through all time. We can easily violate this condition by starting the Kurtz process with an initial probability distribution that is non-zero over only a subset of the entire state space (e.g. a single state). However, the CME generator (2.4) has the feature that, regardless of the initial condition, after an infinitesimal time all states have non-zero probability. Hence, numerically, if the process does start at a single state, we can evolve it forward by a small time step using OFSP and then use this time point as the initial condition for the dimension reduction methods. In the case of the Simple Gene Switch example in Sect. 5.1.2, we used \(t=1\) as the starting point for all dimension reduction methods.
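A minimal sketch of this warm start, with a fixed truncation and a matrix exponential standing in for the OFSP step (A_cme and p0 are assumptions, e.g. a generator such as the one sketched in the "Chemical master equation" appendix and a point-mass initial vector):

```python
from scipy.sparse.linalg import expm_multiply

def warm_start(A_cme, p0, dt=0.01):
    """Push a point-mass initial condition forward by a small step dt
    (dt = 0.01 is illustrative) so all retained states carry mass."""
    p = expm_multiply(dt * A_cme, p0)
    return p / p.sum()   # renormalise against truncation losses
```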

We use the following notational convention: the approximation of the probability measure \(p(G=g,A=a;t)\) is denoted by the function \(w(g,a,t);\) furthermore, the approximation of the expectation operator \({\mathbb {E}}[\bullet (t)]\) is denoted by the function \(\eta _{\bullet }(t).\) The formal derivations of Eqs. (5.1)–(5.8) are then given by Eqs. (F.1)–(F.12).

$$\begin{aligned} \frac{d\, w(\mathbf{off },a,t)}{dt} =&\, \tau _\mathbf{off }\, w(\mathbf{on },a,t) \nonumber \\&\quad +\, k_2\, \eta _{M|}(\mathbf{off },a-1,t)\, w(\mathbf{off },a-1,t) \nonumber \\&\quad +\, \gamma _2\,(a+1)\,\,w(\mathbf{off },a+1,t) \nonumber \\&\quad -\, \left( \tau _\mathbf{on }+ k_2\,\eta _{M|}(\mathbf{off },a,t) + (\gamma _2 + \hat{\tau }_\mathbf{on })\,a \right) \, w(\mathbf{off },a,t), \end{aligned}$$
(F.1)
$$\begin{aligned} \frac{d\, w(\mathbf{on }, a , t)}{dt} =&\, \tau _\mathbf{on }\, w(\mathbf{off },a,t) \nonumber \\&\quad +\, k_2\, \eta _{M|}(\mathbf{on },a-1,t)\,w(\mathbf{on },a-1,t) \nonumber \\&\quad +\, \gamma _2\,(a+1)\,w(\mathbf{on },a+1,t) \nonumber \\&\quad +\, \hat{\tau }_\mathbf{on }\,(a+1)\,w(\mathbf{off },a+1,t) \nonumber \\&\quad -\, \left( \tau _\mathbf{off }+ k_2\, \eta _{M|}(\mathbf{on },a,t) +\ \gamma _2\,a \right) \, w(\mathbf{on },a,t). \end{aligned}$$
(F.2)
$$\begin{aligned} \eta _{M|}(g,a,t) =&\, \alpha \, \left( \left[ \begin{array}{c} g \\ a \end{array}\right] - \left[ \begin{array}{c} \eta _G(t) \\ \eta _A(t) \end{array}\right] \right) +\eta _M(t) \end{aligned}$$
(F.3)
$$\begin{aligned} \frac{d\, \eta _M(t)}{dt} =&\, k_1\, \eta _G(t) - \gamma _1\, \eta _M(t). \end{aligned}$$
(F.4)
$$\begin{aligned} \frac{d\,\eta _{G\,M}(t)}{dt} =&\tau _\mathbf{on }\,\left( -\eta _{G\,M}(t) - \eta _M(t) \right) - \tau _\mathbf{off }\, \eta _{G\,M}(t) + k_1\, \eta _G(t) \nonumber \\&\quad -\,\gamma _1\, \eta _{G\,M}(t) + \hat{\tau }_\mathbf{on }\, \left( \eta _{M\, A}(t) - \eta _{G\,M\,A}(t)\right) . \end{aligned}$$
(F.5)
$$\begin{aligned} \frac{d\,\eta _{M\,A}(t)}{dt} =&\, k_1\, \eta _{G\,A}(t) - (\gamma _1 + \gamma _2)\,\eta _{M\,A}(t) + k_2\, \eta _{M^2}(t) \nonumber \\&\quad -\,\hat{\tau }_\mathbf{on }\, \left( \eta _{M\, A}(t) - \eta _{G\,M\,A}(t) \right) \end{aligned}$$
(F.6)
$$\begin{aligned} \frac{d\,\eta _{M^2}(t)}{dt} =&\, k_1\,\left( 2\,\eta _{G\,M}(t) + \eta _G(t)\right) + \gamma _1\,\left( -2\,\eta _{M^2}(t) + \eta _M(t) \right) . \end{aligned}$$
(F.7)
$$\begin{aligned} \eta _{G\,M\,A}(t) =&\, \sum _{a\in {\mathbb {Z}}_+} \eta _{M|}(\mathbf{on },a,t) \,a\, w(\mathbf{on },a,t). \end{aligned}$$
(F.8)
$$\begin{aligned} \eta _{A}(t) =&\, \sum _{a\in {\mathbb {Z}}_+} \,a\, \left[ w(\mathbf{on },a,t) + w(\mathbf{off },a,t)\right] . \end{aligned}$$
(F.9)
$$\begin{aligned} \eta _{A^2}(t) =&\,\sum _{a\in {\mathbb {Z}}_+} \,a^2\, \left[ w(\mathbf{on },a,t) + w(\mathbf{off },a,t)\right] . \end{aligned}$$
(F.10)
$$\begin{aligned} \eta _{G^2}(t) =&\, \eta _G(t). \end{aligned}$$
(F.11)
$$\begin{aligned} \alpha :=&\, \left[ \begin{array}{cc} \eta _{G\,M}(t) - \eta _G(t)\,\eta _M(t)&\eta _{M\,A}(t) - \eta _M(t)\,\eta _A(t) \end{array}\right] \nonumber \\&\qquad \left( \begin{array}{cc} \eta _{G^2}(t) - \eta _G(t)^2 &{} \eta _{G\,A}(t) - \eta _G(t)\,\eta _A(t) \\ \eta _{G\,A}(t) - \eta _G(t)\,\eta _A(t) &{} \eta _{A^2}(t) - \eta _A(t)^2 \end{array} \right) ^{-1}. \end{aligned}$$
(F.12)
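In code, (F.12) is a single linear solve; a minimal sketch, where the moment values below are illustrative placeholders:

```python
import numpy as np

def ace_gradient(c_yz, C_zz):
    """alpha = cov(Y,Z) cov(Z,Z)^{-1} as in (F.12); since cov(Z,Z) is
    symmetric, we solve a linear system instead of forming the inverse."""
    return np.linalg.solve(np.asarray(C_zz, float), np.asarray(c_yz, float))

# e.g. c_yz = [cov(M,G), cov(M,A)] and C_zz the 2x2 covariance of (G, A),
# both assembled from the eta moments (numbers here are illustrative)
alpha = ace_gradient([0.12, 0.45], [[0.21, 0.05], [0.05, 1.30]])
```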

Two gene toggle switch derivations

We use the following notational convention: the approximation of the probability measure \(p(G_0=g,P=p;t)\) is denoted by the function \(w(g,p,t);\) furthermore, the approximation of the expectation operator \({\mathbb {E}}[\bullet (t)]\) is denoted by the function \(\eta _{\bullet }(t).\) As in the simple gene switch case, the approximation is started at \(t=0.35\) to satisfy Assumption 2.1-3. We introduce the equations of motion in the following order: marginal distributions, moments, higher order moment closures, and the linear ACE-Ansatz approximations.

Marginal distribution

$$\begin{aligned} \frac{dw( G_0^\mathbf{on },p,t)}{dt}= & {} \sigma _1\,\eta _{M|}(G_0^\mathbf{off }, p,t)\,w(G_0^\mathbf{off },p,t)\nonumber \\&+\ \rho _2\, w( G_0^\mathbf{on },p-1,t) \nonumber \\&+\ k\,(p+1)\,w( G_0^\mathbf{on },p+1,t) \nonumber \\&+\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{on },p+1,t ))\,(p+1)\,w(G_0^\mathbf{on },p+1,t) \nonumber \\&- \ \sigma _2\, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ \rho _2\, \, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ k\,p\, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{on },p,t ))\,p\,w(G_0^\mathbf{on },p,t) \end{aligned}$$
(G.1)
$$\begin{aligned} \frac{dw( G_0^\mathbf{off },p,t)}{dt}= & {} \ \sigma _2\, w(G_0^\mathbf{on },p,t) \nonumber \\&+\ \rho _1\, w( G_0^\mathbf{off },p-1,t) \nonumber \\&+\ k\,(p+1)\,w( G_0^\mathbf{off },p+1,t) \nonumber \\&+\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{off },p+1,t ))\,(p+1)\,w(G_0^\mathbf{off },p+1,t) \nonumber \\&- \ \sigma _1\,\eta _{M|}(G_0^\mathbf{off }, p,t)\,w(G_0^\mathbf{off },p,t) \nonumber \\&-\ \rho _1\, \, w(G_0^\mathbf{off },p,t) \nonumber \\&-\ k\,p\, w(G_0^\mathbf{off },p,t) \nonumber \\&-\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{off },p,t ))\,p\,w(G_0^\mathbf{off },p,t) \end{aligned}$$
(G.2)

Moments

We derive the equations of motion for the following eight moments: \({\mathbb {E}}[G_1(t)],\) \({\mathbb {E}}[M(t)],\) \({\mathbb {E}}[G_0\,G_1(t)],\) \({\mathbb {E}}[G_0\,M(t)],\) \({\mathbb {E}}[G_1\,P(t)],\) \({\mathbb {E}}[G_1\,M(t)],\) \({\mathbb {E}}[P\,M(t)],\) and \({\mathbb {E}}[M^2(t)].\)

Let \(\mu (t) := [ \eta _{G_1}(t), \eta _{M}(t), \eta _{G_0\,G_1}(t),\eta _{G_0\,M}(t),\eta _{G_1\,P}(t),\eta _{G_1\, M}(t),\eta _{P\, M}(t),\eta _{M^2}(t) ],\) then the equation of motion for the approximation of the moments has the form:

$$\begin{aligned} \frac{d\mu (t)}{dt} = A\, \mu (t) + A^*, \end{aligned}$$

where

$$\begin{aligned} A := \left[ \begin{array}{cccccccc} -\sigma _4 &{} 0 &{} 0&{} 0&{} -\sigma _3&{} 0&{} 0&{} 0 \\ -\rho _3 + \rho _4 &{} -k - \sigma _1 &{} 0&{} \sigma _1&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} -\sigma _2 - \sigma _4&{} 0&{} 0&{} \sigma _1&{} 0&{} 0\\ 0&{} -\sigma _1&{} -\rho _3 + \rho _4&{} -k + \sigma _1 - \sigma _2&{} 0&{} 0&{} 0&{} \sigma _1 \\ \rho _1&{} 0&{} -\rho _1 + \rho _2&{} 0&{} -k + \sigma _3 - \sigma _4&{} 0&{} 0&{} 0 \\ \rho _4&{} 0&{} 0&{} 0&{} 0&{} -k - \sigma _1 - \sigma _4&{} \sigma _3&{} 0 \\ 0&{} \rho _1&{} 0&{} -\rho _1 + \rho _2&{} -\rho _3 + \rho _4&{} 0&{} -2\,k - \sigma _1 - \sigma _3&{} 0 \\ -\rho _3 + \rho _4&{} k + 2\,\rho _3 + \sigma _1&{} 0&{} -\sigma _1&{} 0&{} -2\,\rho _3 + 2\,\rho _4&{} 0&{} -2\,k - 2\, \sigma _1 \end{array} \right] \end{aligned}$$

and

$$\begin{aligned} A^* := \left[ \begin{array}{c} \sigma _3\,\eta _P(t) \\ \rho _3\\ -\sigma _1\,\eta _{G_0\,G_1\,M}(t) + \sigma _3\,\eta _{G_0\,P}(t) -\sigma _3\,\eta _{G_0\,G_1\,P}(t) \\ -\sigma _1\,\eta _{G_0\,M^2}(t) + \rho _3\,\eta _{G_0}(t) \\ \sigma _3\,\eta _{P^2}(t) -\sigma _3\,\eta _P(t) -\sigma _3\,\eta _{G_1\, P^2}(t) \\ \sigma _1\,\eta _{G_0\,G_1\,M}(t) -\sigma _3\,\eta _{G_1\,P\,M}(t) \\ \rho _3\,\eta _P(t) + \sigma _1\,\eta _{G_0\,P\,M}(t) + \sigma _3\,\eta _{G_1\,P\,M}(t) \\ 2\,\sigma _1\,\eta _{G_0\,M^2}(t) + \rho _3 \end{array} \right] . \end{aligned}$$
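Numerically, this system can be integrated with a standard ODE solver once the closure terms in \(A^*\) are supplied; the sketch below is schematic, treating \(A^*\) as a callable evaluated from the current moments (all names are hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_moments(A, A_star, mu0, t0, t_final):
    """Integrate d mu/dt = A mu + A*(t, mu); A_star evaluates the closed
    higher-order moment terms from the current moments."""
    rhs = lambda t, mu: A @ mu + A_star(t, mu)
    sol = solve_ivp(rhs, (t0, t_final), mu0, method="LSODA", rtol=1e-8)
    return sol.t, sol.y
```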

Higher order moment closures

Let \(w(G_0^\mathbf{on },t) := \sum _p w(G_0^\mathbf{on },p,t)\), and \(w(p,t) := w(G_0^\mathbf{on },p,t) + w(G_0^\mathbf{off },p,t)\). We apply the following moment closures:

$$\begin{aligned} \eta _{G_0\,M^2}(t)= & {} (\eta _{M|}(G_0^\mathbf{on },t)^2 +\eta _{M|}(G_0^\mathbf{on },t))\,w(G_0^\mathbf{on },t), \end{aligned}$$
(G.3)
$$\begin{aligned} \eta _{G_1\,P^2}(t)= & {} \sum _p \eta _{G_1|}(p,t)\, p^2 \, w(p,t), \end{aligned}$$
(G.4)
$$\begin{aligned} \eta _{G_0\,G_1\,M}(t)= & {} \eta _{G_1|}(G_0^\mathbf{on },t)\,\eta _{M|}(G_0^\mathbf{on },t)\,w(G_0^\mathbf{on },t), \end{aligned}$$
(G.5)
$$\begin{aligned} \eta _{G_1\,P\,M}= & {} \sum _p \eta _{G_1|}(p,t) \, \eta _{M|}(p,t) \, p \, w(p,t), \end{aligned}$$
(G.6)
$$\begin{aligned} \eta _{G_0\,G_1\,P}(t)= & {} \sum _p \eta _{G_1|}(G_0^\mathbf{on },p,t) \, p \, w(G_0^\mathbf{on },p,t), \end{aligned}$$
(G.7)
$$\begin{aligned} \eta _{G_0\,P\,M}(t)= & {} \sum _p \eta _{M|}(G_0^\mathbf{on },p,t) \, p \, w(G_0^\mathbf{on },p,t). \end{aligned}$$
(G.8)

Similarly, we can use the marginal distribution, \(w(G_0^\mathbf{on },p,t),\) to generate the corresponding moments:

$$\begin{aligned} \eta _P(t)= & {} \sum _p p\, w(p,t), \end{aligned}$$
(G.9)
$$\begin{aligned} \eta _{P^2}(t)= & {} \sum _p p^2\, w(p,t), \end{aligned}$$
(G.10)
$$\begin{aligned} \eta _{G_0}(t)= & {} \sum _p w(G_0^\mathbf{on },p,t) \end{aligned}$$
(G.11)
$$\begin{aligned} \eta _{G_0\,P}(t)= & {} \sum _p p\, w(G_0^\mathbf{on },p,t). \end{aligned}$$
(G.12)
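These sums are one-liners once the marginal weights are stored as arrays; a minimal sketch, where the array names are assumptions:

```python
import numpy as np

def marginal_moments(w_on, w_off):
    """Moments (G.9)-(G.12) from the marginal weights w(G0^on, p, t) and
    w(G0^off, p, t), supplied as arrays indexed by the protein count p."""
    p = np.arange(len(w_on), dtype=float)
    w = w_on + w_off
    return {"P":    p @ w,          # (G.9)
            "P2":   (p**2) @ w,     # (G.10)
            "G0":   w_on.sum(),     # (G.11)
            "G0_P": p @ w_on}       # (G.12)
```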

Linear ACE-Ansatz approximations

We approximate the conditional expectations with the linear ACE ansatz:

$$\begin{aligned} \eta _{M|}(g,p,t)&= \alpha _{M|G_0,P}\, \left( \left[ \begin{array}{c} g \\ p \end{array}\right] - \left[ \begin{array}{c} \eta _{G_0}(t) \\ \eta _{P}(t) \end{array}\right] \right) +\eta _M(t) , \end{aligned}$$
(G.13)
$$\begin{aligned} \eta _{G1|}(g,p,t)&= \alpha _{G_1|G_0,P}\, \left( \left[ \begin{array}{c} g \\ p \end{array}\right] - \left[ \begin{array}{c} \eta _{G_0}(t) \\ \eta _{P}(t) \end{array}\right] \right) +\eta _{G_1}(t) , \end{aligned}$$
(G.14)
$$\begin{aligned} \eta _{G_1|}(p,t)&= \alpha _{G_1|P} (p - \eta _P(t)) + \eta _{G_1}(t), \end{aligned}$$
(G.15)
$$\begin{aligned} \eta _{G_1|}(g,t)&= \alpha _{G_1|G_0} (g - \eta _{G_0}(t)) + \eta _{G_1}(t), \end{aligned}$$
(G.16)
$$\begin{aligned} \eta _{M|}(p,t)&= \alpha _{M|P} (p - \eta _P(t)) + \eta _{M}(t), \end{aligned}$$
(G.17)
$$\begin{aligned} \eta _{M|}(g,t)&= \alpha _{M|G_0} (g - \eta _{G_0}(t)) + \eta _{M}(t). \end{aligned}$$
(G.18)

where the gradients are given by:

$$\begin{aligned} \alpha _{M|G_0,P}:= & {} \left[ \begin{array}{cc} \eta _{G_0\,M}(t) - \eta _{G_0}(t)\,\eta _M(t)&\eta _{M\,P}(t) - \eta _M(t)\,\eta _P(t) \end{array}\right] \\&\quad \left( \begin{array}{cc} \eta _{G_0}(t) - \eta _{G_0}(t)^2 &{} \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) \\ \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) &{} \eta _{P^2}(t) - \eta _P(t)^2 \end{array} \right) ^{-1},\\ \alpha _{G_1|G_0,P}:= & {} \left[ \begin{array}{cc} \eta _{G_0\,G_1}(t) - \eta _{G_0}(t)\,\eta _{G_1}(t)&\eta _{G_1\,P}(t) - \eta _{G_1}(t)\,\eta _P(t) \end{array}\right] \\&\quad \left( \begin{array}{cc} \eta _{G_0}(t) - \eta _{G_0}(t)^2 &{} \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) \\ \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) &{} \eta _{P^2}(t) - \eta _P(t)^2 \end{array} \right) ^{-1},\\ \alpha _{G_1|P}:= & {} \left( \frac{ \eta _{G_1\,P}(t) - \eta _{G_1}(t)\,\eta _P(t)}{ \eta _{P^2}(t) - \eta _P(t)^2} \right) ,\\ \alpha _{G_1|G_0}:= & {} \left( \frac{ \eta _{G_1\,G_0}(t) - \eta _{G_1}(t)\,\eta _{G_0}(t)}{ \eta _{G_0}(t) - \eta _{G_0}(t)^2} \right) ,\\ \alpha _{M|P}:= & {} \left( \frac{ \eta _{M\,P}(t) - \eta _{M}(t)\,\eta _P(t)}{ \eta _{P^2}(t) - \eta _P(t)^2} \right) ,\\ \alpha _{M|G_0}:= & {} \left( \frac{ \eta _{M\,G_0}(t) - \eta _{M}(t)\,\eta _{G_0}(t)}{ \eta _{G_0}(t) - \eta _{G_0}(t)^2} \right) . \end{aligned}$$

SIR system parameters

The initial population was set to \((S(0)=200, I(0)=4).\) The OFSP method was configured to have a global error of \(10^{-6},\) with compression performed every 10 steps, where each time step was of length 0.002. The distribution is a snapshot of the system at \(t=0.15.\) We also omit the recovered state since the total population is conserved, that is, \(S(t)+I(t)+R(t) = 204\) for all time (see Table 12).

Table 12 SIR system parameters


About this article


Cite this article

Sunkara, V. Algebraic expressions of conditional expectations in gene regulatory networks. J. Math. Biol. 79, 1779–1829 (2019). https://doi.org/10.1007/s00285-019-01410-y


Keywords

  • Markov chains
  • Chemical Master Equation
  • Dimension reduction

Mathematics Subject Classification

  • 65C40
  • 60G20
  • 92B05