
Algebraic expressions of conditional expectations in gene regulatory networks

Journal of Mathematical Biology

Abstract

Gene regulatory networks are powerful models for describing the mechanisms and dynamics inside a cell. These networks are generally large in dimension and seldom yield analytical formulations. It was shown that studying the conditional expectations between dimensions (interactions or species) of a network can lead to drastic dimension reduction. These conditional expectations were classically given by solving equations of motion derived from the Chemical Master Equation. In this paper we deviate from this convention and take an algebraic approach instead. That is, we explore the consequences of conditional expectations being described by a polynomial function. There are two main results in this work. Firstly, if the conditional expectation can be described by a polynomial function, then the coefficients of this polynomial can be reconstructed from the classical moments. Secondly, there are dimensions in gene regulatory networks which inherently have conditional expectations of algebraic form. We demonstrate through examples that the theory derived in this work can be used to develop new and effective numerical schemes for forward simulation and parameter inference. The algebraic line of investigation of conditional expectations has considerable scope to be applied to many different aspects of gene regulatory networks; this paper serves as a preliminary commentary in this direction.


Notes

  1. The terms species, dimensions, and vertices originate from different fields of study but refer to the same concept. Hence, we use the terms interchangeably to match the context.

  2. In our context the results can be reformulated in terms of raw moments, factorial moments, or central moments. For this reason we say classical moments to encompass them all.

  3. The PyME implementation of the OFSP method and the MCM were used in this work (Sunkara 2017; Sunkara and Hegland 2010). It must be noted that the MCM module in PyME is not optimised for speed. All code was run on an Intel i7 2.5 GHz with 16GB of RAM.

  4. A Gaussian reconstruction in this context involves evaluating the Gaussian density over the discrete state space and then normalising so that the total mass is one.
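A minimal numpy sketch of this reconstruction, assuming a one-dimensional truncated state space and illustrative mean and variance values:

```python
import numpy as np

# Hypothetical truncated discrete state space and Gaussian parameters.
states = np.arange(0, 101)           # states 0..100
mu, sigma = 40.0, 6.0                # illustrative mean and standard deviation

# Evaluate the Gaussian density on the discrete states ...
weights = np.exp(-0.5 * ((states - mu) / sigma) ** 2)

# ... and normalise so the reconstructed distribution has total mass one.
p = weights / weights.sum()
assert np.isclose(p.sum(), 1.0)
```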


Funding

V. Sunkara was supported by the BMBF (Germany) project PrevOp-OVERLOAD, grant number 01EC1408H.

Author information


Correspondence to Vikram Sunkara.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proofs

Proof of Lemma 3.1-3

We substitute the conditional expectation form into Eve's law (the law of total variance) and then reduce.

Eve’s Law states that

$$\begin{aligned} {\mathrm {cov}}(Y,Y) = {\mathbb {E}}[{\mathrm {cov}}(Y_x,Y_x)] + {\mathrm {cov}}({\mathbb {E}}[Y_x],{\mathbb {E}}[Y_x]). \end{aligned}$$

In words, the total variance of Y is the sum of the expectation of the conditional variances and the variance of the conditional expectations. We begin by reducing the covariance of the conditional expectations:

$$\begin{aligned} {\mathrm {cov}}({\mathbb {E}}[Y_x],{\mathbb {E}}[Y_x])&:= \sum _{x \in \Omega _X} \left[ \left( {\mathbb {E}}[Y_x] - {\mathbb {E}}[Y]\right) \left( {\mathbb {E}}[Y_x] - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x), \end{aligned}$$

substituting the linear conditional expectation form and then expanding gives us

$$\begin{aligned}&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x + \beta - {\mathbb {E}}[Y]\right) \left( \alpha \, x + \beta - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x + {\mathbb {E}}[Y] - \alpha \, {\mathbb {E}}[X] - {\mathbb {E}}[Y]\right) \left( \alpha \, x + {\mathbb {E}}[Y] - \alpha \, {\mathbb {E}}[X] - {\mathbb {E}}[Y]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \left[ \left( \alpha \, x - \alpha \, {\mathbb {E}}[X] \right) \left( \alpha \, x - \alpha \, {\mathbb {E}}[X]\right) ^T \right] {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \sum _{x \in \Omega _X} \alpha \, \left[ \left( x - {\mathbb {E}}[X] \right) \left( x - {\mathbb {E}}[X]\right) ^T \right] \, \alpha ^T {{\,\mathrm{\textit{p}}\,}}(X=x),\\&= \alpha \, \left[ \sum _{x \in \Omega _X} \left( x - {\mathbb {E}}[X] \right) \left( x - {\mathbb {E}}[X]\right) ^T{{\,\mathrm{\textit{p}}\,}}(X=x) \right] \, \alpha ^T, \end{aligned}$$

substituting the definition of a covariance gives

$$\begin{aligned}&= \alpha \, {\mathrm {cov}}(X,X)\, \alpha ^T. \end{aligned}$$

Substituting this term above into Eve’s law gives us that,

$$\begin{aligned} {\mathbb {E}}[{\mathrm {cov}}(Y_x,Y_x)] = {\mathrm {cov}}(Y,Y) - \alpha \, {\mathrm {cov}}(X,X) \, \alpha ^{T} . \end{aligned}$$

\(\square \)
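The identity in Lemma 3.1-3 can also be checked numerically. Below is a minimal Monte Carlo sketch in Python, assuming a hypothetical scalar example in which Y conditioned on \(X=x\) is Poisson with linear mean \(a\,x+b,\) so that \({\mathbb {E}}[{\mathrm {cov}}(Y_x,Y_x)] = {\mathbb {E}}[a\,X+b]\) is available in closed form; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar example: X ~ Poisson(10), and Y | X=x ~ Poisson(a*x + b),
# so E[Y_x] = a*x + b is linear and cov(Y_x, Y_x) = a*x + b.
a, b, n = 0.5, 2.0, 200_000
X = rng.poisson(10.0, size=n)
Y = rng.poisson(a * X + b)

# Reconstruct the slope from classical moments: alpha = cov(X, Y) / V[X].
alpha = np.cov(X, Y)[0, 1] / np.var(X)

# Lemma 3.1-3: E[cov(Y_x, Y_x)] = cov(Y, Y) - alpha cov(X, X) alpha^T.
lhs = np.mean(a * X + b)              # closed form for the Poisson conditional
rhs = np.var(Y) - alpha * np.var(X) * alpha
print(alpha, lhs, rhs)                # alpha ~ 0.5 and lhs ~ rhs (up to MC error)
```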

Parameters of the three models

See Tables 9, 10 and 11.

Table 9 Model 1 system parameters. \(T_{final}=5.0\)
Table 10 Model 2 system parameters. \(T_{final}=3.0\)
Table 11 Model 3 system parameters. \(T_{final}=0.6\)

Proof that the simple mRNA translation model has a linear conditional expectation structure

The idea and outline for this proof were given by one of the anonymous reviewers of this paper. The author is grateful to the reviewer and the peer-review process for this contribution.

We prove that the simple mRNA translation model has a linear conditional expectation structure by using the notion of generating functions. We begin by deriving the conditional expectation in terms of the generating function.

1.1 Conditional expectation in terms of the generating function

Let X and Y be two coupled random variables whose state spaces are the natural numbers including zero. The generating function of the joint distribution \({{\,\mathrm{\textit{p}}\,}}(X=\cdot ,Y=\cdot )\) is given by,

$$\begin{aligned} \phi (t,s) := \sum _{\tilde{x} \in \Omega _X ,y \in \Omega _Y} t^{\tilde{x}} \, s^y \, {{\,\mathrm{\textit{p}}\,}}(X=\tilde{x},Y=y), \text { for } t,s \in {\mathbb {C}}. \end{aligned}$$
(C.1)

It is well known that taking the nth derivative of \(\phi \) with respect to t or s and evaluating at \(t=s=1\) gives the nth factorial moment of the random variables X and Y, respectively, while evaluating the derivatives at zero recovers the joint probabilities. We aim to similarly formulate the conditional expectation in terms of derivatives of the generating function.

For \(x\in \Omega _X,\) we define

$$\begin{aligned} g_x(s):= & {} \frac{\partial ^x \phi (t,s)}{\partial t^x} \Big \vert _{t=0}, \nonumber \\= & {} x! \, \sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y). \end{aligned}$$
(C.2)

In words, the function \(g_x(s)\) is the xth derivative of \(\phi \) with respect to t, evaluated at \(t=0.\) We take the natural logarithm of \(g_x(s)\) to get,

$$\begin{aligned} \log (g_x(s)) = \sum _{n=1}^{x} \log n \, + \log \left( \sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) \right) . \end{aligned}$$

Taking the derivative of the expression above with respect to s gives us,

$$\begin{aligned} \frac{d \log (g_x(s)) }{ds} = \frac{ \sum _{y\in \Omega _Y} y \, s^{y-1} \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) }{\sum _{y\in \Omega _Y} s^y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y)}. \end{aligned}$$
(C.3)

Then evaluating the function at \(s=1\) gives us,

$$\begin{aligned} \frac{d \log (g_x(s)) }{ds} \Big \vert _{s=1}= & {} \frac{ \sum _{y\in \Omega _Y} y \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y) }{\sum _{y\in \Omega _Y} \, {{\,\mathrm{\textit{p}}\,}}(X=x,Y=y)}, \nonumber \\= & {} \sum _{y\in \Omega _Y} y {{\,\mathrm{\textit{p}}\,}}(Y=y\, | X=x), \nonumber \\:= & {} {\mathbb {E}}[Y_x]. \end{aligned}$$
(C.4)

We have thus expressed the conditional expectation as a function of the derivatives of the generating function. Naturally, if the generating function is known, one can evaluate the terms in (C.4) and determine the corresponding conditional expectation structure.
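As a sanity check of (C.4), the following sketch evaluates both sides on a small, randomly generated joint pmf (a purely hypothetical example): the left-hand side via a central finite difference of \(\log g_x(s)\) at \(s=1,\) and the right-hand side directly from the conditional distribution. The constant factor \(x!\) in (C.2) is omitted since it cancels in the logarithmic derivative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint pmf on a small truncated state space Omega_X x Omega_Y.
P = rng.random((6, 8))
P /= P.sum()                          # P[x, y] = p(X=x, Y=y)
ys = np.arange(P.shape[1])

def g(x, s):
    # g_x(s) up to the factor x!, which cancels in d/ds log g_x(s).
    return np.sum(s ** ys * P[x])

x, h = 3, 1e-6
# Left-hand side: finite difference of log g_x(s) at s = 1, cf. (C.3)-(C.4).
lhs = (np.log(g(x, 1 + h)) - np.log(g(x, 1 - h))) / (2 * h)
# Right-hand side: E[Y_x] = sum_y y p(Y=y | X=x).
rhs = np.sum(ys * P[x]) / P[x].sum()
print(lhs, rhs)                       # the two values agree
```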

1.2 Linear conditional expectation form of the simple mRNA translation model

We prove that the simple mRNA translation model has a linear conditional expectation form by substituting the generating function given by Bokes et al. (2012) into (C.4). We begin by establishing some notation in order to align with the work of Bokes et al.

Let M and N be the random variables corresponding to the mRNA and protein populations, respectively. Let the reaction channels be given as follows:

$$\begin{aligned} R_1:\, \emptyset \xrightarrow {k_1} M, \, R_2:\, M \xrightarrow {\gamma _1} \emptyset , \,R_3:\, M \xrightarrow {k_2} M + N , \, R_4:\, N \xrightarrow {\gamma _2} \emptyset . \end{aligned}$$

We are investigating the dynamics of the stationary distribution, hence we omit the time component. It was shown by Bokes et al. that the stationary moments of the simple mRNA translation model are as follows:

$$\begin{aligned} {\mathbb {E}}[M]= & {} \frac{k_1}{\gamma _1},\, {\mathbb {E}}[N] = \frac{k_1\,k_2}{\gamma _1\,\gamma _2} , \end{aligned}$$
(C.5)
$$\begin{aligned} {\mathbb {V}}[M]= & {} \frac{k_1}{\gamma _1} , \, {\mathbb {V}}[N] = \frac{k_1\,k_2}{\gamma _1\,\gamma _2}\,\left( 1 + \frac{k_2}{\gamma _1 + \gamma _2}\right) , {\mathrm {cov}}(M,N) = \frac{k_1\,k_2}{\gamma _1\,( \gamma _1+ \gamma _2)} .\qquad \end{aligned}$$
(C.6)

Then the generating function of the stationary distribution is given by,

$$\begin{aligned} \phi (t,s) = e^{a(s) + (t-1)\,b(s)}, \end{aligned}$$
(C.7)

where

$$\begin{aligned} a(s) := \alpha \,\beta \,\int _0^s K(1, 1+ \lambda ,\beta (r-1))dr \text { and } b(s) := \alpha \,K(1,1+\lambda , \beta (s-1)), \end{aligned}$$

with \(K(\cdot ,\cdot ,\cdot )\) being Kummer's function and

$$\begin{aligned} \lambda =\frac{\gamma _1}{\gamma _2}, \, \alpha = \frac{k_1}{\gamma _1}, \, \beta = \frac{k_2}{\gamma _2}. \end{aligned}$$
(C.8)

To find the conditional expectation of the simple mRNA translation model, we substitute its generating function (C.7) into (C.2) and reduce.

$$\begin{aligned} g_m(s):= & {} \frac{\partial ^m \phi (t,s)}{\partial t^m} \Big \vert _{t=0}, \\= & {} e^{a(s) + (t-1)\,b(s)}\,b(s)^{m} \Big \vert _{t=0}, \\= & {} e^{a(s) - b(s)}\,b(s)^{m}. \end{aligned}$$

Taking the natural log gives us,

$$\begin{aligned} \log g_m(s) = a(s) - b(s) + m\,\log (b(s)). \end{aligned}$$

Then taking the derivative with respect to s gives us,

$$\begin{aligned} \frac{d \log g_m(s)}{ds} = \frac{da(s)}{ds} - \frac{db(s)}{ds} + \frac{m}{b(s)}\, \frac{db(s)}{ds}. \end{aligned}$$
(C.9)

By the fundamental theorem of calculus we have that

$$\begin{aligned} \frac{da(s)}{ds} = \alpha \,\beta \,K(1,1+\lambda ,\beta (s-1)), \end{aligned}$$

and by the derivative property of Kummer's function, \(\frac{d}{dc} K(a,b,f(c)) = \frac{a\,f'(c)}{b} K(a+1,b+1,f(c)),\) we have that

$$\begin{aligned} \frac{db(s)}{ds} = \frac{\alpha \,\beta }{1+\lambda }\,K(2,2+\lambda ,\beta (s-1)). \end{aligned}$$

Substituting these terms into (C.9), evaluating at \(s=1,\) and applying the property that \(K(\cdot ,\cdot ,0) = 1\) (so that \(b(1) = \alpha \) and \(\frac{da(s)}{ds}\big \vert _{s=1} = \alpha \,\beta \)) gives us,

$$\begin{aligned} \frac{d \log g_m(s)}{ds} \big \vert _{s=1} = \alpha \,\beta - \frac{\alpha \,\beta }{1+\lambda } + m \,\frac{1}{\alpha }\, \frac{\alpha \,\beta }{1+\lambda }. \end{aligned}$$

By the definition given in (C.4), we have that

$$\begin{aligned} {\mathbb {E}}[N_m] = \alpha \,\beta - \frac{\alpha \,\beta }{1+\lambda } + m \,\frac{1}{\alpha }\, \frac{\alpha \,\beta }{1+\lambda }. \end{aligned}$$

After substituting in the terms from (C.8), the conditional expectation in terms of the reaction rates is given by,

$$\begin{aligned} {\mathbb {E}}[N_m] = \frac{k_1\, k_2}{\gamma _1\, \gamma _2} - \frac{k_1\, k_2}{\gamma _1(\gamma _1+\gamma _2)} + m\,\frac{k_2}{\gamma _1+\gamma _2}. \end{aligned}$$
(C.10)

Hence, the conditional expectation of the simple mRNA translation model has a linear form. We now cross-validate the coefficients by linking the terms above to the raw moments using Lemma 3.1.

1.3 Cross-validation

Using (3.2), we know that the linear conditional expectation of the protein conditioned on the mRNA should have the form:

$$\begin{aligned} {\mathbb {E}}[N_m]&= \frac{{\mathrm {cov}}(M,N)}{{\mathbb {V}}[M]}\,\left( m - {\mathbb {E}}[M]\right) + {\mathbb {E}}[N]. \end{aligned}$$

Substituting in (C.5) and (C.6) for the moments gives us,

$$\begin{aligned} = \frac{k_2}{\gamma _1 + \gamma _2}\,\left( m - \frac{k_1}{\gamma _1} \right) +\frac{k_1\,k_2}{\gamma _1\,\gamma _2}, \end{aligned}$$

expanding the terms gives us,

$$\begin{aligned} = m\,\frac{k_2}{\gamma _1+\gamma _2} - \frac{k_1\, k_2}{\gamma _1(\gamma _1+\gamma _2)} + \frac{k_1\, k_2}{\gamma _1\, \gamma _2}. \end{aligned}$$
(C.11)

The terms in (C.10) and (C.11) match.
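As a further, empirical check, one can simulate the four reaction channels \(R_1\)–\(R_4\) with Gillespie's stochastic simulation algorithm and compare the sampled conditional expectation against (C.10). The sketch below uses illustrative rate constants and a fixed horizon chosen to be well past the relaxation times \(1/\gamma _1\) and \(1/\gamma _2\); it is not the OFSP/PyME implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative rate constants (assumptions, not values from the paper).
k1, g1, k2, g2 = 4.0, 1.0, 3.0, 2.0
T = 10.0  # horizon well past the relaxation times 1/g1 and 1/g2

def ssa_endpoint():
    """One Gillespie trajectory of R1-R4; returns (M, N) at time T."""
    m, n, t = 0, 0, 0.0
    while True:
        rates = np.array([k1, g1 * m, k2 * m, g2 * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > T:
            return m, n
        r = rng.choice(4, p=rates / total)
        if r == 0:
            m += 1          # R1: mRNA produced
        elif r == 1:
            m -= 1          # R2: mRNA degraded
        elif r == 2:
            n += 1          # R3: protein translated
        else:
            n -= 1          # R4: protein degraded

samples = np.array([ssa_endpoint() for _ in range(10_000)])
M, N = samples[:, 0], samples[:, 1]

# Empirical E[N | M=m] versus the linear form (C.10).
for m in range(2, 8):
    empirical = N[M == m].mean()
    linear = k1*k2/(g1*g2) - k1*k2/(g1*(g1 + g2)) + m * k2/(g1 + g2)
    print(m, round(empirical, 2), round(linear, 2))
```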

Model 3: Conditional expectation through time

In this section we evaluate Model 3 at different time points to observe whether the conditional expectation's quadratic structure is present through time. Since no analytical solution for the model is known to date, we use an OFSP approximation as the reference solution and examine how close this approximation's conditional expectation is to the conditional expectation ansatz. The OFSP approximation was set to have a global \(\ell _1\) error of \(10^{-7}.\)

In Fig. 10a–c, the joint distribution is rendered as a contour plot, evaluated at time points \(T=0.15,\ 0.3,\text { and } 1.2.\) Below the joint distributions, in Fig. 10d–f, the corresponding conditional expectation and the quadratic ACE ansatz are given. We see that the conditional expectation and the ansatz are fairly similar. There are some mismatches at the boundary, but this is to be expected since the OFSP produces artefacts at the boundary due to its truncation criteria.

To further investigate the resolution at which the conditional expectations and the ACE ansatz differ, we study the differences between them through time using three different metrics: the \(\ell _{\infty }\) norm, to study the maximum error at a particular time point; the \(\ell _{2}\) norm, to study the difference over the entire state space; and lastly, the relative error in \(\ell _{2},\) to see how the error changes with respect to the change in the conditional expectation. In Fig. 10g, we see that the \(\ell _{\infty }\) norm is of the order \(10^{-2}\) over the interval of interest and that the error increases with time. In Fig. 10h, we notice that the \(\ell _2\) norm follows a similar trend to the \(\ell _{\infty }\) norm. Interestingly, however, the total error over the state space in the \(\ell _2\) norm is only twice that of the \(\ell _{\infty }\) norm, implying that only a few states contribute most of the error. Lastly, in Fig. 10i, we study the relative error over time. This error falls to roughly \(10^{-4},\) implying that the difference between the ACE ansatz and the conditional expectation is roughly ten thousand times smaller than the conditional expectation itself. This suggests that the model likely does exhibit a quadratic conditional expectation structure.
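For reference, the three metrics can be computed as in the following sketch, assuming the OFSP conditional expectation and the ACE ansatz are given as numpy arrays over a common truncated state space (names are illustrative):

```python
import numpy as np

def ace_errors(ce_ofsp, ce_ansatz):
    """Error metrics between a reference conditional expectation (e.g. from
    OFSP) and the ACE ansatz, evaluated on a common truncated state space."""
    diff = ce_ofsp - ce_ansatz
    err_inf = np.max(np.abs(diff))              # worst-case state
    err_2 = np.linalg.norm(diff)                # error over the whole space
    rel_2 = err_2 / np.linalg.norm(ce_ofsp)     # error relative to the signal
    return err_inf, err_2, rel_2
```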

Fig. 10

Model 3 evaluated at time points \(T=0.15,\ 0.3,\text { and } 1.2\) (\(T=0.6\) is given in Fig. 1c). a–c Contour plots describing the joint probability distributions generated using the OFSP method with a global error of \(10^{-7}.\) The distributions corresponding to time points \(T=0.15,\ 0.3,\text { and } 1.2\) are given from left to right, respectively. d–f The conditional expectation of the joint probability distribution is marked with red crosses. The ACE polynomial fit of order two is drawn as a solid blue line. The conditional expectations evaluated at time points \(T=0.15,\ 0.3,\text { and } 1.2\) are given from left to right, respectively. g, h The \(\ell _{\infty }\) and \(\ell _{2}\) norms of the difference between the OFSP conditional expectation and the ACE quadratic ansatz through time, respectively. i Relative error with respect to the \(\ell _2\) norm, showing how the error evolves relative to the conditional expectation (color figure online)

Simple gene switch derivations

1.1 Chemical master equation

$$\begin{aligned}&\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a;t) \\&\qquad +\ \gamma _1\, (m+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m+1,A=a;t) \\&\qquad +\ \kappa _2 \, m{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + (\gamma _1+ \kappa _2)\, m +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t).\\&\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t) \\&\qquad +\ \kappa _1 {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m-1,A=a;t) \\&\qquad +\ \gamma _1 \, (m+1) {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m+1,A=a;t)\\&\qquad +\ \kappa _2 \, m {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a-1;t)\\&\qquad +\ \gamma _2 \, (a+1){{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1) {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a+1;t) \\&\qquad -\ \left\{ \tau _{\mathbf{off }} + \kappa _1 + (\gamma _1 + \kappa _2)\, m + \gamma _2\, a \right\} {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },M=m,A=a;t) \end{aligned}$$

1.2 Marginal distributions

We follow the same steps as in the generalised form (see Sect. 2.2). Deriving the CME for the marginal distribution of the gene and the proteins involves the following two steps:

  • substituting \( {{\,\mathrm{\textit{p}}\,}}( G=\cdot ,M=\cdot ,A=\cdot ;t) = {{\,\mathrm{\textit{p}}\,}}(M=\cdot \,|\,G=\cdot , A=\cdot ;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\cdot ,A=\cdot ;t),\)

  • summing over all \(m \in \Omega _M\) and then collating all conditional probability terms.

1.2.1 Step 1

$$\begin{aligned}&\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \ \\&\qquad +\ \gamma _1\, (m+1){{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{off }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _2 \, m{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a-1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + (\gamma _1+ \kappa _2)\, m +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t)\\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t).\\&\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _1 {{\,\mathrm{\textit{p}}\,}}(M=m-1\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \\&\qquad +\ \gamma _1 \, (m+1) {{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{on }, A=a;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t)\\&\qquad +\ \kappa _2 \, m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a-1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a-1;t) \\&\qquad +\ \gamma _2 \, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1){{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t)\, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad -\ \left\{ \tau _{\mathbf{off }} + \kappa _1 + (\gamma _1 + \kappa _2)\, m + \gamma _2\, a \right\} {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\\&\qquad \times \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \end{aligned}$$

1.2.2 Step 2

$$\begin{aligned}&\sum _m\frac{d{{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },M=m,A=a;t)}{dt}\\&\quad = \tau _{\mathbf{off }} \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \ \\&\qquad +\ \gamma _1\, \left( \sum _m (m+1){{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{off }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _2 \, \left( \sum _m m{{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a-1;t)\ \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a-1;t) \\&\qquad +\ \gamma _2\, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad - \left[ \tau _{\mathbf{on }} + \left( \sum _m (\gamma _1+ \kappa _2)\, m\, {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t) \right) +( \gamma _2 + \hat{\tau }_{\mathbf{on }})\, a \right] \\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t).\\&\sum _m\frac{d{{\,\mathrm{\textit{p}}\,}}(G=\mathbf{on },M=m,A=a;t)}{dt}\\&\quad = \tau _\mathbf{on }\left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a;t) \\&\qquad +\ \kappa _1 \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m-1\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \\&\qquad +\ \gamma _1 \, \left( \sum _m (m+1) {{\,\mathrm{\textit{p}}\,}}(M=m+1\,|\,G=\mathbf{on }, A=a;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t)\\&\qquad +\ \kappa _2 \, \left( \sum _m m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a-1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a-1;t)\\&\qquad +\ \gamma _2 \, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a+1;t) \\&\qquad +\ \hat{\tau }_{\mathbf{on }} \, (a+1)\, \left( \sum _m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{off }, A=a+1;t) \right) \, {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{off },A=a+1;t) \\&\qquad -\ \left[ \tau _{\mathbf{off }} + \kappa _1 + \left( \sum _m (\gamma _1 + \kappa _2)\, m {{\,\mathrm{\textit{p}}\,}}(M=m\,|\,G=\mathbf{on }, A=a;t)\ \right) + \gamma _2\, a \right] \\&\qquad \times {{\,\mathrm{\textit{p}}\,}}( G=\mathbf{on },A=a;t) \end{aligned}$$

Formal ACE-Ansatz approximation derivation

Before we begin the derivation, it is important to discuss Assumption 2.1-3. We state that the joint distribution needs to have non-zero probability over all of the state space through all time. This condition is easily violated by starting the Kurtz process with an initial probability distribution that is non-zero over only a subset of the state space (e.g. a single state). However, the CME generator (2.4) has the feature that, regardless of the initial condition, all states have non-zero probability after an infinitesimal time. Hence, numerically, if the process does start at a single state, we can evolve it forward by a small time step using OFSP, and then use this time point as the initial condition for the dimension reduction methods. In the case of the simple gene switch example in Sect. 5.1.2, we used \(t=1\) as the starting point for all dimension reduction methods.

We use the following notational convention: the approximation of the probability measure \(p(G=g,A=a;t)\) is denoted by the function \(w(g,a,t);\) furthermore, the approximation of the expectation operator \({\mathbb {E}}[\bullet (t)]\) is denoted by the function \(\eta _{\bullet }(t).\) The formal derivation of equations (5.1)–(5.8) is given by Eqs. (F.1)–(F.12).

$$\begin{aligned} \frac{d\, w(\mathbf{off },a,t)}{dt} =&\, \tau _\mathbf{off }\, w(\mathbf{on },a,t) \nonumber \\&\quad +\, k_2\, \eta _{M|}(\mathbf{off },a-1,t)\, w(\mathbf{off },a-1,t) \nonumber \\&\quad +\, \gamma _2\,(a+1)\,\,w(\mathbf{off },a+1,t) \nonumber \\&\quad -\, \left( \tau _\mathbf{on }+ k_2\,\eta _{M|}(\mathbf{off },a,t) + (\gamma _2 + \hat{\tau }_\mathbf{on })\,a \right) \, w(\mathbf{off },a,t), \end{aligned}$$
(F.1)
$$\begin{aligned} \frac{d\, w(\mathbf{on }, a , t)}{dt} =&\, \tau _\mathbf{on }\, w(\mathbf{off },a,t) \nonumber \\&\quad +\, k_2\, \eta _{M|}(\mathbf{on },a-1,t)\,w(\mathbf{on },a-1,t) \nonumber \\&\quad +\, \gamma _2\,(a+1)\,w(\mathbf{on },a+1,t) \nonumber \\&\quad +\, \hat{\tau }_\mathbf{on }\,(a+1)\,w(\mathbf{off },a+1,t) \nonumber \\&\quad -\, \left( \tau _\mathbf{off }+ k_2\, \eta _{M|}(\mathbf{on },a,t) +\ \gamma _2\,a \right) \, w(\mathbf{on },a,t). \end{aligned}$$
(F.2)
$$\begin{aligned} \eta _{M|}(g,a,t) =&\, \alpha \, \left( \left[ \begin{array}{c} g \\ a \end{array}\right] - \left[ \begin{array}{c} \eta _G(t) \\ \eta _A(t) \end{array}\right] \right) +\eta _M(t) \end{aligned}$$
(F.3)
$$\begin{aligned} \frac{d\, \eta _M(t)}{dt} =&\, k_1\, \eta _G(t) - \gamma _1\, \eta _M(t). \end{aligned}$$
(F.4)
$$\begin{aligned} \frac{d\,\eta _{G\,M}(t)}{dt} =&\tau _\mathbf{on }\,\left( -\eta _{G\,M}(t) - \eta _M(t) \right) - \tau _\mathbf{off }\, \eta _{G\,M}(t) + k_1\, \eta _G(t) \nonumber \\&\quad -\,\gamma _1\, \eta _{G\,M}(t) + \hat{\tau }_\mathbf{on }\, \left( \eta _{M\, A}(t) - \eta _{G\,M\,A}(t)\right) . \end{aligned}$$
(F.5)
$$\begin{aligned} \frac{d\,\eta _{M\,A}(t)}{dt} =&\, k_1\, \eta _{G\,A}(t) - (\gamma _1 + \gamma _2)\,\eta _{M\,A}(t) + k_2\, \eta _{M^2}(t) \nonumber \\&\quad -\,\hat{\tau }_\mathbf{on }\, \left( \eta _{M\, A}(t) - \eta _{G\,M\,A}(t) \right) \end{aligned}$$
(F.6)
$$\begin{aligned} \frac{d\,\eta _{M^2}(t)}{dt} =&\, k_1\,\left( 2\,\eta _{G\,M}(t) + \eta _G(t)\right) + \gamma _1\,\left( -2\,\eta _{M^2}(t) + \eta _M(t) \right) . \end{aligned}$$
(F.7)
$$\begin{aligned} \eta _{G\,M\,A}(t) =&\, \sum _{a\in {\mathbb {Z}}_+} \eta _{M|}(\mathbf{on },a,t) \,a\, w(\mathbf{on },a,t). \end{aligned}$$
(F.8)
$$\begin{aligned} \eta _{A}(t) =&\, \sum _{a\in {\mathbb {Z}}_+} \,a\, \left[ w(\mathbf{on },a,t) + w(\mathbf{off },a,t)\right] . \end{aligned}$$
(F.9)
$$\begin{aligned} \eta _{A^2}(t) =&\,\sum _{a\in {\mathbb {Z}}_+} \,a^2\, \left[ w(\mathbf{on },a,t) + w(\mathbf{off },a,t)\right] . \end{aligned}$$
(F.10)
$$\begin{aligned} \eta _{G^2}(t) =&\, \eta _G(t). \end{aligned}$$
(F.11)
$$\begin{aligned} \alpha :=&\, \left[ \begin{array}{cc} \eta _{G\,M}(t) - \eta _G(t)\,\eta _M(t)&\eta _{M\,A}(t) - \eta _M(t)\,\eta _A(t) \end{array}\right] \nonumber \\&\qquad \left( \begin{array}{cc} \eta _{G^2}(t) - \eta _G(t)^2 &{} \eta _{G\,A}(t) - \eta _G(t)\,\eta _A(t) \\ \eta _{G\,A}(t) - \eta _G(t)\,\eta _A(t) &{} \eta _{A^2}(t) - \eta _A(t)^2 \end{array} \right) ^{-1}. \end{aligned}$$
(F.12)
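A minimal sketch of how (F.12) and (F.3) are evaluated numerically from the current moment approximations; the function and variable names are illustrative, and the moment values would be supplied by the coupled system (F.1)–(F.11):

```python
import numpy as np

def ace_gradient(e_g, e_a, e_m, e_gg, e_ga, e_aa, e_gm, e_ma):
    """Gradient alpha of the linear ACE ansatz, cf. (F.12): the row vector of
    cross-covariances of M with (G, A) times the inverse covariance of (G, A)."""
    cross = np.array([e_gm - e_g * e_m, e_ma - e_m * e_a])
    cov = np.array([[e_gg - e_g ** 2, e_ga - e_g * e_a],
                    [e_ga - e_g * e_a, e_aa - e_a ** 2]])
    return cross @ np.linalg.inv(cov)

def eta_M_cond(g, a, alpha, e_g, e_a, e_m):
    """Linear ansatz (F.3): conditional expectation of M given (G, A) = (g, a)."""
    return alpha @ (np.array([g, a]) - np.array([e_g, e_a])) + e_m
```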

Two gene toggle switch derivations

We use the following notational convention: the approximation of the probability measure \(p(G_0=g,P=p;t)\) is denoted by the function \(w(g,p,t);\) furthermore, the approximation of the expectation operator \({\mathbb {E}}[\bullet (t)]\) is denoted by the function \(\eta _{\bullet }(t).\) As in the simple gene switch case, the approximation is started at \(t=0.35\) to satisfy Assumption 2.1-3. We introduce the equations of motion in the following order: marginal distributions, moments, higher order moment closures, and the linear ACE-Ansatz approximations.

1.1 Marginal distribution

$$\begin{aligned} \frac{dw( G_0^\mathbf{on },p,t)}{dt}= & {} \sigma _1\,\eta _{M|}(G_0^\mathbf{off }, p,t)\,w(G_0^\mathbf{off },p,t)\nonumber \\&+\ \rho _2\, w( G_0^\mathbf{on },p-1,t) \nonumber \\&+\ k\,(p+1)\,w( G_0^\mathbf{on },p+1,t) \nonumber \\&+\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{on },p+1,t ))\,(p+1)\,w(G_0^\mathbf{on },p+1,t) \nonumber \\&- \ \sigma _2\, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ \rho _2\, \, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ k\,p\, w(G_0^\mathbf{on },p,t) \nonumber \\&-\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{on },p,t ))\,p\,w(G_0^\mathbf{on },p,t) \end{aligned}$$
(G.1)
$$\begin{aligned} \frac{dw( G_0^\mathbf{off },p,t)}{dt}= & {} \ \sigma _2\, w(G_0^\mathbf{on },p,t) \nonumber \\&+\ \rho _1\, w( G_0^\mathbf{off },p-1,t) \nonumber \\&+\ k\,(p+1)\,w( G_0^\mathbf{off },p+1,t) \nonumber \\&+\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{off },p+1,t ))\,(p+1)\,w(G_0^\mathbf{off },p+1,t) \nonumber \\&- \ \sigma _1\,\eta _{M|}(G_0^\mathbf{off }, p,t)\,w(G_0^\mathbf{off },p,t) \nonumber \\&-\ \rho _1\, \, w(G_0^\mathbf{off },p,t) \nonumber \\&-\ k\,p\, w(G_0^\mathbf{off },p,t) \nonumber \\&-\ \sigma _3\,(1.0-\eta _{G_1|}(G_0^\mathbf{off },p,t ))\,p\,w(G_0^\mathbf{off },p,t) \end{aligned}$$
(G.2)

1.2 Moments

We derive the equations of motion for the following eight moments: \({\mathbb {E}}[G_1(t)],\) \({\mathbb {E}}[M(t)],\) \({\mathbb {E}}[G_0\,G_1(t)],\) \({\mathbb {E}}[G_0\,M(t)],\) \({\mathbb {E}}[G_1\,P(t)],\) \({\mathbb {E}}[G_1\,M(t)],\) \({\mathbb {E}}[P\,M(t)],\) and \({\mathbb {E}}[M^2(t)].\)

Let \(\mu (t) := [ \eta _{G_1}(t), \eta _{M}(t), \eta _{G_0\,G_1}(t),\eta _{G_0\,M}(t),\eta _{G_1\,P}(t),\eta _{G_1\, M}(t),\eta _{P\, M}(t),\eta _{M^2}(t) ],\) then the equation of motion for the approximation of the moments has the form:

$$\begin{aligned} \frac{d\mu (t)}{dt} = A\, \mu (t) + A^*, \end{aligned}$$

where

$$\begin{aligned} A := \left[ \begin{array}{cccccccc} -\sigma _4 &{} 0 &{} 0&{} 0&{} -\sigma _3&{} 0&{} 0&{} 0 \\ -\rho _3 + \rho _4 &{} -k - \sigma _1 &{} 0&{} \sigma _1&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} -\sigma _2 - \sigma _4&{} 0&{} 0&{} \sigma _1&{} 0&{} 0\\ 0&{} -\sigma _1&{} -\rho _3 + \rho _4&{} -k + \sigma _1 - \sigma _2&{} 0&{} 0&{} 0&{} \sigma _1 \\ \rho _1&{} 0&{} -\rho _1 + \rho _2&{} 0&{} -k + \sigma _3 - \sigma _4&{} 0&{} 0&{} 0 \\ \rho _4&{} 0&{} 0&{} 0&{} 0&{} -k - \sigma _1 - \sigma _4&{} \sigma _3&{} 0 \\ 0&{} \rho _1&{} 0&{} -\rho _1 + \rho _2&{} -\rho _3 + \rho _4&{} 0&{} -2\,k - \sigma _1 - \sigma _3&{} 0 \\ -\rho _3 + \rho _4&{} k + 2\,\rho _3 + \sigma _1&{} 0&{} -\sigma _1&{} 0&{} -2\,\rho _3 + 2\,\rho _4&{} 0&{} -2\,k - 2\, \sigma _1 \end{array} \right] \end{aligned}$$

and

$$\begin{aligned} A^* := \left[ \begin{array}{c} \sigma _3\,\eta _P(t) \\ \rho _3\\ -\sigma _1\,\eta _{G_0\,G_1\,M}(t) + \sigma _3\,\eta _{G_0\,P}(t) -\sigma _3\,\eta _{G_0\,G_1\,P}(t) \\ -\sigma _1\,\eta _{G_0\,M^2}(t) + \rho _3\,\eta _{G_0}(t) \\ \sigma _3\,\eta _{P^2}(t) -\sigma _3\,\eta _P(t) -\sigma _3\,\eta _{G_1\, P^2}(t) \\ \sigma _1\,\eta _{G_0\,G_1\,M}(t) -\sigma _3\,\eta _{G_1\,P\,M}(t) \\ 2\,\sigma _1\,\eta _{G_0\,M^2}(t) + \rho _3 \end{array} \right] . \end{aligned}$$
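Since A is constant and only \(A^*\) changes through the closure terms, the moment system can be advanced with any standard time stepper. The following forward-Euler sketch is illustrative; `closure_terms` is a hypothetical callback that assembles \(A^*\) at each step from the current marginal distribution and the closures below:

```python
import numpy as np

def advance_moments(mu, A, closure_terms, dt, n_steps):
    """Forward-Euler sketch for d mu/dt = A mu + A*. The inhomogeneous term
    A* is re-assembled each step by the hypothetical callback `closure_terms`,
    which in the full scheme draws on the marginal w and the ACE ansatz."""
    for _ in range(n_steps):
        mu = mu + dt * (A @ mu + closure_terms(mu))
    return mu
```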

1.3 Higher order moment closures

Let \(w(G_0^\mathbf{on },t) := \sum _p w(G_0^\mathbf{on },p,t)\), and \(w(p,t) := w(G_0^\mathbf{on },p,t) + w(G_0^\mathbf{off },p,t)\). We apply the following moment closures (a numerical sketch of (G.4) is given after the equations):

$$\begin{aligned} \eta _{G_0\,M^2}(t)= & {} (\eta _{M|}(G_0^\mathbf{on },t)^2 +\eta _{M|}(G_0^\mathbf{on },t))\,w(G_0^\mathbf{on },t), \end{aligned}$$
(G.3)
$$\begin{aligned} \eta _{G_1\,P^2}(t)= & {} \sum _p \eta _{G_1|}(p,t)\, p^2 \, w(p,t), \end{aligned}$$
(G.4)
$$\begin{aligned} \eta _{G_0\,G_1\,M}(t)= & {} \eta _{G_1|}(G_0^\mathbf{on },t)\,\eta _{M|}(G_0^\mathbf{on },t)\,w(G_0^\mathbf{on },t), \end{aligned}$$
(G.5)
$$\begin{aligned} \eta _{G_1\,P\,M}= & {} \sum _p \eta _{G_1|}(p,t) \, \eta _{M|}(p,t) \, p \, w(p,t), \end{aligned}$$
(G.6)
$$\begin{aligned} \eta _{G_0\,G_1\,P}(t)= & {} \sum _p \eta _{G_1|}(G_0^\mathbf{on },p,t) \, p \, w(G_0^\mathbf{on },p,t), \end{aligned}$$
(G.7)
$$\begin{aligned} \eta _{G_0\,P\,M}(t)= & {} \sum _p \eta _{M|}(G_0^\mathbf{on },p,t) \, p \, w(G_0^\mathbf{on },p,t). \end{aligned}$$
(G.8)
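As an illustration, closure (G.4) amounts to a weighted sum over the truncated protein state space; a minimal sketch with illustrative argument names:

```python
import numpy as np

def closure_G1_P2(eta_G1_cond, w_p, p_states):
    """Moment closure (G.4): eta_{G1 P^2}(t) = sum_p eta_{G1|}(p, t) p^2 w(p, t).
    `eta_G1_cond` is the linear ansatz (G.15) evaluated on `p_states`; all three
    arguments are arrays over the truncated protein state space."""
    return np.sum(eta_G1_cond * p_states ** 2 * w_p)
```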

Similarly, we can use the marginal distribution, \(w(G_0^\mathbf{on },p,t),\) to generate the corresponding moments:

$$\begin{aligned} \eta _P(t)= & {} \sum _p p\, w(p,t), \end{aligned}$$
(G.9)
$$\begin{aligned} \eta _{P^2}(t)= & {} \sum _p p^2\, w(p,t), \end{aligned}$$
(G.10)
$$\begin{aligned} \eta _{G_0}(t)= & {} \sum _p w(G_0^\mathbf{on },p,t) \end{aligned}$$
(G.11)
$$\begin{aligned} \eta _{G_0\,P}(t)= & {} \sum _p p\, w(G_0^\mathbf{on },p,t). \end{aligned}$$
(G.12)

1.4 Linear ACE-Ansatz approximations

We approximate the conditional expectations with the linear ACE ansatz:

$$\begin{aligned} \eta _{M|}(g,p,t)&= \alpha _{M|G_0,P}\, \left( \left[ \begin{array}{c} g \\ p \end{array}\right] - \left[ \begin{array}{c} \eta _{G_0}(t) \\ \eta _{P}(t) \end{array}\right] \right) +\eta _M(t) , \end{aligned}$$
(G.13)
$$\begin{aligned} \eta _{G1|}(g,p,t)&= \alpha _{G_1|G_0,P}\, \left( \left[ \begin{array}{c} g \\ p \end{array}\right] - \left[ \begin{array}{c} \eta _{G_0}(t) \\ \eta _{P}(t) \end{array}\right] \right) +\eta _{G_1}(t) , \end{aligned}$$
(G.14)
$$\begin{aligned} \eta _{G_1|}(p,t)&= \alpha _{G_1|P} (p - \eta _P(t)) + \eta _{G_1}(t), \end{aligned}$$
(G.15)
$$\begin{aligned} \eta _{G_1|}(g,t)&= \alpha _{G_1|G_0} (g - \eta _{G_0}(t)) + \eta _{G_1}(t), \end{aligned}$$
(G.16)
$$\begin{aligned} \eta _{M|}(p,t)&= \alpha _{M|P} (p - \eta _P(t)) + \eta _{M}(t), \end{aligned}$$
(G.17)
$$\begin{aligned} \eta _{M|}(g,t)&= \alpha _{M|G_0} (g - \eta _{G_0}(t)) + \eta _{M}(t). \end{aligned}$$
(G.18)

The gradients are given by:

$$\begin{aligned} \alpha _{M|G_0,P}:= & {} \left[ \begin{array}{cc} \eta _{G_0\,M}(t) - \eta _{G_0}(t)\,\eta _M(t)&\eta _{M\,P}(t) - \eta _M(t)\,\eta _P(t) \end{array}\right] \\&\quad \left( \begin{array}{cc} \eta _{G_0}(t) - \eta _{G_0}(t)^2 &{} \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) \\ \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) &{} \eta _{P^2}(t) - \eta _P(t)^2 \end{array} \right) ^{-1},\\ \alpha _{G_1|G_0,P}:= & {} \left[ \begin{array}{cc} \eta _{G_0\,G_1}(t) - \eta _{G_0}(t)\,\eta _{G_1}(t)&\eta _{G_1\,P}(t) - \eta _{G_1}(t)\,\eta _P(t) \end{array}\right] \\&\quad \left( \begin{array}{cc} \eta _{G_0}(t) - \eta _{G_0}(t)^2 &{} \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) \\ \eta _{G_0\,P}(t) - \eta _{G_0}(t)\,\eta _P(t) &{} \eta _{P^2}(t) - \eta _P(t)^2 \end{array} \right) ^{-1},\\ \alpha _{G_1|P}:= & {} \left( \frac{ \eta _{G_1\,P}(t) - \eta _{G_1}(t)\,\eta _P(t)}{ \eta _{P^2}(t) - \eta _P(t)^2} \right) ,\\ \alpha _{G_1|G_0}:= & {} \left( \frac{ \eta _{G_1\,G_0}(t) - \eta _{G_1}(t)\,\eta _{G_0}(t)}{ \eta _{G_0}(t) - \eta _{G_0}(t)^2} \right) ,\\ \alpha _{M|P}:= & {} \left( \frac{ \eta _{M\,P}(t) - \eta _{M}(t)\,\eta _P(t)}{ \eta _{P^2}(t) - \eta _P(t)^2} \right) ,\\ \alpha _{M|G_0}:= & {} \left( \frac{ \eta _{M\,G_0}(t) - \eta _{M}(t)\,\eta _{G_0}(t)}{ \eta _{G_0}(t) - \eta _{G_0}(t)^2} \right) . \end{aligned}$$

SIR system parameters

The initial population was set to \((S(0)=200,\ I(0)=4).\) The OFSP method was configured to have a global error of \(10^{-6},\) with compression performed every 10 steps, where each time step was of length 0.002. The distribution is the snapshot of the system at \(t=0.15.\) We also omit the recovered state since the total population is conserved, that is, \(S(t)+I(t)+R(t) = 204\) for all time (see Table 12).

Table 12 SIR system parameters

Cite this article

Sunkara, V. Algebraic expressions of conditional expectations in gene regulatory networks. J. Math. Biol. 79, 1779–1829 (2019). https://doi.org/10.1007/s00285-019-01410-y
