Piecewise Linear and Boolean Models of Chemical Reaction Networks

Bulletin of Mathematical Biology

Abstract

Models of biochemical networks are frequently complex and high-dimensional. Reduction methods that preserve important dynamical properties are therefore essential for their study. Interactions in biochemical networks are frequently modeled using Hill functions (\(x^n/(J^n+x^n)\)). Reduced ODEs and Boolean approximations of such model networks have been studied extensively when the exponent \(n\) is large. However, while the case of small constant \(J\) appears in practice, it is not well understood. We provide a mathematical analysis of this limit and show that a reduction to a set of piecewise linear ODEs and Boolean networks can be mathematically justified. The piecewise linear systems have closed-form solutions that closely track those of the fully nonlinear model. The simpler Boolean network can be used to study the qualitative behavior of the original system. We justify the reduction using geometric singular perturbation theory and compact convergence, and illustrate the results in network models of a toggle switch and an oscillator.

References

  • Abou-Jaoudé W, Ouattara D, Kaufman M (2009) From structure to dynamics: frequency tuning in the p53-mdm2 network: I. Logical approach. J Theor Biol 258(4):561–577

  • Abou-Jaoudé W, Ouattara D, Kaufman M (2010) From structure to dynamics: frequency tuning in the p53-mdm2 network: II. Differential and stochastic approaches. J Theor Biol 264(4):1177–1189

  • Aguda D (2006) Modeling the cell division cycle. Lecture notes in mathematics, vol 1872. Springer Berlin Heidelberg

  • Albert R, Othmer H (2003) The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster. J Theor Biol 223(1):1–18

  • Alon U (2006) An introduction to systems biology: design principles of biological circuits. Chapman and Hall/CRC, Boca Raton, FL

  • Aracena J, Goles E, Moreira A, Salinas L (2009) On the robustness of update schedules in Boolean networks. Biosystems 97(1):1–8

  • Casey R, de Jong H, Gouzé J (2006) Piecewise-linear models of genetic regulatory networks: equilibria and their stability. J Math Biol 52(1):27–56

  • Chaves M, Tournier L, Gouzé J (2010) Comparing Boolean and piecewise affine differential models for genetic networks. Acta Biotheor 58(2):217–232

  • Cheng X, Sun M, Socolar J (2013) Autonomous Boolean modelling of developmental gene regulatory networks. J R Soc Interface 10(78):20120574

  • Ciliberto A, Capuani F, Tyson J (2007) Modeling networks of coupled enzymatic reactions using the total quasi-steady state approximation. PLoS Comput Biol 3(3):e45

  • Davidich M, Bornholdt S (2008) The transition from differential equations to Boolean networks: a case study in simplifying a regulatory network model. J Theor Biol 255(3):269–277

  • Davidich M, Bornholdt S (2008) Boolean network model predicts cell cycle sequence of fission yeast. PLoS ONE 3(2):e1672

  • De Jong H (2002) Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol 9(1):67–103

  • De Jong H, Gouzé J, Hernandez C, Page M, Sari T, Geiselmann J (2004) Qualitative simulation of genetic regulatory networks using piecewise-linear models. Bull Math Biol 66(2):301–340

  • Edwards R, Siegelmann H, Aziza K, Glass L (2001) Symbolic dynamics and computation in model gene networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 11(1):160–169

  • Elowitz M, Leibler S (2000) A synthetic oscillatory network of transcriptional regulators. Nature 403(6767):335–338

  • Fenichel N (1979) Geometric singular perturbation theory for ordinary differential equations. J Differ Equ 31(1):53–98

  • Franke R, Theis F, Klamt S (2010) From binary to multivalued to continuous models: the lac operon as a case study. J Integr Bioinform 7(1):151

  • Gardner T, Cantor C, Collins J (2000) Construction of a genetic toggle switch in Escherichia coli. Nature 403(6767):339–342

  • Glass L, Kauffman S (1973) The logical analysis of continuous, non-linear biochemical control networks. J Theor Biol 39(1):103–129

  • Glass L (1975) Classification of biological networks by their qualitative dynamics. J Theor Biol 54(1):85–107

  • Glass L (1975) The logical analysis of continuous, non-linear biochemical control networks. J Chem Phys 63(1):1325–1335

  • Goldbeter A (1991) A minimal cascade model for the mitotic oscillator involving cyclin and cdc2 kinase. Proc Natl Acad Sci USA 88(20):9107–9111

  • Goldbeter A, Koshland D (1981) An amplified sensitivity arising from covalent modification in biological systems. Proc Natl Acad Sci USA 78(11):6840–6844

  • Gouzé J, Sari T (2002) A class of piecewise linear differential equations arising in biological models. Dyn Syst 17(4):299–316

  • Hek G (2010) Geometric singular perturbation theory in biological practice. J Math Biol 60(3):347–386

  • Ironi L, Panzeri L, Plahte E, Simoncini V (2011) Dynamics of actively regulated gene networks. Physica D 240:779–794

  • Ishii N, Suga Y, Hagiya A, Watanabe H, Mori H, Yoshino M, Tomita M (2007) Dynamic simulation of an in vitro multi-enzyme system. FEBS Lett 581(3):413–420

  • Kaper T (1998) An introduction to geometrical methods and dynamical systems for singular perturbation problems. In: Analyzing multiscale phenomena using singular perturbation methods: American Mathematical Society Short Course, Baltimore, Maryland (Proceedings of Symposium Ap.) 5–6 Jan 1998, pp 85–132

  • Kauffman S (1969) Metabolic stability and epigenesis in randomly constructed genetic nets. J Theor Biol 22(3):437–467

  • Kumar A, Josić K (2011) Reduced models of networks of coupled enzymatic reactions. J Theor Biol 278(1):87–106

  • Li F, Long T, Lu Y, Ouyang Q, Tang C (2004) The yeast cell-cycle network is robustly designed. Proc Natl Acad Sci USA 101(14):4781–4786

  • Ma W, Trusina A, El-Samad H, Lim W, Tang C (2009) Defining network topologies that can achieve biochemical adaptation. Cell 138(4):760–773

  • Mendoza L, Xenarios I (2006) A method for the generation of standardized qualitative dynamical systems of regulatory networks. Theor Biol Med Model 3(13):1–18

  • Michaelis L, Menten M (1913) Die Kinetik der Invertinwirkung. Biochemische Zeitschrift 49:333–369

  • Mochizuki A (2005) An analytical study of the number of steady states in gene regulatory networks. J Theor Biol 236:291–310

  • Novak B, Csikasz-Nagy A, Gyorffy B, Chen K, Tyson JJ (1998) Mathematical model of the fission yeast cell cycle with checkpoint controls at the G1/S, G2/M and metaphase/anaphase transitions. Biophys Chem 72:185–200

  • Novak B, Tyson JJ (1993) Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos. J Cell Sci 106(4):1153–1168

  • Novak B, Pataki Z, Ciliberto A, Tyson J (2001) Mathematical model of the cell division cycle of fission yeast. Chaos (Woodbury, NY) 11(1):277–286

  • Polynikis A, Hogan SJ, di Bernardo M (2009) Comparing different ODE modelling approaches for gene regulatory networks. J Theor Biol 261(4):511–530

  • Snoussi E (1989) Qualitative dynamics of piecewise differential equations: a discrete mapping approach. Dyn Stab Syst 4(3):189–207

  • Sun M, Cheng X, Socolar J (2013) Causal structure of oscillations in gene regulatory networks: Boolean analysis of ordinary differential equation attractors. Chaos 23(2):025104

  • Thomas R, D’Ari R (1990) Biological feedback. CRC, Boca Raton, FL

  • Tyson J, Chen K, Novak B (2003) Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Curr Opin Cell Biol 15(2):221–231

  • van Zwieten D, Rooda J, Armbruster D, Nagy J (2011) Simulating feedback and reversibility in substrate-enzyme reactions. Eur Phys J B 84:673–684

  • Veliz-Cuba A, Arthur J, Hochstetler L, Klomps V, Korpi E (2012) On the relationship of steady states of continuous and discrete models arising from biology. Bull Math Biol 74(12):2779–2792

  • Verhulst F (2005) Methods and applications of singular perturbations. Springer Berlin Heidelberg

  • Wittmann D, Krumsiek J, Saez-Rodriguez J, Lauffenburger D, Klamt S, Theis F (2009) Transforming Boolean models to continuous models: methodology and application to T-cell receptor signaling. BMC Syst Biol 3(1):98

Acknowledgments

We thank Matthew Bennett for help in preparing the manuscript. This work was funded by the NIH, through the joint NSF/NIGMS Mathematical Biology Program Grant No. R01GM104974.

Author information

Correspondence to Alan Veliz-Cuba.

Appendix

1.1 Motivation of Eq. (1)

Here, we present a heuristic justification of the use of Eq. (1). The ideas follow those presented in Goldbeter and Koshland (1981), Goldbeter (1991), Novak et al. (1998), Novak et al. (2001), Tyson et al. (2003), Aguda (2006). As mentioned in the Introduction, this is only heuristic in general.

Consider a protein that can exist in an unmodified form, \(W,\) and a modified form, \(W^*,\) where the conversion between the two forms is catalyzed by two enzymes, \(E_1\) and \(E_2\), that is, consider the reactions

$$\begin{aligned}&W+E_1 \mathop {\rightleftarrows }_{k_{-1}}^{k_1} WE_1 \mathop {\rightarrow }^{p_1} W^*+E_1,\\&W^*+E_2 \mathop {\rightleftarrows }_{k_{-2}}^{k_2} W^*E_2 \mathop {\rightarrow }^{p_2} W+E_2. \end{aligned}$$

Then, using quasi-steady-state assumptions, one can obtain the equation

$$\begin{aligned} \frac{\hbox {d}W^*}{\hbox {d}t}=A\frac{L -W^*}{K_1+L-W^*}-I\frac{W^*}{K_2+W^*}, \end{aligned}$$

where \(A, I, L, K_1, K_2\) depend on \(k_1, k_{-1}, p_1, k_2, k_{-2}, p_2, [E_1], [WE_1], [E_2]\), and \([W^*E_2]\) (Goldbeter and Koshland 1981). After rescaling by \(L\), we obtain Eq. (1).

Now, consider a system with \(N\) species (e.g., proteins) and assume that \(u_i(t)\) and \(v_i(t)\) represent the concentration of species \(i\) at time \(t\) in its active and inactive form, respectively. Furthermore, suppose that the total concentration of each species is constant and that the difference between decay and production is negligible (so that \(u_i(t)+v_i(t)\) is constant), that is,

$$\begin{aligned} u_i(t)+v_i(t)=L_i, \end{aligned}$$

where \(L_i\) does not depend on time, and

$$\begin{aligned} \frac{\hbox {d}u_i}{\hbox {d}t}=\text {rate of activation} - \text {rate of inhibition} . \end{aligned}$$

Then, using Michaelis–Menten kinetics, the rate of activation of this species can be modeled by

$$\begin{aligned} \text {rate of activation}= A_i\frac{v_i}{K_{i}^A+v_i}=A_i\frac{L_i -u_i}{K_{i}^A+L_i-u_i}, \end{aligned}$$

where the maximal rate, \(A_i=A_i(u)\), is a function of the different species in the network. Similarly, modeling the inhibition of the species using Michaelis–Menten kinetics, we obtain

$$\begin{aligned} \text {rate of inhibition}= I_i\frac{u_i}{K_{i}^I+u_i}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \frac{\hbox {d}u_i}{\hbox {d}t}=A_i\frac{L_i -u_i}{K_{i}^A+L_i-u_i}-I_i\frac{u_i}{K_{i}^I+u_i}. \end{aligned}$$

Now, we rescale \(u_i\rightarrow L_iu_i, A_i\rightarrow L_iA_i, I_i\rightarrow L_iI_i\) and obtain

$$\begin{aligned} \frac{\hbox {d}u_i}{\hbox {d}t}=A_i\frac{1 -u_i}{\frac{K_{i}^A}{L_i}+1-u_i}-I_i\frac{u_i}{\frac{K_{i}^I}{L_i}+u_i}. \end{aligned}$$

Hence, by denoting \(J_i^A{:=}\frac{K_{i}^A}{L_i}\) and \(J_i^I{:=}\frac{K_{i}^I}{L_i}\), we obtain the system given in Eq. (1). Also, \(J_i^A\) and \(J_i^I\) small means that the dissociation constants (\(K_i^A, K_i^I\)) are much smaller than the total concentration of species \(i\), that is, \(J_i^A, J_i^I\ll 1\) if and only if \(K_i^A, K_i^I\ll L_i\). Note that the initial conditions now satisfy \(u_i(0)\in [0,1]\) for all \(i\).
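As a numerical illustration of this limit (not part of the derivation), the following Python sketch integrates the rescaled equation for a single species by the forward Euler method; the values \(A_i=1\), \(I_i=0.5\), \(J_i^A=J_i^I=0.01\) are illustrative.

```python
# Numerical sketch (illustrative parameters, not from the paper):
# integrate du/dt = A*(1-u)/(J_A + 1 - u) - I*u/(J_I + u),
# the rescaled single-species version of Eq. (1).

def rhs(u, A=1.0, I=0.5, JA=0.01, JI=0.01):
    """Right-hand side of the rescaled equation for one species."""
    return A * (1.0 - u) / (JA + 1.0 - u) - I * u / (JI + u)

def integrate(u0, t_end=4.0, dt=1e-4, **params):
    """Forward-Euler integration, clamped to the invariant set [0, 1]."""
    u = u0
    for _ in range(int(t_end / dt)):
        u = min(max(u + dt * rhs(u, **params), 0.0), 1.0)
    return u

# With A > I, the solution is driven to the stable steady state near 1.
u_final = integrate(0.05)
print(u_final)
```

Since \(A_i>I_i\) here, the solution rises almost linearly and then settles at a steady state within \(O(J)\) of 1, consistent with the analysis of the next subsection.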

1.2 Behavior of \(A\frac{1-x}{J+1-x}-I\frac{x}{J+x}\) as \(J\rightarrow 0\)

Consider the one-dimensional system

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t} = A \frac{1-x}{J+1-x} - I \frac{x}{J+x}. \end{aligned}$$
(22)

Figure 9 shows the graph of the right-hand side of Eq. (22) for the fixed values \(A = 1, I = 0.5\) and three different values of \(J\). Note that as \(J\) becomes smaller, the graph becomes flatter on \((0,1)\). Then, for \(J\) small, we can approximate Eq. (22) in the interior of the region \([0,1]\) by the linear ODE

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t} = A-I. \end{aligned}$$

For \(x\sim J\), we can approximate Eq. (22) by the ODE

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t} = A-I\frac{x}{J+x}. \end{aligned}$$

Finally, for \(x\sim 1-J\), we can approximate Eq. (22) by the ODE

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t} = A\frac{1-x}{J+1-x}-I. \end{aligned}$$
Fig. 9

Plots of the right-hand side of Eq. (22) for three different values of \(J\), as functions of \(x\). Other parameters: \(A = 1, I = 0.5\). This figure suggests that differential equations of the form Eq. (22) can be approximated by linear ODEs in the interior of the domain

For the values \(A=1, I=0.5\), we obtain the following approximations.

\(\frac{\hbox {d}x}{\hbox {d}t} = 0.5\), if \(J\ll x \ll 1-J\)

\(\frac{\hbox {d}x}{\hbox {d}t} = 1-0.5\frac{x}{J+x}\), if \(x\sim J\)

\(\frac{\hbox {d}x}{\hbox {d}t} = \frac{1-x}{J+1-x}-0.5\), if \(x\sim 1-J\)

Note that there is an asymptotically stable steady state close to 1. Intuitively, for \(J\) small, solutions that start in the region \(x\sim J\) quickly reach the region \(J\ll x \ll 1\), which behaves like a linear system. Then, solutions increase almost linearly (with slope 0.5) until they enter the region \(x\sim 1-J\) where they will approach the steady state (see Fig. 10).

Fig. 10

Solutions of Eq. (22) for three different values of \(J\) (left \(J=0.1\), center \(J=0.01\), right \(J=0.001\)). Other parameters: \(A = 1, I = 0.5\)

We see that in the limit \(J\rightarrow 0\), we obtain the solutions

$$\begin{aligned} x(t)=x(0)+0.5t \text { for } t\in \left[ 0,\frac{1-x(0)}{0.5}\right] \text { and } x(t)=1 \text { for } t\in \left[ \frac{1-x(0)}{0.5},\infty \right) , \end{aligned}$$

where \(x(0)\in [0,1]\). Note that these functions are the solutions of the ODE

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t}=P(A-I,x) \end{aligned}$$

given in Eq. (16) (0-th order approximation).
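This limiting solution can be compared with numerical solutions of Eq. (22). The following sketch uses forward Euler with \(J=10^{-4}\) as an illustrative small value and checks that the numerical solution tracks \(x(t)=\min (x(0)+0.5t,\,1)\).

```python
# Sketch: for small J, the solution of Eq. (22) tracks the limiting
# piecewise linear solution min(x(0) + (A - I) t, 1) from Eq. (16).
# Parameters A = 1, I = 0.5 follow the text; J = 1e-4 is illustrative.

A, I, J = 1.0, 0.5, 1e-4

def rhs(x):
    """Right-hand side of Eq. (22)."""
    return A * (1.0 - x) / (J + 1.0 - x) - I * x / (J + x)

def solve(x0, t_end, dt=1e-4):
    """Forward-Euler solution of Eq. (22), clamped to [0, 1]."""
    x = x0
    for _ in range(int(t_end / dt)):
        x = min(max(x + dt * rhs(x), 0.0), 1.0)
    return x

def limit(x0, t):
    """Limiting solution: linear growth with slope A - I, then rest at 1."""
    return min(x0 + (A - I) * t, 1.0)

for t in (0.5, 1.0, 2.0):
    print(t, solve(0.2, t), limit(0.2, t))
```

For each sampled time, the two values agree to within a few percent, mirroring the behavior shown in Fig. 10.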

1.3 Proof of Theorem 4

The main idea in the proof is to use the fact that for \(u_i\ne 0, 1\), the right-hand side of Eq. (1) converges to \(Wu+b\). More precisely, the convergence is uniform on compact subsets of \((0,1)^N\), so that we have compact convergence.

Also, given a steady state of Eq. (1), we can solve for \(u_i\) and obtain \(u_i=\Gamma ^J_i(u)\) (the function \(\Gamma ^J_i\) will be defined later). The proofs also use the fact that as \(J\rightarrow 0\), \(\Gamma ^J=(\Gamma ^J_1,\ldots ,\Gamma ^J_N)\) converges uniformly to the function \(u\mapsto H(Wu+b)\) on compact subsets of each chamber, that is, we also have compact convergence of \(\Gamma ^J\).

To prove Theorem 4, we need the following definitions and lemmas.

A point \(u^*\in [0,1]^N\) will be a steady state of the ODE in Eq. (1) if and only if

$$\begin{aligned} A_i(u^*) \frac{1-u^*_i}{J+1-u^*_i}-I_i(u^*) \frac{u^*_i}{J+u^*_i}=0 \end{aligned}$$
(23)

for all \(i\). Solving the corresponding quadratic equation for \(u^*_i\), we obtain the solutions \(u^*_i=1/2\) if \(A_i(u^*)= I_i(u^*)\); and

$$\begin{aligned} u^*_i=\frac{(A_i(u^*)-I_i(u^*)-A_i(u^*)J-I_i(u^*)J)\pm \sqrt{\Delta _i(u^*)}}{2(A_i(u^*)-I_i(u^*))} \end{aligned}$$
(24)

if \(A_i(u^*)\ne I_i(u^*)\), where \(\Delta _i(u^*)\) is the discriminant of the quadratic equation, given by

$$\begin{aligned} \Delta _i(u){:=}(A_i(u)-I_i(u)-A_i(u)J-I_i(u)J)^2+4A_i(u)J(A_i(u)-I_i(u)). \end{aligned}$$

The following lemma states that, away from a set of small measure and for small \(J\), all steady states of the ODE in Eq. (1) are given by the fixed points of the function \(\Gamma ^J=(\Gamma ^J_1,\ldots ,\Gamma ^J_N)\) defined by

$$\begin{aligned} \Gamma _i^J(u){:=} \frac{(A_i(u)-I_i(u)-A_i(u)J-I_i(u)J)+\sqrt{\Delta _i(u)}}{2(A_i(u)-I_i(u))}. \end{aligned}$$
(25)

Lemma 11

For any compact subset \(K\) of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), there is an \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the function \(\Gamma ^J\) is well defined (as a real-valued function) on \(K\), and \(u^*\) is a steady state in \(K\) of the ODE in Eq. (1) if and only if \(\Gamma ^J(u^*)=u^*\).

Proof

Since the denominator is nonzero on \(K\), it suffices to show that \(\Delta _i(u)\) is nonnegative on \(K\) for \(J\) small.

Since \(K\) is compact and \(A_i(u)-I_i(u)=\sum _{j=1}^N w_{ij} u_j +b_i\ne 0\) for all \(i=1,\ldots ,N\) and for all \(u\in K\), there is \(r>0\) such that \(|A_i(u)-I_i(u)|=|\sum _{j=1}^N w_{ij} u_j +b_i|\ge r\) on \(K\). Then, since \((A_i(u)-I_i(u)-A_i(u)J-I_i(u)J)^2+4A_i(u)J(A_i(u)-I_i(u))\) converges uniformly as \(J \rightarrow 0\) to \((A_i(u)-I_i(u))^2\ge r^2\) on \(K\), there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the function \((A_i(u)-I_i(u)-A_i(u)J-I_i(u)J)^2+4A_i(u)J(A_i(u)-I_i(u))\) is positive on \(K\) for all \(i\). Thus, \(\Gamma ^J\) is well defined for all \(0<J<\epsilon _K\).

If \(u^*\in K\) and \(\Gamma ^J(u^*)=u^*\), then \(u^*\) satisfies Eq. (24), and hence, it is a steady state of the ODE in Eq. (1). Also, if \(u^*\) is a steady state of the ODE in Eq. (1), then \(u^*\) satisfies Eq. (24). However, of the two roots in Eq. (24), only \(\Gamma _i^J(u^*)\) lies in \([0,1]\), and hence \(u^*=\Gamma ^J(u^*)\).\(\square \)

It is important to notice that if \(A_i(u)-I_i(u)>0\), then \(\Gamma _i^J(u)\) (which is well defined for \(J\) small) converges to 1, and if \(A_i(u)-I_i(u)<0\), then \(\Gamma _i^J(u)\) converges to 0 as \(J\rightarrow 0\). Hence, \(\Gamma ^J(u)\) converges pointwise to \(H(Wu+b)\) on \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\) as \(J\rightarrow 0\). The next lemma states that for any compact subset of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), we have uniform convergence and that the derivative of this function converges uniformly to zero.
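This convergence can be checked numerically. The sketch below evaluates the root in Eq. (25) for constant, illustrative values of \(A_i\) and \(I_i\) and decreasing \(J\).

```python
# Sketch of the pointwise limit of Gamma_i^J: for fixed u with
# A_i(u) != I_i(u), the root in Eq. (25) tends to H(A_i(u) - I_i(u))
# as J -> 0. The constants A and I below are illustrative.
import math

def gamma(A, I, J):
    """The root Gamma_i^J of the steady-state quadratic, Eq. (25)."""
    d = A - I - A * J - I * J
    disc = d * d + 4.0 * A * J * (A - I)   # the discriminant Delta_i
    return (d + math.sqrt(disc)) / (2.0 * (A - I))

for J in (1e-2, 1e-4, 1e-6):
    up = gamma(1.0, 0.5, J)   # A - I > 0: approaches 1
    dn = gamma(0.5, 1.0, J)   # A - I < 0: approaches 0
    print(J, up, dn)
```

The printed values approach 1 and 0 at a rate proportional to \(J\), in line with the Heaviside limit \(H(Wu+b)\).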

Lemma 12

If \(K\) is a compact subset of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), then

  • The function \(\Gamma ^J\) converges uniformly to the function \(u\mapsto H(Wu+b)\) on \(K\) as \(J\rightarrow 0\). In particular, since \(H(Wu+b)\) is constant in each chamber, for any chamber \(\mathcal {C}\in \Omega \), \(\Gamma ^J\) converges uniformly to the constant function \(H(Wv+b)\) on \(K\cap \mathcal {C}\) for any fixed \(v\in \mathcal {C}\). Also, \(\Gamma ^J\) converges uniformly to the constant function \(H(Wx+b)\) on \(K\cap \mathcal {C}(x)\) for any \(x\in \{0,1\}^N\)

  • The Jacobian matrix \(D \Gamma ^J\) converges uniformly to zero on \(K\) as \(J\rightarrow 0\).

Proof

Similar to the proof of Lemma 11, there is a number \(r>0\) such that \(|A_i(u)-I_i(u)|\ge r\) for all \(i\) and for all \(u\in K\), which is enough to guarantee uniform convergence on \(K\).\(\square \)

We now prove Theorem 4.

Proof

In this proof, “ODE” will refer to the ODE in Eq. (1) and “BN” will refer to the BN in Eq. (17). Even though the steady states of this ODE depend on \(J\), for simplicity, we will denote them by \(u^*\) instead of \(u^{*J}\).

First, from Lemma 11, we consider \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the function \(\Gamma ^J\) is well defined on \(K\) and such that \(u^*\in K\) is a steady state of the ODE if and only if \(u^*\) is a fixed point of \(\Gamma ^J\). Second, from Lemma 12, we have that \(\Gamma ^J\) converges uniformly to the constant vector \(h(x)=H(Wx+b)\) on \(K\cap \mathcal {C}(x)\). Since \(h(x)\) is in \(\{0,1\}^N\), the set \(K\cap \mathcal {C}(h(x))\) contains a neighborhood of \(h(x)\). It follows from uniform convergence that for all \(0<J<\epsilon _K\) (taking a smaller \(\epsilon _K\) if necessary), \(\Gamma ^J(K\cap \mathcal {C}(x))\subseteq K\cap \mathcal {C}(h(x))\). Also, on any chamber \(\mathcal {C}\) that does not contain an element of \(\{0,1\}^N\), \(\Gamma ^J\) converges uniformly to the constant vector \(x{:=}H(Wv+b)\) for any fixed \(v\in \mathcal {C}\) (since \(H(Wu+b)\) is constant on \(\mathcal {C}\)); then, for all \(0<J<\epsilon _K\) (taking a smaller \(\epsilon _K\) if necessary), \(\Gamma ^J(K\cap \mathcal {C})\subseteq K\cap \mathcal {C}(x)\). Note also that \(K\cap \mathcal {C}\) is compact for any chamber \(\mathcal {C}\).

Now, suppose \(x^*\) is a steady state of the BN, that is, \(h(x^*)=x^*\). Then, for all \(0<J<\epsilon _K\), we obtain that \(\Gamma ^J(K\cap \mathcal {C}(x^*))\subseteq K\cap \mathcal {C}(h(x^*))=K\cap \mathcal {C}(x^*)\). Since we have a continuous function from a convex compact set to itself, \(\Gamma ^J\) has a fixed point \(u^*=\Gamma ^J(u^*)\in K\cap \mathcal {C}(x^*)\). Then, \(u^*\in K\cap \mathcal {C}(x^*)\) is a steady state of the ODE. Conversely, suppose that the ODE has a steady state \(u^*\in K\); let \(\mathcal {C}\) be the chamber that contains \(u^*\), and define \(x^*{:=}H(Wu^*+b)\). Since \(u^*=\Gamma ^J(u^*)\in \Gamma ^J(K\cap \mathcal {C})\subseteq K\cap \mathcal {C}(x^*)\), we have that \(u^*\in \mathcal {C}(x^*)\). Since \(u^*\) and \(x^*\) belong to the same chamber, we also have that \(H(Wx^*+b)=H(Wu^*+b)=x^*\); thus, \(x^*\) is a steady state of the BN.

From Lemma 12, we can make the norm of \(D\Gamma ^J\) small so that \(u^*\) is the unique fixed point of \(\Gamma ^J\) in \(K\cap \mathcal {C}(x^*)\). Since \(\Gamma ^J\) converges uniformly to \(H(Wx^*+b)=h(x^*)=x^*\) on \(K\cap \mathcal {C}(x^*)\), we have that \(u^*=\Gamma ^J(u^*)\) converges to \(x^*\). Finally, to prove that the steady state of the ODE is asymptotically stable, we will show that the Jacobian matrix of the ODE can be seen as a small perturbation of a matrix that has negative eigenvalues. We will use the alternative form of \(\Gamma ^J_i\):

$$\begin{aligned} \Gamma ^J_i(u)=\frac{2A_i(u)J}{-A_i(u)+I_i(u)+A_i(u)J+I_i(u)J+\sqrt{\Delta _i(u)}}. \end{aligned}$$

Denote \(f^J=(f_1^J,\ldots ,f_N^J)\), where \(f_i^J(u)=A_i(u) \frac{1-u_i}{J+1-u_i}-I_i(u) \frac{u_i}{J+u_i}\). We now compute \(Df^J(u)\). For \(i\ne j\), we have

$$\begin{aligned} \frac{\partial f_i^J}{\partial u_j}=w_{ij}^+\frac{1-u_i}{J+1-u_i}-w_{ij}^- \frac{u_i}{J+u_i}. \end{aligned}$$

Also,

$$\begin{aligned} \frac{\partial f_i^J}{\partial u_i}=w_{ii}^+\frac{1-u_i}{J+1-u_i}-w_{ii}^- \frac{u_i}{J+u_i}-A_i(u)\frac{J}{(J+1-u_i)^2}-I_i(u)\frac{J}{(J+u_i)^2}. \end{aligned}$$

Let \(Z^J\) be the matrix given by \(Z^J_{ij}=w_{ij}^+\frac{1-u_i}{J+1-u_i}-w_{ij}^- \frac{u_i}{J+u_i}\) and denote with \(E^J\) the diagonal matrix with entries \(E_{ii}^J=-A_i(u)\frac{J}{(J+1-u_i)^2}-I_i(u)\frac{J}{(J+u_i)^2}\). Then,

$$\begin{aligned} Df^J(u)=Z^J+E^J \end{aligned}$$

where the entries of \(Z^J\) are bounded and \(E^J\) is a diagonal matrix. We will now show that for any steady state of the ODE in \(K\), \(E^J_{ii}\rightarrow -\infty \) as \(J\rightarrow 0\). After showing this, we can view \(Df^J(u^*)\) as a small perturbation of a matrix that has negative eigenvalues. Since eigenvalues are continuous with respect to matrix entries, it follows that the eigenvalues of \(Df^J(u^*)\) have negative real part, and hence, \(u^*\) is asymptotically stable.

We now show that \(E^J_{ii}\rightarrow -\infty \) as \(J\rightarrow 0\). By computing \(\frac{(\Gamma ^J_i(u))^2}{J}\) and setting \(J=0\), it follows that \(\lim _{J\rightarrow 0}\frac{(\Gamma ^J_i(u))^2}{J}=0\) when \(A_i(u)-I_i(u)\) is negative. Similarly, \(\lim _{J\rightarrow 0}\frac{(1-\Gamma ^J_i(u))^2}{J}=0\) when \(A_i(u)-I_i(u)\) is positive. From these two limits, it follows that if \(A_i(u)-I_i(u)\ne 0\), then \(\lim _{J\rightarrow 0}\frac{(J+\Gamma ^J_i(u))^2}{J}=0\) or \(\lim _{J\rightarrow 0}\frac{(J+1-\Gamma ^J_i(u))^2}{J}=0\). Furthermore, since \(A_i(u)-I_i(u)\) is uniformly nonzero on \(K\), the convergence is uniform.

If \(u^*\in K\) is a steady state of the ODE, then \(u_i^*=\Gamma ^J_i(u^*)\) and

$$\begin{aligned} \lim _{J\rightarrow 0}\left( -A_i(u^*)\frac{J}{(J+1-u^*_i)^2}-I_i(u^*)\frac{J}{(J+u^*_i)^2}\right) = \lim _{J\rightarrow 0} \left( -A_i(u^*) \frac{J}{(J+1-\Gamma ^J_i(u^*))^2} -I_i(u^*) \frac{J}{(J+\Gamma ^J_i(u^*))^2} \right) =-\infty . \end{aligned}$$

Note that uniform convergence is needed in the last step because \(u^*\) depends on \(J\).\(\square \)
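The blow-up of the diagonal entries can also be seen numerically. The following sketch treats the scalar case with constant, illustrative \(A\) and \(I\), and evaluates \(E^J\) at the steady state \(\Gamma ^J\) for decreasing \(J\).

```python
# Sketch (scalar, constant A and I, illustrative values): at the steady
# state u* = Gamma^J, the diagonal term
#   E^J = -A*J/(J + 1 - u*)^2 - I*J/(J + u*)^2
# diverges to -infinity as J -> 0, giving a stable linearization.
import math

def gamma(A, I, J):
    """The root of the steady-state quadratic, Eq. (25), constant A, I."""
    d = A - I - A * J - I * J
    return (d + math.sqrt(d * d + 4.0 * A * J * (A - I))) / (2.0 * (A - I))

def E(A, I, J):
    u = gamma(A, I, J)          # the steady state in (0, 1)
    return -A * J / (J + 1.0 - u) ** 2 - I * J / (J + u) ** 2

for J in (1e-2, 1e-3, 1e-4):
    print(J, E(1.0, 0.5, J))    # grows roughly like -1/(4J) here
```

Halving \(J\) roughly doubles \(|E^J|\), consistent with the divergence established above.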

1.4 Proof of Theorems 6 and 8

In the rest of this section, “ODE” will refer to the ODE in Eq. (1) and “BN” will refer to the BN in Eq. (17).

Notice that for any \(x\in \{0,1\}^N\), \(\mathcal {C}(x)= \{u\in [0,1]^N:H(Wu+b)=H(Wx+b)\}\). We now prove Theorem 6.

Proof

Let \(y=h(x)\), and for simplicity of notation, assume that \(y=(0,\ldots ,0)\).

In the case \(x=y\), we will show that a small neighborhood of \((0,\ldots ,0)\) contains an invariant set for the original ODE. Since \(x=(0,\ldots ,0)\) and \(h(x)=x\), we have that \(\mathcal {C}(x)=\cap _{i=1}^N\{u\in [0,1]^N:\sum _{j=1}^N w_{ij} u_j +b_i<0\}\). We now consider a hypercube of the form \(K=[0,D]^N\) with \(D\) small, so that \(K\subseteq \mathcal {C}(x)\). We claim that for \(J\) small, \(K\) is invariant. Since we already showed that \([0,1]^N\) is invariant, it is enough to check that if \(u\in K\) with \(u_i=D\), then \(f_i^J(u)\le 0\). Since \(K\) is compact and \(\sum _{j=1}^N w_{ij} u_j +b_i<0\) for all \(i\) and for all \(u\in K\), there is \(r>0\) such that \(\sum _{j=1}^N w_{ij} u_j +b_i\le -r\) for all \(i\) and for all \(u\in K\). It follows that \(A_i(u)-I_i(u) \frac{u_i}{J+u_i}\) converges uniformly to \(A_i(u)-I_i(u)=\sum _{j=1}^N w_{ij} u_j +b_i\le -r\) on \(\{u\in K:u_i=D\}\) as \(J\rightarrow 0\). Also, \(f_i^J(u)=A_i(u) \frac{1-u_i}{J+1-u_i}-I_i(u) \frac{u_i}{J+u_i}\le A_i(u)-I_i(u) \frac{u_i}{J+u_i}\) on \(\{u\in K:u_i=D\}\). Thus, on \(\{u\in K:u_i=D\}\), \(f_i^J\) is bounded above by a function that converges uniformly to a negative function. Then, there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), \(f_i^J\) is negative on \(\{u\in K:u_i=D\}\). Hence, \(K\) is invariant.

In the case \(x\ne y\), we assume for simplicity that \(x=(1,0,\ldots ,0)\) and \(y=(0,0,\ldots ,0)\). Then, since \(h(x)=y\) and \(h_1(y)=0\), we have the following

$$\begin{aligned} \sum _{j=1}^N w_{ij}u_j+b_i<0, \text { for all } u\in \mathcal {C}(x), \text { and } \sum _{j=1}^N w_{1j}u_j+b_1<0, \text { for all } u\in \mathcal {C}(y). \end{aligned}$$

In particular, \(\sum _{j=1}^N w_{1j}u_j+b_1<0\) for all \(u\in \mathcal {C}(x)\cup \mathcal {C}(y)\). This also means that the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u:\sum _{j=1}^N w_{kj}u_j+b_k=0\}\) for some \(k\ne 1\); then, the common face of \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is given by

$$\begin{aligned} \{u\in [0,1]^N: \sum _{j=1}^N w_{ij}u_j+b_i<0\, \text { for}\, i\ne k \,\text {and}\, \sum _{j=1}^N w_{kj}u_j+b_k=0\}. \end{aligned}$$

Now, for \(r>0\) small, we define the set

$$\begin{aligned} L{:=}\{u\in [0,1]^N: \sum _{j=1}^N w_{ij}u_j+b_i<-r \,\text { for}\, i\ne k\quad \hbox {and}\quad \sum _{j=1}^N w_{kj}u_j+b_k=0\} \end{aligned}$$

which will be a face of the neighborhood of \(x\) that we are looking for (see Fig. 11). We now project \(L\) onto the \(u_1=0\) plane (see Fig. 11), that is, define

$$\begin{aligned} L_1{:=}\{(0,u_2,u_3,\ldots ,u_N):(u_1,u_2,\ldots ,u_N)\in L \text { for some } u_1\}. \end{aligned}$$

We use \(L_1\) to “generate” a box parallel to the \(u_1\) axis (see Fig. 11); namely, consider

$$\begin{aligned} B{:=}\{u\in [0,1]^N: (0,u_2,\ldots ,u_N)\in L_1 \}. \end{aligned}$$

Now, consider the neighborhood of \(x\) given by

$$\begin{aligned} K{:=}B\cap \mathcal {C}(x). \end{aligned}$$

\(K\) is a polytope such that \(L\) is one of its faces. Similar to the case \(x=y\), there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), we have that for any face of \(K\) other than \(L\) the vector field points inward. Also, the first coordinate of the vector field is negative on \(K\). Thus, any solution with initial condition in \(K\) must exit \(K\) through its face \(L\) and then enter \(\mathcal {C}(y)\), that is, the ODE transitions from \(K\) to \(\mathcal {C}(y)\).\(\square \)

Fig. 11

Sets \(L\) (blue), \(L_1\) (red), and \(K\) (green) in the proof of Theorem 6 for \(N=2\) (Color figure online)

We prove Theorem 8.

Proof

We proceed as in the proof of Theorem 6, and for simplicity, we assume that \(x=(1,0,\ldots ,0)\) and \(y=h(x)=(0,\ldots ,0)\). Since \(h(x)=y\) and \(h_1(y)=0\), we have the following

$$\begin{aligned} \sum _{j=1}^N w_{ij}u_j+b_i<0, \text { for all } u\in \mathcal {C}(x), \text { and } \sum _{j=1}^N w_{1j}u_j+b_1<0, \text { for all } u\in \mathcal {C}(y). \end{aligned}$$

In particular, \(\sum _{j=1}^N w_{1j}u_j+b_1<0\) for all \(u\in \mathcal {C}(x)\cup \mathcal {C}(y)\). This also means that the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u:\sum _{j=1}^N w_{kj}u_j+b_k=0\}\) for some \(k\ne 1\); furthermore, since this hyperplane is parallel to the axes, we have that \((w_{k1},w_{k2},\ldots ,w_{kN})=(w_{k1},0,\ldots ,0)\). Then, the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u: u_1=\frac{b_k}{-w_{k1}}\}\), and the common face of \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is given by

$$\begin{aligned} \{u\in [0,1]^N: \sum _{j=1}^N w_{ij}u_j+b_i<0\, \text { for}\, i\ne k \,\text {and}\, u_1=\frac{b_k}{-w_{k1}}\}. \end{aligned}$$

Now, let \(K\) be a compact subset of \(\mathcal {C}(x)\) and consider \(r>0\) small such that \(K\subseteq K^0{:=}\{u\in [0,1]^N:\sum _{j=1}^N w_{ij}u_j+b_i\le -r \text { for } i\ne k \text { and } u_1\ge \frac{b_k}{-w_{k1}} \}\) (see Fig. 12). Since the hyperplanes in \(\mathcal {H}\) are parallel to the axes, \(K^0\) is a box with faces parallel to the axes, and \(K^0\) also shares a face with \(\mathcal {C}(y)\). Then, similar to the proof of Theorem 6, there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), at the faces of \(K^0\) other than the face shared with \(\mathcal {C}(y)\), the vector field of the ODE points inward, and the first entry of the vector field is negative. Then, the ODE will transition from \(K^0\) to \(\mathcal {C}(y)=\mathcal {C}(h(x))\).

Now, let \(K^1\) be a compact subset of \(\mathcal {C}(h(x))\) such that \(K^1\) intersects all solutions that start in \(K\) (see Fig. 12). Then, for all \(0<J<\epsilon _K\) (making \(\epsilon _K\) smaller if necessary), solutions of the ODE transition from \(K^1\) to \(\mathcal {C}(h^2(x))\). This also means that solutions transition from \(K^0\) to \(\mathcal {C}(h^2(x))\). The proof follows by induction.\(\square \)

Fig. 12

Sets \(K\) (light green), \(K^0\) (dark green and light green), and \(K^1\) (blue) in the proof of Theorem 8 for \(N=2\) (Color figure online)

1.5 Case \(n>1\)

In this case, Eq. (1) becomes

$$\begin{aligned} \frac{\hbox {d}u_i}{\hbox {d}t} = A_i\frac{(1-u_i)^n}{J^n+(1-u_i)^n}-I_i\frac{u_i^n}{J^n+u_i^n}. \end{aligned}$$

The behavior of the system as \(J\rightarrow 0\) is similar to that of the case \(n=1\). To illustrate this, we use the example from Section 6.2 for \(n>1\); namely, consider

$$\begin{aligned} \frac{\hbox {d}x}{\hbox {d}t}=A\frac{(1-x)^n}{J^n+(1-x)^n}-I\frac{x^n}{J^n+x^n}. \end{aligned}$$
(26)

Figure 13 shows the graph of the right-hand side of Eq. (26) for the fixed values \(A = 1, I = 0.5\), three different values of \(J\), and three different values of \(n\). Note that as \(J\) becomes smaller, the graph becomes flatter on \((0,1)\), as in Section 6.2.
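This flattening can also be checked numerically: in the interior of \((0,1)\), both Hill terms in Eq. (26) tend to 1 as \(J\rightarrow 0\), so the right-hand side approaches the constant \(A-I=0.5\). A minimal sketch, using the parameter values of Fig. 13:

```python
# Right-hand side of Eq. (26); A = 1, I = 0.5 as in Fig. 13.
def rhs(x, J, n, A=1.0, I=0.5):
    return A * (1 - x)**n / (J**n + (1 - x)**n) - I * x**n / (J**n + x**n)

# In the interior of (0, 1), both Hill terms approach 1 as J -> 0,
# so the right-hand side approaches A - I = 0.5 there.
for J in (0.1, 0.01, 0.001):
    print(J, rhs(0.5, J, n=2))
```

The printed values approach 0.5 as \(J\) decreases, for any fixed \(n\).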

Fig. 13

Plots of the right-hand side of Eq. (26), as functions of \(x\), for three different values of \(J\) and three different values of \(n\) (left \(n=2\), center \(n=5\), right \(n=10\)). Other parameters: \(A = 1, I = 0.5\)

All definitions and results remain virtually unchanged except Sections 6.3 and 6.4. For \(n>1\), Eq. (23) becomes

$$\begin{aligned} A_i(u^*) \frac{(1-u^*_i)^n}{J^n+(1-u^*_i)^n}-I_i(u^*) \frac{(u^*_i)^n}{J^n+(u^*_i)^n}=0, \end{aligned}$$

which cannot be solved for \(u_i^*\) in closed form as in Eqs. (24) and (25). However, we can use the implicit function theorem to guarantee the existence of a function \(\Gamma ^J\) with the required properties (such as compact convergence to the Heaviside function).
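Although no closed form exists, \(u_i^*\) can still be computed numerically. The sketch below (with frozen, illustrative values of \(A_i\) and \(I_i\); it is not the paper's code) finds the root of the equation above by bisection, which is guaranteed to exist since the left-hand side is positive at \(u_i^*=0\) and negative at \(u_i^*=1\). As \(J\rightarrow 0\), the root approaches 1 when \(A_i>I_i\) and 0 when \(A_i<I_i\), consistent with compact convergence of \(\Gamma ^J\) to a Heaviside function.

```python
def gamma(A, I, J, n, tol=1e-12):
    """Solve A*(1-u)^n/(J^n+(1-u)^n) = I*u^n/(J^n+u^n) for u by bisection."""
    def f(u):
        return A*(1 - u)**n/(J**n + (1 - u)**n) - I*u**n/(J**n + u**n)
    lo, hi = 0.0, 1.0  # f(0) = A > 0 and f(1) = -I < 0, so a root lies between
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# As J -> 0 the root approaches the Heaviside value of A - I:
print(gamma(1.0, 0.5, 1e-1, 3))
print(gamma(1.0, 0.5, 1e-3, 3))  # close to 1, since A > I
```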

1.6 Avoiding Discontinuity of Solutions in Eq. (14)

There is a way to make the approximations continuous, although it requires changing the equations in the boundary regions. The idea is to reduce the vector field using the approximations \(u_t\approx 1\) and \(u_s\approx 0\) where appropriate, while keeping the derivatives.

For example, in region \(\mathcal {R}_{12}\) we keep the derivatives and simply substitute \(u_1 \approx 0\) and \(u_2 \approx 0\) into the right-hand side of Eq. (2). This yields

$$\begin{aligned} \frac{\hbox {d}u_1}{\hbox {d}t}&= 0.5, \nonumber \\ \frac{\hbox {d}u_2}{\hbox {d}t}&= 0.5. \end{aligned}$$
(27)

This is still a decoupled, solvable system; its solution is simply \(u_i(t)=u_i(0)+t/2\).

Similarly, for region \(\mathcal {R}^{12}\), using \(u_1 \approx 1, u_2 \approx 1\), we obtain

$$\begin{aligned} \frac{\hbox {d}u_1}{\hbox {d}t}&= 0.5\frac{1-u_1}{J+1-u_1}-1, \nonumber \\ \frac{\hbox {d}u_2}{\hbox {d}t}&= 0.5\frac{1-u_2}{J+1-u_2}-1. \end{aligned}$$
(28)

This system is also decoupled and solvable, although it is no longer linear.

For regions \(\mathcal {R}_1^2\) (\(u_1 \approx 0, u_2 \approx 1\)) and \(\mathcal {R}_2^1\) (\(u_1 \approx 1, u_2 \approx 0\)), we obtain

$$\begin{aligned} \frac{\hbox {d}u_1}{\hbox {d}t}&= 0.5-\frac{u_1}{J+u_1}, \nonumber \\ \frac{\hbox {d}u_2}{\hbox {d}t}&= 0.5\frac{1-u_2}{J+1-u_2}, \end{aligned}$$
(29)

and

$$\begin{aligned} \frac{\hbox {d}u_1}{\hbox {d}t}&= 0.5\frac{1-u_1}{J+1-u_1}, \nonumber \\ \frac{\hbox {d}u_2}{\hbox {d}t}&= 0.5-\frac{u_2}{J +u_2}, \end{aligned}$$
(30)

respectively. These equations are again decoupled and solvable, but linearity is again lost.

Thus, in each corner region, the original system can be approximated by decoupled, solvable equations. A similar idea can be used for the other regions, but as seen above, this alternative approach introduces nonlinearity.
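As a sanity check, the corner system Eq. (29) can be integrated directly. The sketch below uses a hand-rolled classical RK4 step (the step size, initial condition, and the value \(J=0.01\) are illustrative choices, not taken from the text) and confirms that solutions approach the equilibrium \((u_1,u_2)=(J,1)\) obtained by setting the right-hand side of Eq. (29) to zero.

```python
# Classical fourth-order Runge-Kutta integrator for a small ODE system.
def rk4(f, y, t, dt, steps):
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + dt, [yi + dt*ki for yi, ki in zip(y, k3)])
        y = [yi + dt/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
    return y

J = 0.01  # illustrative value

def eq29(t, u):
    """Right-hand side of Eq. (29): decoupled corner system for region R_1^2."""
    u1, u2 = u
    return [0.5 - u1/(J + u1), 0.5*(1 - u2)/(J + 1 - u2)]

# Integrate to t = 20 from an illustrative initial condition.
u1, u2 = rk4(eq29, [0.3, 0.7], 0.0, 0.01, 2000)
print(u1, u2)  # near the equilibrium (J, 1)
```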


Cite this article

Veliz-Cuba, A., Kumar, A. & Josić, K. Piecewise Linear and Boolean Models of Chemical Reaction Networks. Bull Math Biol 76, 2945–2984 (2014). https://doi.org/10.1007/s11538-014-0040-x
