Abstract
Models of biochemical networks are frequently complex and high-dimensional. Reduction methods that preserve important dynamical properties are therefore essential for their study. Interactions in biochemical networks are frequently modeled using Hill functions (\(x^n/(J^n+x^n)\)). Reduced ODEs and Boolean approximations of such model networks have been studied extensively when the exponent \(n\) is large. However, while the case of small constant \(J\) appears in practice, it is not well understood. We provide a mathematical analysis of this limit and show that a reduction to a set of piecewise linear ODEs and Boolean networks can be mathematically justified. The piecewise linear systems have closed-form solutions that closely track those of the fully nonlinear model. The simpler Boolean network can be used to study the qualitative behavior of the original system. We justify the reduction using geometric singular perturbation theory and compact convergence, and illustrate the results in network models of a toggle switch and an oscillator.
References
Abou-Jaoudé W, Ouattara D, Kaufman M (2009) From structure to dynamics: frequency tuning in the p53-Mdm2 network: I. Logical approach. J Theor Biol 258(4):561–577
Abou-Jaoudé W, Ouattara D, Kaufman M (2010) From structure to dynamics: frequency tuning in the p53-Mdm2 network: II. Differential and stochastic approaches. J Theor Biol 264(4):1177–1189
Aguda D (2006) Modeling the cell division cycle. Lecture notes in mathematics, vol 1872. Springer Berlin Heidelberg
Albert R, Othmer H (2003) The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster. J Theor Biol 223(1):1–18
Alon U (2006) An introduction to systems biology: design principles of biological circuits. Chapman and Hall/CRC, Boca Raton, FL
Aracena J, Goles E, Moreira A, Salinas L (2009) On the robustness of update schedules in Boolean networks. Biosystems 97(1):1–8
Casey R, de Jong H, Gouzé J (2006) Piecewise-linear models of genetic regulatory networks: equilibria and their stability. J Math Biol 52(1):27–56
Chaves M, Tournier L, Gouzé J (2010) Comparing Boolean and piecewise affine differential models for genetic networks. Acta Biotheor 58(2):217–232
Cheng X, Sun M, Socolar J (2013) Autonomous Boolean modelling of developmental gene regulatory networks. J R Soc Interface 10(78):20120574
Ciliberto A, Capuani F, Tyson J (2007) Modeling networks of coupled enzymatic reactions using the total quasi-steady-state approximation. PLoS Comput Biol 3(3):e45
Davidich M, Bornholdt S (2008) The transition from differential equations to Boolean networks: a case study in simplifying a regulatory network model. J Theor Biol 255(3):269–277
Davidich M, Bornholdt S (2008) Boolean network model predicts cell cycle sequence of fission yeast. PLoS ONE 3(2):e1672
De Jong H (2002) Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol 9(1):67–103
De Jong H, Gouzé J, Hernandez C, Page M, Sari T, Geiselmann J (2004) Qualitative simulation of genetic regulatory networks using piecewise-linear models. Bull Math Biol 66(2):301–340
Edwards R, Siegelmann H, Aziza K, Glass L (2001) Symbolic dynamics and computation in model gene networks. Chaos 11(1):160–169
Elowitz M, Leibler S (2000) A synthetic oscillatory network of transcriptional regulators. Nature 403(6767):335–338
Fenichel N (1979) Geometric singular perturbation theory for ordinary differential equations. J Differ Equ 31(1):53–98
Franke R, Theis F, Klamt S (2010) From binary to multivalued to continuous models: the lac operon as a case study. J Integr Bioinform 7(1):151
Gardner T, Cantor C, Collins J (2000) Construction of a genetic toggle switch in Escherichia coli. Nature 403(6767):339–342
Glass L, Kauffman S (1973) The logical analysis of continuous, nonlinear biochemical control networks. J Theor Biol 39(1):103–129
Glass L (1975) Classification of biological networks by their qualitative dynamics. J Theor Biol 54(1):85–107
Glass L (1975) The logical analysis of continuous, nonlinear biochemical control networks. J Chem Phys 63(1):1325–1335
Goldbeter A (1991) A minimal cascade model for the mitotic oscillator involving cyclin and cdc2 kinase. Proc Natl Acad Sci USA 88(20):9107–9111
Goldbeter A, Koshland D (1981) An amplified sensitivity arising from covalent modification in biological systems. Proc Natl Acad Sci USA 78(11):6840–6844
Gouzé J, Sari T (2002) A class of piecewise linear differential equations arising in biological models. Dyn Syst 17(4):299–316
Hek G (2010) Geometric singular perturbation theory in biological practice. J Math Biol 60(3):347–386
Ironi L, Panzeri L, Plahte E, Simoncini V (2011) Dynamics of actively regulated gene networks. Physica D 240:779–794
Ishii N, Suga Y, Hagiya A, Watanabe H, Mori H, Yoshino M, Tomita M (2007) Dynamic simulation of an in vitro multienzyme system. FEBS Lett 581(3):413–420
Kaper T (1998) An introduction to geometrical methods and dynamical systems for singular perturbation problems. In: Analyzing multiscale phenomena using singular perturbation methods (Proc Sympos Appl Math), American Mathematical Society Short Course, Baltimore, Maryland, 5–6 Jan 1998, pp 85–132
Kauffman S (1969) Metabolic stability and epigenesis in randomly constructed genetic nets. J Theor Biol 22(3):437–467
Kumar A, Josić K (2011) Reduced models of networks of coupled enzymatic reactions. J Theor Biol 278(1):87–106
Li F, Long T, Lu Y, Ouyang Q, Tang C (2004) The yeast cellcycle network is robustly designed. Proc Natl Acad Sci USA 101(14):4781–4786
Ma W, Trusina A, El-Samad H, Lim W, Tang C (2009) Defining network topologies that can achieve biochemical adaptation. Cell 138(4):760–773
Mendoza L, Xenarios I (2006) A method for the generation of standardized qualitative dynamical systems of regulatory networks. Theor Biol Med Model 3(13):1–18
Michaelis L, Menten M (1913) Die Kinetik der Invertinwirkung. Biochemische Zeitschrift 49:333–369
Mochizuki A (2005) An analytical study of the number of steady states in gene regulatory networks. J Theor Biol 236:291–310
Novak B, Csikasz-Nagy A, Gyorffy B, Chen K, Tyson JJ (1998) Mathematical model of the fission yeast cell cycle with checkpoint controls at the G1/S, G2/M and metaphase/anaphase transitions. Biophys Chem 72:185–200
Novak B, Tyson JJ (1993) Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos. J Cell Sci 106(4):1153–1168
Novak B, Pataki Z, Ciliberto A, Tyson J (2001) Mathematical model of the cell division cycle of fission yeast. Chaos 11(1):277–286
Polynikis A, Hogan SJ, di Bernardo M (2009) Comparing different ODE modelling approaches for gene regulatory networks. J Theor Biol 261(4):511–530
Snoussi E (1989) Qualitative dynamics of piecewise differential equations: a discrete mapping approach. Dyn Stab Syst 4(3):189–207
Sun M, Cheng X, Socolar J (2013) Causal structure of oscillations in gene regulatory networks: Boolean analysis of ordinary differential equation attractors. Chaos 23(2):025104
Thomas R, D’Ari R (1990) Biological feedback. CRC, Boca Raton, FL
Tyson J, Chen K, Novak B (2003) Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Curr Opin Cell Biol 15(2):221–231
van Zwieten D, Rooda J, Armbruster D, Nagy J (2011) Simulating feedback and reversibility in substrateenzyme reactions. Eur Phys J B 84:673–684
Veliz-Cuba A, Arthur J, Hochstetler L, Klomps V, Korpi E (2012) On the relationship of steady states of continuous and discrete models arising from biology. Bull Math Biol 74(12):2779–2792
Verhulst F (2005) Methods and applications of singular perturbations. Springer Berlin Heidelberg
Wittmann D, Krumsiek J, SaezRodriguez J, Lauffenburger D, Klamt S, Theis F (2009) Transforming Boolean models to continuous models: methodology and application to Tcell receptor signaling. BMC Syst Biol 3(1):98
Acknowledgments
We thank Matthew Bennett for help in preparing the manuscript. This work was funded by the NIH, through the joint NSF/NIGMS Mathematical Biology Program Grant No. R01GM104974.
Appendix
1.1 Motivation of Eq. (1)
Here, we present a heuristic justification of the use of Eq. (1). The ideas follow those presented in Goldbeter and Koshland (1981), Goldbeter (1991), Novak et al. (1998), Novak et al. (2001), Tyson et al. (2003), Aguda (2006). As mentioned in the Introduction, this is only heuristic in general.
Consider a protein that can exist in an unmodified form, \(W,\) and a modified form, \(W^*,\) where the conversion between the two forms is catalyzed by two enzymes, \(E_1\) and \(E_2\), that is, consider the reactions
Then, using quasi-steady-state assumptions, one can obtain the equation
where \(A, I, L, K_1, K_2\) depend on \(k_1, k_{-1}, p_1, k_2, k_{-2}, p_2, [E_1], [WE_1], [E_2]\), and \([W^*E_2]\) (Goldbeter and Koshland 1981). After rescaling by \(L\), we obtain Eq. (1).
Now, consider a system with \(N\) species (e.g., proteins) and assume that \(u_i(t)\) and \(v_i(t)\) represent the concentration of species \(i\) at time \(t\) in its active and inactive form, respectively. Furthermore, suppose that the total concentration of each species is constant and that the difference between decay and production is negligible (so that \(u_i(t)+v_i(t)\) is constant), that is,
where \(L_i\) does not depend on time, and
Then, using Michaelis–Menten kinetics, the rate of activation of this species can be modeled by
where the maximal rate, \(A_i=A_i(u)\), is a function of the different species in the network. Similarly, modeling the inhibition of the species using Michaelis–Menten kinetics, we obtain
Thus, we obtain
Now, we rescale \(u_i\rightarrow L_iu_i, A_i\rightarrow L_iA_i, I_i\rightarrow L_iI_i\) and obtain
Hence, by denoting \(J_i^A{:=}\frac{K_{i}^A}{L_i}\) and \(J_i^I{:=}\frac{K_{i}^I}{L_i}\), we obtain the system given in Eq. (1). Also, \(J_i^A\) and \(J_i^I\) small means that the dissociation constants (\(K_i^A, K_i^I\)) are much smaller than the total concentration of species \(i\), that is, \(J_i^A, J_i^I\ll 1\) if and only if \(K_i^A, K_i^I\ll L_i\). Note that the initial conditions now satisfy \(u_i(0)\in [0,1]\) for all \(i\).
1.2 Behavior of \(A\frac{1-x}{J+1-x}-I\frac{x}{J+x}\) as \(J\rightarrow 0\)
Consider the onedimensional system
Figure 9 shows the graph of the right-hand side of Eq. (22) for the fixed values \(A = 1, I = 0.5\) and three different values of \(J\). Note that as \(J\) becomes smaller, the graph becomes flatter on \((0,1)\). Then, for \(J\) small, we can approximate Eq. (22) in the interior of \([0,1]\) by the linear ODE
For \(x\sim J\), we can approximate Eq. (22) by the ODE
And for \(x\sim 1-J\), we can approximate Eq. (22) by the ODE
For the values \(A=1, I=0.5\), we obtain the following approximations.
\(\frac{\hbox {d}x}{\hbox {d}t} = 0.5\), if \(J\ll x \ll 1-J\)
\(\frac{\hbox {d}x}{\hbox {d}t} = 1-0.5\frac{x}{J+x}\), if \(x\sim J\)
\(\frac{\hbox {d}x}{\hbox {d}t} = \frac{1-x}{J+1-x}-0.5\), if \(x\sim 1-J\)
Note that there is an asymptotically stable steady state close to 1. Intuitively, for \(J\) small, solutions that start in the region \(x\sim J\) quickly reach the region \(J\ll x \ll 1-J\), where the system behaves like a linear one. Then, solutions increase almost linearly (with slope 0.5) until they enter the region \(x\sim 1-J\), where they approach the steady state (see Fig. 10).
We see that in the limit \(J\rightarrow 0\), we obtain the solutions
where \(x(0)\in [0,1]\). Note that these functions are the solutions of the ODE
given in Eq. (16) (0th order approximation).
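The limiting behavior above can be checked numerically. The following Python sketch (the values \(A=1, I=0.5, J=10^{-3}\) and the forward-Euler step size are illustrative choices, not taken from the text) integrates Eq. (22) and compares the result with the piecewise linear limit solution \(x(t)=\min (x(0)+(A-I)t,1)\).

```python
def f(x, A=1.0, I=0.5, J=1e-3):
    """Right-hand side of Eq. (22): A(1-x)/(J+1-x) - I*x/(J+x)."""
    return A * (1 - x) / (J + 1 - x) - I * x / (J + x)

def euler(x0, T=3.0, dt=1e-3):
    """Forward-Euler integration of the scalar ODE."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * f(x)
    return x

x0, T = 0.1, 3.0
x_full = euler(x0, T)
# Piecewise linear limit (0th order approximation, Eq. (16)) for A > I:
x_limit = min(x0 + 0.5 * T, 1.0)  # here A - I = 0.5
print(x_full, x_limit)
```

For small \(J\) the full solution settles at a steady state within \(O(J)\) of 1, so the two values agree closely.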
1.3 Proof of Theorem 4
The main idea of the proof is that for \(u_i\ne 0, 1\), the right-hand side of Eq. (1) converges to \(Wu+b\). More precisely, the convergence is uniform on compact subsets of \((0,1)^N\), so that we have compact convergence.
Also, given a steady state of Eq. (1), we can solve for \(u_i\) and obtain \(u_i=\Gamma ^J_i(u)\) (\(\Gamma ^J_i\) will be defined later). The proofs also use the fact that as \(J\rightarrow 0, \Gamma ^J=(\Gamma ^J_1,\ldots ,\Gamma ^J_N)\) converges uniformly to the function \(u\mapsto H(Wu+b)\) on compact subsets of each chamber, that is, we also have compact convergence of \(\Gamma ^J\).
To prove Theorem 4, we need the following definitions and lemmas.
A point \(u^*\in [0,1]^N\) will be a steady state of the ODE in Eq. (1) if and only if
for all \(i\). Solving the corresponding quadratic equation for \(u^*_i\), we obtain the solutions \(u^*_i=1/2\) if \(A_i(u^*)= I_i(u^*)\); and
if \(A_i(u^*)\ne I_i(u^*)\), where \(\Delta _i(u^*)\) is the discriminant of the quadratic equation, given by
The following lemma states that up to a set of small measure and for small \(J\), all steady states of the ODE in Eq. (1) are given by the fixed points of the function \(\Gamma ^J=(\Gamma ^J_1,\ldots ,\Gamma ^J_N)\) defined by
Lemma 11
For any compact subset \(K\) of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), there is an \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the function \(\Gamma ^J\) is well defined (as a real-valued function) on \(K\), and \(u^*\) is a steady state in \(K\) of the ODE in Eq. (1) if and only if \(\Gamma ^J(u^*)=u^*\).
Proof
Since the denominator is not 0 on \(K\), we need to show that \(\Delta _i(u)\) is nonnegative on \(K\) for \(J\) small.
Since \(K\) is compact and \(A_i(u)-I_i(u)=\sum _{j=1}^N w_{ij} u_j +b_i\ne 0\) for all \(i=1,\ldots ,N\) and all \(u\in K\), there is \(r>0\) such that \(|A_i(u)-I_i(u)|=|\sum _{j=1}^N w_{ij} u_j +b_i|\ge r\) on \(K\). Then, since \(\Delta _i(u)=(A_i(u)-I_i(u)-A_i(u)J-I_i(u)J)^2+4A_i(u)J(A_i(u)-I_i(u))\) converges uniformly to \((A_i(u)-I_i(u))^2\ge r^2\) on \(K\) as \(J \rightarrow 0\), there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the discriminant \(\Delta _i(u)\) is positive on \(K\) for all \(i\). Thus, \(\Gamma ^J\) is well defined for all \(0<J<\epsilon _K\).
If \(u^*\in K\) and \(\Gamma ^J(u^*)=u^*\), then \(u^*\) satisfies Eq. (24), and hence it is a steady state of the ODE in Eq. (1). Conversely, if \(u^*\) is a steady state of the ODE in Eq. (1), then \(u^*\) satisfies Eq. (24). Of the two roots of the corresponding quadratic, only \(\Gamma _i^J(u^*)\) lies in \([0,1]\), and hence \(u^*=\Gamma ^J(u^*)\).\(\square \)
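As a sanity check of Lemma 11 in a scalar, constant-coefficient setting, one can compute the root of the steady-state quadratic explicitly and verify that it is a zero of the right-hand side. The following Python sketch uses illustrative values (\(A=1, I=0.5\)), not parameters from the text.

```python
import math

def f(u, A, I, J):
    """Right-hand side A(1-u)/(J+1-u) - I*u/(J+u) for constant A, I."""
    return A * (1 - u) / (J + 1 - u) - I * u / (J + u)

def gamma(A, I, J):
    """Root of (A-I)u^2 - (A-I-AJ-IJ)u - AJ = 0 lying in [0,1]
    (assumes A != I and J small enough that the discriminant is positive)."""
    a = A - I
    b = A - I - A * J - I * J
    return (b + math.sqrt(b * b + 4 * A * J * a)) / (2 * a)

for J in (1e-1, 1e-2, 1e-3):
    u = gamma(1.0, 0.5, J)          # A > I: fixed point approaches 1
    print(J, u, f(u, 1.0, 0.5, J))  # f vanishes at u, so u is a steady state
```

Swapping the roles of \(A\) and \(I\) gives a root approaching 0, matching the pointwise convergence of \(\Gamma ^J\) to the Heaviside function described below.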
It is important to notice that if \(A_i(u)-I_i(u)>0\), then \(\Gamma _i^J(u)\) (which is well defined for \(J\) small) converges to 1, and if \(A_i(u)-I_i(u)<0\), then \(\Gamma _i^J(u)\) converges to 0 as \(J\rightarrow 0\). Hence, \(\Gamma ^J(u)\) converges pointwise to \(H(Wu+b)\) on \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\) as \(J\rightarrow 0\). The next lemma states that on any compact subset of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), the convergence is uniform and the derivative converges uniformly to zero.
Lemma 12
If \(K\) is a compact subset of \(\cup _{\mathcal {C}\in \Omega }\mathcal {C}\), then

The function \(\Gamma ^J\) converges uniformly to the function \(u\mapsto H(Wu+b)\) on \(K\) as \(J\rightarrow 0\). In particular, since \(H(Wu+b)\) is constant in each chamber, for any chamber \(\mathcal {C}\in \Omega \), \(\Gamma ^J\) converges uniformly to the constant function \(H(Wv+b)\) on \(K\cap \mathcal {C}\) for any fixed \(v\in \mathcal {C}\). Also, \(\Gamma ^J\) converges uniformly to the constant function \(H(Wx+b)\) on \(K\cap \mathcal {C}(x)\) for any \(x\in \{0,1\}^N\).

The Jacobian matrix \(D \Gamma ^J\) converges uniformly to zero on \(K\) as \(J\rightarrow 0\).
Proof
Similar to the proof of Lemma 11, there is a number \(r>0\) such that \(|A_i(u)-I_i(u)|\ge r\) for all \(i\) and all \(u\in K\), which is enough to guarantee uniform convergence on \(K\).\(\square \)
We now prove Theorem 4
Proof
In this proof, “ODE” will refer to the ODE in Eq. (1) and “BN” will refer to the BN in Eq. (17). Even though the steady states of this ODE depend on \(J\), for simplicity, we will denote them by \(u^*\) instead of \(u^{*J}\).
First, from Lemma 11, we consider \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the function \(\Gamma ^J\) is well defined on \(K\) and such that \(u^*\in K\) is a steady state of the ODE if and only if \(u^*\) is a fixed point of \(\Gamma ^J\). Second, from Lemma 12, we have that \(\Gamma ^J\) converges uniformly to the constant vector \(h(x)=H(Wx+b)\) on \(K\cap \mathcal {C}(x)\). Since \(h(x)\) is in \(\{0,1\}^N\), we have that \(K\cap \mathcal {C}(h(x))\) contains a neighborhood of \(h(x)\). It follows from uniform convergence that for all \(0<J<\epsilon _K\) (taking a smaller \(\epsilon _K\) if necessary), \(\Gamma ^J(K\cap \mathcal {C}(x))\subseteq K\cap \mathcal {C}(h(x))\). Also, on any chamber \(\mathcal {C}\) that does not contain an element of \(\{0,1\}^N\), \(\Gamma ^J\) converges uniformly to the constant vector \(x{:=}H(Wv+b)\) for any fixed \(v\in \mathcal {C}\) (\(H(Wu+b)\) is constant on \(\mathcal {C}\)); then, for all \(0<J<\epsilon _K\) (taking a smaller \(\epsilon _K\) if necessary), \(\Gamma ^J(K\cap \mathcal {C})\subseteq K\cap \mathcal {C}(x)\). Also, note that \(K\cap \mathcal {C}\) is compact for any chamber \(\mathcal {C}\).
Now, suppose \(x^*\) is a steady state of the BN, that is, \(h(x^*)=x^*\). Then, for all \(0<J<\epsilon _K\), we obtain that \(\Gamma ^J(K\cap \mathcal {C}(x^*))\subseteq K\cap \mathcal {C}(h(x^*))=K\cap \mathcal {C}(x^*)\). Since \(\Gamma ^J\) is a continuous function from a convex compact set to itself, Brouwer's fixed-point theorem guarantees a fixed point \(\Gamma ^J(u^*)=u^*\in K\cap \mathcal {C}(x^*)\). Then, \(u^*\in K\cap \mathcal {C}(x^*)\) is a steady state of the ODE. Now, suppose that the ODE has a steady state \(u^*\in K\), let \(\mathcal {C}\) be the chamber that contains \(u^*\), and set \(x^*{:=}H(Wu^*+b)\). Since \(u^*=\Gamma ^J(u^*)\in \Gamma ^J(K\cap \mathcal {C})\subseteq K\cap \mathcal {C}(x^*)\), we have that \(u^*\in \mathcal {C}(x^*)\). Since \(u^*\) and \(x^*\) belong to the same chamber, we also have that \(H(Wx^*+b)=H(Wu^*+b)=x^*\); thus, \(x^*\) is a steady state of the BN.
From Lemma 12, we can make the norm of \(D\Gamma ^J\) small so that \(u^*\) is the unique fixed point of \(\Gamma ^J\) in \(K\cap \mathcal {C}(x^*)\). Since \(\Gamma ^J\) converges uniformly to \(H(Wx^*+b)=h(x^*)=x^*\) on \(K\cap \mathcal {C}(x^*)\), we have that \(u^*=\Gamma ^J(u^*)\) converges to \(x^*\). Finally, to prove that the steady state of the ODE is asymptotically stable, we will show that the Jacobian matrix of the ODE can be seen as a small perturbation of a matrix that has negative eigenvalues. We will use the alternative form of \(\Gamma ^J_i\):
Denote \(f^J=(f_1^J,\ldots ,f_N^J)\), where \(f_i^J(u)=A_i(u) \frac{1-u_i}{J+1-u_i}-I_i(u) \frac{u_i}{J+u_i}\). We now compute \(Df^J(u)\). For \(i\ne j\), we have
Also,
Let \(Z^J\) be the matrix given by \(Z^J_{ij}=w_{ij}^+\frac{1-u_i}{J+1-u_i}-w_{ij}^-\frac{u_i}{J+u_i}\), and let \(E^J\) be the diagonal matrix with entries \(E_{ii}^J=-A_i(u)\frac{J}{(J+1-u_i)^2}-I_i(u)\frac{J}{(J+u_i)^2}\). Then,
\(Df^J(u)=Z^J+E^J\), where the entries of \(Z^J\) are bounded. We will now show that for any steady state of the ODE in \(K\), \(\lim _{J\rightarrow 0} E^J_{ii}=-\infty \). Once this is established, \(Df^J(u^*)\) can be seen as a small perturbation of a matrix with negative eigenvalues. Since eigenvalues are continuous with respect to matrix entries, it follows that the eigenvalues of \(Df^J(u^*)\) have negative real part, and hence \(u^*\) is asymptotically stable.
We now show that \(\lim _{J\rightarrow 0} E^J_{ii}=-\infty \). By computing \(\frac{(\Gamma ^J_i(u))^2}{J}\) and setting \(J=0\), it follows that \(\lim _{J\rightarrow 0}\frac{(\Gamma ^J_i(u))^2}{J}=0\) when \(A_i(u)-I_i(u)\) is negative. Similarly, we obtain that \(\lim _{J\rightarrow 0}\frac{(1-\Gamma ^J_i(u))^2}{J}=0\) when \(A_i(u)-I_i(u)\) is positive. From these two limits, it follows that if \(A_i(u)-I_i(u)\ne 0\), then \(\lim _{J\rightarrow 0}\frac{(J+\Gamma ^J_i(u))^2}{J}=0\) or \(\lim _{J\rightarrow 0}\frac{(J+1-\Gamma ^J_i(u))^2}{J}=0\). Furthermore, since \(A_i(u)-I_i(u)\) is uniformly bounded away from zero on \(K\), the convergence is uniform.
If \(u^*\in K\) is a steady state of the ODE, then \(u_i^*=\Gamma ^J_i(u^*)\) and
\(\lim _{J\rightarrow 0}\left( -A_i(u^*)\frac{J}{(J+1-u^*_i)^2}-I_i(u^*)\frac{J}{(J+u^*_i)^2}\right) =\)
\(\lim _{J\rightarrow 0} \left( -A_i(u^*) \frac{J}{(J+1-\Gamma ^J_i(u^*))^2} - I_i(u^*) \frac{J}{(J+\Gamma ^J_i(u^*))^2} \right) =-\infty \). Note that uniform convergence is needed in the last step because \(u^*\) depends on \(J\).\(\square \)
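The divergence of the diagonal entries can be observed numerically in the scalar case: at the steady state \(u^*=\Gamma ^J\), the entry \(E^J_{11}\) scales like \(-1/J\). The following Python sketch (illustrative values \(A=1, I=0.5\), not from the text) tabulates \(E^J_{11}\) for decreasing \(J\).

```python
import math

A, I = 1.0, 0.5

def steady_state(J):
    # Root in [0,1] of (A-I)u^2 - (A-I-AJ-IJ)u - AJ = 0
    a, b = A - I, A - I - A * J - I * J
    return (b + math.sqrt(b * b + 4 * A * J * a)) / (2 * a)

Es = []
for J in (1e-1, 1e-2, 1e-3, 1e-4):
    u = steady_state(J)
    E = -A * J / (J + 1 - u) ** 2 - I * J / (J + u) ** 2
    Es.append(E)
    print(J, E)  # E decreases without bound as J -> 0
```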
1.4 Proof of Theorems 6 and 8
In the rest of this section, “ODE” will refer to the ODE in Eq. (1) and “BN” will refer to the BN in Eq. (17).
Notice that for any \(x\in \{0,1\}^N, \mathcal {C}(x)= \{u\in [0,1]^N:H(Wu+b)=H(Wx+b)\}\). We now prove Theorem 6.
Proof
Let \(y=h(x)\) and for simplicity in the notation, assume that \(y=(0,\ldots ,0)\).
In the case \(x=y\), we will show that a small hypercube at \((0,\ldots ,0)\) is invariant for the original ODE. Since \(x=(0,\ldots ,0)\) and \(h(x)=x\), we have that \(\mathcal {C}(x)=\cap _{i=1}^N\{u\in [0,1]^N:\sum _{j=1}^N w_{ij} u_j +b_i<0\}\). We now consider a hypercube of the form \(K=[0,D]^N\) with \(D\) small, so that \(K\subseteq \mathcal {C}(x)\). We claim that for \(J\) small, \(K\) is invariant. Since we already showed that \([0,1]^N\) is invariant, it is enough to check that if \(u\in K\) with \(u_i=D\), then \(f_i^J(u)\le 0\). Since \(K\) is compact and \(\sum _{j=1}^N w_{ij} u_j +b_i<0\) for all \(i\) and all \(u\in K\), there is \(r>0\) such that \(\sum _{j=1}^N w_{ij} u_j +b_i\le -r\) for all \(u\in K\). It follows that \(A_i(u)-I_i(u) \frac{u_i}{J+u_i}\) converges uniformly to \(A_i(u)-I_i(u)=\sum _{j=1}^N w_{ij} u_j +b_i\le -r\) on \(\{u\in K:u_i=D\}\) as \(J\rightarrow 0\). Also, \(f_i^J(u)=A_i(u) \frac{1-u_i}{J+1-u_i}-I_i(u) \frac{u_i}{J+u_i}\le A_i(u)-I_i(u) \frac{u_i}{J+u_i}\) on \(\{u\in K:u_i=D\}\). Thus, on \(\{u\in K:u_i=D\}\), \(f_i^J\) is bounded above by a function that converges uniformly to a negative function. Hence, there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), \(f_i^J\) is negative on \(\{u\in K:u_i=D\}\), and \(K\) is invariant.
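The invariance argument can be illustrated numerically. In the following Python sketch, the weight matrix \(W\) and threshold vector \(b\) are hypothetical choices (not from the text) for which \(Wu+b<0\) on the whole cube, so \(x=(0,0)\) is a fixed point of \(h\); a forward-Euler trajectory then stays in \([0,1]^2\) and decays toward the origin.

```python
import numpy as np

# Hypothetical 2-species network: every entry of Wu + b is negative on the
# cube, so x = (0,0) satisfies h(x) = H(Wx + b) = x.
W = np.array([[0.0, -1.0], [-1.0, 0.0]])
b = np.array([-0.2, -0.2])
J = 1e-2

def f(u):
    A = np.clip(W, 0, None) @ u + np.clip(b, 0, None)    # activation A_i(u)
    I = np.clip(-W, 0, None) @ u + np.clip(-b, 0, None)  # inhibition I_i(u)
    return A * (1 - u) / (J + 1 - u) - I * u / (J + u)

u = np.array([0.3, 0.25])
dt = 1e-3
for _ in range(3000):                   # integrate to T = 3
    u = u + dt * f(u)
    assert np.all((u >= 0) & (u <= 1))  # the cube stays invariant

print(u)  # decays toward the steady state near (0, 0)
```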
In the case \(x\ne y\), we assume for simplicity that \(x=(1,0,\ldots ,0)\) and \(y=(0,0,\ldots ,0)\). Then, since \(h(x)=y\) and \(h_1(y)=0\), we have the following
In particular, \(\sum _{j=1}^N w_{1j}u_j+b_1<0\) for all \(u\in \mathcal {C}(x)\cup \mathcal {C}(y)\). This also means that the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u:\sum _{j=1}^N w_{kj}u_j+b_k=0\}\) for some \(k\ne 1\); then, the common face of \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is given by
Now, for \(r>0\) small, we define the set
which will be a face of the neighborhood of \(x\) that we are looking for (see Fig. 11). We now project \(L\) onto the \(u_1=0\) plane (see Fig. 11), that is, define
We use \(L_1\) to “generate” a box parallel to the \(u_1\) axis (see Fig. 11); namely, consider
Now, consider the neighborhood of \(x\) given by
\(K\) is a polytope such that \(L\) is one of its faces. Similar to the case \(x=y\), there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), we have that for any face of \(K\) other than \(L\) the vector field points inward. Also, the first coordinate of the vector field is negative on \(K\). Thus, any solution with initial condition in \(K\) must exit \(K\) through its face \(L\) and then enter \(\mathcal {C}(y)\), that is, the ODE transitions from \(K\) to \(\mathcal {C}(y)\).\(\square \)
We prove Theorem 8.
Proof
We proceed as in the proof of Theorem 6, and for simplicity, we assume that \(x=(1,0,\ldots ,0)\) and \(y=h(x)=(0,\ldots ,0)\). Since \(h(x)=y\) and \(h_1(y)=0\), we have the following
In particular, \(\sum _{j=1}^N w_{1j}u_j+b_1<0\) for all \(u\in \mathcal {C}(x)\cup \mathcal {C}(y)\). This also means that the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u:\sum _{j=1}^N w_{kj}u_j+b_k=0\}\) for some \(k\ne 1\); furthermore, since this hyperplane is parallel to the axes, we have that \((w_{k1},w_{k2},\ldots ,w_{kN})=(w_{k1},0,\ldots ,0)\). Then, the hyperplane that separates \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is \(\{u: u_1=-\frac{b_k}{w_{k1}}\}\), and the common face of \(\mathcal {C}(x)\) and \(\mathcal {C}(y)\) is given by
Now, let \(K\) be a compact subset of \(\mathcal {C}(x)\) and consider \(r>0\) small such that \(K\subseteq K^0{:=}\{u\in [0,1]^N:\sum _{j=1}^N w_{ij}u_j+b_i\le -r \text { for}\, i\ne k \, \text {and}\, u_1\ge -\frac{b_k}{w_{k1}} \}\) (see Fig. 12). Since the hyperplanes in \(\mathcal {H}\) are parallel to the axes, \(K^0\) is a box with faces parallel to the axes, and \(K^0\) also shares a face with \(\mathcal {C}(y)\). Then, similar to the proof of Theorem 6, there is \(\epsilon _K>0\) such that for all \(0<J<\epsilon _K\), the vector field of the ODE points inward at the faces of \(K^0\) other than the face shared with \(\mathcal {C}(y)\), and the first entry of the vector field is negative. Then, the ODE will transition from \(K^0\) to \(\mathcal {C}(y)=\mathcal {C}(h(x))\).
Now, let \(K^1\) be a compact subset of \(\mathcal {C}(h(x))\) such that \(K^1\) intersects all solutions that start in \(K\) (see Fig. 12). Then, for all \(0<J<\epsilon _K\) (making \(\epsilon _K\) smaller if necessary), the ODE transitions from \(K^1\) to \(\mathcal {C}(h^2(x))\). This also means that the ODE transitions from \(K^0\) to \(\mathcal {C}(h^2(x))\). The proof follows by induction.\(\square \)
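The chamber-to-chamber transitions in the proof can be observed directly. In the following Python sketch, \(W\) and \(b\) are hypothetical choices (not from the text) for which the Boolean map \(h(x)=H(Wx+b)\) has the transient \((1,1)\rightarrow (1,0)\rightarrow (0,0)\) with \((0,0)\) fixed; a trajectory started in the chamber with sign pattern \((1,0)\) passes into the chamber with pattern \((0,0)\) and converges to the origin, matching successive iterates of \(h\).

```python
import numpy as np

# Hypothetical weights: h sends (1,1) -> (1,0) -> (0,0), and (0,0) is fixed.
W = np.array([[0.0, 0.6], [0.2, 0.0]])
b = np.array([-0.2, -0.4])
J = 1e-2

def f(u):
    A = np.clip(W, 0, None) @ u + np.clip(b, 0, None)    # activation A_i(u)
    I = np.clip(-W, 0, None) @ u + np.clip(-b, 0, None)  # inhibition I_i(u)
    return A * (1 - u) / (J + 1 - u) - I * u / (J + u)

u = np.array([0.9, 0.9])   # starts in the chamber with sign pattern (1, 0)
patterns = []
dt = 1e-3
for _ in range(15000):     # integrate to T = 15
    s = tuple((W @ u + b > 0).astype(int))  # chamber label H(Wu + b)
    if not patterns or patterns[-1] != s:
        patterns.append(s)
    u = u + dt * f(u)

print(patterns, u)  # chamber labels visited, final state near (0, 0)
```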
1.5 Case \(n>1\)
In this case, Eq. (1) becomes
The behavior of the system as \(J\rightarrow 0\) is similar. To illustrate this, we use the example from Section 6.2 for \(n>1\); namely, consider
Figure 13 shows the graph of the right-hand side of Eq. (26) for the fixed values \(A = 1, I = 0.5\), three different values of \(J\), and three different values of \(n\). Note that as \(J\) becomes smaller, the graph becomes flatter on \((0,1)\), as in Section 6.2.
All definitions and results remain virtually unchanged except Sections 6.3 and 6.4. For \(n>1\), Eq. (23) becomes
which cannot be solved for \(u_i^*\) in closed form as in Eqs. (24) and (25). However, we can use the implicit function theorem to guarantee the existence of a function \(\Gamma ^J\) with the required properties (such as compact convergence to the Heaviside function).
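For \(n>1\) the steady state can no longer be written in closed form, but it can still be computed numerically. The following Python sketch (illustrative values \(A=1, I=0.5, n=2\), not from the text) finds the zero of the right-hand side of Eq. (26) by bisection and shows that it approaches 1 as \(J\rightarrow 0\), consistent with convergence to the Heaviside limit.

```python
def f(u, A=1.0, I=0.5, J=1e-2, n=2):
    """Right-hand side of Eq. (26) with Hill exponent n."""
    return (A * (1 - u)**n / (J**n + (1 - u)**n)
            - I * u**n / (J**n + u**n))

def steady_state(J, lo=0.5, hi=1.0, tol=1e-12):
    """Bisection: for A > I, f is positive at lo and negative at hi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, J=J) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for J in (1e-1, 1e-2, 1e-3):
    print(J, steady_state(J))  # steady state approaches 1 as J -> 0
```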
1.6 Avoiding Discontinuity of Solutions in Eq. (14)
There is a way to make the approximations continuous, although it requires changing the equations in the boundary domains. The idea is to reduce the vector field using the approximations \(u_t\approx 1\) and \(u_s\approx 0\) where appropriate, while keeping the derivatives.
For example, for the region \(\mathcal {R}_{12}\), we keep the derivatives and simply use \(u_1 \approx 0\) and \(u_2 \approx 0\) on the right-hand side of Eq. (2). This yields
This is still a decoupled system, and hence solvable.
Similarly, for region \(\mathcal {R}^{12}\), using \(u_1 \approx 1, u_2 \approx 1\), we get
This system is also decoupled and solvable, though it is no longer linear.
For regions \(\mathcal {R}_1^2\) (\(u_1 \approx 0, u_2 \approx 1\)) and \(\mathcal {R}_2^1\) (\(u_1 \approx 1, u_2 \approx 0\)), we will get
and
respectively. These equations are again decoupled and solvable, but again linearity is lost.
We see that in each corner, our original system can be approximated with decoupled, solvable equations. A similar idea can be used for other regions, but as seen above, this alternative approach introduces nonlinearity.
Cite this article
VelizCuba, A., Kumar, A. & Josić, K. Piecewise Linear and Boolean Models of Chemical Reaction Networks. Bull Math Biol 76, 2945–2984 (2014). https://doi.org/10.1007/s115380140040x