
Inverse Problems in Systems Biology: A Critical Review

Systems Biology

Part of the book series: Methods in Molecular Biology ((MIMB,volume 1702))

Abstract

Systems biology may be viewed as a symbiotic, cyclic interplay between the forward and the inverse problem: computational models need to be continuously refined through experiments, and in turn they help us make limited experimental resources more efficient. Every experiment is affected by some noise that can disrupt the measurements. Although noise is certainly a problem, inverse problems already involve the inference of missing information even when the data are entirely reliable, so the addition of a limited amount of noise does not fundamentally change the situation; it can even be used to regularize the so-called ill-posed problem, as defined by Hadamard, and may be seen as an extra source of information. Recent studies have shown that models of complex systems, among them those of systems biology, are poorly constrained and ill-conditioned, because it is difficult to use experimental data to fully estimate their parameters. From these observations was born the concept of sloppy models: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. The concept of sloppiness also entails non-identifiability, because such models are characterized by many parameters that are poorly constrained by experimental data. A strategy therefore needs to be designed to infer, analyze, and understand biological systems. The aim of this work is to provide a critical review of inverse problems in systems biology, defining a strategy to determine the minimal set of information needed to overcome the difficulties arising from dynamic biological models that generally have many unknown, non-measurable parameters.


References

  1. Brenner S (2010) Sequences and consequences. Philos Trans R Soc B Biol Sci 365:207–212. doi:10.1098/rstb.2009.0221

  2. Sethna J. http://sethna.lassp.cornell.edu/research/what_is_sloppiness

  3. Gutenkunst RN (2008) Sloppiness, modelling and evolution in biochemical networks. Thesis, Cornell University, Ithaca

  4. Gutenkunst RN, Casey FP, Waterfall JJ, Myers CR, Sethna JP (2007) Extracting falsifiable predictions from sloppy models. Reverse engineering biological networks: opportunities and challenges in computational methods for pathway inference. Ann N Y Acad Sci 1115:203–211. doi:10.1196/annals.1407.003

  5. Gutenkunst RN, Waterfall JJ, Casey FP, Brown KS, Myers CR, Sethna JP (2007) Universally sloppy parameter sensitivities in systems biology models. PLoS Comput Biol 3:1871–1878. doi:10.1371/journal.pcbi.0030189

  6. Gutenkunst RN, Casey FP, Waterfall JJ, Atlas JC, Kuczenski RS (2007) SloppyCell. http://sloppycell.sourceforge.net

  7. Karlsson J, Anguelova M, Jirstrand M (2012) An efficient method for structural identifiability analysis of large dynamic systems. In: 16th IFAC proc. vol., pp 941–946. doi:10.3182/20120711-3-BE-2027.00381

  8. Anguelova M, Karlsson J, Jirstrand M (2012) Minimal output sets for identifiability. Math Biosci 239:139–153. doi:10.1016/j.mbs.2012.04.005

  9. Raue A, Karlsson J, Saccomani MP, Jirstrand M, Timmer J (2014) Comparison of approaches for parameter identifiability analysis of biological systems. Bioinformatics 30:1440–1448. doi:10.1093/bioinformatics/btu006

  10. Chiş OT, Banga JR, Balsa-Canto E (2011) Structural identifiability of systems biology models: a critical comparison of methods. PLoS One 6:e27755. doi:10.1371/journal.pone.0027755

  11. Dilão R, Muraro D (2010) A software tool to model genetic regulatory networks. Applications to the modeling of threshold phenomena and of spatial patterning in Drosophila. PLoS One 5(5). doi:10.1371/journal.pone.0010743

  12. Shapiro BE, Levchenko A, Wold BJ, Meyerowitz EM, Mjolsness ED (2003) Cellerator: extending a computer algebra system to include biochemical arrows for signal transduction modeling. Bioinformatics 19(5):677–678. doi:10.1093/bioinformatics/btg042

  13. Sedoglavic A (2002) A probabilistic algorithm to test local algebraic observability in polynomial time. J Symb Comput 33:735–755. doi:10.1006/jsco.2002.0532

  14. Sedoglavic A (2007) Reduction of algebraic parametric systems by rectification of their affine expanded Lie symmetries. In: Proceedings of 2nd international conference on algebraic biology, 2–4 July 2007. doi:10.1007/978-3-540-73433-8_20

  15. Guzzi R (2012) Introduction to inverse methods with applications to geophysics and remote sensing (in Italian). Earth sciences and geography series. Springer, New York. http://www.springer.com/it/book/9788847024946

  16. Ambrosio L, Dal Maso G (1990) A general chain rule for distributional derivatives. Proc Am Math Soc 108:691–702. doi:10.1090/S0002-9939-1990-0969514-3

  17. Li C, Donizelli M, Rodriguez N et al (2010) BioModels database: an enhanced, curated and annotated resource for published quantitative kinetic models. BMC Syst Biol 4:92. doi:10.1186/1752-0509-4-92

  18. Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC et al (2003) The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics 19:524–531. doi:10.1093/bioinformatics/btg015

  19. Orr HA (2006) The distribution of fitness effects among beneficial mutations in Fisher’s geometric model of adaptation. J Theor Biol 238:279–285. doi:10.1016/j.jtbi.2005.05.001

  20. Waterfall JJ (2006) Universality in multiparameter fitting: sloppy models. Ph.D. thesis, Cornell University, Ithaca, New York

  21. White A, Tolman M, Thames HD, Withers HR, Mason KA, Transtrum MK (2016) The limitations of model-based experimental design and parameter estimation in sloppy systems. PLoS Comput Biol 12:e1005227. doi:10.1371/journal.pcbi.1005227

  22. Tyson JJ (1991) Modeling the cell division cycle: cdc2 and cyclin interactions. Proc Natl Acad Sci USA 88:7328–7332

  23. Bellu G, Saccomani MP, Audoly S, D’Angiò L (2007) DAISY: a new software tool to test global identifiability of biological and physiological systems. Comput Methods Prog Biomed 88:52–61. doi:10.1016/j.cmpb.2007.07.002

  24. Song C, Phenix H, Abedi V, Scott M, Ingalls BP, Kaern M, Perkins TJ (2010) Estimating the stochastic bifurcation structure of cellular networks. PLoS Comput Biol 6. doi:10.1371/journal.pcbi.1000699

  25. Lu J, Engl HW, Schuster P (2006) Inverse bifurcation analysis: application to simple gene systems. Algorithms Mol Biol 1:11. doi:10.1186/1748-7188-1-11

  26. Villaverde AF, Banga JR (2014) Reverse engineering and identification in systems biology: strategies, perspectives and challenges. J R Soc Interface 11. doi:10.1098/rsif.2013.0505

  27. Paci P, Colombo T, Fiscon G, Gurtner A, Pavesi G, Farina L (2017) SWIM: a computational tool to unveiling crucial nodes in complex biological networks. Sci Rep 7:44797. doi:10.1038/srep44797

  28. Villaverde A, Henriques D, Smallbone K, Bongard S, Schmid J, Cicin-Sain D, Crombach A, Saez-Rodriguez J, Mauch K, Balsa-Canto E, Mendes P, Jaeger J, Banga JR (2015) BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology. BMC Syst Biol 9:8. doi:10.1186/s12918-015-0144-4

  29. Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007) Numerical recipes: the art of scientific computing, 3rd edn. Cambridge University Press, Cambridge

  30. Haselgrove CB (1961) The solution of nonlinear equations and of differential equations with two-point boundary conditions. Comput J 4:255–259

  31. Gillespie DT (2007) Stochastic simulation of chemical kinetics. Annu Rev Phys Chem 58:35–55. doi:10.1146/annurev.physchem.58.032806.104637


Acknowledgements

The authors thank Giuseppe Macino and Lorenzo Farina for their inspiring discussions on the whole analysis presented in this study.

Author information


Corresponding author

Correspondence to Rodolfo Guzzi.


Appendices

Appendix A: Notes

1.1 A.1 Deterministic Solutions

This appendix is intended as an aid to characterizing systems biology through mathematical models, which may consist of a system of differential equations involving state variables and parameters. If the variables of such a system do not depend explicitly on time, the system is said to be autonomous. Under certain assumptions defined below, such a system may be considered an autonomous dynamical system. Since time does not occur explicitly in the equations, the solution of the system of differential equations may be projected into a space, called the phase space, in which the behavior of the state variables is described.

The plot of the solution in this space is called a phase portrait. Bifurcation analysis of a system of differential equations, i.e., of an autonomous dynamical system, is concerned with changes in the qualitative behavior of its phase portrait as the parameters vary, and more precisely with what happens when a bifurcation parameter reaches a certain value, called the critical value. Bifurcation theory is thus of great importance in the study of dynamical systems because it indicates stability changes, structural changes in a system, and so on. Plotting the solution of an autonomous dynamical system against the bifurcation parameter leads to the construction of a bifurcation diagram. Such a diagram provides knowledge of the behavior of the solution: constant, periodic, nonperiodic, or even chaotic. As there are many kinds of solution behavior, there are many kinds of bifurcations. The Hopf bifurcation corresponds to the onset of periodic solutions, while the period-doubling bifurcation, or period-doubling cascade, is one of the routes to chaos for dynamical systems.

1.2 A.2 Bifurcation Concepts

A bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden qualitative or topological change in its behavior. Generally, at a bifurcation, the local stability properties of equilibria, periodic orbits, or other invariant sets change. Bifurcations come in two types: local bifurcations, which can be analyzed entirely through changes in the local stability properties of equilibria, periodic orbits, or other invariant sets as parameters cross critical thresholds; and global bifurcations, which often occur when larger invariant sets of the system collide with each other or with equilibria of the system. Global bifurcations cannot be detected purely by a stability analysis of the equilibria (fixed or equilibrium points, see the next section).

In dynamical systems, only the solutions of linear systems may be found explicitly. Unfortunately, real-life problems can generally be modelled only by nonlinear systems. The main idea is therefore to approximate a nonlinear system by a linear one (around an equilibrium point).

1.3 A.3 Linear Stability Analysis

Bifurcations indicate qualitative changes in a system’s behavior. For a dynamical system \( \frac{dy}{dt}=f\left(y,\lambda \right) \), bifurcation points are those equilibrium points at which the Jacobian \( \frac{\partial f}{\partial y} \) is singular. As a definition, consider the nonlinear differential equation

$$ \dot{x}(t)=f\left(x(t),u(t)\right), $$
(19)

where f is a function mapping \( {R}^n\times {R}^m\to {R}^n \). A point \( \overline{x} \) is called an equilibrium point if there is a specific \( \overline{u}\in {R}^m \) such that

$$ f\left(\overline{x},\overline{u}\right)={0}_n. $$
(20)

Suppose \( \overline{x} \) is an equilibrium point (with the input \( \overline{u} \)). If we take the initial condition \( x(0)=\overline{x} \) and apply the input \( u(t)=\overline{u} \) for all \( t\ge {t}_0 \), then the resulting solution x(t) satisfies

$$ x(t)=\overline{x}, $$
(21)

for all \( t\ge {t}_0 \). That is why it is called an equilibrium point or solution.

Linear stability of dynamical equations can be analyzed in two parts: one for scalar equations and the other for two-dimensional systems.

  1.

    Linear stability analysis for scalar equations

    To analyze the Ordinary Differential Equation (ODE)

    $$ \dot{x}=f(x) $$
    (22)

    locally about the equilibrium point \( x=\overline{x} \), we expand the function f (x) in a Taylor series about the equilibrium point \( \overline{x} \). To emphasize that we are doing a local analysis, it is customary to make a change of variables from the dependent variable x to a local variable. Now let:

    $$ x(t)=\overline{x}+\varepsilon (t), $$
    (23)

    where it is assumed that ε(t) ≪ 1, so that we can justify dropping all terms of order two and higher in the expansion. Substituting \( x(t)=\overline{x}+\varepsilon (t) \) into the Right-Hand Side (RHS) of the ODE yields

    $$ {\displaystyle \begin{array}{ccc}f\left(x(t)\right)& =& f\left(\overline{x}+\varepsilon (t)\right)=f\left(\overline{x}\right)+{f}^{\prime}\left(\overline{x}\right)\varepsilon (t)+{f}^{{\prime\prime}}\left(\overline{x}\right)\frac{\varepsilon^2(t)}{2}+\cdots \\ {}& =& 0+{f}^{\prime}\left(\overline{x}\right)\varepsilon (t)+O\left({\varepsilon}^2\right),\end{array}} $$
    (24)

    and dropping higher order terms, we obtain

    $$ f(x)\approx {f}^{\prime}\left(\overline{x}\right)\varepsilon (t). $$
    (25)

    Note that dropping these higher order terms is valid since ε(t) ≪ 1. Now substituting \( x(t)=\overline{x}+\varepsilon (t) \) into the Left-Hand Side (LHS) of the ODE yields

    $$ {\varepsilon}^{\prime }(t)={f}^{\prime}\left(\overline{x}\right)\varepsilon (t). $$
    (26)

    The goal is to determine whether the perturbation grows or decays. If the solution grows, then the equilibrium point is unstable; if the solution decays, then the fixed point is stable. To determine stability, we simply solve this linear ODE and obtain the solution

    $$ \varepsilon (t)={\varepsilon}_0\exp \left({f}^{\prime}\left(\overline{x}\right)t\right), $$
    (27)

    where ε 0 is a constant. Hence, the solution grows if \( {f}^{\prime}\left(\overline{x}\right)>0 \) and decays if \( {f}^{\prime}\left(\overline{x}\right)<0 \). As a result, the equilibrium point is stable if \( {f}^{\prime}\left(\overline{x}\right)<0 \) and unstable if \( {f}^{\prime}\left(\overline{x}\right)>0 \).

    A first-order autonomous ODE with a parameter r has the general form dx/dt = f (x, r). The fixed points are the values of x for which f (x, r) = 0. A bifurcation occurs when the number or the stability of the fixed points changes as system parameters change. The classical types of bifurcations that occur in nonlinear dynamical systems are produced by the following prototypical differential equations:

    • saddle-node: dx/dt = r − x². A saddle-node (or tangent) bifurcation is a collision and disappearance of two equilibria in a dynamical system. In autonomous systems, this occurs when the critical equilibrium has one zero eigenvalue. This phenomenon is also called a fold or limit point bifurcation. The equilibrium solutions (where dx/dt = 0) are simply \( \overline{x}=\pm \sqrt{r} \). Therefore, if r < 0 we have no real solutions; if r > 0, we have two real solutions.

    We now consider each of the two solutions for r > 0, and examine their linear stability in the usual way. First, we add a small perturbation:

    $$ x=\overline{x}+\varepsilon . $$

    Substituting this into the equation yields

    $$ \frac{d\varepsilon}{d t}=\left(r-{\overline{x}}^2\right)-2\overline{x}\varepsilon -{\varepsilon}^2, $$
    (28)

    and since the term in brackets on the RHS is zero at the equilibrium, dropping the \( {\varepsilon}^2 \) term leaves

    $$ \frac{d\varepsilon}{d t}=-2\overline{x}\varepsilon, $$

    which has the solution

    $$ \varepsilon (t)=A\exp \left(-2\overline{x}t\right). $$

    From this, we see that for \( \overline{x}=+\sqrt{r} \), | ε | → 0 as t →∞ (linear stability), while for \( \overline{x}=-\sqrt{r} \), | ε | grows as t →∞ (linear instability).

    In a typical “bifurcation diagram,” therefore, the saddle node bifurcation at r = 0 corresponds to the creation of two new solution branches. One of these is linearly stable, the other is linearly unstable.

    Let us do the same for the remaining prototypes:

    • transcritical: dx/dt = rx − x²

    • supercritical pitchfork: dx/dt = rx − x³

    • subcritical pitchfork: dx/dt = rx + x³

    and we easily find the corresponding stable and unstable branches.

  2.

    Linear stability analysis for systems

    Consider the two-dimensional nonlinear system

    $$ {\displaystyle \begin{array}{ccc}\dot{x}=f\left(x,y\right),& & \\ {}\dot{y}=g\left(x,y\right),& & \end{array}} $$
    (29)

    and suppose that \( \left(\overline{x},\overline{y}\right) \) is a steady state (equilibrium point), i.e., \( f\left(\overline{x},\overline{y}\right)=0 \) and \( g\left(\overline{x},\overline{y}\right)=0. \) Now let’s consider a small perturbation from the steady state \( \left(\overline{x},\overline{y}\right) \)

    $$ {\displaystyle \begin{array}{ccc}x=\overline{x}+u,& & \\ {}y=\overline{y}+v,& & \end{array}} $$
    (30)

    where u and v are understood to be small, u ≪ 1 and v ≪ 1. It is natural to ask whether u and v grow or decay, so that x and y move away from the steady state or towards it. If the perturbation moves away, the point is called an unstable equilibrium point; if it moves towards the equilibrium point, it is called a stable equilibrium point. As in the scalar case, we expand the Taylor series of f (x, y) and g(x, y):

    $$ {\displaystyle \begin{array}{c}\dot{u}=\dot{x}=f\left(x,y\right)\\ {}=f\left(\overline{x}+u,\overline{y}+v\right)\\ {}=f\left(\overline{x},\overline{y}\right)+{f}_x\left(\overline{x},\overline{y}\right)u+{f}_y\left(\overline{x},\overline{y}\right)v+ higher\kern0.5em order\kern0.5em terms\dots \\ {}={f}_x\left(\overline{x},\overline{y}\right)u+{f}_y\left(\overline{x},\overline{y}\right)v+ higher\kern0.5em order\kern0.5em terms\dots .\end{array}} $$
    (31)

    Similarly,

    $$ {\displaystyle \begin{array}{c}\dot{v}=\dot{y}=g\left(x,y\right)\\ {}=g\left(\overline{x}+u,\overline{y}+v\right)\\ {}=g\left(\overline{x},\overline{y}\right)+{g}_x\left(\overline{x},\overline{y}\right)u+{g}_y\left(\overline{x},\overline{y}\right)v+ higher\kern0.5em order\kern0.5em terms\dots \\ {}={g}_x\left(\overline{x},\overline{y}\right)u+{g}_y\left(\overline{x},\overline{y}\right)v+ higher\kern0.5em order\kern0.5em terms\dots .\end{array}} $$
    (32)

    Since u and v are assumed to be small, the higher order terms are extremely small; neglecting them, we obtain the following linear system of equations governing the evolution of the perturbations u and v,

    $$ \left[\begin{array}{c}\dot{u}\\ {}\dot{v}\end{array}\right]=\left[\begin{array}{cc}{f}_x\left(\overline{x},\overline{y}\right)& {f}_y\left(\overline{x},\overline{y}\right)\\ {}{g}_x\left(\overline{x},\overline{y}\right)& {g}_y\left(\overline{x},\overline{y}\right)\end{array}\right]\left[\begin{array}{c}u\\ {}v\end{array}\right] $$

    where the matrix

    $$ \left[\begin{array}{cc}{f}_x\left(\overline{x},\overline{y}\right)& {f}_y\left(\overline{x},\overline{y}\right)\\ {}{g}_x\left(\overline{x},\overline{y}\right)& {g}_y\left(\overline{x},\overline{y}\right)\end{array}\right] $$

    is called the Jacobian matrix J of the nonlinear system; its rows contain the partial derivatives evaluated at the steady state. The above linear system for u and v has the trivial steady state (u, v) = (0, 0), whose stability is determined by the eigenvalues of the Jacobian matrix evaluated at the equilibrium point, obtained by solving the characteristic equation det(J − λI) = 0, where I is the identity matrix and the λ are the eigenvalues.

    As a summary,

    • Asymptotically stable. A critical point is asymptotically stable if all eigenvalues of the Jacobian matrix J are negative or have negative real parts.

    • Unstable. A critical point is unstable if at least one eigenvalue of the Jacobian matrix J is positive or has a positive real part.

    • Stable (or neutrally stable). Each trajectory moves about the critical point within a finite range of distance.

    • Definition (Hyperbolic point). The equilibrium is said to be hyperbolic if all eigenvalues of the Jacobian matrix have nonzero real parts.

    • Hyperbolic equilibria are robust (i.e., the system is structurally stable): small perturbations do not change the phase portrait near the equilibria qualitatively. Moreover, the local phase portrait of a hyperbolic equilibrium of a nonlinear system is equivalent to that of its linearization. This statement has a mathematically precise form known as the Hartman-Grobman theorem. The theorem guarantees that the stability of the steady state \( \left(\overline{x},\overline{y}\right) \) of the nonlinear system is the same as the stability of the trivial steady state (0, 0) of the linearized system.

    • Definition (Non-hyperbolic point). If at least one eigenvalue of the Jacobian matrix is zero or has a zero real part, then the equilibrium is said to be non-hyperbolic. Non-hyperbolic equilibria are not robust (i.e., the system is not structurally stable): small perturbations can result in a local bifurcation of a non-hyperbolic equilibrium, i.e., it can change stability, disappear, or split into several equilibria. Some authors refer to such an equilibrium by the name of the bifurcation.
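The classification above can be turned into a short numerical check: compute the eigenvalues of the Jacobian at an equilibrium and inspect their real parts. The sketch below (the helper name and the tolerance are our own choices, not from the text) uses NumPy:

```python
import numpy as np

def classify_equilibrium(jac, tol=1e-9):
    """Classify an equilibrium point from the eigenvalues of the
    Jacobian evaluated there (illustrative helper, not from the text)."""
    re = np.linalg.eigvals(np.asarray(jac, dtype=float)).real
    if np.all(re < -tol):
        return "asymptotically stable"   # all real parts negative
    if np.any(re > tol):
        return "unstable"                # at least one positive real part
    return "non-hyperbolic"              # some real part is (numerically) zero

# Example: Jacobian of dx/dt = -x + y, dy/dt = -2y at the origin
J = [[-1.0, 1.0], [0.0, -2.0]]
```

For this linear example the eigenvalues are −1 and −2, so the origin is classified as asymptotically stable.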

1.4 A.4 Applications to Two Nonlinear Equations System

In the study of nonlinear dynamics, it is useful to first introduce a simple system that exhibits periodic behavior as a consequence of a Hopf bifurcation. The two-dimensional nonlinear and autonomous system given by

$$ {\displaystyle \begin{array}{ccc}\dot{x}& =& {f}_1\left(x,y\right)=-x+ ay+{x}^2y,\\ {}\dot{y}& =& {f}_2\left(x,y\right)=b- ay-{x}^2y\end{array}} $$
(33)

has this feature. These equations describe the autocatalytic reaction of two intermediate species x and y in an isothermal batch reactor, when the system is far from equilibrium. In this context, the steady state referred to below is a pseudo steady state, and is applicable when the precursor reactant is slowly varying with time.

The unique steady state is given by \( {x}_S=b \) and \( {y}_S=b/\left(a+{b}^2\right) \). In the phase portrait it appears as the intersection of the two nullclines, i.e., the level curves given by f 1(x, y) = 0 and f 2(x, y) = 0.

The stability of the steady state to small disturbances can be assessed by determining the eigenvalues of the Jacobian J

$$ J=\left(\begin{array}{cc}\frac{\partial {f}_1}{\partial x}& \frac{\partial {f}_1}{\partial y}\\ {}\frac{\partial {f}_2}{\partial x}& \frac{\partial {f}_2}{\partial y}\end{array}\right)\kern1.00em $$
(34)

1.5 A.5 Hopf Bifurcation

The following procedure makes it possible to compute the Hopf bifurcation parameter value of two- or three-dimensional dynamical systems. Since the two-dimensional procedure may be obtained by a simple reduction, only the three-dimensional procedure is presented. A Hopf bifurcation occurs when a complex conjugate pair of eigenvalues crosses the imaginary axis.

The phase portrait with the vector field of directions around the critical point (x S, y S) may easily be obtained. In addition, one may compute the eigenvalues of J, the trace trace(J), the determinant | J |, and \( \Delta = trace{(J)}^2-4\mid J\mid \), as well as the time series of x(t) and y(t) and the real and imaginary parts of the two eigenvalues λ 1 and λ 2. The parameter values at which a Hopf bifurcation occurs may then be identified as those where the real parts of the complex conjugate pair cross zero.
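As an illustration of this idea, a Hopf point of system (33) can be located numerically by scanning a parameter and looking for a sign change in the largest real part of the eigenvalues of the Jacobian at the steady state \( {x}_S=b \), \( {y}_S=b/\left(a+{b}^2\right) \). The grid-scan approach below is our own minimal sketch, not the chapter's procedure:

```python
import numpy as np

def jacobian(a, b):
    # Jacobian of system (33) evaluated at the steady state
    # x_S = b, y_S = b / (a + b**2)
    x, y = b, b / (a + b**2)
    return np.array([[-1.0 + 2*x*y, a + x**2],
                     [-2*x*y,       -(a + x**2)]])

def hopf_b(a, b_grid):
    """Locate a Hopf point by bracketing a sign change of the largest
    real part of the eigenvalues along a parameter grid (assumed grid)."""
    re = [np.linalg.eigvals(jacobian(a, b)).real.max() for b in b_grid]
    for b0, b1, r0, r1 in zip(b_grid, b_grid[1:], re, re[1:]):
        if r0 * r1 < 0:
            return 0.5 * (b0 + b1)   # midpoint of the bracketing interval
    return None
```

For a = 0.1, setting trace(J) = 0 analytically gives b² = (0.8 ± √0.2)/2, so the first crossing lies near b ≈ 0.42, and the scan recovers it.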

1.6 A.6 Nonlinear Equations System

Newton’s method can be used to solve systems of nonlinear equations; here we recall the Newton-Raphson method for two-dimensional systems.

To solve the nonlinear system F(X) = 0, given an initial approximation P 0, we generate a sequence P k that converges to the solution P, i.e., F(P) = 0.

Suppose that P k has been obtained, use the following steps to obtain P k+1.

  1.

    Evaluate the function

    $$ \mathbf{F}\left({\mathbf{P}}_k\right)=\left(\begin{array}{c}{f}_1\left({p}_k,{q}_k\right)\\ {}{f}_2\left({p}_k,{q}_k\right)\end{array}\right)\kern1.00em $$
    (35)
  2.

    Evaluate the Jacobian

    $$ \mathbf{J}\left({\mathbf{P}}_{\mathbf{k}}\right)=\left(\begin{array}{cc}\frac{\partial }{\partial x}\,{f}_1\left({p}_k,{q}_k\right)& \frac{\partial }{\partial y}\,{f}_1\left({p}_k,{q}_k\right)\\ {}\frac{\partial }{\partial x}\,{f}_2\left({p}_k,{q}_k\right)& \frac{\partial }{\partial y}\,{f}_2\left({p}_k,{q}_k\right)\end{array}\right)\kern1.00em $$
    (36)
  3.

    Solve the linear system

    $$ \mathbf{J}\left({\mathbf{P}}_{\mathbf{k}}\right)\Delta {\mathbf{P}}_k=-\mathbf{F}\left({\mathbf{P}}_{\mathbf{k}}\right)\kern1em \mathrm{for}\kern1em \Delta {\mathbf{P}}_k $$
    (37)
  4.

    Compute the next approximation

    $$ {\mathbf{P}}_{k+1}={\mathbf{P}}_k+\Delta {\mathbf{P}}_k $$
    (38)

1.7 A.7 Singular Value Decomposition

Every m × n matrix A with m ≥ n can be decomposed as

$$ \mathbf{A}=\mathbf{U}\Sigma {\mathbf{V}}^T $$
(39)

where (.)T denotes the transposed matrix, U is an m × n matrix, and V is an n × n matrix satisfying

$$ {\mathbf{U}}^T\mathbf{U}={\mathbf{V}}^T\mathbf{V}=\mathbf{V}{\mathbf{V}}^T={\mathbf{I}}_n $$
(40)

and \( \Sigma =\operatorname{diag}\left({\sigma}_1,\dots, {\sigma}_n\right) \) is a diagonal matrix.

The σ i, ordered as σ 1 ≥ σ 2 ≥ ⋯ ≥ σ n ≥ 0, are the square roots of the nonnegative eigenvalues of A T A and are called the singular values of the matrix A. As is well known from linear algebra (see, e.g., Press et al. [29]), singular value decomposition is a technique for computing the pseudoinverse of a singular or ill-conditioned matrix of a linear system. In addition, this method provides the least-squares solution of an overdetermined system and the minimal-norm solution of an underdetermined system.

The pseudoinverse of an m × n matrix A is the n × m matrix A + satisfying

$$ {\mathbf{A}\mathbf{A}}^{+}\mathbf{A}=\mathbf{A},{\mathbf{A}}^{+}{\mathbf{A}\mathbf{A}}^{+}={\mathbf{A}}^{+},{\left({\mathbf{A}}^{+}\mathbf{A}\right)}^{\ast }={\mathbf{A}}^{+}\mathbf{A},{\left({\mathbf{A}\mathbf{A}}^{+}\right)}^{\ast }={\mathbf{A}\mathbf{A}}^{+} $$
(41)

where (.)∗ denotes the conjugate transpose of the matrix.

A unique A + always exists, and it can be computed using the SVD:

  1.

    If m ≥ n and \( \mathbf{A}=\mathbf{U}\Sigma {\mathbf{V}}^T \), then

    $$ {\mathbf{A}}^{+}=\mathbf{V}{\Sigma}^{-1}{\mathbf{U}}^T $$
    (42)

    where \( {\Sigma}^{-1}=\operatorname{diag}\left(1/{\sigma}_1,\dots, 1/{\sigma}_n\right) \)

  2.

    If m < n, then compute \( {\left({\mathbf{A}}^T\right)}^{+} \), the pseudoinverse of A T, and then

    $$ {\mathbf{A}}^{+}={\left({\left({\mathbf{A}}^T\right)}^{+}\right)}^T $$
    (43)

1.8 A.8 Newton-Raphson Method with Pseudoinverse

The idea of using the pseudoinverse to generalize Newton’s method is not new; it has been suggested by various authors, among others Haselgrove [30]. In the iteration formula, the pseudoinverse of the Jacobian matrix is employed,

$$ {x}_{i+1}={x}_i-{J}^{+}\left({x}_i\right)f\left({x}_i\right) $$
(44)

1.9 A.9 Stochastic Resonance

Since the system often shows bistability, one can consider the following stochastic differential equation:

$$ dX=-\frac{\partial U\left(X,t\right)}{\partial X} dt+\eta d{W}_t $$
(45)

where dW t stands for a Wiener process and η represents the noise level.

Now consider potentials of the form U(X, t) = U o(X) + εX cos(2πt/τ), composed of a stationary part U o with two minima at X − and X + and a periodic forcing with amplitude ε and period τ. If ε is small enough, X will oscillate around either X − or X +, without ever switching to the other.

But what happens if one increases the noise amplitude η? Then there is some probability that X will jump from one basin to the other. If the noise level is just right, X will follow the periodic forcing and oscillate between X − and X + with period τ. This is what is meant by stochastic resonance.

In more general terms, there is stochastic resonance whenever adding noise to a system improves its performance or, in the language of signal processing, increases its signal-to-noise ratio. Note that the noise amplitude cannot be too large, or the system becomes completely random.
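This mechanism is easy to reproduce numerically with an Euler-Maruyama discretization of Eq. (45). The quartic double well used below, U o(X) = X^4/4 − X^2/2 with minima at X ± = ±1 and a barrier at X = 0, and all parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def simulate(eta, eps=0.2, tau=50.0, dt=0.01, T=500.0, seed=0):
    """Euler-Maruyama for dX = -dU/dX dt + eta dW with
    U(X, t) = X**4/4 - X**2/2 + eps*X*cos(2*pi*t/tau).
    Returns the number of barrier crossings (well-to-well switches)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = 1.0                        # start in the right-hand well X_+
    switches, side = 0, 1
    for k in range(n):
        t = k * dt
        drift = -(x**3 - x + eps * np.cos(2 * np.pi * t / tau))
        x += drift * dt + eta * np.sqrt(dt) * rng.standard_normal()
        if x * side < 0:           # crossed the barrier at X = 0
            switches += 1
            side = -side
    return switches
```

With η = 0 and this small ε the trajectory stays trapped in one well (no switches), while moderate noise lets it hop between the two wells.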

1.10 A.10 Stochastic Solutions

The deterministic dynamics of populations in continuous time are traditionally described using coupled, first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) [31] is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. It is well known that stochasticity in finite populations can generate dynamics profoundly different from the predictions of the corresponding deterministic model. For example, demographic stochasticity can give rise to regular and persistent population cycles in models that are deterministically stable and can give rise to molecular noise and noisy gene expression in genetic and chemical systems where key molecules are present in small numbers or where key reactions occur at a low rate. Because analytical solutions to stochastic time-evolution equations for all but the simplest systems are intractable, while numerical solutions are often prohibitively difficult, stochastic simulations have become an invaluable tool for studying the dynamics of finite biological, chemical, and physical systems.

The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating statistically correct trajectories of finite well-mixed populations in continuous time. The trajectory that is produced is a stochastic version of the trajectory that would be obtained by solving the corresponding stochastic differential equations.

1.11 A.11 The Gillespie SSAs

The SSA assumes a population consisting of a finite number of individuals distributed over a finite set of discrete states. Changes in the number of individuals in each state occur due to reactions between interacting states.

Given an initial time t 0 and initial population state X(t 0), the SSA generates the time evolution of the state vector X(t) = (X 1(t), …, X N(t)), where X i(t), i = 1, …, N, is the population size of state i at time t and N is the number of states. The states interact through M reactions R j, where j = 1, …, M denotes the jth reaction. A reaction is defined as any process that instantaneously changes the population size of at least one state. Each reaction R j is characterized by two quantities. The first is its state-change vector ν j = (ν 1j, …, ν Nj), where ν ij is the population change in state i caused by one R j reaction. In other words, if the system is in state x = X(t) and one R j reaction occurs, the system instantaneously jumps to state x + ν j. The second is its propensity function a j(x), defined so that a j(x)dt is the probability of one R j reaction occurring in the infinitesimal time interval [t, t + dt).
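The direct method described above can be sketched in a few lines: draw the waiting time from an exponential distribution with total rate a 0(x) = Σ j a j(x), choose reaction j with probability a j(x)/a 0(x), and apply its state-change vector. The toy isomerization A → B below is our own example, not from the chapter:

```python
import numpy as np

def ssa(x0, nu, propensities, t_end, seed=0):
    """Gillespie direct method: exponential waiting times with rate a0(x),
    reaction j picked with probability a_j(x)/a0(x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=int)
    while t < t_end:
        a = np.array([p(x) for p in propensities], dtype=float)
        a0 = a.sum()
        if a0 == 0:
            break                       # no reaction can fire any more
        t += rng.exponential(1.0 / a0)  # time to the next reaction
        if t >= t_end:
            break
        j = rng.choice(len(a), p=a / a0)
        x = x + nu[j]                   # apply the state-change vector
    return x

# Isomerization A -> B with rate constant c = 1.0 (toy example)
nu = [np.array([-1, 1])]
prop = [lambda x: 1.0 * x[0]]
final = ssa([100, 0], nu, prop, t_end=10.0)
```

Total copy number is conserved by construction, and by t = 10 essentially all A molecules have converted to B.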

Appendix B: Software Tools

Beyond personal scripts to manage data, we have also used and tested several open-source software tools, based on Qt and Python. Here we suggest those we consider most flexible and accurate.

  • COPASI is a Qt software application for simulation and analysis of biochemical networks and their dynamics. COPASI is part of de.NBI, the “German Network for Bioinformatics Infrastructure”. It is a stand-alone program that supports models in the SBML standard and can simulate their behavior using ODEs or Gillespie’s stochastic simulation algorithm; arbitrary discrete events can be included in such simulations. http://copasi.org/

  • PyCoTools is a COPASI-based tool, in Python, for parameter estimation and identifiability. https://github.com/CiaranWelsh/PyCoTools

  • SloppyCell is a Python software environment for simulation and analysis of biomolecular networks, mainly developed for sloppy models. http://sloppycell.sourceforge.net/

  • Pycellerator provides Python libraries, a command-line interface, and an IPython notebook interface for Cellerator arrow notation. https://github.com/biomathman/pycellerator

  • Inverse problem solving tools: an open-source Python library for modelling and inversion in geophysics. http://www.fatiando.org/v0.1/api/inversion.html

  • PyDSTool is a sophisticated and integrated simulation and analysis environment for dynamical-systems models of physical systems (ODEs, DAEs, maps, hybrid systems, and bifurcations). PyDSTool is platform independent, written primarily in Python with some underlying C and Fortran legacy code for fast solving. PyDSTool supports symbolic math, optimization, phase-plane analysis, continuation and bifurcation analysis, data analysis, and other tools for modelling, particularly for biological applications. http://www.ni.gsu.edu/~rclewley/PyDSTool/FrontPage.html

We have not had any problems installing and running the above-mentioned software, but check that the different releases are compatible with one another. The Python version used was 2.7.


Copyright information

© 2018 Springer Science+Business Media LLC

About this protocol


Cite this protocol

Guzzi, R., Colombo, T., Paci, P. (2018). Inverse Problems in Systems Biology: A Critical Review. In: Bizzarri, M. (eds) Systems Biology. Methods in Molecular Biology, vol 1702. Humana Press, New York, NY. https://doi.org/10.1007/978-1-4939-7456-6_6


  • DOI: https://doi.org/10.1007/978-1-4939-7456-6_6


  • Publisher Name: Humana Press, New York, NY

  • Print ISBN: 978-1-4939-7455-9

  • Online ISBN: 978-1-4939-7456-6

  • eBook Packages: Springer Protocols
