
Defining and detecting structural sensitivity in biological models: developing a new framework

Published in: Journal of Mathematical Biology

Abstract

When we construct mathematical models to represent biological systems, there is always uncertainty with regards to the model specification—whether with respect to the parameters or to the formulation of model functions. Sometimes choosing two different functions with close shapes in a model can result in substantially different model predictions: a phenomenon known in the literature as structural sensitivity, which is a significant obstacle to improving the predictive power of biological models. In this paper, we revisit the general definition of structural sensitivity, compare several more specific definitions and discuss their usefulness for the construction and analysis of biological models. Then we propose a general approach to reveal structural sensitivity with regards to certain system properties, which considers infinite-dimensional neighbourhoods of the model functions: a far more powerful technique than the conventional approach of varying parameters for a fixed functional form. In particular, we suggest a rigorous method to unearth sensitivity with respect to the local stability of systems’ equilibrium points. We present a method for specifying the neighbourhood of a general unknown function with \(n\) inflection points in terms of a finite number of local function properties, and provide a rigorous proof of its completeness. Using this powerful result, we implement our method to explore sensitivity in several well-known multicomponent ecological models and demonstrate the existence of structural sensitivity in these models. Finally, we argue that structural sensitivity is an important intrinsic property of biological models, and a direct consequence of the complexity of the underlying real systems.


References

  • Adamson MW, Morozov AYu (2013) When can we trust our model predictions? Unearthing structural sensitivity in biological systems. Proc R Soc A

  • Anderson TR, Gentleman WC, Sinha B (2010) Influence of grazing formulations on the emergent properties of a complex ecosystem model in a global ocean general circulation model. Prog Oceanogr 87:201–213

  • Bairagi N, Sarkar RR, Chattopadhyay J (2008) Impacts of incubation delay on the dynamics of an eco-epidemiological system-a theoretical study. Bull Math Biol 70:2017–2038

  • Bazykin AD (1998) Nonlinear dynamics of interacting populations. World Scientific, Singapore

  • Bendoricchio G, Jorgensen S (2001) Fundamentals of ecological modelling. Elsevier Science Ltd, Amsterdam

  • Berezovskaya F, Karev G, Arditi R (2001) Parametric analysis of the ratio-dependent predator-prey model. J Math Biol 43:221–246

  • Butler GJ, Wolkowicz GSK (1986) Predator-mediated competition in the chemostat. J Math Biol 24:167–191

  • Cordoleani F, Nerini D, Gauduchon M, Morozov A, Poggiale J-C (2011) Structural sensitivity of biological models revisited. J Theor Biol 283:82–91

  • Cordoleani F, Nerini D, Morozov A, Gauduchon M, Poggiale J-C (2013) Scaling up the predator functional response in heterogeneous environment: when Holling type III can emerge? J Theor Biol 336:200–208

  • Dieudonne J (1960) Foundations of modern analysis. Academic Press, New York

  • Duffy MA, Sivars-Becker L (2007) Rapid evolution and ecological host–parasite dynamics. Ecol Lett 10:44–53

  • Edwards AM (2001) Adding detritus to a nutrient–phytoplankton–zooplankton model: a dynamical-systems approach. J Plankton Res 23:389–413

  • Eichinger M, Kooijman SALM, Sempere R, Lefevre D, Gregori G, Charriere B, Poggiale J-C (2009) Consumption and release of dissolved organic carbon by marine bacteria in a pulsed-substrate environment: from experiments to modelling. Aquat Microb Ecol 56:41–54

  • Englund G, Leonardsson K (2008) Scaling up the functional response for spatially heterogeneous systems. Ecol Lett 11:440–449

  • Fussmann GF, Ellner SP, Shertzer KW, Hairston NG Jr (2000) Crossing the Hopf bifurcation in a live predator–prey system. Science 290:1358–1360

  • Fussmann GF, Blasius B (2005) Community response to enrichment is highly sensitive to model structure. Biol Lett 1:9–12

  • Gentleman W, Leising A, Frost B, Storm S, Murray J (2003) Functional responses for zooplankton feeding on multiple resources: a review of assumptions and biological dynamics. Deep Sea Res II 50:2847–2875

  • Giller PS, Doube BM (1994) Spatial and temporal co-occurrence of competitors in Southern African dung beetle communities. J Anim Ecol 63:629–643

  • Gonzalez-Olivares E, Mena-Lorca J, Rojas-Palma A, Flores JD (2011) Dynamical complexities in the Leslie–Gower predator–prey model as consequences of the Allee effect on prey. Appl Math Model 35:366–381

  • Gross T, Ebenhöh W, Feudel U (2004) Enrichment and foodchain stability: the impact of different functional forms. J Theor Biol 227(3):349–358

  • Gross T, Edwards AM, Feudel U (2009) The invisible niche: weakly density-dependent mortality and the coexistence of species. J Theor Biol 258:148–155

  • Guo QF, Brown JH, Valone TJ (2002) Long-term dynamics of winter and summer annual communities in the Chihuahuan desert. J Veg Sci 13:575–584

  • Halbach U, Halbach-Keup G (1974) Arch Hydrobiol 73:273

  • Hastings A, Powell T (1991) Chaos in a three-species food chain. Ecology 72:896–903

  • Janssen PHM, Heuberger PSC, Sanders R (1994) UNCSAM: a tool for automating sensitivity and uncertainty analysis. Environ Softw 9:1–11

  • Kar TK, Ghorai A, Batabyal A (2012) Global dynamics and bifurcation of a tri-trophic food chain model. World J Model Simul 8:66–80

  • Kinnison MT, Hairston NG Jr (2007) Eco-evolutionary conservation biology: contemporary evolution and the dynamics of persistence. Funct Ecol 21:444–454

  • Kooi BW, Boer MP (2001) Bifurcations in ecosystem models and their biological interpretation. Appl Anal 77:29–59

  • Kuang Y, Freedman HI (1988) Uniqueness of limit cycles in Gause-type models of predator–prey systems. Math Biosci 88:67–84

  • Kuznetsov YA (2004) Elements of applied bifurcation theory. Springer, New York

  • Morozov A (2010) Emergence of Holling type III zooplankton functional response: bringing together field evidence and mathematical modelling. J Theor Biol 265:45–54

  • Muller EB, Kooijman S, Edmunds PJ, Doyle FJ, Nisbet RM (2009) Dynamic energy budgets in syntrophic symbiotic relationships between heterotrophic hosts and photoautotrophic symbionts. J Theor Biol 259:44–57

  • Myerscough MR, Darwen MJ, Hogarth WL (1996) Stability, persistence and structural stability in a classical predator–prey model. Ecol Model 89:31–42

  • Nicholson AJ (1957) The self-adjustment of populations to change. In: Cold Spring Harbor symposia on quantitative biology, vol 22, pp 153–173

  • Philippart CJM, Cadee GC, van Raaphorst W, Riegman R (2000) Long-term phytoplankton–nutrient interactions in a shallow coastal sea: algal community structure, nutrient budgets, and denitrification potential. Limnol Oceanogr 45:131–144

  • Poggiale JC (1998) Predator–prey models in heterogeneous environment: emergence of functional response. Math Comput Model 27(4):63–71

  • Poggiale J-C, Baklouti M, Queguiner B, Kooijman S (2010) How far details are important in ecosystem modelling: the case of multi-limiting nutrients in phytoplankton–zooplankton interactions. Philos Trans R Soc B 365:3495–3507

  • Smayda TJ (1998) Patterns of variability characterizing marine phytoplankton, with examples from Narragansett Bay. ICES J Mar Sci 55:562–573

  • Thompson JN (1998) Rapid evolution as an ecological process. Trends Ecol Evol 13:329–332

  • Truscott JE, Brindley J (1994) Equilibria, stability and excitability in a general class of plankton population models. Philos Trans R Soc A 347:703–718

  • Valdes L et al (2007) A decade of sampling in the Bay of Biscay: what are the zooplankton time series telling us? Prog Oceanogr 74:98–114

  • Wolda H (1988) Insect seasonality: why? Annu Rev Ecol Syst 19:1–18

  • Wood SN, Thomas MB (1999) Super-sensitivity to structure in biological models. Proc R Soc B 266:565–570

  • Wood SN (2001) Partially specified ecological models. Ecol Monogr 71:1–25


Acknowledgments

We highly appreciate the comments and useful suggestions of Professor Sergei V. Petrovskii, which improved the manuscript. We thank four anonymous reviewers for their thorough work, which helped us to substantially improve the manuscript. We are grateful to the Chief Editor, Professor Mark Lewis, and the Handling Editor, Dr. Radek Erban, for helpful suggestions.

Author information


Corresponding author

Correspondence to A. Yu. Morozov.

Appendices

Appendix A: Distances between systems

We introduced the general definition of structural sensitivity in Sect. 2.1, but in practice several more specific definitions arise from it: the choice of metric \(d_M\) determines the type of structural sensitivity under consideration. In dynamical systems theory, the most commonly used such metric is the following (Kuznetsov 2004):

Definition 7.1

(general \(C^{1}\)-distance)

Consider two continuous-time systems:

$$\begin{aligned}&{\dot{x}}=f(x),\quad x\in {\mathbb {R}}^{n},\end{aligned}$$
(7.1)
$$\begin{aligned}&\hbox {and}\quad {\dot{x}}=g(x),\quad x\in {\mathbb {R}}^{n}, \end{aligned}$$
(7.2)

where \(f,g:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n}\) are \(C^{1}\). The \(C^{1}\)-distance between (7.1) and (7.2) over a closed, bounded region \(\Omega \subset {\mathbb {R}}^{n}\) is the non-negative number given by:

$$\begin{aligned} d_1 {:=}\mathop {\hbox {sup}}\limits _{x\in \Omega } \left\{ {\left\| {f\left( x \right) -g\left( x \right) } \right\| +\left\| {\frac{df\left( x \right) }{dx}-\frac{dg\left( x \right) }{dx}} \right\| } \right\} \end{aligned}$$

where \(\left\| {f\left( x \right) -g\left( x \right) } \right\| \) and \(\left\| {\frac{df\left( x \right) }{dx}-\frac{dg\left( x \right) }{dx}} \right\| \) denote a vector and a matrix norm, respectively.
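In one dimension the supremum in Definition 7.1 can be approximated numerically on a grid. The sketch below compares a Holling type II response with an Ivlev response whose maxima and initial slopes match; the two functions and their parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def c1_distance(f, df, g, dg, grid):
    # sup over the grid of |f - g| + |f' - g'| (Definition 7.1 in one dimension)
    return float(np.max(np.abs(f(grid) - g(grid)) + np.abs(df(grid) - dg(grid))))

# Hypothetical example: Holling type II vs Ivlev functional responses with
# matched maximum (2.0) and initial slope (2.0); values chosen for illustration.
holling = lambda x: 2.0 * x / (1.0 + x)
d_holling = lambda x: 2.0 / (1.0 + x) ** 2
ivlev = lambda x: 2.0 * (1.0 - np.exp(-x))
d_ivlev = lambda x: 2.0 * np.exp(-x)

omega = np.linspace(0.0, 5.0, 2001)
d1 = c1_distance(holling, d_holling, ivlev, d_ivlev, omega)
```

A finer grid tightens the approximation to the true supremum; for smooth functions on a bounded interval the grid maximum converges to it.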

In a large number of biological models, \(f\) and \(g\) are composed of linear combinations of potentially non-linear model functions, some of which have parameterisations we are certain of, through theoretical reasoning or established laws. In such a situation, it makes little sense to consider a distance over the space of all systems; instead we consider only those systems in which the model functions we are sure of are held fixed:

Definition 7.2

(Fixed function \(C^{1}\)-distance)

Consider two continuous-time systems:

$$\begin{aligned}&{\dot{x}} = G\left( {g_{1} \left( x \right) , \ldots ,g_{m} \left( x \right) ,h_{1} \left( x \right) , \ldots ,h_{p} \left( x \right) } \right) ,\quad x \in {\mathbb {R}}^{n},\end{aligned}$$
(7.3)
$$\begin{aligned}&\hbox {and}\quad {\dot{x}} = G\left( {g_{1} \left( x \right) , \ldots ,g_{m} \left( x \right) ,{\tilde{h}}_{1} \left( x \right) , \ldots ,{\tilde{h}}_{p} \left( x \right) } \right) , \quad x \in {\mathbb {R}}^{n}, \end{aligned}$$
(7.4)

where \(G:{\mathbb {R}}^{m+p}\rightarrow {\mathbb {R}}^{n}\) is linear and \(g_1 ,\ldots ,g_m ,h_1 ,\ldots ,h_p ,{\tilde{h}}_1 ,\ldots ,{\tilde{h}}_p \in C^{1}\left( {{\mathbb {R}}^{n}} \right) \). We define the fixed function \(C^{1}\)-distance between (7.3) and (7.4) as the \(C^{1}\)-distance restricted to the set of systems in which the model functions \(g_1 \left( x \right) ,\ldots ,g_m \left( x \right) \) are fixed.

In many practical cases when the exact formulation of a model function is unknown, we have no information regarding the derivatives of the unknown functions (e.g. all the information we have is given by data points from experiments). In such a case, use of a \(C^{1}\)-metric may be impractical, and we may wish to use the following metrics (see Adamson and Morozov 2013):

Definition 7.3

(Absolute \(d_Q \)-distance)

Consider two continuous-time systems:

$$\begin{aligned}&{\dot{x}} = G\left( {g_{1} \left( x \right) , \ldots ,g_{m} \left( x \right) ,h_{1} \left( x \right) , \ldots ,h_{p} \left( x \right) } \right) ,\quad x \in {\mathbb {R}}^{n},\end{aligned}$$
(7.5)
$$\begin{aligned}&\hbox {and}\quad {\dot{x}} = G\left( {g_{1} \left( x \right) , \ldots ,g_{m} \left( x \right) ,{\tilde{h}}_{1} \left( x \right) , \ldots ,{\tilde{h}}_{p} \left( x \right) } \right) , \quad x \in {\mathbb {R}}^{n} , \end{aligned}$$
(7.6)

where \(G:{\mathbb {R}}^{m+p}\rightarrow {\mathbb {R}}^{n}\) is linear, \(g_1 ,\ldots ,g_m \in C^{1}\left( {{\mathbb {R}}^{n}} \right) \), and \(\left\{ {h_1 ,\ldots ,h_p } \right\} \!, \big \{ {{\tilde{h}}_1 ,\ldots ,{\tilde{h}}_p } \big \}\in Q=\left\{ {Q_1 ,\ldots ,Q_p } \right\} \) where the \(Q_i\subsetneq C^{1}\left( {{\mathbb {R}}^{n}} \right) \) are classes of functions satisfying certain specific conditions, including bounded second derivatives. The absolute \(d_Q\)-distance between (7.5) and (7.6) over a closed, bounded region \(\Omega \subset {\mathbb {R}}^{n}\) is given by:

$$\begin{aligned} d_Q :{=}\mathop {\sup }\limits _{x\in \Omega } \sqrt{\left( {h_1 \left( x \right) -{\tilde{h}}_1 \left( x \right) } \right) ^{2}+\cdots +\left( {h_p \left( x \right) -{\tilde{h}}_p \left( x \right) } \right) ^{2}}. \end{aligned}$$

Remark

The system may also be sensitive to the choice of the linear composition of the nonlinear terms, i.e. if \(G\) is replaced by some \({\tilde{G}}\) and the nonlinear terms are changed accordingly. However, we can usually justify the choice of model composition to a reasonable extent: for example, \(G\) may represent a breakdown of the functional operator into average per-capita growth rates, mortality terms, functional responses, etc. Moreover, as with the use of \(C^{1}\)-metrics, allowing the linear composition to vary makes the model potentially unrealistic and a sensitivity analysis difficult to interpret. For these reasons, we consider this discussion to be beyond the scope of this paper, although it should certainly be addressed elsewhere.

Definition 7.4

(Relative \(d_Q\)-distance)

The relative \(d_Q \) -distance between (7.5) and (7.6) over region \(\Omega \subset {\mathbb {R}}^{n}\) is given by:

$$\begin{aligned} d_Q {:=}\mathop {\sup }\limits _{x\in \Omega } \sqrt{\frac{\left( {h_1 (x)-{\tilde{h}}_1 (x)} \right) ^{2}+\cdots +\left( {h_p (x)-{\tilde{h}}_p (x)} \right) ^{2}}{\max \left\{ {h_1 (x)^{2}+\cdots +h_p (x)^{2},{\tilde{h}}_1 (x)^{2}+\cdots +{\tilde{h}}_p (x)^{2}} \right\} }} \end{aligned}$$

Note that the requirement that the second derivatives of all functions belonging to the class \(Q\) must vary within certain limits is vital: it ensures that the Jacobian matrices of two close systems cannot be arbitrarily far apart. In this sense the absolute and relative \(d_Q\)-distances can be considered to be implicitly \(C^{1}\)-metrics, rather than the \(C^{0}\)-metrics they initially appear to be.
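The absolute \(d_Q\)-distance of Definition 7.3 is straightforward to approximate on a grid. A minimal sketch for \(p = 1\); the two sigmoid parameterisations and their constants are hypothetical placeholders, not functions used in the paper.

```python
import numpy as np

def absolute_dq(hs, hs_tilde, grid):
    # Absolute d_Q-distance (Definition 7.3), approximated on a grid: the
    # supremum of the Euclidean norm of the pointwise differences h_i - h~_i.
    diffs = np.array([h(grid) - ht(grid) for h, ht in zip(hs, hs_tilde)])
    return float(np.max(np.sqrt(np.sum(diffs ** 2, axis=0))))

# Hypothetical p = 1 example: two close sigmoid parameterisations of the same
# unknown functional response (parameter values are illustrative only).
h1 = lambda P: P ** 2 / (0.5 + P ** 2)
h1_tilde = lambda P: np.tanh(1.6 * P ** 2) / 1.02

grid = np.linspace(0.0, 3.0, 3001)
dq = absolute_dq([h1], [h1_tilde], grid)
```

The relative \(d_Q\)-distance of Definition 7.4 follows the same pattern with the pointwise normalisation by \(\max\{\sum h_i^2, \sum \tilde h_i^2\}\) added inside the square root.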

Finally, in terms of practical tests for structural sensitivity, the most common approach (Janssen et al. 1994; Bendoricchio and Jorgensen 2001) is to choose a fixed parameterisation of all model functions, and check for sensitivity to variation of parameters of these model functions.

Definition 7.5

(Parameter variation distance)

Consider two systems composed of the same parameterised function with different parameters:

$$\begin{aligned}&{\dot{x}} = f\left( {x,\alpha _{1}, \ldots ,\alpha _{m} } \right) , \quad x \in {\mathbb {R}}^{n}, \alpha \in {\Theta } \subset {\mathbb {R}}^{m} \end{aligned}$$
(7.7)
$$\begin{aligned}&\hbox {and}\quad {\dot{x}} = f\left( {x,{\hat{\alpha }}_{1}, \ldots ,{\hat{\alpha }}_{m} } \right) , \quad x \in {\mathbb {R}}^{n}, {\hat{\alpha }} \in {\Theta } \subset {\mathbb {R}}^{m} \end{aligned}$$
(7.8)

The parameter variation distance between (7.7) and (7.8) over \(\Omega \subset {\mathbb {R}}^{n}\) is given by:

$$\begin{aligned} d_4 := \mathop {\sup }\limits _{x\in \Omega } \left\| {f\left( {x,\alpha _1 ,\ldots ,\alpha _m } \right) -f\left( {x,{\hat{\alpha }}_1 ,\ldots ,{\hat{\alpha }}_m } \right) } \right\| \!, \end{aligned}$$

where \(\Vert \cdot \Vert \) denotes a vector norm in \({\mathbb {R}}^{n}\).
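Definition 7.5 is the distance implicitly used in conventional parameter-based sensitivity analysis. A minimal numerical sketch for a scalar system; the Holling type II parameterisation and the perturbed parameter values are illustrative assumptions only.

```python
import numpy as np

def parameter_variation_distance(f, alpha, alpha_hat, grid):
    # d_4 of Definition 7.5: sup over the grid of |f(x, alpha) - f(x, alpha_hat)|
    # (one-dimensional case, so the vector norm reduces to the absolute value).
    return float(np.max(np.abs(f(grid, *alpha) - f(grid, *alpha_hat))))

# Hypothetical example: a Holling type II response with slightly perturbed
# parameters (values chosen only for illustration).
holling2 = lambda x, a, b: a * x / (1.0 + b * x)

omega = np.linspace(0.0, 10.0, 2001)
d4 = parameter_variation_distance(holling2, (1.0, 1.0), (1.05, 0.95), omega)
```

In contrast to the \(d_Q\)-distances, this construction only ever explores the finite-dimensional family spanned by the chosen parameterisation, which is why it can miss structural sensitivity.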

Appendix B: Proof of Theorem 1

Clearly such a function cannot exist unless the conditions are satisfied, so the conditions are necessary; it remains to be proved that they are sufficient for the existence of such a function. To do this, we shall construct a valid function assuming only these conditions. To follow the proof, it is helpful to refer to Fig. 1, which shows an example base function and its \(\varepsilon \)-neighbourhood (red boundaries) together with the corresponding \(\hbox {Upp}_{X,\textit{hX},\textit{DH}} \) and \(\hbox {Low}_{X,\textit{hX},\textit{DH}}\) (blue curves).

We first choose some \(0<\delta <\varepsilon \) which is sufficiently close to \(\varepsilon \) that the conditions \(\hbox {Upp}_{X,\textit{hX},\textit{DH}} \left( x \right) >h_{\delta -} \left( x \right) \) and \(\hbox {Low}_{X,\textit{hX},\textit{DH}} \left( x \right) <h_{\delta +} \left( x \right) \quad \forall x\in \left[ {0,x_{\mathrm{max}} } \right] \) still hold, and some \(0<\gamma \ll 1\) such that if we construct \(\hbox {Upp}_{X,\textit{hX},\textit{DH}} \) and \(\hbox {Low}_{X,\textit{hX},\textit{DH}} \) using slightly relaxed bounds on the second derivative, \(\gamma <{\tilde{h}}^{{\prime }{\prime }}\left( x \right) <A_2 -\gamma \) and \(A_1 +\gamma <{\tilde{h}}^{{\prime }{\prime }}\left( x \right) <-\gamma \) instead of \(0<{\tilde{h}}^{{\prime }{\prime }}\left( x \right) <A_2 \) and \(A_1 <{\tilde{h}}^{{\prime }{\prime }}\left( x \right) <0\), then conditions (1) are still satisfied, and furthermore, the second derivatives of \(h_{\delta -} \) and \(h_{\delta +} \) are still within these new bounds. It is easy to verify that such \(\delta \) and \(\gamma \) must exist through the continuity of the construction of \(h_{\delta +}, h_{\delta -}, \hbox {Upp}_{X,\textit{hX},\textit{DH}}\) and \(\hbox {Low}_{X,\textit{hX},\textit{DH}}\). Hereon, whenever we speak of \(\hbox {Upp}_{y,g\left( y \right) ,g{\prime }\left( y \right) }\) and \(\hbox {Low}_{y,g\left( y \right) ,g{\prime }\left( y \right) }\), we shall refer to the upper and lower bounds constructed using the slightly modified limits of the second derivative.

Initially, we define \({\tilde{h}}\) about \(X\) by choosing it to follow \(\hbox {Upp}_{X,\textit{hX},\textit{DH}} \left( x \right) \). We note that, by the three steps of construction, \(\hbox {Upp}_{y,g\left( y \right) ,g{\prime }\left( y \right) } \left( x \right) \) and \(\hbox {Low}_{y,g\left( y \right) ,g{\prime }\left( y \right) } \left( x \right) \) are both continuous with respect to the initial values \(y, g\left( y \right) \) and \(g^{\prime }\left( y \right) \). Therefore at every point \(x\) over which we have already defined \({\tilde{h}}\), the new upper and lower bounds formed by starting from \(x, {\tilde{h}}\left( x \right) \) and \({\tilde{h}}^{\prime }\left( x \right) \) vary continuously, and we can use this fact to construct a valid \({\tilde{h}}\) piece by piece without violating any of the conditions it must satisfy. Let us initially consider the interval \(\left[ {0,X} \right] \). Since \(\hbox {Low}_{X,\textit{hX},\textit{DH}} \left( x \right) <h_{\delta +} \left( x \right) \) and \(\hbox {Low}_{X,\textit{hX},\textit{DH}} \left( 0 \right) \le 0\), if we check \(\hbox {Low}_{x,{\tilde{h}}\left( x \right) ,{\tilde{h}}^{\prime }\left( x \right) } \) at each point of \({\tilde{h}}\left( x \right) =\hbox {Upp}_{X,\textit{hX},\textit{DH}} \left( x \right) \), there must come a point \(x_1 \in \left[ {0,X} \right] \) for which either \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) } \left( 0 \right) =0\) whilst \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) } \) remains below \(h_{\delta +} \) over \(\left[ {0,x_1 } \right] \), or at which \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) } \) is tangent to \(h_{\delta +} \) at some point \(x_2\).
In the first case, we note that \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) }\) cannot pass below \(h_{\delta -} \) in the interval \(\left( {0,x_1 } \right] \): we have \(h_{\delta -} \left( 0 \right) \le 0\), and the curve of \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) }\) is everywhere more concave than that of \(h_{\delta -} \) by definition, so if it were to pass beneath \(h_{\delta -} \) at a point in this interval, it would have to remain below it across the whole domain, contradicting \(\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) } \left( {x_1 } \right) =\hbox {Upp}_{X,\textit{hX},\textit{DH}} \left( {x_1 } \right) >h_{\delta -} \left( {x_1 } \right) \). Therefore we can let \({\tilde{h}}\left( x \right) =\hbox {Low}_{x_1 ,{\tilde{h}}\left( {x_1 } \right) ,{\tilde{h}}^{\prime }\left( {x_1 } \right) } \left( x \right) \) for \(x\in \left[ {0,x_1 } \right] \), and we will have successfully defined \({\tilde{h}}\) over \(\left[ {0,X} \right] \). In the second case, we let \({\tilde{h}}\) follow \(h_{\delta +} \) for values below \(x_2 \), noting that regardless of our definition of distance, \(h_{\delta +} \) and \(h_{\delta -} \) both satisfy condition (i) since \(h\) does. If we are using Definition 7.4 of the distance between functions (i.e. relative error), then \(h_{\delta +} \left( 0 \right) =0\) and we are done. If we are using Definition 7.3 (absolute error), we note that, again because the construction of \(\hbox {Low}_{y,g\left( y \right) ,g{\prime }\left( y \right) } \left( x \right) \) is continuous, there must be a point \(x_3 \) such that \(\hbox {Low}_{x_3 ,h_{\delta +} \left( {x_3 } \right) ,h_{\delta +}^{\prime } \left( {x_3 } \right) } \left( 0 \right) =0\). As before, \(\hbox {Low}_{x_3 ,h_{\delta +} \left( {x_3 } \right) ,h_{\delta +}^{\prime } \left( {x_3 } \right) } \left( x \right) \) cannot pass below \(h_{\delta -} \) over the interval \(\left( {0,x_3 } \right] \), because it is everywhere more concave than \(h_{\delta -} \), and assuming otherwise would yield a contradiction. Therefore we let \({\tilde{h}}\left( x \right) {:=}\, \hbox {Low}_{x_3 ,h_{\delta +} \left( {x_3 } \right) ,h_{\delta +}^{\prime } \left( {x_3 } \right) } \left( x \right) \, \forall \,x\in \left[ {0,x_3 } \right] \), and we have successfully defined \({\tilde{h}}\) over \(\left[ {0,X} \right] \).

We define \({\tilde{h}}\) across the interval \(\left[ {X,x_{\mathrm{max}} } \right] \) in a similar way. With \({\tilde{h}}\) initially following \(\hbox {Upp}_{X,\textit{hX},\textit{DH}} \left( x \right) \), we check \(\hbox {Low}_{x,{\tilde{h}}\left( x \right) ,{\tilde{h}}^{\prime }\left( x \right) } \) at each point, and note that there must come a point \(x_4 \) at which either \(\hbox {Low}_{x_4 ,{\tilde{h}}\left( {x_4 } \right) ,{\tilde{h}}^{\prime }\left( {x_4 } \right) } \left( x \right) \) lies tangent to \(h_{\delta +} \) at some further point \(x_5 \), or \(\hbox {Low}_{x_4 ,{\tilde{h}}\left( {x_4 } \right) ,{\tilde{h}}^{\prime }\left( {x_4 } \right) } \left( {x_{\mathrm{max}} } \right) <h_{\delta +} \left( {x_{\mathrm{max}} } \right) \). Either way, we note that as before, \(\hbox {Low}_{x_4 ,{\tilde{h}}\left( {x_4 } \right) ,{\tilde{h}}^{\prime }\left( {x_4 } \right) } \left( x \right) \) must lie above \(h_{\delta -} \) over the interval \(\left[ {x_4 ,x_{\mathrm{max}} } \right] \), so in the latter case, we can let \({\tilde{h}}\) follow \(\hbox {Low}_{x_4 ,{\tilde{h}}\left( {x_4 } \right) ,{\tilde{h}}^{\prime }\left( {x_4 } \right) } \) over the interval \(\left[ {x_4 ,x_{\mathrm{max}} } \right] \), and we are done. In the former case, we let \({\tilde{h}}\) follow \(\hbox {Low}_{x_4 ,{\tilde{h}}\left( {x_4 } \right) ,{\tilde{h}}^{\prime }\left( {x_4 } \right) } \) over the interval \(\left[ {x_4 ,x_5 } \right] \), and then follow \(h_{\delta +} \) over the interval \(\left( {x_5 ,x_{\mathrm{max}} }\right] \), and we are done.

We have successfully proved that, provided conditions (9) are satisfied, it is always possible to construct a \(C^{1}\) function satisfying criteria (i), (ii) and (iii). Therefore conditions (9) are precisely the necessary and sufficient conditions for there to exist at least one function in the \(\varepsilon \)-neighbourhood of \(h\) that satisfies criteria (i), (ii) and (iii). \(\square \)

Appendix C: Application of Theorem 1 to System (5.8)–(5.11)

Following the process outlined in Sect. 4, for given values \(\Phi , \Xi \) and \(P_0\), we can compute the upper and lower bounds of our sigmoid function as follows.

If \(\Phi \ge P_0 \):

$$\begin{aligned} Upp\left( P \right) {:=}\left\{ {{\begin{array}{l} {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_1 }{2}\left( {P-P_0 } \right) ^{2}} \\ {hP+\textit{DH}\cdot \left( {P-\Phi } \right) } \\ \end{array} }{\begin{array}{l} {:P\in \left[ {0,P_0 } \right] } \\ {:P\in \left[ {P_0 ,P_{\mathrm{max}} } \right] } \\ \end{array} },} \right. \end{aligned}$$

and

$$\begin{aligned}&Low\left( P \right) \\&\quad {:=}\left\{ {{\begin{array}{l} {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +A_2 \left( {\frac{1}{2}\left( {P-\Phi } \right) ^{2}+\left( {P_0 -\Phi } \right) \left( {P-P_0 } \right) } \right) } \\ {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_2 }{2}\left( {P-\Phi } \right) ^{2}} \\ \end{array} }{\begin{array}{l} {:P\in \left[ {0,P_0 } \right] } \\ {:P\in \left[ {P_0 ,P_{\mathrm{max}} } \right] } \\ \end{array} },} \right. \end{aligned}$$

If \(\Phi <P_0 \):

$$\begin{aligned}&Upp\left( P \right) \\&\quad {:=}\left\{ {{\begin{array}{l} {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_1 }{2}\left( {P-\Phi } \right) ^{2}:P\in \left[ {0,P_0 } \right] } \\ {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +A_1 \left( {P_0 -\Phi } \right) \left( {P-\frac{1}{2}\left( {P_0 +\Phi } \right) } \right) :P\in \left[ {P_0 ,P_{\max } } \right] } \\ \end{array} }} \right. , \end{aligned}$$

and

$$\begin{aligned} Low\left( P \right) {:=}\left\{ {{\begin{array}{l} {hP+\textit{DH}\cdot \left( {P-\Phi } \right) :P\in \left[ {0,P_0 } \right] } \\ {hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_2 }{2}\left( {P-P_0 } \right) ^{2}:P\in \left[ {P_0 ,P_{\max } } \right] } \\ \end{array} }} \right. \end{aligned}$$
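The piecewise bounds above translate directly into code. The sketch below evaluates \(Upp(P)\) and \(Low(P)\) for either sign of \(\Phi - P_0\); the constants \(h, \textit{DH}, \Phi, P_0, A_1, A_2\) are whatever the calling analysis supplies (any numerical values used below are placeholders, not model values from the paper).

```python
def upp_low(P, h, DH, Phi, P0, A1, A2):
    # Evaluate the piecewise bounds Upp(P) and Low(P) of Appendix C.
    # h, DH, Phi, P0, A1, A2 are the constants appearing in the formulas;
    # the caller supplies their values (placeholders in any example use).
    base = h * P + DH * (P - Phi)
    if Phi >= P0:
        upp = base + (0.5 * A1 * (P - P0) ** 2 if P <= P0 else 0.0)
        low = base + (A2 * (0.5 * (P - Phi) ** 2 + (P0 - Phi) * (P - P0))
                      if P <= P0
                      else 0.5 * A2 * (P - Phi) ** 2)
    else:
        upp = base + (0.5 * A1 * (P - Phi) ** 2 if P <= P0
                      else A1 * (P0 - Phi) * (P - 0.5 * (P0 + Phi)))
        low = base + (0.0 if P <= P0 else 0.5 * A2 * (P - P0) ** 2)
    return upp, low
```

A quick sanity check of the formulas is that both branches of \(Upp\) and of \(Low\) agree at \(P = P_0\) in each case, so the bounds are continuous there.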

From Theorem 1, the necessary and sufficient conditions for values \(\Phi \) and \(\textit{DH}\) to correspond to a valid function \({\tilde{h}}\) are as follows:

If \(\Phi \ge P_0\):

$$\begin{aligned}&hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_1 }{2}\left( {P-P_0 } \right) ^{2}>h_{\varepsilon -} \left( P \right) \forall P\in \left[ {0,P_0 } \right] ;\\&hP+\textit{DH}\cdot \left( {P-\Phi } \right) >h_{\varepsilon -} \left( P \right) \quad \forall P\in \left[ {P_0 ,P_{\mathrm{max}} } \right] ;\\&hP+\textit{DH}\cdot \left( {P-\Phi } \right) +A_2 \left( {\frac{1}{2}\left( {P-\Phi } \right) ^{2}+\left( {P_0 -\Phi } \right) \left( {P-P_0 } \right) } \right) \\&\quad <h_{\varepsilon +} \left( P \right) \,\forall \,P\in \left[ {0,P_0 } \right] ;\\&hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_2 }{2}\left( {P-\Phi } \right) ^{2}<h_{\varepsilon +} \left( P \right) \quad \forall P\in \left[ {P_0 ,P_{\mathrm{max}} } \right] ;\\&hP-\textit{DH}\cdot \Phi +\frac{A_1 }{2}P_0 ^{2}>0, \end{aligned}$$

and \(hP-\textit{DH}\cdot \Phi +A_2 \left( {\frac{1}{2}\Phi ^{2}-P_0 \cdot \left( {P_0 -\Phi } \right) } \right) <0.\)

If \(\Phi <P_0 \):

$$\begin{aligned}&hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_1 }{2}\left( {P-\Phi } \right) ^{2}>h_{\varepsilon -} \left( P \right) \quad \forall P\in \left[ {0,P_0 } \right] ;\\&hP\!+\!\textit{DH}\cdot \left( {P-\Phi } \right) \!+\!A_1 \left( {P_0 -\Phi } \right) \left( {P\!-\!\frac{1}{2}\left( {P_0 \!+\!\Phi } \right) } \right) \!>\!h_{\varepsilon -} \left( P \right) \quad \forall P\!\in \!\left[ {P_0 ,P_{\mathrm{max}} } \right] ;\\&hP+\textit{DH}\cdot \left( {P-\Phi } \right) <h_{\varepsilon +} \left( P \right) \quad \forall P\in \left[ {0,P_0 } \right] ;\\&hP+\textit{DH}\cdot \left( {P-\Phi } \right) +\frac{A_2 }{2}\left( {P-P_0 } \right) ^{2}<h_{\varepsilon +} \left( P \right) \quad \forall P\in \left[ {P_0 ,P_{\mathrm{max}} } \right] ;\\&hP-\textit{DH}\cdot \Phi +\frac{A_1 }{2}\Phi ^{2}>0, \end{aligned}$$

and \(hP-\textit{DH}\cdot \Phi <0.\)

Appendix D: Stability Analysis of System (5.12)–(5.14)

Here we describe an approach to check the linear stability of an equilibrium of the system of delay-differential equations (5.12)–(5.14). Hereon we denote \(x_i \left( t \right) \) by \(x_i \), and \(x_i \left( {t-\tau } \right) \) by \(x_{i_\tau } \) for simplicity. We implement a standard technique of stability analysis for ODEs with delay (Dieudonne 1960; Bairagi et al. 2008). We let \({\varvec{x}}={\varvec{x}}^{*}+\delta {\varvec{x}}\), where \(\delta {\varvec{x}}\) is a small-magnitude perturbation from the equilibrium \({\varvec{x}}^{*}\), then use Taylor's theorem to obtain the linearisation of the system:

$$\begin{aligned} {\dot{\delta }}{\varvec{x}}\approx {\varvec{J}}_0 \delta {\varvec{x}}+{\varvec{J}}_\tau \delta {\varvec{x}}_\tau , \end{aligned}$$
(10.1)

where \({\varvec{J}}_0\) is the Jacobian matrix with respect to \({\varvec{x}}\) and \({\varvec{J}}_\tau \) is the Jacobian matrix with respect to \({\varvec{x}}_\tau \). Assuming that (10.1) has exponential solutions, we can write \(\delta {\varvec{x}}={\varvec{Ae}}^{\lambda t}\); substituting this solution into (10.1) gives \(\lambda {\varvec{Ae}}^{\lambda t}={\varvec{J}}_0 {\varvec{Ae}}^{\lambda t}+{\varvec{J}}_\tau {\varvec{Ae}}^{\lambda \left( {t-\tau } \right) }\). Dividing by \(e^{\lambda t}\) yields:

$$\begin{aligned} \lambda {\varvec{A}}=\left( {{\varvec{J}}_0 +e^{-\lambda \tau }{\varvec{J}}_\tau } \right) {\varvec{A}}. \end{aligned}$$
(10.2)

Since \(\lambda \) is therefore an eigenvalue of the matrix \(\left( {{\varvec{J}}_0 +e^{-\lambda \tau }{\varvec{J}}_\tau } \right) \), we know from linear algebra that (10.2) holds if and only if:

$$\begin{aligned} \left| {{\varvec{J}}_0 +e^{-\lambda \tau }{\varvec{J}}_\tau -\lambda {\varvec{I}}} \right| =0, \end{aligned}$$
(10.3)

where \({\varvec{I}}\) is the three-dimensional identity matrix. (10.3) is called the characteristic equation of system (5.12–5.14), and in this case it can be calculated as:

$$\begin{aligned} \lambda ^{3}+P\lambda ^{2}+Q\lambda +\left( {S\lambda +M} \right) e^{-\lambda \tau }+N=0, \end{aligned}$$
(10.4)

where \(P=\left( {c-1} \right) x_1^*-2jx_2^*-\left( {\tilde{H}}^{\prime }\left( {x_2^*} \right) +h \right) x_3^*-b;\)

$$\begin{aligned}&Q=\left( {x_1^*+hx_3^*} \right) \left( {b-cx_1^*+{\tilde{H}}^{\prime }\left( {x_2^*} \right) x_3^*+2jx_2^*} \right) +hx_1^*x_3^*;\\&S=k\cdot {\tilde{H}} \left( {x_2^*} \right) {\tilde{H}} ^{\prime }\left( {x_2^*} \right) x_3^*+acx_1^*x_2^*;\\&M=x_1^*x_3^*\left( {k\cdot {\tilde{H}}\left( {x_2^*} \right) {\tilde{H}}^{\prime }\left( {x_2^*} \right) +achx_2^*} \right) ,\, \hbox {and}\\&N=hx_1^*x_3^*\left( {b-cx_1^*+{\tilde{H}}^{\prime }\left( {x_2^*} \right) x_3^*-2jx_2^*} \right) \!. \end{aligned}$$

Unlike in the case of ODE systems, equation (10.3) is not a polynomial over the complex numbers, but rather a quasi-polynomial: since the \(e^{-\lambda \tau }\) term is periodic with respect to the imaginary part of \(\lambda \), (10.3) must have infinitely many complex solutions. Therefore the usual approach of directly finding the roots of (10.3) and determining the conditions under which they all have negative real part cannot be used here. Instead, we need to choose a certain parameter (in this paper, the time delay \(\tau \)) and determine the critical values at which the real part of \(\lambda \) changes sign, in order to detect bifurcations with respect to this parameter. At these critical values the eigenvalues take the form \(\lambda =i\omega \) for some real \(\omega \) (we assume, without loss of generality, that \(\omega >0\)). Substituting \(\lambda =i\omega \) into the characteristic equation (10.4) and separating the real and imaginary parts yields:

$$\begin{aligned} P\omega ^{2}-N&=M\cos (\omega \tau )+S\omega \sin (\omega \tau ),\\ \omega ^{3}-Q\omega&=S\omega \cos (\omega \tau )-M\sin (\omega \tau ). \end{aligned}$$
(10.5)

Squaring both equations and summing them results in:

$$\begin{aligned} \omega ^{6}+\left( {P^{2}-2Q} \right) \omega ^{4}+\left( {Q^{2}-2NP-S^{2}} \right) \omega ^{2}+N^{2}-M^{2}=0 \end{aligned}$$
(10.6)

which has at least one positive real root provided \(N^{2}<M^{2}\), since this condition implies that the polynomial is negative at \(\omega =0\), while it tends to positive infinity as \(\omega \rightarrow \infty \). Therefore we can solve (10.6) as a cubic equation in the variable \(\omega ^{2}\) and take the positive square roots of its positive real roots, obtaining at most three positive roots of (10.6). If we let \(\omega _0 \) denote any given positive root of (10.6), then by solving the two equations of (10.5) simultaneously for \(\cos (\omega \tau )\) and substituting in \(\omega =\omega _0\), we obtain:

$$\begin{aligned} \cos (\omega _0 \tau _C )=\frac{S\omega _0^4 +\left( {MP-QS} \right) \omega _0^2 -MN}{S^{2}\omega _0^2 +M^{2}}, \end{aligned}$$
(10.7)

where \(\tau _C\) denotes the critical values of the time delay, at which the real part of \(\lambda \) vanishes. We therefore obtain a countable family of critical time delays for each \(\omega _0\):

$$\begin{aligned} \tau _{C_m } =\frac{1}{\omega _0 }\cos ^{-1}\left( {\frac{S\omega _0^4 +\left( {\textit{MP}-\textit{QS}} \right) \omega _0^2 -\textit{MN}}{S^{2}\omega _0^2 +M^{2}}} \right) +\frac{2\pi m}{\omega _0 }, \quad m\in {\mathbb {Z}} \end{aligned}$$
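For illustration (this sketch is not part of the original paper), the recipe above can be implemented numerically: solve (10.6) as a cubic in \(\omega^2\), keep the positive real roots, and generate the family of critical delays from (10.7). NumPy is assumed, and the coefficient values in the test below are hypothetical.

```python
import numpy as np

def critical_delays(P, Q, S, M, N, m_max=3):
    """Positive roots omega_0 of (10.6), solved as a cubic in z = omega^2,
    and the corresponding critical delays tau_{C_m} from (10.7)."""
    # Cubic z^3 + (P^2 - 2Q) z^2 + (Q^2 - 2NP - S^2) z + (N^2 - M^2) = 0
    z = np.roots([1.0, P**2 - 2*Q, Q**2 - 2*N*P - S**2, N**2 - M**2])
    omegas = [np.sqrt(r.real) for r in z if abs(r.imag) < 1e-8 and r.real > 0]
    delays = []
    for w in omegas:
        c = (S*w**4 + (M*P - Q*S)*w**2 - M*N) / (S**2 * w**2 + M**2)
        if abs(c) <= 1:  # principal branch of arccos, as in (10.7)
            delays += [(np.arccos(c) + 2*np.pi*m) / w for m in range(m_max)]
    return omegas, sorted(d for d in delays if d > 0)
```

With \(N^2 < M^2\) the cubic is guaranteed a positive real root, so the returned list of delays is non-empty whenever the \(\cos\) value is attainable.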

Now we note that \(\hbox {Re}\left( \lambda \right) =0\) is a necessary, but not sufficient, condition for a stability change to take place. To prove that a bifurcation occurs at our critical values \(\tau _{C_m}\), it is sufficient to prove that:

$$\begin{aligned} \frac{\hbox {dRe}\left( \lambda \right) }{\hbox {d}\tau }|_{\lambda =i\omega _0 } \ne 0, \end{aligned}$$

n.b. these two conditions are sufficient, but not necessary, for a bifurcation at \(\tau _{C_m }\): the sign of \(\hbox {Re}\left( \lambda \right) \) can still change when \(\frac{\hbox {dRe}\left( \lambda \right) }{\hbox {d}\tau }|_{\lambda =i\omega _0 } =0\).

Note that \(\hbox {sign}\left\{ {\frac{\hbox {dRe}\left( \lambda \right) }{\hbox {d}\tau }|_{\lambda =i\omega _0 } } \right\} =\hbox {sign}\left\{ {\hbox {Re}\left( {\left( {\frac{\mathrm{d}\lambda }{\mathrm{d}\tau }} \right) ^{-1}|_{\lambda =i\omega _0 } } \right) } \right\} \). Now by differentiating (10.4) with respect to \(\tau \), rearranging, substituting in \(\lambda =i\omega _0\), taking the real part and simplifying, we obtain:

$$\begin{aligned}&\hbox {sign}\left\{ {\frac{\hbox {dRe}\left( \lambda \right) }{\hbox {d}\tau }|_{\lambda =i\omega _0 } } \right\} =\hbox {sign}\left\{ \left( {Q-3\omega _0^2 } \right) \left( {\omega _0^2 -Q} \right) \left( {S^{2}\omega _0^2 +M^{2}} \right) \right. \\&\quad \left. -\,2P\left( {P\omega _0^2 +N} \right) \left( {S^{2}\omega _0^2 +M^{2}} \right) \right. \\&\quad \left. {-\,S^{2}\left( {Q-3\omega _0^2 } \right) \left( {\omega _0^2 -Q} \right) -2PS^{2}\left( {P\omega _0^2 +N} \right) } \right\} \end{aligned}$$

Provided that \(\omega _0\) is not a root of the polynomial on the right-hand side, there will be a bifurcation at each of the critical time delays \(\tau _{C_m }\) associated with it; this can easily be checked by substituting each \(\omega _0\) into the polynomial.

Finally, once we have determined the bifurcation values \(\tau _{C_m}\) of the time delay, it is simple to count how many such bifurcations take place between \(\tau =0\) and a specified time delay \(\tau \). We can therefore determine the stability of our equilibrium for the system with this time delay by computing the stability of the system in the case \(\tau =0\) (i.e. by the standard stability analysis for ODEs). If an equilibrium is stable in the system without time delay, then it will be stable in the system with time delay \(\tau \) if \(\# \left\{ {\tau _{C_m } |\tau _{C_m } \in \left( {0,\tau } \right) } \right\} \) is even, and unstable if it is odd. If the equilibrium is unstable in the system without delay, the situation is reversed.
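The parity rule above amounts to a few lines of code. This is an illustrative sketch (not from the paper), with `tau_crit` assumed to be a precomputed list of critical delays \(\tau_{C_m}\):

```python
def stable_with_delay(tau, tau_crit, stable_at_zero):
    """Parity rule: each critical delay in (0, tau) flips the stability
    determined for the non-delayed (tau = 0) system."""
    flips = sum(1 for tc in tau_crit if 0 < tc < tau)
    return stable_at_zero if flips % 2 == 0 else not stable_at_zero
```

For example, an equilibrium that is stable at \(\tau =0\) and crosses three critical delays before \(\tau \) ends up unstable.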

Appendix E: Algorithm for Temporal Variation of System (5.1–5.4)

Here we describe the algorithm for the temporal variation of the functional response \({\tilde{F}}_c \) in model (5.1–5.4) used to produce Fig. 5. Recall that we require \({\tilde{F}}_c\) always to satisfy conditions (5.5–5.7) and to belong to the \(\varepsilon \)-neighbourhood of the base function given by the Monod parameterisation \(F_c \left( N \right) {:=}\frac{b_c N}{K_c +N}\).

We consider the functional response \({\tilde{F}}_c\) to have a piece-wise constant second derivative: \({\tilde{F}}_c^{\prime \prime } \left( x \right) \equiv A_i <0\) for \(x_i \le x\le x_{i+1}\). In this paper, we consider \(n=6\) intervals of unequal length: \(0\le x<3;\, 3\le x<6;\, 6\le x<12;\, 12\le x<18;\, 18\le x<32;\, 32\le x<80=N_{\mathrm{max}}\). We assume that on each interval \(\left[ {x_i ,x_{i+1} } \right] \) the magnitude of the second derivative satisfies \(\left| {A_i } \right| <1\). For a given set \(\left\{ {A_i } \right\} _{i=1}^n\), the functional response can be obtained by piece-wise double integration of the second derivatives, using the condition \({\tilde{F}}_c \left( 0 \right) =0\). There is no particular restriction on the initial slope \({\tilde{F}}_c^{\prime }\left( 0 \right) \) except that it should be positive and that, for a given \({\tilde{F}}_c^{\prime }\left( 0 \right) \), \({\tilde{F}}_c\) should belong to the \(\varepsilon \)-neighbourhood of the base function.
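The piece-wise double integration just described can be sketched as follows (an illustration, not code from the paper; the knot positions are those given above, while the curvature values in the test are hypothetical):

```python
def response(x, A, knots, slope0):
    """Piecewise double integration: F(0) = 0, F'(0) = slope0, and
    F'' identically A[i] on each interval [knots[i], knots[i+1]]."""
    F, s = 0.0, slope0                     # value and slope at the current left knot
    for i in range(len(A)):
        left, right = knots[i], knots[i + 1]
        if x <= right:                     # x lies in this interval
            d = x - left
            return F + s * d + 0.5 * A[i] * d * d
        d = right - left                   # carry value and slope to the next knot
        F += s * d + 0.5 * A[i] * d * d
        s += A[i] * d
    return F                               # x beyond the last knot (not used here)
```

By construction the result is continuously differentiable across the knots, concave whenever all \(A_i<0\), and reduces to the line \(F(x)=F'(0)\,x\) when every \(A_i=0\).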

At the initial moment of time \(t=0\) we choose an arbitrary set \(\left\{ {A_i } \right\} _{i=1}^n \) and slope \({\tilde{F}}_c^{\prime }\left( 0 \right) \) in such a way that \({\tilde{F}}_c \) belongs to the \(\varepsilon \)-neighbourhood of the base function and satisfies conditions (5.5–5.7). At each subsequent step of integration of the system of differential equations (5.1–5.4), we slightly change the values of the second derivatives \(A_i \) and the slope at the origin in the following way:

$$\begin{aligned} A_i \left( {t+\Delta t} \right)&= A_i \left( t \right) +\sigma _i ,\end{aligned}$$
(11.1)
$$\begin{aligned} {\tilde{F}}_c^{\prime }\left( {0,t+\Delta t} \right)&= {\tilde{F}}_c^{\prime }\left( {0,t} \right) +\rho , \end{aligned}$$
(11.2)

where \(\sigma _i\) and \(\rho \) are uncorrelated random noise processes (we consider Brownian motion), and \(\Delta t\) is the time step of integration. By considering different amplitudes of \(\sigma _i\), one can vary the rate at which the functional response changes with time.

At each step of integration of (5.1–5.4), we modify \(A_i\) and \({\tilde{F}}_c^{\prime }\left( 0 \right) \) according to (11.1)–(11.2). If the resultant functional response \({\tilde{F}}_c\) does not belong to the \(\varepsilon \)-neighbourhood of \(F_c\), or does not satisfy conditions (5.5–5.7), we consider a new realisation of the noise, i.e. new \(\sigma _i\) and \(\rho \), and repeat the procedure until the above requirements are satisfied, before moving forwards.
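A minimal self-contained sketch of one such accept/reject update step might look as follows. Everything specific here is a simplifying assumption for illustration: the Monod parameters, noise amplitudes, grid resolution, and the use of an absolute \(\varepsilon\)-tube in place of the paper's \(\varepsilon\)-neighbourhood; conditions (5.5–5.7) are represented only by concavity, bounded curvature and a positive initial slope.

```python
import numpy as np

KNOTS = [0, 3, 6, 12, 18, 32, 80]          # interval endpoints from the text

def piecewise_response(xs, A, slope0):
    """Evaluate the piecewise-quadratic response with F(0) = 0, F'' = A[i]."""
    out = []
    for x in xs:
        F, s = 0.0, slope0
        for i, a in enumerate(A):
            left, right = KNOTS[i], KNOTS[i + 1]
            if x <= right:
                d = x - left
                out.append(F + s * d + 0.5 * a * d * d)
                break
            d = right - left
            F, s = F + s * d + 0.5 * a * d * d, s + a * d
    return np.array(out)

def update(A, slope0, base, eps, sigma=5e-5, rho=1e-4, tries=1000, rng=None):
    """One step of (11.1)-(11.2) with rejection: resample the Brownian
    increments until the perturbed response is concave, has |A_i| < 1 and
    a positive initial slope, and stays inside an absolute eps-tube."""
    if rng is None:
        rng = np.random.default_rng(0)
    grid = np.linspace(0.0, KNOTS[-1], 200)
    for _ in range(tries):
        A_new = A + sigma * rng.standard_normal(len(A))
        s_new = slope0 + rho * rng.standard_normal()
        if s_new <= 0 or np.any(A_new >= 0) or np.any(np.abs(A_new) >= 1):
            continue                        # shape constraints violated
        if np.all(np.abs(piecewise_response(grid, A_new, s_new) - base(grid)) <= eps):
            return A_new, s_new             # admissible perturbation found
    raise RuntimeError("no admissible perturbation found; decrease sigma/rho")
```

Initialising the curvatures from the second derivative of a hypothetical Monod base function at the interval midpoints gives an admissible starting point, after which `update` can be iterated alongside the integrator.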

Adamson, M.W., Morozov, A.Y. Defining and detecting structural sensitivity in biological models: developing a new framework. J. Math. Biol. 69, 1815–1848 (2014). https://doi.org/10.1007/s00285-014-0753-3
