Vertex results for the robust analysis of uncertain biochemical systems

We consider the problem of assessing the sensitivity of uncertain biochemical systems in the presence of input perturbations (either constant or periodic) around a stable steady state. In particular, we propose approaches for the robust sensitivity analysis of systems with uncertain parameters assumed to take values in a hyper-rectangle. We highlight vertex results, which allow us to check whether a property is satisfied for all parameter choices in the hyper-rectangle by simply checking whether it is satisfied for all parameter choices at the vertices of the hyper-rectangle. We show that, for a vast class of systems, including (bio)chemical reaction networks with mass-action kinetics, the system Jacobian has a totally multiaffine structure (namely, all minors of the Jacobian matrix are multiaffine functions of the uncertain parameters), which can be exploited to obtain several vertex results. We consider different problems: robust non-singularity; robust stability of the steady-state; robust steady-state sensitivity analysis, in the case of constant perturbations; robust frequency-response sensitivity analysis, in the presence of periodic perturbations; and robust adaptation analysis. The developed theory is then applied to gain insight into some examples of uncertain biochemical systems, including the incoherent feed-forward loop, the coherent feed-forward loop, the Brusselator oscillator and the Goldbeter oscillator.


Introduction
In spite of inherent variations and fluctuations in the environment where they operate, leading to huge uncertainties in the parameter values of the associated models, living systems robustly preserve some crucial properties and keep reliably performing their specific task. The extraordinary robustness of biological processes has been observed in remarkable case studies, such as bacterial chemotaxis (Alon et al. 1999; Barkai and Leibler 1997) and circadian rhythms (Stelling et al. 2004). The pioneering study by Barkai and Leibler (1997), later confirmed experimentally by Alon et al. (1999), showed that the tumbling frequency that characterises the chemotaxis of Escherichia coli is robustly regulated: although it can rapidly change due to a variation in the concentration of a chemical stimulant, it then gradually adapts back precisely to its pre-stimulus value; this perfect adaptation property is shown to be insensitive to the precise values of the biochemical parameters, because it is a direct consequence of the network's architecture.
The general principles underlying biological robustness have been discussed from various perspectives: Kitano (2004), Stelling et al. (2004) and Streif et al. (2016) highlight the importance of redundancy, feedback control, decoupling and modularity of cellular networks so as to ensure their robust functioning; Lesne (2008) surveys possible mechanisms that enable robust behaviours, focusing on the different meanings of robustness in the physical sciences and in the life sciences, and on the specificities of biological systems; Whitacre (2012) provides an overview of paradigms and system principles that enable biological robustness, pointing to their similarities across various levels and scales; Khammash (2016) adopts an engineering viewpoint to propose the notion of robustness as a unifying principle behind the complexity in both biological and engineered systems, and emphasises the importance of perfect adaptation (Khammash 2021) to guarantee robustness.
A variety of quantitative and numerical approaches have been used to robustly assess various properties of interest for different types of biological models with uncertain parameters. A widely investigated property is stability: Chesi (2011) has proposed methods based on optimisation and linear matrix inequalities to assess the robust stability of genetic regulatory networks including mRNAs and proteins; Waldherr and Allgöwer (2011) have leveraged polynomial programming to robustly assess stability and instability of biochemical networks; Hara et al. (2019) have applied transfer-function methods for the robust stability analysis of linear systems with generalised frequency variables to the case of cyclic gene regulatory networks. The property of concentration robustness in chemical reaction networks has been assessed by providing network-imposed sensitivity bounds to evaluate the robustness of equilibrium species concentrations against fluctuations in reactant supply (Shinar et al. 2009) and by identifying topological principles that help robustly keep the concentration of active signalling compounds within tightly defined bounds in spite of perturbations (Steuer et al. 2011), while Shinar et al. (2007) have identified a network mechanism that guarantees a robust input-output relation regardless of changes in the concentration of the system components. Kim et al. (2006) and Ma and Iglesias (2002) have integrated bifurcation analysis, structural-singular-value μ-analysis and hybrid optimisation to assess the robust stability of limit cycles in a model of the molecular network underlying cAMP oscillations in fields of chemotactic Dictyostelium discoideum cells. All these works very efficiently solve specific problems with dedicated tools.
This paper is different in spirit. Motivated by a significant class of biochemical systems and several relevant problems, we propose a framework for the robustness analysis of dynamical systems with a special uncertainty structure: multiaffine, i.e., affine in each uncertain parameter, while the other parameters are kept constant. Our methodologies build upon a consolidated approach to parametric robustness introduced by Barmish (1994): the main aim of the paper is to build a bridge between a class of relevant problems in mathematical biology and a class of powerful vertex results for robustness analysis developed within the realm of control theory, which offer valuable tools to assess the robustness of biological systems.
In particular, we deal with a vast class of uncertain nonlinear dynamical systems, including the interesting case of generic biochemical reaction networks, for which the uncertain parameters are assumed to take values in a hyper-rectangle and the system Jacobian is a totally multiaffine matrix with respect to the uncertain parameters, i.e., each minor of the Jacobian is a multiaffine function of the uncertain parameters.
Systems with totally multiaffine uncertainties include, as a special case, BDC-decomposable systems (Blanchini and Giordano 2014; Giordano et al. 2016), whose Jacobian matrix can be written as a linear combination J = Σ_{k=1}^m δ_k J_k of rank-one matrices J_k weighted by the uncertain parameters δ_k. The Jacobian matrix can then be decomposed as J = BDC, where D is a diagonal matrix carrying the uncertain parameters δ_k on the diagonal, while B and C are constant matrices such that J_k = B_k C_k, where B_k denotes the kth column of B and C_k the kth row of C. This class of models embraces biochemical systems with generic monotonic reaction rates, as well as general flow systems in engineering. However, BDC-decomposable systems represent a proper subset of systems with totally multiaffine uncertainties: here we show that chemical reaction networks with mass-action kinetics also have a totally multiaffine Jacobian.
As a first main contribution, we provide a characterisation of totally multiaffine matrices, which allows us to check whether a matrix is totally multiaffine simply by inspecting its entries and the matrix derivatives with respect to each of the uncertain parameters.
We then consider the steady-state sensitivity analysis for systems with totally multiaffine uncertainties. As is known, this type of analysis can be approached by considering the linearised system and computing the input-output steady-state characteristic. For systems with parametric uncertainties, the problem can be addressed via robustness analysis (Barmish 1994) and solved through the solution of nonlinear systems with parametric rank-one uncertainties (Mohsenizadeh et al. 2014; Polyak and Nazin 2004). These approaches have been recently exploited to deal with the structural steady-state sensitivity analysis of BDC-decomposable systems affected by a persistent input perturbation, providing vertex results to assess the structural sign of steady-state influences (Giordano et al. 2016).
A vertex property is a property that holds for all parameter choices in the hyper-rectangle if and only if it holds for all parameter choices at the vertices of the hyper-rectangle. Vertex properties are therefore extremely convenient from a computational standpoint, since they can be checked for a whole continuum (all possible realisations of the uncertain system) just by checking them at a finite set of points (the realisations associated with the vertices of the parameter space).
Here, we consider the broader class of systems with totally multiaffine uncertainties and we provide vertex results to perform their sensitivity analysis when the parameters are bounded in a hyper-rectangle.
In particular, given the totally multiaffine Jacobian J(δ), where δ is the vector stacking the uncertain parameters, each bounded in a given interval, as well as the input matrix E and the output matrix H, we present the following main results.
- Important classes of systems, including not only BDC-decomposable systems but also chemical reaction networks with mass-action kinetics, have a totally multiaffine uncertainty structure.
- Robust non-singularity of J(δ) can be checked by assessing its determinant at the vertices of the parameter space.
- When the system is affected by a constant perturbation, robust lower and upper bounds for the input-output sensitivity Σ(δ) = −H J(δ)^{-1} E (where H is the output matrix and E is the input matrix) can be computed just by assessing Σ(δ) at the vertices of the parameter space; the obtained bounds are tight.
- Robust stability can be assessed by resorting to the Zero Exclusion Theorem by Barmish (1994), once stability has been shown to hold for a nominal value of the parameters. To assess the stability of the nominal system, a vertex-type test (well known for systems with affine uncertainties: see the work by Garofalo et al. (1993)) can be adopted to check whether a given positive definite function, either smooth or convex, is a Lyapunov function.
- When the system is affected by a periodic perturbation, robust lower and upper bounds for the magnitude and the phase of the system transfer function W(s, δ) = H (sI − J(δ))^{-1} E can be computed by leveraging vertex results.
- Potential oscillatory frequencies for the uncertain system can be robustly determined through a vertex result, which holds also in the presence of explicit delays.
-Vertex results can be obtained also to robustly assess adaptation for uncertain systems, as preliminarily reported in the conference work by Blanchini et al. (2020). When a positive persistent perturbation is applied, the output is adaptive if it initially increases and then, after a transient, it decreases so as to asymptotically recover a value that is closer to its pre-perturbation steady state. We provide a formal definition and we show that the property can be checked robustly through vertex tests.
Finally, we illustrate the proposed methodologies by applying them to chemical reaction networks, as well as biological systems taken from the literature.

Systems with totally multiaffine uncertainties
Consider the generic nonlinear continuous-time system

ẋ(t) = f(x(t), u(t)),  y(t) = h(x(t)),

where f : R^n × R → R^n and h : R^n → R are continuously differentiable functions, the system state is x(t) ∈ R^n, u(t) ∈ R is a scalar, which can be either an external input signal or a system parameter, and y(t) ∈ R is a scalar output. Assume that, given a constant input ū, the system reaches the asymptotically stable steady state x̄ such that f(x̄, ū) = 0, which corresponds to the output steady-state value ȳ = h(x̄), where x̄ = x̄(ū).
We wish to assess how the steady-state output ȳ = h(x̄) changes due to variations in the constant input ū. More precisely, let v be the perturbation affecting the input, or parameter, and denote by w the corresponding perturbation affecting the output. We aim to infer (or provide bounds on) the output value ȳ + w corresponding to the perturbed input (or parameter) ū + v.
We consider input perturbations that are small enough to ensure that the stability of the steady state x̄(u) is preserved (recall that the eigenvalues of the Jacobian matrix are continuous functions of its entries, which are in turn continuous functions of u). Then, we consider the linearisation of the system at the stable equilibrium x̄. Upon defining the column vector E = ∂f/∂u (x̄, ū) and the row vector H = ∂h/∂x (x̄), as well as the shifted state z = x − x̄, input v = u − ū and output w = y − ȳ, this boils down to considering the linearised system

ż(t) = J(δ) z(t) + E v(t),  w(t) = H z(t),  (1)

where J(δ) = ∂f/∂x (x̄, ū) is the Jacobian evaluated at the equilibrium (x̄, ū) and depends on the vector of uncertain parameters δ = (δ_1, …, δ_m). The input-output sensitivity (Giordano et al. 2016) is then defined as

Σ(δ) = −H J(δ)^{-1} E.  (2)

Although here we neglect the possible dependency of E and H on δ, uncertain input and output matrices E = E(δ) and H = H(δ) can be considered under suitable assumptions; see Remark 6 in the following.
Henceforth, we consider totally multiaffine uncertainties: we assume throughout that the system Jacobian matrix J(δ) is totally multiaffine, a special structure that encompasses a large class of biological models.

Motivation for totally multiaffine uncertainties: BDC-decomposable systems
A vast class of systems endowed with the property that the Jacobian matrix is totally multiaffine is represented by generic flow systems of the form

ẋ(t) = S g(x(t)),  (3)

where S captures the topology of the flow network and the vector function g stacks components that are monotonic in each argument and represent flow rates. The class of systems in (3) is general enough to embrace (bio)chemical reaction networks, gene networks and metabolic networks, ecological networks and food webs, as well as compartmental systems and flow systems in engineering. In the case of a chemical reaction network (CRN), S is the stoichiometric matrix representing the structure of the interactions among chemical species, while the components of the vector function g represent reaction rates. For all systems within this class, the Jacobian matrix admits the BDC-decomposition introduced by Blanchini and Giordano (2014) and Giordano et al. (2016), namely it can be written as

J(x) = B D(x) C,  (4)

where matrices B ∈ R^{n×m}, D(x) ∈ R^{m×m} and C ∈ R^{m×n} are constructed as follows: i) D(x) is a diagonal matrix whose m diagonal entries are the absolute values of all the (nonzero) partial derivatives, ordered as δ_k = |∂g_i/∂x_j|, k = 1, …, m; ii) the kth column of B, B_k, is the ith column of S; iii) the kth row of C, C_k, has sign(∂g_i/∂x_j) in the jth position and zero elsewhere, namely, it is either the jth row vector of the canonical basis or its opposite, ±e_j.
Equivalently, J(x) can be written as the positive linear combination of rank-one matrices B_k C_k, i.e.,

J(x) = Σ_{k=1}^m δ_k B_k C_k.

In general, the components of the vector function g can be considered as unknown monotonic functions; hence, the diagonal entries δ_k of matrix D are uncertain parameters and we can see the Jacobian matrix computed at the equilibrium, J(δ), as a function of δ = (δ_1, …, δ_m). Then, the input-output sensitivity Σ is the ratio of two multivariate polynomials that are both multiaffine in δ = (δ_1, …, δ_m).
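The construction above can be made concrete with a short numerical sketch. The following Python fragment (the stoichiometric matrix S and the rate Jacobian Jg are illustrative data assumed here, not taken from the paper's examples) builds B, D and C from the nonzero partial derivatives ∂g_i/∂x_j and verifies that B D C recovers the flow-system Jacobian S ∂g/∂x.

```python
import numpy as np

# Illustrative flow system x' = S g(x): 2 species, 3 reactions (hypothetical data).
S = np.array([[-1.0,  0.0, 1.0],
              [ 1.0, -1.0, 0.0]])
# Assumed Jacobian of the rate vector g at the equilibrium: entry (i, j) = dg_i/dx_j.
Jg = np.array([[0.5,  0.0],
               [0.0,  0.3],
               [0.0, -0.7]])

# BDC construction: one parameter delta_k per nonzero partial derivative.
B_cols, deltas, C_rows = [], [], []
n = S.shape[0]
for i in range(Jg.shape[0]):        # reaction index
    for j in range(Jg.shape[1]):    # species index
        if Jg[i, j] != 0.0:
            deltas.append(abs(Jg[i, j]))    # delta_k = |dg_i/dx_j|
            B_cols.append(S[:, i])          # kth column of B = ith column of S
            e_j = np.zeros(n)
            e_j[j] = np.sign(Jg[i, j])      # kth row of C = +/- e_j
            C_rows.append(e_j)

B = np.column_stack(B_cols)
D = np.diag(deltas)
C = np.vstack(C_rows)
J = B @ D @ C   # should coincide with S @ Jg
```

Each term B_k δ_k C_k is a rank-one matrix, matching the decomposition J(x) = Σ_k δ_k B_k C_k.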
Remark 1 For a BDC-decomposable system, all the entries of the Jacobian matrix J(x) = B D(x) C are linear functions of the diagonal elements δ_1, …, δ_m of D(x), and hence BDC-decomposable systems have a totally multiaffine Jacobian in view of the rank-one property, as we will see from Proposition 2. The implication is one-sided, since there exist totally multiaffine matrices that are not BDC-decomposable.
As we will see, the multiaffinity property is key to enable the efficient computation of the sensitivity Σ; this motivates us to consider a more general class of parameter-dependent matrices, totally multiaffine matrices, of which the class of BDC-decomposable matrices represents a proper subset.

Example 1
As an example of BDC-decomposable system, consider the chemical reaction network corresponding to a system of ordinary differential equations of the form (3). Its Jacobian matrix can be decomposed as J = BDC and is totally multiaffine in δ_1, δ_2, δ_3, δ_4, δ_5.

Motivation for totally multiaffine uncertainties: mass-action-kinetics systems
Another important class of systems that have a totally multiaffine Jacobian is represented by chemical reaction networks (CRNs) with mass-action kinetics. A mass-action CRN involving n_s species and n_r reactions is described by a system of the form

ẋ(t) = Σ_{j=1}^{n_r} S_j κ_j Π_{h=1}^{n_s} x_h(t)^{ν_jh},  (5)

where S_j is the jth column of the stoichiometric matrix, κ_j ≥ 0 is the rate constant of the jth reaction, and ν_jh ∈ Z is the stoichiometric coefficient of the hth species in the jth reaction. Let us define the reaction rates as

φ_j = κ_j Π_{h=1}^{n_s} x_h^{ν_jh},  j = 1, …, n_r,

and the reciprocal variables, defined for x_h ≠ 0, as

ξ_h = 1/x_h,  h = 1, …, n_s.

We can prove that a mass-action CRN has a totally multiaffine Jacobian in the reaction rates and the reciprocal variables.
Proposition 1 (Total multiaffinity of mass-action CRNs.) The Jacobian matrix of a mass-action CRN of the form (5) is totally multiaffine in the reaction rates and the reciprocal variables.
Proof For simplicity, let us consider exclusively reactions with two reagents; the generalisation to the case of an arbitrary number of reagents is straightforward. Consider a reaction p X_i + q X_ℓ → (products), with rate constant κ_j, and let S_j be the column of S associated with this reaction. Denote as φ_j = κ_j x_i^p x_ℓ^q the reaction rate associated with the jth reaction.
If, for the time being, we do not consider the other reactions of the network, the only non-zero columns of the Jacobian J are the ith and ℓth ones, i.e.,

J = φ_j S̄_j Ξ,

where Ξ = diag(ξ_1, …, ξ_{n_s}) is the diagonal matrix whose diagonal entries are the reciprocal variables, and S̄_j is a matrix computed from the stoichiometric matrix S and the stoichiometric coefficients p, q. Notice that multiplication by Ξ is effective only on the ith and ℓth columns of S̄_j, since all other columns of S̄_j are null. Repeating the argument for all reactions in the network, the Jacobian can be written as

J = ( Σ_{j=1}^{n_r} φ_j S̄_j ) Ξ.

We first show that the determinant of J is a multiaffine function of φ_1, …, φ_{n_r}, ξ_1, …, ξ_{n_s}. For every j = 1, …, n_r, matrix S̄_j has rank equal to 1, because all of its columns are proportional to the column vector S_j, i.e., S̄_j = S_j s_j, for some row vector s_j. Then, we can resort to the matrix determinant lemma: given a square matrix A, column vectors b and c, and a scalar α, det(A + α b c^T) = det(A) + α c^T adj(A) b, where adj(A) is the transpose of the cofactor matrix of A. Exploiting the lemma, the determinant of Σ_j φ_j S̄_j can be expanded one rank-one term at a time, each step being affine in the corresponding φ_j, and hence it is multiaffine in φ_1, …, φ_{n_r}. The determinant of J is then given by

det(J) = det( Σ_j φ_j S̄_j ) Π_{h=1}^{n_s} ξ_h,

which is multiaffine in ξ_1, …, ξ_{n_s} too. Finally, to show that J is totally multiaffine, just note that any of its square submatrices has the form

( Σ_j φ_j Ŝ_j ) Ξ̂,

where Ξ̂ is a diagonal matrix of appropriate size and Ŝ_j is a submatrix of S̄_j having appropriate size, and hence its determinant is multiaffine in reaction rates and reciprocal variables.
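The structure used in the proof can be checked numerically. The sketch below (illustrative Python; the four-reaction network X1 + X2 → X3, X3 → X1 + X2, X1 → ∅, X3 → ∅ is a hypothetical example chosen so that the Jacobian has full rank) builds J = (Σ_j φ_j S̄_j) Ξ from rank-one matrices and verifies, via a midpoint test, that det(J) is affine in each reaction rate φ_j and each reciprocal variable ξ_h.

```python
import numpy as np

# Hypothetical mass-action CRN: X1+X2 -> X3, X3 -> X1+X2, X1 -> 0, X3 -> 0.
S_cols = [np.array([-1.0, -1.0,  1.0]),
          np.array([ 1.0,  1.0, -1.0]),
          np.array([-1.0,  0.0,  0.0]),
          np.array([ 0.0,  0.0, -1.0])]
# Row vectors s_j marking which species enter each rate (with multiplicity).
s_rows = [np.array([1.0, 1.0, 0.0]),   # rate k1*x1*x2
          np.array([0.0, 0.0, 1.0]),   # rate k2*x3
          np.array([1.0, 0.0, 0.0]),   # rate k3*x1
          np.array([0.0, 0.0, 1.0])]   # rate k4*x3
Sbar = [np.outer(c, r) for c, r in zip(S_cols, s_rows)]  # rank-one matrices

def jacobian(phi, xi):
    """J = (sum_j phi_j * Sbar_j) @ diag(xi), as in the proof."""
    M = sum(p * Sb for p, Sb in zip(phi, Sbar))
    return M @ np.diag(xi)

def detJ(theta):
    """Determinant as a function of all 7 parameters (phi_1..4, xi_1..3)."""
    return np.linalg.det(jacobian(theta[:4], theta[4:]))

# Midpoint test: a function is affine in one variable iff its value at the
# midpoint equals the average of the values at the endpoints.
rng = np.random.default_rng(0)
base = rng.uniform(0.5, 2.0, size=7)
affine_in_each = True
for t in range(7):
    lo, hi, mid = base.copy(), base.copy(), base.copy()
    lo[t], hi[t], mid[t] = 0.3, 1.7, 1.0   # 1.0 is the midpoint of 0.3 and 1.7
    if not np.isclose(detJ(mid), 0.5 * (detJ(lo) + detJ(hi))):
        affine_in_each = False
```

The same midpoint test, applied to the determinants of submatrices, checks total multiaffinity.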
Example 2 Consider again the CRN in Example 1, where now the reaction rate functions are assumed to follow mass-action kinetics. Define the flow variables φ_j and the reciprocal variables ξ_h = 1/x_h. Then, the Jacobian matrix is totally multiaffine in the flow variables and the reciprocal variables.

Remark 2 Assuming uncertainty bounds on the elements of the Jacobian, rather than on the original reaction rate parameters and species concentrations, may introduce some conservatism. Consider for instance a reaction rate described by the law of mass-action as κab, with bounds κ^- ≤ κ ≤ κ^+ on the reaction rate constant, and bounds a^- ≤ a ≤ a^+ and b^- ≤ b ≤ b^+ on the species concentrations. The partial derivatives, i.e. the entries of the Jacobian, are bounded within the set

B = [κ^- b^-, κ^+ b^+] × [κ^- a^-, κ^+ a^+],

and all the values in this hyper-rectangle are possible. However, since the Jacobian is evaluated at an equilibrium point, the parameters and the species concentrations are subject to the equilibrium conditions. This means that, in general, the actual set of values that the parameters and the species concentrations can take is a subset of B. This is a well-known issue in parametric robustness analysis. Conversely, no conservatism is introduced when taking the reciprocal concentrations, namely 1/a instead of a, since the former is confined exactly within 1/a^+ ≤ 1/a ≤ 1/a^-. Also, no conservatism is introduced when considering generic monotonic reaction rate functions, as in Example 1. Indeed, if we only know that g(a, b) is an increasing function of both its arguments, and we assume that, in a suitable domain, the partial derivatives are bounded as δ_a^- ≤ ∂g/∂a ≤ δ_a^+ and δ_b^- ≤ ∂g/∂b ≤ δ_b^+, then the equilibrium condition g(ā, b̄) = r̄ does not introduce any restriction: any value of the derivatives within the given intervals is still possible. Even if r̄ is subject to constraints, the Jacobian is not affected.

Total multiaffinity: a general characterisation
We can provide the following characterisation of totally multiaffine matrices.

Proposition 2 (Characterisation of totally multiaffine matrices.) Matrix J(δ) is totally multiaffine in δ = (δ_1, …, δ_m) if and only if each entry J_ij(δ) is multiaffine in δ and, for each k = 1, …, m, the derivative ∂J(δ)/∂δ_k has rank at most 1.
Proof Sufficiency. Let M(δ) be any square submatrix of J(δ). If all the entries of J(δ) are multiaffine in δ, then for each δ_k, k = 1, …, m, matrix M(δ) can be written as

M(δ) = δ_k ∂M(δ)/∂δ_k + M̄_k,  (6)

where neither ∂M(δ)/∂δ_k nor M̄_k depend on δ_k. Since ∂M(δ)/∂δ_k has rank at most 1 by assumption, column vectors m, n exist such that ∂M(δ)/∂δ_k = m n^T. Then, by applying the matrix determinant lemma, the determinant of M(δ) can be computed as

det(M(δ)) = det(M̄_k + δ_k m n^T) = det(M̄_k) + δ_k n^T adj(M̄_k) m,

and hence it is affine in δ_k. Repeating the argument for all k = 1, …, m shows that det(M(δ)) is multiaffine in δ.
Necessity. Clearly, a necessary condition for J(δ) being totally multiaffine in δ is the multiaffinity of all its entries J_ij(δ). Assume by contradiction that ∂J(δ)/∂δ_k has rank greater than 1 for some k. This implies that there exists a square submatrix M(δ) of size k̄ > 1 for which ∂M(δ)/∂δ_k is invertible (and hence has non-zero determinant). Then, considering again (6), the polynomial p(δ_k) = det(M(δ)) = det(δ_k ∂M(δ)/∂δ_k + M̄_k) includes the term δ_k^{k̄} det(∂M(δ)/∂δ_k), whose degree k̄ is greater than 1. Hence, p(δ_k) cannot be affine in δ_k.
Proposition 2 allows us to test very simply whether a given matrix is totally multiaffine or not. Consider for instance the Jacobian of Example 2, whose entries are clearly multiaffine in the parameters, and compute its component-wise derivative with respect to φ_1: it is immediate to verify that ∂J/∂φ_1 has rank 1.
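The test suggested by Proposition 2 is easy to automate. The sketch below (illustrative Python, with a small flow-type Jacobian chosen only for demonstration) checks both conditions numerically: each entry is affine in each δ_k (so the finite difference J(δ + e_k) − J(δ) does not depend on δ_k) and each derivative matrix has rank at most 1.

```python
import numpy as np

# Illustrative Jacobian, affine in each parameter: J(d) with d = (d1, d2).
def J(d):
    d1, d2 = d
    return np.array([[-d1,        d2],
                     [ d1, -d2 - 1.0]])

m = 2
base = np.array([0.7, 1.3])

def dJ(k, d):
    """Derivative of J w.r.t. parameter k; exact for entries affine in d_k."""
    e = np.zeros(m)
    e[k] = 1.0
    return J(d + e) - J(d)

totally_multiaffine = True
for k in range(m):
    D1 = dJ(k, base)
    # Entry-wise affinity in d_k: the derivative must not depend on d_k.
    if not np.allclose(D1, dJ(k, base + 0.5 * np.eye(m)[k])):
        totally_multiaffine = False
    # Rank condition: rank(dJ/dd_k) <= 1.
    if np.linalg.matrix_rank(D1) > 1:
        totally_multiaffine = False
```

For a symbolic Jacobian, the same two checks can be run with a computer algebra system instead of finite differences.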

Sensitivity analysis for totally multiaffine uncertain systems
In the following, we assume that each of the uncertain parameters δ_k, k = 1, …, m, is lower and upper bounded by δ_k^- and δ_k^+ respectively, i.e., δ_k^- ≤ δ_k ≤ δ_k^+. Then, the uncertain parameter vector δ = (δ_1, …, δ_m) belongs to the hyper-rectangle D, which we define as

D = { δ : δ_k^- ≤ δ_k ≤ δ_k^+, k = 1, …, m }.

We denote by D̂ the set of vertices of the hyper-rectangle D:

D̂ = { δ : δ_k ∈ {δ_k^-, δ_k^+}, k = 1, …, m }.

The number of vertices of D, i.e., the cardinality of the set D̂, is 2^m: |D̂| = 2^m. Is it possible to check whether a property holds for all possible parameters in D, just by checking whether the property holds for all choices of the parameters in D̂? We denote as a vertex property a property that is satisfied for all parameter choices in the hyper-rectangle if, and only if, it is satisfied for all parameter choices at the vertices of the hyper-rectangle: from a computational standpoint, checking a vertex property requires looking just at a finite set of parameter choices among all possible (infinitely many) choices.
Our first result concerns the robust non-singularity of multiaffine systems.

Theorem 1 (Robust non-singularity.) Assume that J(δ) is totally multiaffine in δ. Then, J(δ) is non-singular for all δ ∈ D if and only if det(−J(δ̂)) has the same sign (either positive or negative) for all δ̂ ∈ D̂, namely at all the vertices of D.
Proof Sufficiency. Since J(δ) is totally multiaffine, det(−J(δ)) is a multiaffine function of δ as well. Sufficiency follows from the fact that a multiaffine function defined on a hyper-rectangle takes its maximum and minimum values at the vertices of the hyper-rectangle (see, e.g., Lemma 14.5.5 by Barmish (1994)): if the function det(−J(δ)) is positive (or negative) at all the vertices in D̂, then it must be non-zero inside the whole hyper-rectangle D. Necessity. If J(δ) is non-singular for all δ ∈ D, then the continuous function det(−J(δ)) is never zero on the connected set D, hence it has a constant sign on D; in particular, it has the same sign at all the vertices in D̂.
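The vertex test of Theorem 1 amounts to enumerating the 2^m vertices and checking a sign. The sketch below (illustrative Python; the Jacobian and the parameter intervals are assumptions chosen for demonstration) runs the test and cross-checks non-singularity on random interior samples.

```python
import numpy as np
from itertools import product

# Illustrative totally multiaffine Jacobian with m = 4 uncertain parameters.
def J(d):
    d1, d2, d3, d4 = d
    return np.array([[-d1,  d2],
                     [ d3, -d4]])

bounds = [(2.0, 3.0), (0.1, 0.5), (0.1, 0.5), (2.0, 3.0)]  # assumed intervals

# Vertex test: det(-J) must have the same sign at all 2^m vertices.
vertex_dets = [np.linalg.det(-J(v)) for v in product(*bounds)]
same_sign = all(x > 0 for x in vertex_dets) or all(x < 0 for x in vertex_dets)

# Cross-check on random interior samples (Theorem 1 guarantees this holds).
rng = np.random.default_rng(1)
samples = [np.array([rng.uniform(lo, hi) for lo, hi in bounds])
           for _ in range(200)]
interior_nonsingular = all(abs(np.linalg.det(-J(d))) > 1e-9 for d in samples)
```

Here det(−J(δ)) = δ_1 δ_4 − δ_2 δ_3, which stays positive on the chosen box, so the test certifies robust non-singularity.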
Theorem 1 provides a vertex result that makes it easy to check whether a system with totally multiaffine uncertainties is robustly non-singular. When this is the case, tight bounds on the input-output sensitivity of the system can be derived, again just by looking at the vertices of the parameter hyper-rectangle.
Theorem 2 (Robust sensitivity bounds.) Given the linearised system (1), assume that J(δ) is totally multiaffine in δ and robustly non-singular for all δ ∈ D. Then, the input-output sensitivity Σ(δ) = −H J(δ)^{-1} E is lower and upper bounded as

Σ^- ≤ Σ(δ) ≤ Σ^+ for all δ ∈ D, where Σ^- = min_{δ̂ ∈ D̂} Σ(δ̂) and Σ^+ = max_{δ̂ ∈ D̂} Σ(δ̂),

and the bounds are tight (i.e., they are the actual minimum and maximum of Σ(δ)).
Proof In view of the robust non-singularity assumption, we can write the input-output sensitivity Σ(δ) as

Σ(δ) = −H J(δ)^{-1} E = H adj(−J(δ)) E / det(−J(δ)) = q(δ)/p(δ),

and notice that, since J(δ) is totally multiaffine in δ, both the numerator q(δ) and the denominator p(δ) are multiaffine polynomials of the parameters δ_1, …, δ_m. We prove the result for the lower bound Σ^-; the proof for the upper bound Σ^+ follows the same steps. Assume without restriction that p(δ) > 0 for all δ (if this is not the case, we can change the sign of both q and p).
Given k ∈ R, consider the function ρ(δ) = q(δ) − k p(δ). In view of the multiaffinity of q(δ) and p(δ), the function ρ(δ) is multiaffine in δ, too. Therefore, ρ(δ) ≥ 0 for all δ ∈ D if and only if ρ(δ̂) ≥ 0 for all δ̂ ∈ D̂, which is in turn equivalent to the vertex condition q(δ̂)/p(δ̂) ≥ k for all δ̂ ∈ D̂. Moreover, the condition holds when k equals the smallest value that the function q(δ̂)/p(δ̂) takes at the vertices δ̂ ∈ D̂, i.e., k = Σ^-, and hence the bound is tight.
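Theorem 2 translates directly into a short computation. The sketch below (illustrative Python; the Jacobian, input/output matrices and intervals are assumptions chosen for demonstration) evaluates Σ at the 2^m vertices to obtain Σ^- and Σ^+, and then checks that random interior realisations respect the bounds.

```python
import numpy as np
from itertools import product

# Illustrative system (assumed for demonstration).
def J(d):
    d1, d2, d3, d4 = d
    return np.array([[-d1,  d2],
                     [ d3, -d4]])

E = np.array([1.0, 0.0])   # input matrix (column)
H = np.array([0.0, 1.0])   # output matrix (row)
bounds = [(2.0, 3.0), (0.1, 0.5), (0.1, 0.5), (2.0, 3.0)]

def sigma(d):
    """Input-output sensitivity Sigma(d) = -H J(d)^{-1} E."""
    return -H @ np.linalg.solve(J(d), E)

vertex_vals = [sigma(np.array(v)) for v in product(*bounds)]
sigma_lo, sigma_hi = min(vertex_vals), max(vertex_vals)

# Theorem 2: every interior realisation must lie within the vertex bounds.
rng = np.random.default_rng(2)
ok = True
for _ in range(300):
    d = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    if not (sigma_lo - 1e-9 <= sigma(d) <= sigma_hi + 1e-9):
        ok = False
```

For this system Σ(δ) = δ_3 / (δ_1 δ_4 − δ_2 δ_3), a ratio of multiaffine polynomials as in the proof, so the extrema are attained at vertices.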
Remark 3 Assuming robust non-singularity of the system Jacobian matrix is not restrictive. In fact, since the equilibrium around which we assess the sensitivity needs to be asymptotically stable, det(−J(δ)) > 0 must hold for all δ, which implies robust non-singularity. Also, robust non-singularity of J(δ) is necessary for the sensitivity to be well defined and for the result in Theorem 2 to hold: for instance, in the scalar case with J(δ) = δ_1 − δ_2, E = H = 1 and bounds 0.1 ≤ δ_1, δ_2 ≤ 1, the resulting sensitivity Σ(δ) = (δ_2 − δ_1)^{-1} is unbounded and not even defined for δ_1 = δ_2. Actually, the stability requirement could be relaxed to just requiring the non-singularity of the Jacobian at the equilibrium. Provided that the Jacobian is non-singular, we can still estimate the magnitude of the equilibrium shift under constant perturbations (by defining a steady-state map and computing its derivative) and thus perform sensitivity analysis even in the case of a possibly unstable equilibrium.

Some remarks about stability analysis
Our sensitivity analysis relies on the assumption that the steady state is stable. Vertex-type approaches are available to robustly check whether such an assumption is actually satisfied.
In fact, for systems of the form (1), we can write the characteristic polynomial as

p(s, δ) = det(sI − J(δ)),

and check the stability of the system, for a given value δ*, by ensuring that all the roots of p(s, δ*) have negative real part. Then, the Zero Exclusion Theorem by Barmish (1994) (Sect. 7.3) ensures that robust stability holds if and only if the set

V(iω) = { p(iω, δ) : δ ∈ D }

does not include the origin, for all frequencies ω ≥ 0.
Drawing this set in the complex plane is hard. However, the Mapping Theorem by Barmish (1994) (Sect. 14.6) ensures that V(iω) is included in the convex hull of the vertex points p(iω, δ̂), namely,

V(iω) ⊆ conv{ p(iω, δ̂) : δ̂ ∈ D̂ }.

This provides a sufficient criterion for robust stability, which can therefore be checked via the following procedure:
1. check stability for an arbitrary value δ* ∈ D;
2. check the exclusion of the origin from the convex hull: 0 ∉ conv{ p(iω, δ̂) : δ̂ ∈ D̂ } for all ω ≥ 0.
The stability analysis at step 1 can be based, for instance, on Lyapunov functions: we have the following vertex result.
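The two-step procedure above can be sketched numerically. In the fragment below (illustrative Python; the Jacobian, intervals and frequency grid are assumptions), step 1 checks the eigenvalues at a nominal parameter value, and step 2 checks, at each grid frequency, that the origin lies outside the convex hull of the 2^m vertex points p(iω, δ̂), using the fact that a finite planar point set excludes the origin from its convex hull exactly when all points fit in an open half-plane through the origin (largest angular gap exceeding π).

```python
import numpy as np
from itertools import product

def J(d):
    d1, d2, d3, d4 = d
    return np.array([[-d1,  d2],
                     [ d3, -d4]])

bounds = [(2.0, 3.0), (0.1, 0.5), (0.1, 0.5), (2.0, 3.0)]

def p(s, d):
    """Characteristic polynomial p(s, d) = det(sI - J(d))."""
    return np.linalg.det(s * np.eye(2) - J(d))

def origin_excluded(points, tol=1e-12):
    """0 is outside conv(points) iff all points fit in an open half-plane
    through the origin, i.e. the largest angular gap exceeds pi."""
    if any(abs(z) < tol for z in points):
        return False
    ang = np.sort(np.angle(points))
    gaps = np.diff(ang, append=ang[0] + 2 * np.pi)
    return gaps.max() > np.pi + tol

# Step 1: nominal stability at the midpoint of the parameter box.
nominal = np.array([(lo + hi) / 2 for lo, hi in bounds])
nominal_stable = all(np.real(np.linalg.eigvals(J(nominal))) < 0)

# Step 2: origin excluded from the vertex convex hull over a frequency grid.
vertices = [np.array(v) for v in product(*bounds)]
excluded = all(
    origin_excluded(np.array([p(1j * w, v) for v in vertices]))
    for w in np.linspace(0.0, 10.0, 101)
)
```

A frequency grid is only a practical surrogate for "all ω ≥ 0"; a complete argument would also bound the behaviour between grid points and for large ω.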
Theorem 3 (Lyapunov-based stability analysis.) Let V(z) be a positive definite radially unbounded function, which is either smooth or convex. Denote by D⁺V(z) the generalised Dini derivative of V(z) along the solutions of system (1). Then, V(z) is a Lyapunov function for system (1), in the sense that D⁺V(z) ≤ 0 for all z and all δ ∈ D, if and only if D⁺V(z) ≤ 0 for all z and all δ̂ ∈ D̂. Proof Necessity follows from the fact that D̂ is a subset of D. We now prove sufficiency.
For any fixed z, the Dini derivative D⁺V(z) computed along the solutions of ż = J(δ)z is a multiaffine function of δ_1, …, δ_m; hence, it takes its maximum value at the vertices in D̂. Therefore, if D⁺V(z) ≤ 0 for all δ̂ ∈ D̂, then D⁺V(z) ≤ 0 for all δ ∈ D. For quadratic functions the result was shown by Garofalo et al. (1993). For non-smooth but convex functions, including polyhedral Lyapunov functions (Blanchini and Giordano 2014, 2017) and piecewise-linear-in-rates Lyapunov functions (Al-Radawi and Angeli 1999), the proof can be carried out along the same lines, but it is more involved because one must resort to the subgradient.

Complex sensitivity and frequency response for totally multiaffine uncertain systems
In this section, we consider the case when the perturbation acting on the input ū is a periodic signal of the form v(t) = μ cos(ωt). Standard notions from dynamical systems theory tell us that, under robust stability assumptions (i.e., stability for all δ ∈ D), the steady-state output is

w(t) = μ A_ω cos(ωt + φ_ω),

where the magnitude amplification factor A_ω and the phase shift φ_ω are the magnitude and phase, respectively, of the system transfer function evaluated at the perturbation frequency ω. Denote by W(s, δ) the parameter-dependent transfer function of the system:

W(s, δ) = H (sI − J(δ))^{-1} E = q(s, δ)/p(s, δ),

where q(s, δ) is the numerator polynomial and p(s, δ) is the denominator polynomial, corresponding to the characteristic polynomial of the system. The magnitude amplification factor A_ω = A_ω(δ) and the phase shift φ_ω = φ_ω(δ) are then given by

A_ω(δ) = |W(iω, δ)|,  φ_ω(δ) = arg W(iω, δ).

Clearly, when the perturbation signal is constant (i.e., ω = 0), the phase shift φ_ω(δ) is 0 for every δ ∈ D, and the amplification factor A_ω(δ) reduces to the input-output sensitivity (i.e., A_0(δ) = Σ(δ)), for which tight bounds are provided in Theorem 2. We now aim to generalise the analysis of Sect. 3 (which is focused on constant input perturbations) so as to investigate how periodic input perturbations affect the system output.
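The quantities above are easy to evaluate for a single parameter realisation. The sketch below (illustrative Python; the system matrices and the realisation d are assumptions chosen for demonstration) computes A_ω and φ_ω from W(iω, δ) and checks that at ω = 0 the transfer function reduces to the steady-state sensitivity Σ(δ).

```python
import numpy as np

def J(d):
    d1, d2, d3, d4 = d
    return np.array([[-d1,  d2],
                     [ d3, -d4]])

E = np.array([1.0, 0.0])
H = np.array([0.0, 1.0])
d = np.array([2.5, 0.3, 0.3, 2.5])   # one assumed parameter realisation

def W(s):
    """Transfer function W(s, d) = H (sI - J(d))^{-1} E."""
    return H @ np.linalg.solve(s * np.eye(2) - J(d), E.astype(complex))

omega = 1.2
A = abs(W(1j * omega))          # magnitude amplification factor A_omega
phi = np.angle(W(1j * omega))   # phase shift phi_omega

# At omega = 0 the amplification factor reduces to the sensitivity Sigma(d).
Sigma = -H @ np.linalg.solve(J(d), E)
```

Since W(0, δ) = −H J(δ)^{-1} E, the value W(0) must coincide with Σ(δ), which is a convenient consistency check for any implementation.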
Given a complex number s ∈ C, we define Q(s) as the convex hull of the 2^m (complex) vertex points q(s, δ̂), i.e.,

Q(s) = conv{ q(s, δ̂) : δ̂ ∈ D̂ }.

Similarly, for the 2^m vertex points p(s, δ̂), we denote by P(s) their convex hull, i.e.,

P(s) = conv{ p(s, δ̂) : δ̂ ∈ D̂ }.

Definition 2 (Numerator/denominator critical values.) A complex number s ∈ C is numerator critical (resp. denominator critical) if the convex polygon Q(s) (resp. P(s)) includes the origin; otherwise, it is numerator non-critical (resp. denominator non-critical).

Proposition 3 (Numerator/denominator non-critical values and total multiaffinity.)
Assume that J(δ) is totally multiaffine in δ, and let s̄ ∈ C be a numerator (resp. denominator) non-critical value. Then, for any δ ∈ D, s̄ cannot be a root of q(s, δ) (resp. p(s, δ)).
Proof Since the convex polygon Q(s̄) does not include the origin, there exist real numbers α, β and γ > 0 defining a line on the complex plane that separates Q(s̄) from the origin, so that α Re(σ) + β Im(σ) ≥ γ for all σ ∈ Q(s̄). Then,

min_{δ ∈ D} [ α Re q(s̄, δ) + β Im q(s̄, δ) ] = min_{δ̂ ∈ D̂} [ α Re q(s̄, δ̂) + β Im q(s̄, δ̂) ] ≥ γ,

where the equality holds because the objective function is multiaffine in δ, and hence takes its minimum in the vertex set D̂, while the inequality is due to the fact that all points q(s̄, δ̂) belong to Q(s̄). Since γ > 0, q(s̄, δ) ≠ 0 for all δ ∈ D, namely q(s, δ) cannot have a root at s̄. The proof for p(s, δ) is identical.

Remark 4
Proposition 3 restates well-known results about value-set analysis: the proof is reported for completeness, yet a different proof could be given in terms of the Mapping Theorem by Barmish (1994) (Section 14.6).
We separately consider the numerator and denominator polynomials, q(s, δ) and p(s, δ), and provide bounds on their magnitude and phase by exploiting the fact that they are multiaffine functions of δ. Again, we can exploit vertex results.
Theorem 4 (Magnitude and phase of numerator/denominator polynomials.) Assume that J(δ) is totally multiaffine in δ and let s̄ ∈ C be a numerator non-critical value. Then, for every δ ∈ D, the following inequalities hold:
min_{δ̂∈D̂} arg q(s̄, δ̂) ≤ arg q(s̄, δ) ≤ max_{δ̂∈D̂} arg q(s̄, δ̂),
|q(s̄, δ)| ≤ max_{δ̂∈D̂} |q(s̄, δ̂)|,
|q(s̄, δ)| ≥ min_{σ∈Q(s̄)} |σ|.
The first three bounds are tight, while the last, computed as the minimum Euclidean norm of the points in Q(s̄), is conservative in general. If s̄ ∈ C is a denominator non-critical value, then analogous bounds hold for p(s, δ).

Fig. 1 Example of a set {q(s̄, δ), δ ∈ D} (drawn with a thick black boundary and filled with diagonal lines) and its convex hull Q(s̄) = conv{q(s̄, δ̂), δ̂ ∈ D̂ = {δ̂₁, δ̂₂, δ̂₃, δ̂₄}} (represented as a blue square), where δ̂₁, δ̂₂, δ̂₃, δ̂₄ are the vertices of the hyper-rectangle D. Note that the minimum modulus of q(s̄, δ) for δ ∈ D is smaller than the modulus at the vertices: this justifies the lower bound in Theorem 4, which is possibly conservative, as in this case. Indeed, min_{σ∈Q(s̄)} |σ| ≤ min_{δ∈D} |q(s̄, δ)| ≤ min_{δ̂∈D̂} |q(s̄, δ̂)| (color figure online)
Proof Recall that, since J(δ) is totally multiaffine in δ, the numerator q(s, δ) of the transfer function is multiaffine in δ too (and the same holds true for the denominator p(s, δ)). The thesis then follows as an application of the Mapping Theorem by Barmish (1994) (Section 14.6), which guarantees that the convex hull of the set of q(s̄, δ), δ ∈ D, is the convex hull of the vertex points q(s̄, δ̂), δ̂ ∈ D̂, i.e., conv{q(s̄, δ), δ ∈ D} = conv{q(s̄, δ̂), δ̂ ∈ D̂} = Q(s̄).
In particular, the minimum and maximum values of the phase of q(s̄, δ), for δ ∈ D, are obtained at the vertices, for some δ̂ ∈ D̂. The same holds true for the maximum value of the modulus of q(s̄, δ). The minimum value of the modulus of q(s̄, δ), for δ ∈ D, is lower bounded by the minimum value in the convex hull, min_{σ∈Q(s̄)} |σ|, which is in general smaller than or equal to min_{δ̂∈D̂} |q(s̄, δ̂)|; see, for instance, the example in Fig. 1. However, this fourth bound is not tight: as shown in the example in Fig. 1, the true minimum min_{δ∈D} |q(s̄, δ)| is between these two values: min_{σ∈Q(s̄)} |σ| ≤ min_{δ∈D} |q(s̄, δ)| ≤ min_{δ̂∈D̂} |q(s̄, δ̂)|.
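The vertex bounds of Theorem 4 can be sketched numerically as follows. The polynomial below is a toy assumption; note also that taking vertex-wise minima and maxima of the phase via `cmath.phase` implicitly assumes that the value set stays away from the branch cut on the negative real axis, which we take for granted here for simplicity.

```python
# Vertex bounds (Theorem 4): for a polynomial with multiaffine coefficients,
# the extremal phase and the maximal modulus over the hyper-rectangle are
# attained at the vertices, while the minimal modulus is only lower-bounded
# via the convex hull Q(s).
import cmath
import itertools

def vertex_bounds(poly, s, bounds):
    """Return (max modulus, min phase, max phase) over the vertex set."""
    vals = [poly(s, delta) for delta in itertools.product(*bounds)]
    return (max(abs(v) for v in vals),
            min(cmath.phase(v) for v in vals),
            max(cmath.phase(v) for v in vals))

# Toy multiaffine example: q(s, delta) = delta_1*s + delta_2, delta in [1, 2]^2.
q = lambda s, d: d[0] * s + d[1]
max_mod, ph_lo, ph_hi = vertex_bounds(q, 1j, [(1.0, 2.0), (1.0, 2.0)])
# At s = i, q = delta_2 + i*delta_1, so |q|^2 = delta_1^2 + delta_2^2 is
# maximised at the vertex (2, 2), giving max_mod = sqrt(8).
print(max_mod, ph_lo, ph_hi)
```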
We are now ready to evaluate magnitude and phase of the transfer function, as an immediate consequence of Theorem 4.
Theorem 5 (Bounds on magnitude and phase of the transfer function.) Assume that J(δ) is totally multiaffine in δ and let iω be both numerator non-critical and denominator non-critical. Then, for every δ ∈ D, the following inequalities hold:
min_{σ∈Q(iω)} |σ| / max_{δ̂∈D̂} |p(iω, δ̂)| ≤ A_ω(δ) ≤ max_{δ̂∈D̂} |q(iω, δ̂)| / min_{σ∈P(iω)} |σ|,   (15)
min_{δ̂∈D̂} arg q(iω, δ̂) − max_{δ̂∈D̂} arg p(iω, δ̂) ≤ φ_ω(δ) ≤ max_{δ̂∈D̂} arg q(iω, δ̂) − min_{δ̂∈D̂} arg p(iω, δ̂).   (16)

Remark 5 Bounds (15)-(16) are in general conservative. The only tight bounds are those achieved on the phase shift φ_ω(δ) when the numerator is a constant. We point out that these bounds are valid for small enough input signals: in this case, the linear approximation is valid and our results transfer to the original nonlinear system. However, for large signals, the prediction capability could be lost. For instance, large signals could drive some system components into saturation and, in this case, the resulting amplitude and phase could be different from those estimated by our analysis.

Bounds on magnitude and phase plots of the transfer function
The tools developed in this section allow us to derive bounds on the Bode plot of the system transfer function, which represents the magnitude and the phase of the transfer function W (iω, δ) as a function of the frequency ω. More precisely, plotting the bounds (15) and (16) as a function of the perturbation frequency ω provides the admissible intervals for magnitude and phase plots, respectively. Since these bounds are not tight, we introduce the concept of inner and outer bounds to assess how conservative they are.
Definition 3 (Outer bounds for magnitude and phase.) Functions A⁻_out(ω) and A⁺_out(ω) are outer lower and upper bounds for the output magnitude amplification factor A_ω(δ) if, for every ω ≥ 0, it holds that A⁻_out(ω) ≤ A_ω(δ) ≤ A⁺_out(ω) for all δ ∈ D. Functions φ⁻_out(ω) and φ⁺_out(ω) are outer lower and upper bounds for the output phase shift φ_ω(δ) if, for every ω ≥ 0, it holds that φ⁻_out(ω) ≤ φ_ω(δ) ≤ φ⁺_out(ω) for all δ ∈ D.

Clearly, the bounds in (15) and (16), intended as functions of the frequency ω, are outer lower and upper bounds for the output magnitude amplification factor and the output phase shift, respectively. As already remarked, these intervals are in general conservative. To evaluate the degree of conservatism, we can introduce inner lower and upper bounds, so as to derive sub-intervals that are necessarily included in the outer intervals, as per the following vertex result.
Proposition 4 (Inner bounds for magnitude and phase.) Assume that J(δ) is totally multiaffine in δ and introduce the following functions of the frequency ω ≥ 0:
A⁻_in(ω) := min_{δ̂∈D̂} A_ω(δ̂), A⁺_in(ω) := max_{δ̂∈D̂} A_ω(δ̂),
φ⁻_in(ω) := min_{δ̂∈D̂} φ_ω(δ̂), φ⁺_in(ω) := max_{δ̂∈D̂} φ_ω(δ̂),
where A_ω(δ) and φ_ω(δ) are defined as in (10) and (11), respectively. Then, for every ω ≥ 0 and for every δ ∈ D, the following inclusions hold:
[A⁻_in(ω), A⁺_in(ω)] ⊆ {A_ω(δ), δ ∈ D} ⊆ [A⁻_out(ω), A⁺_out(ω)],
[φ⁻_in(ω), φ⁺_in(ω)] ⊆ {φ_ω(δ), δ ∈ D} ⊆ [φ⁻_out(ω), φ⁺_out(ω)].
The values A⁻_in(ω), A⁺_in(ω), φ⁻_in(ω) and φ⁺_in(ω) are actually achieved for some vertex δ̂ ∈ D̂, hence the corresponding bounds cannot be tightened.

Remark 6 All the results we have presented so far can be extended to the case in which E = ∂f/∂u(x̄, ū) and H = ∂h/∂x(x̄) are also uncertain, provided that the matrix [J(δ) E(δ); H(δ) 0] is totally multiaffine in δ. Then, the transfer function can be written as W(s, δ) = H(δ)(sI − J(δ))⁻¹E(δ), and Theorem 5 holds true without changes, as well as all the results in Sect. 4. Also all the results in Sect. 3 remain valid: in fact, the sensitivity in Theorem 2 corresponds to the transfer function computed at s = 0.
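A minimal numerical sketch of the inner bounds of Proposition 4, for a hypothetical first-order transfer function (not one of the paper's case studies): at each frequency, the magnitude amplification factor is evaluated at all vertices of the hyper-rectangle, and the vertex minimum and maximum give the inner bounds.

```python
# Inner magnitude bounds (Proposition 4): evaluate A_omega(delta) = |q|/|p|
# at all vertices of the hyper-rectangle, for each frequency on a grid.
import itertools

def inner_magnitude_bounds(q, p, omegas, bounds):
    verts = list(itertools.product(*bounds))
    lo, hi = [], []
    for w in omegas:
        mags = [abs(q(1j * w, d)) / abs(p(1j * w, d)) for d in verts]
        lo.append(min(mags))
        hi.append(max(mags))
    return lo, hi

# Toy first-order system W(s, delta) = delta_1 / (s + delta_2), delta in [1, 2]^2.
# Here |W| = delta_1 / sqrt(omega^2 + delta_2^2) is monotone in each
# parameter, so the vertex bounds are actually exact for this family.
q = lambda s, d: d[0]
p = lambda s, d: s + d[1]
omegas = [0.1 * k for k in range(1, 51)]
lo, hi = inner_magnitude_bounds(q, p, omegas, [(1.0, 2.0), (1.0, 2.0)])
```

The resulting upper bound decreases with frequency, as expected for this low-pass family.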

Self-sustained oscillations: what do the critical frequencies tell us?
Critical frequencies are fundamental when analysing oscillators. If, for a certain value of the parameters, an equilibrium point becomes marginally stable with two complex eigenvalues on the imaginary axis, the imaginary part of these eigenvalues is expected to provide a good estimate of the oscillating frequency ω. This reasoning is heuristic in the following sense. The onset of oscillations requires the presence of a negative loop with sufficiently high loop gain (see, e.g., the work by Blanchini et al. 2014; Domijan and Pécou 2011; Gouze 1998; Snoussi 1998 and the references therein). If all the eigenvalues lie to the left of the imaginary axis, oscillations cannot be persistent. If the eigenvalues are exactly on the imaginary axis, the oscillations of the linear (linearised) system are persistent; but, if these eigenvalues cross the imaginary axis (even slightly, due to a perturbation), then the oscillatory trajectories diverge in principle. However, unavoidable saturations in the original nonlinear system cause a "virtual gain reduction", which keeps the oscillations bounded. This situation can be seen as a reduction of the critical gain, namely the gain value corresponding to purely imaginary eigenvalues. An approximate guess of the oscillating frequency can therefore be obtained as follows.
The next vertex result, a corollary to Proposition 3, allows us to identify potential oscillatory frequencies.
Theorem 6 (Potential oscillatory frequencies and denominator critical values.) A potential oscillatory frequency ω must be such that s̄ = iω is a denominator critical value as per Definition 2, namely, the convex polygon P(s̄) = conv{p(s̄, δ̂), δ̂ ∈ D̂} includes the origin.
The opposite is not true in general: if iω is a denominator critical value, it could still be that no parameter value exists within the admissible bounds for which there are imaginary roots. In fact, the convex hull P(s̄) = conv{p(s̄, δ̂), δ̂ ∈ D̂} is larger than the actual set {p(s̄, δ), δ ∈ D}, and it may happen that 0 ∈ P(s̄) but 0 ∉ {p(s̄, δ), δ ∈ D}. Still, the polygon P(s̄) provides good indications about the oscillation frequency we can expect.
The previous considerations can be applied also to the case of a stable system with a negative loop including delays that cause the onset of oscillations.
Consider the system with y(s) = W(s)u(s) and a delay τ in the closed loop, u(t) = y(t − τ), hence u(s) = e^{−sτ} y(s). To analyse its oscillatory behaviour, we need to study the quasi-polynomial Ψ(s, τ, δ) = q(s, δ)e^{−sτ} + p(s, δ) and the existence of possible roots at s = iω. For any fixed ω, the set of all Ψ(iω, τ, δ) enjoys the same property as the polynomials q(s, δ) and p(s, δ): the convex hull of the set is the convex hull of the vertices. Therefore, the previous analysis holds unchanged.

Robust adaptation analysis for uncertain systems
In this section, we report vertex results to assess adaptation to a persistent constant input (step input), a remarkable property of several biological systems (Alon 2006;Ma et al. 2009). An input-output dynamical system exhibits adaptation if, after a persistent input has been applied, the output initially increases and, after a transient, eventually decreases so as to get close(r) to its pre-perturbation value. Adaptation includes perfect adaptation (El-Samad et al. 2002;Giordano et al. 2016;Khammash 2021;Kim et al. 2014;Yi et al. 2000), which is achieved when the output asymptotically recovers exactly its pre-perturbation value and can be formally associated with the system transfer function W (s) vanishing for s = 0.
Adaptation is difficult to define formally and then to assess, even more so in the case of systems with uncertain parameters; see the discussion by Blanchini et al. (2020). It roughly requires the step response to be (essentially) first increasing and then decreasing: temporary trend changes and oscillations can occur during the transient, but the response must be predominantly decreasing for large values of t. For instance, consider the three output signals shown in Fig. 2 in response to a step perturbation (red signal): we call the orange one, which does not eventually decrease, non-adaptive; the green one, which first increases and then decreases, adaptive; the blue one, which first increases and then decreases so much that it changes sign, over-adaptive.
Throughout this section, we consider the following standing assumption.

Assumption 1 Consider system (1), with δ ∈ D. The step response y_s(t) corresponding to the input u(t) = ū is initially positive (i.e., ∃ τ > 0 such that ẏ_s(t) := (d/dt) y_s(t) > 0 for 0 < t ≤ τ) and lim_{t→∞} y_s(t) is finite.
The sign of ū is therefore implicitly chosen to ensure that y_s(t) is initially positive. We can say that the system is adaptive if y_s(t) is essentially increasing for small values of t and essentially decreasing for larger values of t. In order to provide a formal definition, we consider the weighted integral
I_a := ∫₀^∞ e^{at} ẏ_s(t) dt.   (19)
Then, the abscissa of convergence, σ, is the largest value of a for which the integral in (19) is finite:
σ := sup{a : I_a is finite}.   (20)
Note that, when a = 0, the integral in (19) is equal to lim_{t→∞} y_s(t) in view of the Final Value Theorem, hence it is finite according to Assumption 1. We can then provide a formal definition of adaptation (which does not require linearity of the system in general; see Blanchini et al. 2020).
Definition 5 (Adaptive step response.) Given the weighted integral I_a defined in (19), the response y_s(t) to the step input ū, starting from zero initial conditions, is adaptive if there exists ā < σ such that I_a > 0 for a < ā, and I_a < 0 for ā < a < σ.
More precisely, we say it is partially adaptive if ā > 0; perfectly adaptive if ā = 0; over-adaptive if ā < 0. Given the uncertain system (1) with step response y_s(t, δ), adaptation is robust if the property holds for all ū and all δ ∈ D.
The idea is that, when a < 0, the values of ẏ_s(t) at earlier times (mainly positive in an adaptive system) are weighed more, while when a > 0 the values of ẏ_s(t) at later times (mainly negative in an adaptive system) are weighed more; the larger a is in absolute value, the more visible this effect is.
Consider system (1) with impulse response ẏ_s(t); namely, the rational transfer function W(s) is the Laplace transform of ẏ_s(t) for ū = 1. Then, the abscissa of convergence of W(s) is σ, defined in (20), while −σ is the spectral abscissa, namely, the largest among the real parts of the poles of W(s) (recall that the poles of a transfer function are the roots of its denominator polynomial). Given the complex numbers w₁ and w₂, we say that w₁ dominates w₂ if Re(w₁) > Re(w₂). Then, the following proposition characterises adaptation as the presence of a single dominant real zero, larger than the spectral abscissa.
Proposition 5 (Characterisation of adaptive step response.) Let W(s) = H(sI − J)⁻¹E = q(s)/p(s) be the rational, strictly proper transfer function of the linear asymptotically stable system (1). The system step response is adaptive if and only if there exists exactly one real zero, −ζ, such that ζ < σ. Partial adaptation corresponds to ζ > 0, perfect adaptation to ζ = 0, over-adaptation to ζ < 0.
Proof When a < σ, I_a = ∫₀^∞ e^{at} ẏ_s(t) dt = lim_{s→0} W(s − a) = W(−a). Then, the threshold value is ā = ζ, where −ζ is a real zero, which must be the only zero in the open interval (−σ, ∞) in order to prevent other sign changes.

Now, given the uncertain system (1), we wish to check whether it has a robustly adaptive response. To this aim, we consider the system transfer function W(s, δ) = H(sI − J(δ))⁻¹E = q(s, δ)/p(s, δ). Then, for all admissible values of δ, we evaluate both the position of its real zeros (real roots of q(s, δ)) and the spectral abscissa −σ (i.e., the largest real part of all poles, namely, of all roots of p(s, δ)). If, for all possible values of the uncertain parameters, there is a single real zero −ζ with ζ < σ, this reveals robust adaptation.
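The characterisation in Proposition 5 can be illustrated on a hypothetical transfer function, not taken from the paper: W(s) = (s + 0.5)/((s + 1)(s + 2)(s + 3)) is stable with spectral abscissa −σ = −1 and a single real zero −ζ = −0.5, with ζ = 0.5 < σ = 1, hence partial adaptation (ζ > 0).

```python
# Numerical illustration of Proposition 5 on a hypothetical system:
# I_a = W(-a), so the sign of the weighted integral flips exactly at
# a-bar = zeta = 0.5, confirming a partially adaptive step response.

def W(s):
    return (s + 0.5) / ((s + 1) * (s + 2) * (s + 3))

I_03 = W(-0.3)   # a = 0.3 < a-bar = 0.5        -> I_a positive
I_07 = W(-0.7)   # a-bar < a = 0.7 < sigma = 1  -> I_a negative
print(I_03, I_07)
```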
We approach this problem through vertex algorithms and graphical methods based on the theory of parametric robustness and the Mapping Theorem by Barmish (1994).
In particular, the spectral abscissa −σ can be robustly evaluated as follows.
Proposition 6 (Robust spectral abscissa.) The value −a is a robust spectral abscissa (namely, the spectral abscissa is smaller than −a for all δ ∈ D) if the origin does not belong to the polygon P(iω − a) for any ω ≥ 0, with P(iω − a) defined as in (13).
In fact, the spectral abscissa −σ is less than −a if and only if all the roots of the characteristic polynomial p(s, δ) have real parts less than −a, which is equivalent to p(s − a, δ) being Hurwitz for all δ ∈ D. Since, for s = iω, the convex hull of {p(iω − a, δ), δ ∈ D} is equal to P(iω − a), the convex hull of the vertex points {p(iω − a, δ̂), δ̂ ∈ D̂}, the proposition provides a sufficient condition. For a fixed a, the condition can be checked in practice by plotting via computer graphics the set P(iω − a) over a finite range of frequencies, sufficiently large and sufficiently densely sampled.
As an alternative, one can also check whether p(s − a, δ) is Hurwitz through the solution of a convex optimisation problem: this approach is harder to implement, but it is non-conservative.
The previous results entail that we can draw the exact envelope of the real plot of the polynomials q(λ, δ) and p(λ, δ) as a function of the real variable λ, which we call robust real plot: since q(λ, δ) and p(λ, δ) are multiaffine in δ for any fixed real λ, their minimum and maximum over D, denoted by ϕ⁻(λ) and ϕ⁺(λ) for the numerator (resp. ψ⁻(λ) and ψ⁺(λ) for the denominator), are achieved at the vertices D̂. Drawing the two functions ϕ⁻(λ) and ϕ⁺(λ) (resp. ψ⁻(λ) and ψ⁺(λ)) is enough to locate the real zeros (resp. real poles) of the uncertain system.
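A minimal sketch of the robust real plot, for a toy numerator family (the polynomial and bounds are assumptions for illustration): real zeros of the uncertain family can only occur where the vertex envelope straddles zero.

```python
# Robust real plot: since q(lambda, delta) is multiaffine in delta for each
# real lambda, its envelope over the hyper-rectangle is given by the vertex
# minimum phi-(lambda) and maximum phi+(lambda).
import itertools

def envelope(poly, lambdas, bounds):
    verts = list(itertools.product(*bounds))
    lo = [min(poly(l, d) for d in verts) for l in lambdas]
    hi = [max(poly(l, d) for d in verts) for l in lambdas]
    return lo, hi

# Toy numerator q(lambda, delta) = lambda + delta_1*delta_2, delta in [1, 2]^2:
# every member of the family has its single real zero in [-4, -1].
q = lambda l, d: l + d[0] * d[1]
grid = [-5 + 0.1 * k for k in range(61)]   # lambda in [-5, 1]
lo, hi = envelope(q, grid, [(1.0, 2.0), (1.0, 2.0)])
zero_region = [l for l, a, b in zip(grid, lo, hi) if a <= 0 <= b]
print(min(zero_region), max(zero_region))
```

For this family the envelope straddles zero exactly on [−4, −1], matching the analytical zero locations λ = −δ₁δ₂.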
In order to assess robust adaptation for an uncertain system, we need to check whether the system has a real dominant zero for all possible values of the uncertain parameters. First of all, we need to verify that Assumption 1 holds robustly: this can be done through a vertex test.

Lemma 1 (Robust vertex check.) Assumption 1 holds for all δ ∈ D if and only if the leading coefficient of q(s, δ) is positive for all δ̂ ∈ D̂.
Proof The step response is positive in a right neighbourhood of 0 due to Assumption 1. Hence, the first non-zero derivative of y_s(t), of order r (the relative degree of W(s, δ), namely, the difference between the degree of the denominator and the degree of the numerator), must be positive. The initial value theorem yields lim_{t→0} y_s^{(r)}(t) = lim_{s→∞} s^r W(s, δ) > 0, hence the leading coefficient q_{n−r}(δ) of q(s, δ) must be positive for all δ ∈ D. Since q_{n−r}(δ) is a multiaffine function, its minimum is achieved at the vertices δ̂ ∈ D̂. Then, it is necessary and sufficient that all the values at the vertices are positive.
Definition 6 (Robust real zero dominance.) If −σ < 0 is a robust spectral abscissa (i.e., it dominates all poles for all δ ∈ D), the system with transfer function W(s, δ) is robustly real zero dominant if, for all δ ∈ D, there exists a real zero that dominates −σ.
Robust real zero dominance can be checked through a vertex result.
The presence of at least one dominant real zero suggests that there is adaptation, but is not enough: the dominant real zero must be unique in view of Proposition 5. We need therefore to establish robust uniqueness, again through a vertex result.

Consider the interval [c_L, c_R] of candidate dominant real zeros (determined through the robust real plot). Each point in the interval is a root of q(s, δ) for some δ ∈ D. The root is unique for all δ ∈ D if all the vertex derivative polynomials q′(s, δ̂), δ̂ ∈ D̂, have the same sign (either positive or negative) in this interval.
Proof The intersection of q(s, δ) with the interval [c_L, c_R] is unique for all δ ∈ D if the derivative q′(s, δ) does not change sign in the interval for all δ ∈ D. Since the derivative q′(s, δ) is a multiaffine function of δ, its maximum and minimum values are achieved at the vertices δ̂ ∈ D̂.
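The derivative-sign vertex test can be sketched as follows (toy polynomial and interval, assumed for illustration):

```python
# Uniqueness vertex test: the real zero in [c_L, c_R] is unique if the
# derivative q'(lambda, delta), itself multiaffine in delta, keeps one
# strict sign on the interval for all vertex choices of delta.
import itertools

def derivative_sign_constant(dq, c_l, c_r, bounds, n=200):
    """True if dq(., delta) has one strict sign on [c_l, c_r] at all vertices."""
    verts = list(itertools.product(*bounds))
    grid = [c_l + (c_r - c_l) * k / n for k in range(n + 1)]
    vals = [dq(l, d) for l in grid for d in verts]
    return all(v > 0 for v in vals) or all(v < 0 for v in vals)

# Toy numerator q(lambda, delta) = lambda^2 + delta_1*lambda + delta_2,
# so q'(lambda, delta) = 2*lambda + delta_1 with delta_1 in [1, 2].
dq = lambda l, d: 2 * l + d[0]
print(derivative_sign_constant(dq, -0.4, 0.5, [(1.0, 2.0), (1.0, 2.0)]))
```

On [−0.4, 0.5] the derivative is positive at every vertex, so the zero there is unique; on [−1, 0] the sign test fails.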
An important issue, which we now briefly discuss, is perfect or partial adaptation to non-constant signals, for instance sinusoidal signals. To achieve perfect adaptation to a sinusoidal signal, a system must satisfy the so-called internal model principle (Sontag 2003), as has been highlighted in the literature (see for instance the work by Bin et al. (2021), Mucci et al. (2020) and the references therein). In our context, perfect adaptation to a sinusoidal signal of frequency ω is equivalent to the condition A_ω(δ) ≡ 0, namely, to considering (17) with A⁺_out(ω) = 0. This condition is too restrictive to be ensured in practice. Hence, we can consider adaptation that is not perfect, just by requiring that A⁺_out(ω) ≤ ε, where ε > 0 is a small enough tolerance parameter. An advantage of our proposed approach is that it allows us to perform the analysis on a whole assigned interval of frequencies, rather than at a fixed one.

Case studies of biological systems
We present in this final section some case studies of biological systems, to which we apply the techniques we have illustrated in the previous sections.

Incoherent Feed-Forward Loop with first-order enzyme reactions
The incoherent Feed-Forward Loop (i-FFL) is a circuit where the input triggers the output response and also activates a negative controller, which eventually suppresses the output. It is known that, when the input is a ramp, the i-FFL exhibits a pulse response, while it exhibits an oscillatory output in response to a periodic input (Mangan and Alon 2003; Krishnan and Floros 2019). Here, we consider an implementation of the i-FFL realised through a synthetic transcriptional circuit (Kim et al. 2014), and we investigate to what extent small periodic perturbations of the constant input can affect the output. We let the parameters of the circuit take values in a hyper-rectangle centred around the desired nominal values. This situation reflects the fact that precisely tuning the parameters of a synthetic circuit is rarely possible.
The transcriptional i-FFL designed by Kim et al. (2014) is composed of two RNA species, a and b, whose production is controlled by the common input u and which hybridise to form the complex c. Within the complex, RNase catalyses the degradation of b, while it poorly catalyses the degradation of a; for this reason, when c degrades, only a, but not b, is released (see Kim et al. 2014 for details). A mathematical model of the circuit is provided by Kim et al. (2014), and the same model is adopted by Krishnan and Floros (2019) (see model KI14) to examine the response of adaptive circuits to a variety of stimuli. Assuming first-order enzyme reactions for a, b, c and zero-order reactions for the input u, the circuit is described by a set of ordinary differential equations in a, b and c (see Kim et al. 2014), where g₁(a) := a is the degradation term, g₂(a, b, c) := κ₁(ab − c) describes the hybridisation of a and b and the release of free a upon degradation of c, and g₃(a, b) := κ₂ab describes the hybridisation of a and b. The RNA species b represents the output of the circuit.
When the input is a constant signal u(t) ≡ ū, the circuit reaches the unique equilibrium (a_eq, b_eq, c_eq) = (ū, 1, ū). Around the equilibrium, the dynamics of the circuit is described by the linearised system, where x := [a b c]ᵀ, E := [1 κ₂ 0]ᵀ, H := [0 1 0], and J is the Jacobian at the equilibrium, i.e., J = J(a_eq, b_eq, c_eq). With respect to the partial derivatives of the functions g₁, g₂ and g₃, the Jacobian J(a, b, c) admits a BDC-decomposition. Note that the fourth element of δ, i.e., ρ, inherently has value 1 because, by definition, ρ := ∂g₁/∂a and ∂g₁/∂a = 1, since the model variables are non-dimensional (see the work by Kim et al. (2014) for details). Finally, let v(t) = μ cos(ωt) be a small periodic perturbation acting on the input ū.
By exploiting the results of Sect. 4, we can examine how, depending on the perturbation frequency ω, the output b is affected by the perturbation v. Figure 3 shows lower and upper bounds on the Bode plots of the transfer function W(iω, δ) = H(iωI − J)⁻¹E: these bounds have been computed as in Theorem 5 and Proposition 4, and indicate, for each frequency ω, the admissible values of the magnitude amplification factor and the phase shift of the output b. Now assume uncertainty bounds of ±50% on the nominal value of the input ū, too. With respect to κ₁, κ₂ and ū, the Jacobian at the equilibrium can be computed explicitly; since J is totally multiaffine in κ₁, κ₂ and ū, the vertex arguments and the results of Sect. 4 can still be applied. Let the amplitude and the frequency of the periodic perturbation v be μ = 0.05 and ω₁ = 0.5, respectively, so that v(t) = 0.05 cos(0.5t).
The computation of the numerator and denominator polynomials of the transfer function W(s) = H(sI − J)⁻¹E = q(s)/p(s) yields:
q(s, κ₁, κ₂) := κ₂s² + 2κ₁κ₂s,
p(s, κ₁, κ₂, ū) := s³ + (1 + 2κ₁ + κ₂ū)s² + (κ₁ + κ₂ū + κ₁κ₂ū)s + κ₁κ₂ū.   (23)
Plotting the polygons Q(iω₁) and P(iω₁), which are reported in Fig. 4, shows that neither Q(iω₁) nor P(iω₁) includes the origin, and hence s = iω₁ is both a numerator and a denominator non-critical value. Then, bounds on the magnitude and phase of each polynomial can be computed as in Theorem 4 and, in view of Theorem 5, yield the inequalities (24) satisfied by the amplification factor A_ω and the phase shift φ_ω of the output b. Figure 5 reports the perturbed input signal ū + v(t) and the output b(t). The interval between the maximum and minimum values of the output allowed by the bounds (24) is indicated by the gray area: as can be seen, the bounds on the output amplitude are indeed satisfied.
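The vertex computations behind bounds of this kind can be sketched in Python. The nominal values κ₁ = κ₂ = ū = 1 below are placeholders assumed for illustration (combined with the ±50% bounds of the text), and the vertex minimum of |p| is used in place of the hull minimum, which may give a slightly optimistic estimate (cf. the remark after Theorem 4).

```python
# Vertex evaluation of q and p from (23) at s = i*0.5 over all vertices of
# the hyper-rectangle for (kappa_1, kappa_2, u-bar); the assumed nominal
# values (1, 1, 1) with +/-50% bounds are hypothetical placeholders.
import itertools

def q(s, k1, k2, u):
    return k2 * s**2 + 2 * k1 * k2 * s

def p(s, k1, k2, u):
    return (s**3 + (1 + 2*k1 + k2*u) * s**2
            + (k1 + k2*u + k1*k2*u) * s + k1*k2*u)

bounds = [(0.5, 1.5)] * 3            # kappa_1, kappa_2, u-bar
s1 = 1j * 0.5                        # perturbation frequency omega_1 = 0.5
qv = [q(s1, *d) for d in itertools.product(*bounds)]
pv = [p(s1, *d) for d in itertools.product(*bounds)]

# Vertex-only estimates of the amplification-factor bounds.
A_lo = min(abs(v) for v in qv) / max(abs(v) for v in pv)
A_hi = max(abs(v) for v in qv) / min(abs(v) for v in pv)
print(A_lo, A_hi)
```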
To investigate the degree of conservatism of the bounds (24), N = 1000 sets of parameters have been randomly sampled within the given hyper-rectangle and, for each set of parameters, the amplification factor of the output has been computed. Figure 6 shows the parameters sampled within the hyper-rectangle (left panel) and a histogram of the amplification factors (right panel), where the red dashed lines indicate the bounds (24). Black diamonds indicate the vertices of the hyper-rectangle (left panel) and the corresponding amplification factors (right panel).

Coherent Feed-Forward Loop with Hill dynamics
In a coherent Feed-Forward Loop (c-FFL), activation of the most downstream element x₃ of the circuit occurs both directly from the most upstream element x₁ and indirectly, via the intermediate element x₂, which is, in turn, activated by x₁. A c-FFL with Hill-type dynamics is described by a set of three differential equations (see the work by Ghafari and Mashaghi (2017)). When the input is a constant signal u(t) ≡ ū, the circuit reaches a unique equilibrium. Around the equilibrium, the dynamics of the circuit is described by the linearised system, where x := [x₁ x₂ x₃]ᵀ, E := [1 0 0]ᵀ, H := [0 0 1], and the Jacobian J depends on the degradation rates γ₁, γ₂, γ₃ and on the parameters α, β and ε, which collect the partial derivatives of the Hill-type terms at the equilibrium. Since J is BDC-decomposable, it is multiaffine in γ₁, γ₂, γ₃, α, β and ε. Indeed, computation of the transfer function W(s) = H(sI − J)⁻¹E = q(s)/p(s) shows that both the numerator and the denominator polynomials are multiaffine. Let the nominal values of the parameters be γ₁ = γ₂ = γ₃ = 1, V₁ = 0.6, V₂ = 0.5, K₁A = K₂A = 1, n = 2 (biologically plausible values are provided e.g. by Ghafari and Mashaghi 2017; Smolen et al. 2000; Mangan and Alon 2003), and let the nominal input be ū = 1. Then, the nominal vector δ that collects the diagonal entries of matrix D is δ = [γ₁ γ₂ γ₃ α β ε]ᵀ = [1 1 1 0.3 0.0758 0.0124]ᵀ. Assume uncertainty bounds on the entries of δ, and let v(t) = μ cos(ωt) be a small periodic perturbation acting on the input ū. By applying the results of Sect. 4, we can examine the effects of the periodic perturbation v on the output x₃. The bounds on the Bode plots of the transfer function W(iω, δ) = H(iωI − J)⁻¹E are reported in Fig. 7: they provide, for each perturbation frequency ω, admissible intervals for the amplification factor and the phase shift of the output x₃. Now let the amplitude and the frequency of the periodic perturbation v be μ = 0.05 and ω₁ = 0.55, respectively, namely let v(t) = 0.05 cos(0.55t).
We investigate, in two different scenarios, the degree of conservatism of the bounds on the amplification factor.
In the first scenario, we assume uncertainty bounds of ±20% on all entries of the vector δ. Plotting the polygons Q(iω₁) and P(iω₁), which are reported in Fig. 8, shows that neither Q(iω₁) nor P(iω₁) includes the origin, and hence s = iω₁ is both a numerator and a denominator non-critical value. Then, bounds on the amplification factor of the output x₃ can be computed by exploiting Theorems 4 and 5. Figure 9 reports the perturbed input signal ū + v(t) and the output x₃(t), together with the interval of values allowed by the bounds. To investigate the degree of conservatism of the computed bounds, we randomly sample the parameters γ₁, γ₂, γ₃, V₁, V₂, K₁A, K₂A, and we retain N = 500 sets of parameters for which the bounds of ±20% on the entries of δ are satisfied. Then, for each set of parameters, we compute the amplification factor of the simulated output. A histogram of the amplification factors is reported in Fig. 10.
In the second scenario, we fix γ₁, γ₂ and γ₃ to their nominal values, and we assume bounds of ±20% on the entries α, β and ε of the vector δ. In this case, the denominator polynomial p(s, γ₁, γ₂, γ₃) is not subject to uncertainty. Plotting the polygon Q(iω₁), which is reported in Fig. 11, shows that Q(iω₁) does not include the origin, and hence s = iω₁ is both a numerator and a denominator non-critical value. Then, bounds on the amplification factor of the output x₃ can be computed by exploiting Theorems 4 and 5. Figure 12 reports the perturbed input signal ū + v(t) and the output x₃(t), together with the computed bounds (note that, with respect to Fig. 9, the input and output signals are the same, while the bounds are different). It can be seen that the output stays within the gray area. To investigate the degree of conservatism, we randomly sample the parameters V₁, V₂, K₁A and K₂A, we retain N = 1000 sets of parameters for which the bounds of ±20% on α, β and ε are satisfied, and we compute, for each set of parameters, the amplification factor of the simulated output. Figure 13 shows the values of α, β and ε for the retained sets of sampled parameters and a histogram of the amplification factors.

The Brusselator oscillator
Consider the well-known Brusselator oscillator as described, for instance, by Epstein and Pojman (1998). The model consists of two differential equations in the species concentrations x and y, with reaction rate constants a, b, c and input u. At the equilibrium, it holds that a = b x_eq y_eq. To perform our analysis, we re-parameterise the problem adopting the parameters δ₁ = a, δ₂ = c and δ₃ = b x²_eq, on which we impose the uncertainty bounds. The Jacobian at the equilibrium can thus be written in terms of δ₁, δ₂ and δ₃, and its characteristic polynomial is p(s, δ₁, δ₂, δ₃) = s² + (−δ₁ + δ₂ + δ₃)s + δ₂δ₃. We assume uncertainty bounds 0.4 ≤ δᵢ ≤ 1, i = 1, 2, 3. By applying Theorem 6, we can identify potential oscillatory frequencies of the system. To this aim, we plot some polygons P(iω) of the polynomial p(s, δ₁, δ₂, δ₃): Fig. 14 shows some polygons P(iω) for frequencies ω in the interval [0.4, 0.55]. Since each of these polygons includes the origin, we can conclude that all frequencies ω with 0.4 ≤ ω ≤ 0.55 are potential oscillatory frequencies. To verify that the system exhibits oscillations in this range of frequencies, let us consider for numerical purposes the following data: a = 0.95, b = 0.5, c = 0.4 (reaction rate constants) and u = 0.4 (input). With these values of the original parameters, the bounds 0.4 ≤ δᵢ ≤ 1 are satisfied; indeed, it results that δ₁ = 0.95, δ₂ = 0.4 and δ₃ = 0.5. Simulation of the model, reported in Fig. 15, shows that, with this set of parameters, the system exhibits sustained oscillations, with an oscillation frequency of about ω ≈ 0.44, consistently with the fact that the polygon P(i0.44) includes the origin.
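Since the characteristic polynomial and the bounds are fully specified in this example, the Theorem 6 scan can be reproduced with a short script; the origin-inclusion test uses the planar angular-gap criterion (the origin lies in the convex hull of a set of points in the plane iff the largest angular gap between their directions is at most π).

```python
# Theorem 6 scan for the Brusselator: for each candidate frequency, build
# the eight vertex points of p(s, delta) = s^2 + (-d1 + d2 + d3)*s + d2*d3
# with delta_i in [0.4, 1], and test whether their convex hull contains the
# origin (denominator criticality, Definition 2).
import cmath
import itertools
import math

def origin_in_hull(points, tol=1e-12):
    if any(abs(z) <= tol for z in points):
        return True
    angles = sorted(cmath.phase(z) for z in points)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    return max(gaps) <= math.pi + tol

def p(s, d):
    return s**2 + (-d[0] + d[1] + d[2]) * s + d[1] * d[2]

bounds = [(0.4, 1.0)] * 3

def critical(omega):
    pts = [p(1j * omega, d) for d in itertools.product(*bounds)]
    return origin_in_hull(pts)

# The observed oscillation frequency ~0.44 is flagged as potentially
# critical, while a frequency well outside [0.4, 0.55] is excluded.
print(critical(0.44), critical(1.2))
```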

The Goldbeter oscillator
The Brusselator was a very simple, illustrative example of an oscillator. In this section, we consider a more complex oscillator, which describes the circadian oscillations of the PER protein in Drosophila. The system is based on multiple phosphorylation of PER and on the negative feedback implemented by the nuclear form of PER on the transcription of the per gene. A mathematical description of the system was proposed by Goldbeter (1995). The Goldbeter oscillator model includes five state variables: M denotes the concentration of mRNA transcripts of the per gene; P₀, P₁ and P₂ denote the concentrations of the PER protein in its unphosphorylated, monophosphorylated and biphosphorylated forms, respectively; P_N denotes the concentration of the nuclear form of PER. The system is described by five differential equations (see Goldbeter 1995). The total concentration of PER protein, denoted by P_t, is computed as P_t := P₀ + P₁ + P₂ + P_N.
Straightforward computations show that the Jacobian of the system has a sparse structure whose non-zero entries are expressed through the Greek letters α, β, γ, ε, φ, ν, ρ, denoting the partial derivatives of the Michaelis-Menten and Hill functions, together with the kinetic constants k_s, k₁, k₂. Since the Jacobian is multiaffine in α, β, γ, ε, φ, ν, ρ, k_s, k₁, k₂, we can apply the results of Sect. 4.2 to identify potential oscillatory frequencies. Let the nominal values of the parameters be the values adopted by Goldbeter (1995), and consider as working point of the system the rough values around which M, P₀, P₁, P₂ and P_N are known to oscillate (Goldbeter 1995), namely: M̄ = 1.75 μM, P̄₀ = 1.1 μM, P̄₁ = 0.67 μM, P̄₂ = 0.58 μM, P̄_N = 0.85 μM. Assume rough uncertainty bounds of ±50% on each non-zero entry of the Jacobian, namely on α, β, γ, ε, φ, ν, ρ, k_s, k₁, k₂. To identify potential oscillatory frequencies of the system, we compute the characteristic polynomial of the Jacobian (31), and we plot the polygon P(iω), which is the convex hull of the value set, over a large range of frequencies. As seen from Fig. 16, P(iω) includes the origin for frequencies ω in the interval [ω_min, ω_max] = [0.14, 0.42]. These frequencies represent potential oscillatory frequencies of the system. The corresponding oscillation period ranges from T_min = 2π/ω_max = 14.96 h to T_max = 2π/ω_min = 44.88 h. As reported in Fig. 17, this is fully consistent with the numerical simulations proposed by Goldbeter (1995).
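The period range follows from elementary arithmetic, which can be double-checked as follows:

```python
# The critical-frequency interval [0.14, 0.42] rad/h maps to oscillation
# periods T = 2*pi/omega, with the shortest period at the highest frequency.
import math

omega_min, omega_max = 0.14, 0.42
T_min = 2 * math.pi / omega_max   # ~14.96 h
T_max = 2 * math.pi / omega_min   # ~44.88 h
print(round(T_min, 2), round(T_max, 2))
```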

Adaptation analysis
The biochemical reaction network can be associated with a system of differential equations, describing the time evolution of the species concentrations, with state vector x = [x₁ x₂ x₃ x₄]ᵀ. We consider an additive input on x₁, namely E = [1 0 0 0]ᵀ, and we take x₃ as output, namely H = [0 0 1 0]. The lower and upper bounds on δ are given as δ⁻ = [1 1 1 1 1 0.0 3]ᵀ and δ⁺ = [2 2 2 2 2 0.1 4]ᵀ. For δ₆ = 0, it can be seen that the system has perfect adaptation: the transfer function has a zero at the origin. We wish to assess whether adaptation (although non-perfect) is robustly maintained for small positive values of δ₆. The robust real plot is shown in Fig. 18: it reveals a real zero that dominates the poles for all admissible parameter values. Therefore, we have robust real zero dominance. The same conclusion can be drawn by noticing that −a = −0.133 is a robust spectral abscissa, since the origin does not belong to the set P(iω − a), as can be seen in Fig. 19, and that ψ⁺(−a) < 0.

Concluding discussion
This paper aims at building a connection between robustness analysis problems that arise in systems biology and a class of powerful robustness analysis tools developed within the realm of control theory (Barmish 1994). Although the general principles of biological robustness have been thoroughly discussed from a variety of perspectives (Khammash 2016; Kitano 2004; Lesne 2008; Stelling et al. 2004; Streif et al. 2016; Whitacre 2012) and several dedicated tools have been adopted to perform the robustness analysis of specific models, or classes of models, as discussed in the introduction, we still lack a general framework that allows us to robustly assess properties of interest for biological models in a systematic way. For the first time to our knowledge, this work proposes to leverage the vertex-type results developed in the 1990s in the control community (Barmish 1994) for the robustness analysis of a vast class of biological models that turn out to have a totally multiaffine uncertainty structure: the class we consider includes generic biochemical reaction networks, as well as chemical reaction networks with mass-action kinetics. For this class of systems, building upon the work by Barmish (1994), we derive vertex results that allow us to robustly assess the input-output sensitivity, in the presence of input perturbations that can be either constant or periodic, in the neighbourhood of an equilibrium state. Besides being useful for the robust steady-state sensitivity analysis in the presence of constant perturbations and for the robust frequency-response sensitivity analysis in the presence of sinusoidal disturbances, vertex-type algorithms also make it possible to robustly assess non-singularity, stability, and adaptation.
Our analysis has two main limitations. First, it is based on linearisation: all our characterisations transfer from the linear approximation to the original nonlinear system when small input signals are considered, but the predictive capability may be lost for large inputs. Second, the analysis in general requires a re-parametrisation, namely, it considers uncertain parameters that may be functions of the original parameters; as a consequence, providing uncertainty bounds is not always straightforward, and a certain degree of conservatism is introduced, which we have numerically quantified in some case studies.
A benefit of the approach is that it allows us to resort to very effective methods available in the literature, which prove applicable to the study of biologically relevant models and provide critical insight into their robustness properties.
A potentially interesting alternative approach to the robustness analysis of biological systems relies on the probabilistic methods and randomised algorithms introduced by Tempo et al. (2005), which tackle uncertain systems by assessing their properties in probability, through sampling (Monte Carlo) techniques. These approaches have the advantage of being quite flexible, since they do not require special structures (e.g. multiaffinity) and they can fruitfully support other methodologies, as shown for instance in the case study by Kim et al. (2006). However, they have two main drawbacks: the number of samples required to reach a conclusion with a given confidence may be huge, and their outcome is heavily distribution-dependent, so that different sample distributions for the system parameters can result in completely different conclusions on the system properties. A possible future research direction is that of systematically combining probabilistic and classic approaches for the robustness analysis of biological models.
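Both drawbacks of the sampling approach can be made concrete with a minimal sketch. The sample-size issue follows from the standard additive Chernoff bound used in randomised robustness analysis, and the distribution dependence shows up as soon as the same toy uncertain Jacobian is sampled under two different distributions; the 2×2 matrix below and the parameter ranges are illustrative assumptions, not a model from this paper.

```python
import math
import numpy as np

def chernoff_samples(eps, delta):
    # Additive Chernoff/Hoeffding bound: N >= ln(2/delta) / (2 eps^2) samples
    # make the empirical probability eps-accurate with confidence 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def stable_fraction(sampler, n, rng):
    # Monte Carlo estimate of the probability that a toy uncertain Jacobian
    # J(k) = [[-k, 1], [3, -1]] (hypothetical) is Hurwitz; here det(−J) > 0
    # requires k > 3, so the estimate depends entirely on how k is sampled.
    hits = 0
    for _ in range(n):
        k = sampler(rng)
        J = np.array([[-k, 1.0], [3.0, -1.0]])
        if np.all(np.linalg.eigvals(J).real < 0):
            hits += 1
    return hits / n

rng = np.random.default_rng(0)
print(chernoff_samples(0.01, 0.01))  # 26492 samples for 1% accuracy, 99% confidence
print(stable_fraction(lambda r: r.uniform(0, 10), 5000, rng))    # uniform sampling
print(stable_fraction(lambda r: 10 * r.beta(1, 5), 5000, rng))   # skewed sampling
```

With k uniform on [0, 10] the stability estimate is near 0.7, while a Beta-skewed distribution concentrated on small k drives it far lower, even though the uncertain set is identical: the two samplings support opposite practical conclusions about robustness.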