Robust set-point regulation for ecological models with multiple management goals

Population managers often face the problem of meeting multiple goals simultaneously, for example, maintaining both total population abundance and the abundances of given stage-classes of a stratified population at specified levels. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.


Introduction
Regulation by feedback arises in numerous areas of science and engineering, such as acoustics, electrical circuits, aviation and biological systems. According to the report of Murray et al. (2003): "Feedback is an enabling technology in a variety of application areas and has been reinvented and patented many times in different contexts". Ubiquitous to the design and synthesis of modern feedback control systems are (P)roportional, (I)ntegral, (D)erivative controllers. These dynamical models incorporate current (P part), past (I part) and predictive (D part) information about a measured variable or variables, and create from this information a signal, termed an input or control, which is then fed back into the to-be-controlled system to achieve some desired dynamic behaviour. PID controllers are widely used in industrial processes (Lunze 1989; Åström and Hägglund 1995) and have been described as one of the "Success Stories in Control" (Samad and Annaswamy 2011, p. 103). The special case of integral control was developed in the 1970s as a technique for regulating the measured variables of a stable, but controlled, linear system to a fixed and chosen set-point. Early contributions to the theory of proportional and integral (PI) control are found in the control engineering literature and include Davison (1975, 1976), Lunze (1985), Morari (1985) and Grosdidier et al. (1985). Whilst grounded in the field of process engineering, applications of PI control are multiple and varied. Indeed, established examples in engineering are complemented by emerging examples in biology, such as the regulation of blood sugar by insulin (Saunders et al. 1998), bacterial chemotaxis in living cells (Yi et al. 2000), calcium homeostasis (El-Samad et al. 2002) and, recently in Guiver et al. (2015), ecological management, the continued focus of the present work.
In ecological management, PI control provides a suite of techniques for management by the addition or removal of individuals from an ecological process, such as a population. In applied contexts, addition may correspond to captive-release schemes, translocation or replanting, and removal may correspond to harvesting, culling or coppicing. Consequently, applications of PI control are broad in scope and importance, including pest or resource management, agriculture, horticulture and conservation. Its scope potentially extends to key and immensely timely societal challenges of the twenty-first century, such as food security (Godfray and Garnett 2014). Indeed, UNESCO's Mathematics of Planet Earth 2013 programme was "born from the will of the world mathematical community to learn more about the challenges faced by our planet and the underlying mathematical problems, and to increase the research effort on these issues", including "[a] growing population competing for the same global resources". In addition to the potential applications, our motivation for exploring the utility of PI controllers in ecological management is twofold: (a) their ease of computation and implementation, with very little knowledge required of the to-be-controlled system, and (b) their inherent robustness to various forms of uncertainty. We further elaborate on (a) and (b) in the manuscript and contend that these facets make PI control ideally suited for ecological management, where processes are subject to unknown disturbances and dynamic models are (possibly) highly uncertain. The PI controllers that we propose here do not seek to use measured data to update the underlying ecological model over time, by inferring parameters for instance, but the control does change in response to a measured variable.
In this sense and context, feedback control has parallels to adaptive management, an approach well known in the resource and ecological management literature (Holling 1978; Walters 1986; Williams 2011). Other authors have noted this connection as well: Heinimann (2010) proposes principles from control theory as a concept for scholars and practitioners in adaptive ecosystem management.
Our earlier paper, Guiver et al. (2015), introduces integral control and PI control, in a context of single management goals, for structured population models. These deterministic population or meta-population models stratify individuals according to some discrete or continuous age-, size- or stage-structure, and include matrix (P)opulation (P)rojection (M)odels (Caswell 2001; Cushing 1998) and (I)ntegral (P)rojection (M)odels (Easterling et al. 2000; Ellner and Rees 2006; Briggs et al. 2010). Guiver et al. (2015) considers regulation of single (scalar) observations or measurements to a prescribed set-point or, in ecological modelling parlance, achieves a single management goal or objective. It is reasonable to request, however, that more than one measurement is regulated, and that more than one per time-step management action is permitted. For example, when designing a replanting programme to conserve a declining plant population, regulating total abundance may not be as beneficial as thought when the composition of the resulting stratified population is dominated by the seed stage-class. It may be more desirable to control both total abundance and the abundance of a given stage-class, for instance, flowering plants. Alternatively, in sustainable harvesting, it may be desirable to harvest (that is, remove from) certain stage-classes whilst replenishing others, and still maintain a desired abundance of certain stages.
The application of PI control to the above multi-objective management problem is novel in itself and, we believe, a useful and timely contribution to the suite of tools available to population managers, conservation biologists and other end users. To present such a solution requires new mathematical results in control theory, for two reasons. First, population level models, such as matrix PPMs and IPMs, are examples of positive dynamical systems, and existing "off-the-shelf" PI controllers need not respect the necessary nonnegativity constraints. In an applied context the controller could instruct management actions that are counter-intuitive or, worse, meaningless, such as removing more individuals than are currently present. Second, when the measured variables are naturally constrained to be nonnegative, it is clear that not every nonnegative vector with more than one component is a feasible set-point. For example, if one measurement is always required to be larger than another, then this ordering must be preserved in the candidate set-point, as is the case in the plant example alluded to above. Therefore, in the present contribution we apply low-gain PI control with multiple management goals (so-called multi-input, multi-output systems) to examples in ecological management and develop low-gain PI control for discrete-time, positive state linear systems. The models and terminology are further explained throughout the manuscript. The material we present is an extension of Guiver et al. (2015), where an existing suite of results in control theory was drawn upon and further developed to address the nuanced situation of positive state variables and input constraints. Analogously, in regulating multiple outputs with multiple saturating inputs, we need to develop a different set of tools, described in the manuscript, and in particular draw upon recent positive state results in Guiver et al. (2014).
Our results apply to situations outside of ecology, adding to their appeal, and are novel, although there are similarities to the results of Nersesov et al. (2004). We compare and contrast their approach with ours in Remark 4.8.
Owing to its dual focus, the manuscript has the following deliberate structure. Section 2 seeks to further motivate PI control as a tool for ecological management, informally states our main result and illustrates its application through an example. The subsequent Sects. 3 and 4 form the technical heart of the manuscript and develop the mathematics summarised in Sect. 2. In order to extend the appeal of this contribution, including to a possibly non-mathematical audience, we have deliberately placed proofs of all novel results in "Appendix C". A second example is presented in Sect. 5 and the manuscript is concluded by Sect. 6 with a discussion. "Appendices A and B" contain model parameters used in the examples that are not given in the main text for ease of presentation and preliminary material required for the proofs of our results, respectively.

Motivation, main result and illustrative example
This section contains an informal overview of our main results and demonstrates their possible application. We seek as well to further motivate the present contribution by briefly discussing the distinction between robust and optimal control, particularly in the context of ecological management. A larger, more comprehensive, introduction to PI control in the same context is contained in our earlier manuscript (Guiver et al. 2015, Section 2) which, to avoid repetition, we have not reproduced fully here. We mention that Guiver et al. (2015, Section 2.1) compares and contrasts PI control with other theoretical approaches to ecological management available in the literature.
For the situation considered here, the key ingredients are:

- a managed population or resource that is changing over time (referred to as the to-be-controlled system or just system);
- the possibly disturbed observations or measurements (referred to as outputs);
- a management strategy that permits the addition or removal of individuals (referred to as control actions).

The outputs provide information about aspects of the population, say abundance of a strata, and the present PI control problem is to choose a series of control actions to subsequently manage these outputs, that is, to regulate them to prescribed quantities. A PI controller is, in essence, a mathematical model that uses functions of the measurements to determine present and future control actions.
To describe PI control, a model of the to-be-controlled system is required. We shall assume that the population is modelled by a deterministic, linear, stratified population model, typically a matrix PPM (Caswell 2001). PPMs are structured population models, meaning that the modelled population is partitioned into discrete age-, size- or developmental stage-classes (the latter may include larval, pupal, adult, etc.). A linear, time-invariant matrix PPM is given by

x(t+1) = Ax(t), x(0) = x^0, t = 0, 1, 2, ..., (2.1)

where x(t) denotes the structured population, partitioned into n stage-classes, with initial population distribution x^0, and A is an n × n componentwise nonnegative matrix. The time-steps t in (2.1) are assumed fixed: a week, month, or breeding cycle, for instance. The matrix A in (2.1) is often called the projection matrix, and contains life-history parameters of the population, such as recruitment, survival and transitions between stage-classes. The inclusion of measurements y(t) and control actions u(t) in (2.1) leads to the model

x(t+1) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = x^0, t = 0, 1, 2, ..., (2.2)

where the input vector u(t) has m components and is to-be-determined by the modeller. The terms B and C in (2.2) are n × m and p × n matrices, respectively, where p denotes the number of per time-step measurements taken. We note that, at any given time-step t, the entire population distribution (the state) x(t) may not be known (or known precisely), and consequently may not be used to help determine u(t). This is not necessarily a problem for feedback control: as we explain in Sect. 4.4.1, knowledge of x(t) is not required for PI control to succeed (nor is knowledge of the matrix A), and PI control provides so-called global results, in that they hold for any initial population distribution x^0. What is crucial to the efficacy of feedback control is access to the measured variable y(t). The key difference between the present contribution and Guiver et al. (2015) is that in the latter we restricted attention to m = p = 1, whereas here the situation m, p > 1 is permitted, so that numerous measurements are recorded and management actions taken: so-called management with multiple goals in ecological terminology, or the multi-input, multi-output case in control theoretic terminology.

Matrix PPMs (2.1) are examples of discrete-time, positive dynamical systems: "positivity" refers to the property that the state variables take only nonnegative values, typically denoting abundances, densities or concentrations. Positive dynamical systems form the appropriate framework for a variety of physically meaningful mathematical models and arise in a diverse range of fields, from biology, chemistry, ecology and economics to genetics, medicine and engineering (Haddad et al. 2010, p. xv). Owing to their importance in mathematical modelling, positive dynamical systems are well-studied, with textbooks by, for example, Berman et al. (1989), Krasnosel'skij et al. (1989) and Berman and Plemmons (1994). The theory of linear positive dynamical systems is rooted in the seminal works of Perron (1907) and Frobenius (1912) on nonnegative matrices (for a recent treatment see, for example, Berman and Plemmons 1994, Chapter 2). Control of positive dynamical systems leads to positive input control systems (Farina and Rinaldi 2000), where the input variables are also assumed to be positive. Presently, only the state x(t) and output y(t) in (2.2) need take componentwise nonnegative values, so-called positive state systems. Accordingly, u(t) may take negative values, provided that a nonnegative population distribution remains. Such a framework allows the modelling of control actions (or disturbances) such as harvesting, culling, pest management or predation; actions which, importantly, fall outside the existing positive systems theory.
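To make the controlled model (2.2) concrete, it is straightforward to simulate numerically. The following sketch uses a hypothetical three stage-class projection matrix; all parameter values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative values only)
A = np.array([[0.0, 0.0, 0.4],
              [0.5, 0.0, 0.0],
              [0.0, 0.6, 0.3]])
B = np.array([[1.0], [0.0], [0.0]])   # control adds individuals to stage 1
C = np.array([[0.0, 0.0, 1.0]])       # observe abundance of stage 3

def simulate(A, B, C, x0, u, T):
    """Iterate x(t+1) = A x(t) + B u(t), y(t) = C x(t) for T time-steps."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for t in range(T):
        ys.append(C @ x)
        x = A @ x + B @ u(t)
    return x, np.array(ys)

# With u = 0 the population declines asymptotically, since r(A) < 1 here
x_final, ys = simulate(A, B, C, x0=[10.0, 10.0, 10.0],
                       u=lambda t: np.zeros(1), T=200)
```

The zero-input run recovers the uncontrolled model (2.1) and illustrates the asymptotic decline that motivates intervention; a nonzero input function `u` models per time-step additions.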
As a concrete and illustrative example, we explore the potential utility of low-gain PI control by applying it to the management of a pronghorn (Antilocapra americana) population based on matrix models from Berger and Conner (2008). Pronghorn are native to Canada, Mexico and the US, and currently occur in western North America from Canada through to northern Mexico. Managed populations are found in Yellowstone National Park and across the continent numbers are generally stable, having recovered from near extinction in the 1920s. The species is susceptible, however, to habitat loss from urban and agricultural expansion and restriction of seasonal movements from fencing (Hoffmann et al. 2008). Pronghorn is legally hunted with permits, although the subspecies Sonoran pronghorn is endangered and populations in Arizona and Mexico are protected under the US Endangered Species Act.
The example also seeks to highlight the drawbacks of "off-the-shelf" PI control in this particular applied context and to motivate additional novel features we develop in Sect. 4. The pronghorn projection matrix model is an age-structured model, with time-steps denoting years, and is based on Berger and Conner (2008, Table 4, wolf-free site). The models provided there are for female pronghorn, although presently we have included males in the population as well. Consequently there are six stage-classes, denoting female and male neonates, yearlings and prime adults. The model parameters used may be found in "Appendix A". The spectral radius of the projection matrix A in (2.1) or (2.2) is λ = 0.9222 < 1, so that the uncontrolled population x(t) specified by (2.1) [or (2.2) with u(t) = 0 for every t] is declining asymptotically. Suppose, therefore, that the hypothetical management objectives are to raise the abundances of female and male prime adults to 120 and 100, respectively, from their initial abundances of c. 95 and c. 30, assuming a total initial population abundance of 300. In order that the population be regulated to the chosen set-point

r = (120, 100)^T, (2.3)

the female and male prime adult stage-classes must be observed each time-step, determining the matrix C in (2.2). To effect these changes, at least two per time-step management actions (the same number as observations) are required, and we assume that we may replenish female and male neonates, determining B in (2.2). The first and second components of the vector-valued input variable u(t) in (2.2) now denote how many female and male individuals are released per time-step, respectively. The first difficulty to overcome is to determine, given the particular A, B and C specified by the pronghorn model, whether it is possible to choose an input u(t) such that the output y(t) does indeed converge to r in (2.3).
Note that the state and output variables must remain nonnegative for a meaningful model and this nonnegativity requirement in turn imposes geometric constraints on the set of possible inputs u(t). For the sequel we record this problem as: (P1) which nonnegative set-points can be tracked asymptotically whilst preserving nonnegative state and output variables?
Informally, we say that set-points that may be asymptotically tracked with nonnegative state and output variables are feasible, and we demonstrate in "Appendix A" that the set-point r in (2.3) is indeed feasible. Figure 1 shows simulation results obtained by applying low-gain integral control to the pronghorn model. Although the output, here denoting measured abundance of each stage-class, converges to the chosen set-point r over time, four deficiencies of the "off-the-shelf" integral controller are demonstrated: (i) during time-steps 50-150, substantially more than 200 female neonates must be added to the population per time-step (Fig. 1a), which may be too large to be practical; (ii) during the same time-steps, the integral controller is instructing the removal of male neonates, that is, u_2(t) < 0 (Fig. 1a), which seems unnecessary and wasteful; (iii) most crucially, the resulting measurements y_2(t) of male prime adults are negative for some t (Fig. 1b), which is absurd for this model, and; (iv) the performance is very slow, predicting at least 500 years(!) to converge to the desired set-point.
Whilst we acknowledge that the model parameters have been chosen somewhat pathologically to emphasise these deficiencies, they do help motivate the present contribution quite markedly. Deficiencies (i)-(iii) above are addressed by considering a modified low-gain PI control model that includes input saturation. In words, negative inputs u(t) (that is, when the management strategy suggests removal of individuals) are replaced by zero, and a per time-step maximum bound is imposed on u(t), reflecting limited per time-step resources or management capability. As we explain in Sect. 4, doing so introduces a nonlinearity into the feedback model and establishing convergence of the output to the set-point is more challenging. For the sequel, we record: (P2) how can input saturation be included in low-gain PI control and still ensure that the desired set-point is tracked asymptotically by the output?
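The saturation just described is simple to implement: the control is the integrator state clamped componentwise to [0, u_max]. The following sketch is a minimal single-input, single-output toy example, not the pronghorn model; the system matrices, gain and bound are illustrative assumptions. For this set-point the required steady-state input lies below the bound, so the set-point is feasible and the saturated controller still achieves tracking.

```python
import numpy as np

# Toy stable positive system (illustrative values); spectral radius of A is 0.8
A = np.array([[0.0, 0.3],
              [0.8, 0.5]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

G1 = C @ np.linalg.solve(np.eye(2) - A, B)   # G(1) = C (I - A)^{-1} B
K = np.linalg.inv(G1)                        # so that K G(1) = I
g, u_max = 0.1, 5.0                          # low gain and per-step input bound
r = np.array([10.0])                         # set-point for the single output

x = np.zeros(2)
xc = np.zeros(1)                             # integrator state
for t in range(500):
    u = np.clip(xc, 0.0, u_max)              # saturation: no removals, capped additions
    y = C @ x
    x = A @ x + B @ u
    xc = xc + g * (K @ (r - y))              # integral action on the tracking error
y = C @ x
```

Here the steady-state input G(1)^{-1} r = 3.25 is below u_max = 5, so the clamp is inactive at equilibrium and the output settles at the set-point.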
Deficiency (iv), the slow rate of convergence of the feedback model, may be remedied by the use of a (P)roportional component as well as an (I) component, as we describe in the manuscript. Our main results are low-gain PI control models for positive state linear systems that address issues (P1) and (P2), stated as Theorem 4.6 and Corollary 4.7. We establish several robustness results in Sect. 4.4 that capture how the low-gain PI control systems can handle uncertainty. Figure 2 contains simulation results obtained by applying low-gain integral control with input saturation to the pronghorn model. From the simulations we see that none of the issues (i)-(iv) present in Fig. 1 appear, and a robust solution to the stated management problem is provided.
Having outlined a low-gain PI control solution to the above management problem, we comment on how the solution may be additionally combined with other management approaches present in the literature and, moreover, how feedback control differs from optimal control. These latter observations are intended to further motivate the present exploration of the utility of PI controllers in ecological management.
Remark 2.1 (i) An existing suite of management strategies proposed in ecological matrix modelling is based on tools from perturbation theory. Typically, modelled vital rates are altered with a view to obtaining some asymptotically desired dynamic behaviour (such as stasis or growth in conservation), which is described by replacing A in (2.1) with A + Δ, for some perturbation matrix Δ. Sensitivity (Demetrius 1969) or elasticity (de Kroon et al. 1986) analyses are often employed and use methods from calculus to determine the effect of small changes in particular vital rates on the resulting asymptotic behaviour. These calculations are used to inform where potential management or conservation strategies should invest their efforts. Numerous examples are present in the literature and we highlight, for example, shark conservation (Otway et al. 2004) and the effects of Brazil nut tree seed extraction on its demography (Zuidema and Boot 2002). Biologically, perturbation analysis denotes improving or degrading vital rates through environmental or demographic changes, the former, for instance, through improved quality of or access to food, or decreased mortality rates by protecting habitats. These methods are not directly comparable to PI control, as they do not denote the addition or removal of individuals, but may be combined with PI control approaches. We revisit the above pronghorn example in Sect. 5 and combine the low-gain PI control proposed here with a second management strategy. (ii) Low-gain PI control is an example of feedback control. Complementary to feedback control is optimal control which, to some audiences, may be synonymous with control theory itself. Here an input is chosen to achieve some desired dynamic behaviour as well as to minimise a prescribed functional, typically denoting the cost or effort of the management strategy in ecological applications. Optimal control has proven very popular in mathematical biology (Lenhart and Workman 2007).
Pontryagin's celebrated maximum principle (see, for example, Liberzon 2011, Chapter 4) has been employed in models for the optimal control of HIV (Kirschner et al. 1997), epidemics (Hansen and Day 2011) and vector-borne diseases (Blayneh et al. 2009). Techniques from optimal control have appeared extensively in the mathematical ecology, conservation and resource management literature, where an input to a control system denotes a management strategy that is applied to an ecological process, such as a modelled population. To name but a few examples, research by Hastings and collaborators has tackled optimal management of deterministic models for the invasive perennial deciduous grass Spartina by applying linear programming (Hastings et al. 2006), so-called linear quadratic optimal control (Blackwood et al. 2010) or dynamic programming (Lampert et al. 2014). Elsewhere, applications of Pontryagin's maximum principle have appeared in the fisheries management literature (Kellner et al. 2011; Moeller and Neubert 2013). Solutions to population management problems have also been proposed by optimising prescribed cost-functionals in the situation when the underlying dynamics are assumed stochastic, such as those given by (P)artially (O)bservable (M)arkov (D)ecision (P)rocesses (Monahan 1982). Stochastic dynamic programming techniques are then used to numerically compute optimal strategies. Substantial research has been undertaken by Possingham and collaborators, including Shea and Possingham (2000), Chadès et al. (2011) and Regan et al. (2011). Whilst the design of management strategies via optimal control has an appeal in that it would minimise some specified cost, there are downsides. First, computing optimal controls is often analytically intractable or computationally highly expensive [suffering from, for example, the "curse of dimensionality", coined in Bellman (1957); see more recently Powell (2007)], and so optimal controls can be impractical to implement.
Second, and often overlooked, it is not always clear that "off-the-shelf" optimal control approaches will respect positivity of the system states (although one exception we are aware of in the control literature is Nersesov et al. 2004). Third, and a more serious and pressing obstacle, population-level ecological models are typically highly uncertain. Uncertainty is a broad term in ecology and ecological modelling, although in this context both Regan et al. (2002) and Williams (2001) contain helpful and interesting codices of the term. Presently, uncertainty encompasses choice of model structure (for example, type of model, number of stage-classes or any modelled density-dependence), parametric uncertainty (for instance, how to accurately fit vital rates for a chosen model) and unknown disturbances of the dynamics (such as unmodelled immigration or sampling error). Therefore, we argue that it is essential that ecological management strategies, be it for sustainable harvesting, pest management or conservation, are designed to be robust. Informally, a control scheme is robust with respect to a source of uncertainty if it performs as intended in spite of that uncertainty. Another facet of robust control is quantifying the extent to which a control objective fails when operating in uncertain or unknown conditions. The study of robust control [with textbooks by, for example, Green and Limebeer (1995) or Zhou and Doyle (1998)] was in part born out of the hugely important observation by control engineers in the 1970s that optimal control techniques need not be robust and, moreover, that over-optimisation leads to fragility (Doyle 1978). Indeed, as we sought to emphasise in Guiver et al. (2015), so-thought optimal controls can have disastrous performance when applied to an uncertain model, hence our continued exploration of robust feedback control in ecological management.

Problem formulation: multi-input, multi-output low-gain PI control
Sections 3 and 4 contain the technical heart of the manuscript where we formulate both the problem exposited in Sect. 2 and its solution. Specifically, in this section we recap so-called multi-input, multi-output low-gain PI control and in the next we extend known results to address the issues (P1) and (P2). Recall that proofs of all novel stated results are contained in "Appendix C".

Notation
We introduce some notation, although most of the notation we use is standard or is defined as it is introduced. Briefly, we let N_0, N, R and C denote the sets of nonnegative integers, positive integers, real numbers and complex numbers, respectively. For a positive integer n, denoted n ∈ N, we let R^n and C^n denote real and complex n-dimensional Euclidean space, respectively, equipped with the usual two-norm, always denoted by ‖·‖. As usual, we let R^1 = R and C^1 = C. For m ∈ N, R^{n×m} and C^{n×m} denote the sets of n × m matrices with real and complex entries, respectively. We shall denote by I the identity matrix, used consistently without specifying its dimensions. The notation ‖·‖ also denotes the operator two-norm induced from ‖·‖ on C^n or R^n. We denote by r(A) the spectral radius of A ∈ C^{n×n} which, recall, is given by

r(A) = max { |λ| : λ ∈ σ(A) },

where σ(A) denotes the spectrum of A, its set of eigenvalues when A is a matrix. The state x of the uncontrolled linear model (2.1) converges to zero or diverges to infinity when r(A) < 1 or r(A) > 1, respectively (the latter at least for some nonzero initial states x^0). The symbols R^n_+ and R^{n×m}_+ denote the sets of componentwise nonnegative vectors and matrices, respectively. A vector z in R^n belongs to R^n_+ if z_k ≥ 0 for every k, where z_k denotes the kth component of z. We call vectors z ∈ R^n_+ nonnegative, and say that z ∈ R^n_+ is positive if z_k > 0 for every k. For a vector z ∈ R^n, the term ‖z‖_1 denotes the vector one-norm of z, defined as

‖z‖_1 = |z_1| + |z_2| + ... + |z_n|.

The superscript T denotes matrix or vector transposition, so that if z ∈ R^n then z^T is a row vector.
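For readers wishing to experiment numerically, the quantities above are available directly in standard linear algebra libraries. A brief sketch in Python (the matrix and vector are illustrative, not taken from the paper):

```python
import numpy as np

A = np.array([[0.0, 0.3],
              [0.8, 0.5]])                         # illustrative nonnegative matrix

spectral_radius = max(abs(np.linalg.eigvals(A)))   # r(A) = max |lambda|, lambda in sigma(A)

z = np.array([1.0, -2.0, 3.0])
one_norm = np.linalg.norm(z, 1)                    # ||z||_1 = |z_1| + ... + |z_n|
```

For this A the eigenvalues are 0.8 and -0.3, so r(A) = 0.8 < 1 and the uncontrolled model (2.1) with this projection matrix converges to zero.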

Multi-input, multi-output low-gain PI control
For the most part in the present manuscript we consider the discrete-time linear model (2.2), where

A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, (3.1)

for n, m, p ∈ N and given x^0 ∈ R^n. The variables u, x and y denote the input, state and output of (2.2), respectively. Although our motivating applications are the management of ecological models, where the input, state and output typically have clear biological interpretations, here we are describing the more general situation. In particular, PI control does not require nonnegativity assumptions on A, B or C. We shall impose additional structure on (2.2) and (3.1) in Sect. 4. The transfer function G of the linear system (2.2) [also of the triple (A, B, C)] is the function of a complex variable defined as

G(z) = C(zI − A)^{−1}B, (3.2)

where recall that I in (3.2) is the (here n × n) identity matrix. The function G is certainly well-defined for every complex z that is not an eigenvalue of A and, moreover, provides a relationship between an input u and the resulting output y related by (2.2). More information about G is contained in "Appendix B" but, it suffices here to note that if r(A) < 1 then G(1) is well-defined and has the property that if u has a limit u_∞ then, for any initial state x^0, y in (2.2) has the limit

lim_{t→∞} y(t) = G(1)u_∞. (3.3)

From the Neumann series definition and the limit relationship (3.3) it follows that the (i, j)th entry of G(1) is the eventual ith measurement when the jth input variable is one for all times. The interpretation is somewhat similar to that of the fundamental matrix in matrix population modelling (Caswell 2001, p. 112). By conducting controlled experiments, such as in applications in electrical circuits, it is sometimes possible to obtain an estimate of G(1) (Penttinen and Koivo 1980; Lunze 1985), although this is possibly inappropriate in ecological management.
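The limit relation (3.3) is easy to verify numerically. The sketch below, using an illustrative stable matrix rather than a model from the paper, computes G(1) = C(I − A)^{-1}B and checks that the output under a constant input converges to G(1)u_∞:

```python
import numpy as np

A = np.array([[0.0, 0.3],
              [0.8, 0.5]])          # illustrative, r(A) = 0.8 < 1
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

G1 = C @ np.linalg.solve(np.eye(2) - A, B)   # G(1) = C (I - A)^{-1} B

# Simulate (2.2) with the constant input u(t) = u_inf and compare with (3.3)
x = np.zeros(2)
u_inf = np.array([1.0])
for t in range(300):
    x = A @ x + B @ u_inf
y_limit = C @ x
```

Because u is held at one for all times, the single entry of `y_limit` approximates the (1, 1) entry of G(1), mirroring the interpretation of G(1) given above.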
Integral control has been developed in the situation r(A) < 1 to solve the so-called set-point regulation problem or objective, namely, to generate an input u such that the resulting outputs y of (2.2) converge to a prescribed set-point r ∈ R^p. The objective should be achieved independently of the initial state x^0 and with only knowledge of y and G(1). The internal model principle (Francis and Wonham 1976) dictates that in order to achieve the set-point regulation objective via feedback control, the control strategy must contain an integrator, or synonymously an integral controller, which, when connected via feedback to (2.2), leads to:

x(t+1) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = x^0, (Ia)
x_c(t+1) = x_c(t) + gK(r − y(t)), x_c(0) = x_c^0, (Ib)
u(t) = x_c(t), (Ic)

which together we denote by (I). Here (Ib) is the integrator, driven by the tracking error r − y(t), and (Ic) is a feedback connection from (Ib) to (Ia) via the input u(t). The terms K ∈ R^{m×p}, g > 0 and x_c^0 ∈ R^m in (Ib) are design parameters and r is the desired set-point.
The following "low-gain" result for integral control is well-known and based on, for example, Logemann and Townley (1997, Theorem 2.5, Remark 2.7). The term "low-gain" refers to the fact that the positive parameter g in (I) (often called a "gain") is required to be sufficiently small.
Theorem 3.1 (Low-gain integral control) Suppose that the integral control system (I) with m = p satisfies (A1) r(A) < 1, and; (A2) K and G(1) are such that every eigenvalue of the product KG(1) has positive real part. Then, there exists g* > 0 such that for all g ∈ (0, g*), all r ∈ R^p and all (x_0, x_c^0) ∈ R^n × R^m, the solution (x, x_c) of (I) has the properties: (a) lim_{t→∞} x_c(t) = G(1)^{−1}r; (b) lim_{t→∞} x(t) = (I − A)^{−1}BG(1)^{−1}r; (c) lim_{t→∞} y(t) = r.

When r(A) ≥ 1 the conclusions of Theorem 3.1 do not apply to (I). However, in this situation (I) can be modified by including a (P)roportional feedback component. Specifically, the feedback connection (Ic) is replaced by

u(t) = x_c(t) − F_1x(t), (3.4)

if the state x is known and available to the modeller, or by

u(t) = x_c(t) − F_2y(t), (3.5)

when only the output y is available. The matrices F_1 ∈ R^{m×n} and F_2 ∈ R^{m×m} are additional design parameters. We denote by (PI1) and (PI2) the combinations of (Ia), (Ib) and (3.4), or (Ia), (Ib) and (3.5), respectively. Inserting the expression for u in (PI1) into the dynamic equation for x, also in (PI1), and introducing the new input variable v := x_c yields

x(t + 1) = (A − BF_1)x(t) + Bv(t), y(t) = Cx(t),

demonstrating that (PI1) is an instance of (I), only with A replaced by A − BF_1. The same argument shows that (PI2) simplifies to (I) as well, now with A replaced by A − BF_2C. We do not give the details. The upshot is that Theorem 3.1 is applicable to (PI1) provided that F_1 can be chosen such that A_1 := A − BF_1 satisfies (A1) and K can be chosen such that K and the transfer function of (A_1, B, C) together satisfy (A2). In usual situations the crucial requirement is the choice of F_1 such that r(A − BF_1) < 1, as a suitable K in (PI1) is then given by the inverse of the transfer function of (A_1, B, C) evaluated at one. The analogous statements are true for (PI2). Theorem 3.1 is the basis for the robust feedback control solution to the multiple management goals problem, motivated in Sect. 2. Additional features need to be included in the model (I) to cope with the demands of ecological management; these are introduced in the next section.
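To make Theorem 3.1 concrete, the following sketch (hypothetical single-input, single-output numbers, not from the paper) implements the integrator (Ib) with the connection (Ic) for a stable plant; since here G(1) > 0, the choice K = 1 satisfies (A2):

```python
import numpy as np

# Illustrative SISO plant with r(A) < 1.
A = np.array([[0.0, 0.0, 0.4],
              [0.5, 0.0, 0.0],
              [0.0, 0.6, 0.7]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
G1 = (C @ np.linalg.solve(np.eye(3) - A, B))[0, 0]   # G(1) > 0

r, g, K = 4.0, 0.1, 1.0          # set-point, small gain, K = 1
x = np.zeros((3, 1))
xc = 0.0                          # integrator state x_c(0)
for _ in range(2000):
    u = xc                        # (Ic): u(t) = x_c(t)
    y = (C @ x)[0, 0]             # output of (Ia)
    x = A @ x + B * u             # (Ia): state update
    xc = xc + g * K * (r - y)     # (Ib): integrator update
print(y, r, xc, r / G1)
```

The printed values illustrate the conclusions of the theorem: y(t) converges to r and x_c(t) converges to G(1)^{−1}r.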
We conclude the current section by making some remarks on the roles of the dimensions of the input and output spaces, m and p, respectively, and also assumption (A2) that appears in the above theorem.
Remark 3.2 (i) In the case that r(A) < 1, we see from (3.3) that the set of possible limiting outputs equals the image of G(1), which is at most m-dimensional. For every r ∈ R^p to belong to this image we necessarily require that m ≥ p and that G(1) is surjective. In words, at least as many control actions are needed as outputs are to be regulated. When m > p there is some redundancy, or non-uniqueness, in the choice of inputs. (ii) For any m, p ∈ N, assumption (A2) implies that KG(1) is invertible, as zero is not an eigenvalue of KG(1). In this case G(1) must be injective: if G(1)v = 0 for some v ∈ R^m then KG(1)v = 0 and thus v = 0. Therefore, by the rank-nullity theorem, m ≤ p. In order for every reference r ∈ R^p to be a candidate limit of the output, we require that G(1) is surjective, hence m ≥ p (as noted in (i)). Combined, we see that necessarily m = p. Therefore, m = p and (A2) together imply that G(1) is invertible, and hence the inverses in parts (a) and (b) of Theorem 3.1 make sense. (iii) If G(1) is not known exactly then K can be based on an estimate of (the inverse of) G(1), which we investigate further in Sect. 4.4.3.

Multi-input, multi-output low-gain PI control for positive systems with input saturation
Having recapped low-gain PI control for linear systems in Sect. 3, we now introduce additional structure that arises from considering positive state linear systems, our primary focus, and present a low-gain, multi-input, multi-output PI controller. Specifically, we additionally assume that (A, B, C) in (2.2) satisfy

A ∈ R^{n×n}_+, B ∈ R^{n×m}_+, C ∈ R^{p×n}_+, (4.1)

and that all initial states x_0 are componentwise nonnegative, so that x_0 ∈ R^n_+. As in Theorem 3.1, in our subsequent low-gain PI control results we shall assume that m = p (see Remark 3.2 for motivation of this choice).
The framework (2.2) and (4.1) includes matrix PPMs, where the input, state and output of (2.2) denote the control action, the stage- or age-structured population abundances, and some measurement or observation of the population, respectively. In this applied context the assumption that m = p means that as many per time-step measurements of the population are made as there are per time-step management actions available.
We seek a version of Theorem 3.1 for asymptotic tracking of a chosen nonnegative set-point r ∈ R^m_+. As motivated in Sect. 2, the two issues recorded there as (P1) and (P2) must be overcome. To that end, in Sect. 4.1 we describe the set of feasible set-points, that is, the candidate limits of the output of a positive state linear system for which nonnegativity of the state and output variables is preserved (P1). Then, in Sect. 4.2, we establish stability of a low-gain integral control system with the additional feature that the input to the state equation is saturated (P2). Saturating the input introduces a nonlinearity into the feedback system, and it must be established that the conclusions of Theorem 3.1 still hold. Recall that the motivation for saturating the input is to avoid removing individuals when conservation is the ultimate goal, and to reflect the realistic constraint of per time-step resource or capacity limits.

Feasible set-points for positive state control systems
In this section we answer the question (P1): to which nonnegative set-points can the output y of (2.2) and (4.1) converge? Although we shall apply these results to inputs u generated by a PI controller, for now it suffices to consider convergent inputs. For that reason we do not need to impose the restriction m = p in this section. We introduce some terminology and notation.
Definition 4.1 For (A, B, C) as in (3.1) we say that r ∈ R p is trackable if there exists a convergent input u such that the output y of (2.2) converges to r as t tends to infinity. Supposing further that (A, B, C) satisfy (4.1) we say that r ∈ R p + is trackable with positive state if r is trackable and moreover the state x(t) of (2.2) is componentwise nonnegative for every t ∈ N 0 . We call the set of such r the set of trackable outputs of (A, B, C) with positive state.
We seek to characterise the set of trackable outputs of (A, B, C) with positive state. For X ∈ R s×t + , where s, t ∈ N, the set X + denotes all nonnegative linear combinations of the columns of X , which is a subset of R s + . We also denote componentwise nonnegativity of a matrix X or vector v by X ≥ 0 or v ≥ 0 (respectively, also 0 ≤ X and 0 ≤ v).
We remind the reader that the subsequent claims are proved in "Appendix C".

Lemma 4.2 Suppose that (A, B, C) is given by (4.1) and that r(A) < 1. Then G(1) ≥ 0 and every r ∈ G(1)_+ is trackable with positive state.
Next we recall an assumption from Guiver et al. (2014), 2 which pertains to a nonnegative pair (A, B) ∈ R^{n×n}_+ × R^{n×m}_+:

(H) There exists F ∈ R^{m×n}_+ such that A_1 := A − BF is nonnegative and, for every v ∈ R^n_+ and w ∈ R^m, if A_1v + Bw ≥ 0 then w ≥ 0.

Assumption (H) for the pair (A, B) captures the situation whereby for any nonnegative x it is possible to choose negative u such that Ax + Bu is "as small as possible", yet still nonnegative. Indeed, the choice of u that achieves this is u = −Fx. Assumption (H) always holds if B = b = e_i, the ith standard basis vector, as then the required F is the ith row of A, so that A_1 = A − bF is A with its ith row replaced by zeros, and if A_1v + bw ≥ 0 then, by inspection of the ith component, necessarily w ≥ 0. Guiver et al. (2014, Lemma 2.1) contains a constructive algorithm for checking whether assumption (H) holds for any pair (A, B), and determines the required F (which is unique) when it exists.
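The B = e_i case of assumption (H) can be checked directly; the following sketch uses an illustrative matrix (all values hypothetical) and takes F to be the ith row of A:

```python
import numpy as np

# B = e_i case of assumption (H): F = i-th row of A zeroes that row of
# A_1 := A - e_i F, so A_1 v + e_i w >= 0 forces w >= 0 (i-th component).
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.2]])
i = 1                                  # control enters stage 2 (0-based index)
b = np.zeros((3, 1)); b[i] = 1.0       # B = e_i
F = A[i:i+1, :]                        # F = i-th row of A (the unique choice)
A1 = A - b @ F                         # A_1 is A with its i-th row zeroed
assert (A1 >= 0).all() and np.allclose(A1[i], 0)

# For any v >= 0, the i-th component of A_1 v + b w is exactly w, so
# nonnegativity of A_1 v + b w forces w >= 0.
v = np.array([[1.0], [2.0], [0.5]])
w = -0.3
print((A1 @ v + b * w)[i, 0])   # equals w
```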

Assumption (H) always holds for any
We have recalled assumption (H) because if the (A, B) component of (2.2) satisfies (H) then there exists a characterisation of the set of trackable outputs of (A, B, C) with positive state.

Proposition 4.3 Suppose that (A, B, C) is given by (4.1), r(A) < 1 and additionally that the pair (A, B) satisfies assumption (H). Then the set of trackable outputs of (A, B, C) with positive state is precisely equal to G_{CA_1B}(1)_+, where G_{CA_1B} denotes the transfer function of the triple (A_1, B, C), with A_1 := A − BF and F as in assumption (H).
The next result provides a recipe for enlarging the guaranteed set of possible trackable outputs with positive state, particularly in the case that (H) is not satisfied.

Lemma 4.4 Suppose that (A, B, C) is given by (4.1) and that r(A) < 1. (a) Every r ∈ G(1)_+ is trackable with positive state. (b) If F ∈ R^{m×n}_+ is such that A − BF ≥ 0, then every r ∈ G_{C(A−BF)B}(1)_+ is trackable with positive state, where G_{C(A−BF)B} denotes the transfer function of the triple (A − BF, B, C).
Remark 4.5 A straightforward adjustment to the proof of Lemma 4.4 demonstrates that the sets G_{CĀB}(1)_+ have a monotonically decreasing nested structure with respect to the partial ordering of componentwise nonnegativity on A, in that A ≤ Ā implies G_{CĀB}(1)_+ ⊆ G_{CAB}(1)_+, where A ≤ Ā means that 0 ≤ Ā − A. The largest possible set that can be achieved by this process is CB_+ and occurs when F ≥ 0 can be chosen such that A − BF = 0. In this case the set of trackable outputs of (A, B, C) with positive state must contain CB_+. Proposition 4.3 demonstrates that, when assumption (H) holds, G_{CA_1B}(1)_+ is the largest possible set for tracking with positive state.
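The nesting described above can be checked numerically. In the sketch below (illustrative matrices, not from the paper), F ≥ 0 zeroes two rows of A, and each column of G_{CAB}(1) is recovered as a nonnegative combination of the columns of G_{CA_1B}(1), so the cone for A_1 contains that for A:

```python
import numpy as np

# Cone nesting sketch: A1 := A - B F <= A with F >= 0, and the feasible cone
# generated by columns of G_{CA1B}(1) contains that of G_{CAB}(1).
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.2]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
F = np.array([[0.1, 0.2, 0.0],    # F >= 0 zeroes rows 1 and 3 of A
              [0.0, 0.4, 0.2]])
A1 = A - B @ F
assert (A1 >= 0).all()

def G1(M):
    # steady-state gain of the triple (M, B, C)
    return C @ np.linalg.solve(np.eye(3) - M, B)

Gold, Gnew = G1(A), G1(A1)
# Express each column of G_{CAB}(1) in terms of the columns of G_{CA1B}(1):
coeffs = np.linalg.solve(Gnew, Gold)
print(coeffs)   # all coefficients nonnegative: cone containment holds here
```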

Low-gain integral control with input saturation
In this section we address question (P2) by demonstrating that suitable adjustments to the integral control model (I), incorporating saturation on the input, achieve set-point regulation, as well as bounding the per time-step input and preserving nonnegativity of the state and output variables. Recall that the three-faceted motivation for saturating the input is to: (i) allow for the inclusion of per time-step bounds representing resource or capacity constraints associated with the implementation of a management strategy; (ii) prevent negative control signals, which are particularly problematic when conservation is the goal, and; (iii) preserve nonnegativity of the state and output variables. We next introduce the input saturation function, which is incorporated into a low-gain integral control model in (Iaw). For given U > 0 define the function sat_U : R → [0, U] by

sat_U(s) = 0 if s < 0, sat_U(s) = s if 0 ≤ s ≤ U, sat_U(s) = U if s > U,

an example of which is graphed in Fig. 3. The diagonal saturation function sat is defined as the componentwise combination of sat_{U_i} functions as follows:

sat(v) = (sat_{U_1}(v_1), sat_{U_2}(v_2), . . . , sat_{U_m}(v_m))^T, v ∈ R^m.

Here the constants U_i > 0 for i ∈ {1, 2, . . . , m} are chosen and in applications denote the per time-step bound on the ith component of the input. To incorporate input saturation into a low-gain integral control model we consider:

x(t + 1) = Ax(t) + B sat(x_c(t)), y(t) = Cx(t), x(0) = x_0,
x_c(t + 1) = x_c(t) + gK(r − y(t)) + E(sat(x_c(t)) − x_c(t)), x_c(0) = x_c^0, (Iaw)
u(t) = sat(x_c(t)),

where E ∈ R^{m×m} is a design parameter additional to those appearing in (I) and is discussed in more detail in Sect. 4.3. Our main result of the manuscript is Theorem 4.6 below, which mirrors Theorem 3.1, and guarantees that the low-gain integral control model with input saturation (Iaw) achieves asymptotic tracking of the output of (Iaw) to a prescribed set-point under the (same, previously employed) assumptions (A1) and (A2) and a known choice of E. The theorem provides solutions to problems (P1) and (P2).
Theorem 4.6 Suppose that (Iaw) satisfies (A1) and (A2) and choose E := gKG(1), where g > 0 is as in (Iaw). Then, there exists g* > 0 such that for all g ∈ (0, g*), all r ∈ G(1)_+ as in (4.5) and all (x_0, x_c^0) ∈ R^n_+ × R^m_+, the solution (x, x_c) of (Iaw) satisfies x(t) ≥ 0 for each t ∈ N_0 and has the properties (a), (b) and (c) of Theorem 3.1.

By appealing to the results of Sect. 4.1, including a proportional component in the feedback law in (Iaw) gives rise to a larger set of candidate set-points. Let (PI1aw) denote the feedback system which differs from (Iaw) only by the inclusion of an additional proportional state-feedback component, as in (3.4), and let (PI2aw) denote the analogous system with the output-feedback component of (3.5). We present the following corollary for the low-gain PI control systems (PI1aw) and (PI2aw).
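A minimal simulation sketch of the saturated anti-windup controller (Iaw) follows, with hypothetical matrices and bounds U_i; taking K = G(1)^{−1} gives KG(1) = I, so (A2) holds, and E = gKG(1) = gI:

```python
import numpy as np

# Sketch of (Iaw): saturated integral control with static anti-windup term.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.2]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
G1 = C @ np.linalg.solve(np.eye(3) - A, B)
K = np.linalg.inv(G1)               # K G(1) = I satisfies (A2)
g = 0.1
E = g * K @ G1                      # anti-windup component, here g I
U = np.array([3.0, 3.0])            # per time-step input bounds U_i
sat = lambda w: np.clip(w, 0.0, U)  # diagonal saturation between 0 and U_i

v = np.array([1.0, 0.8])            # desired limiting input, inside [0, U]
r = G1 @ v                          # feasible set-point in G(1)_+
x, xc = np.zeros(3), np.zeros(2)
for _ in range(3000):
    u = sat(xc)                     # saturated input
    y = C @ x
    x = A @ x + B @ u
    xc = xc + g * (K @ (r - y)) + E @ (u - xc)   # integrator + anti-windup
print(C @ x, r)   # output converges to r; state and input stay nonnegative
```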

Comparing and contrasting low-gain feedback systems (I) and (Iaw)
In this section we record some observations on the low-gain integral control system (Iaw), the above theorem and corollary, and their relation to other published results. The integral control scheme (Iaw) differs from (I) by the saturation function in the definition of u, and by the term involving E appended to the controller state dynamics. The term involving E is crucial and, intuitively, acts as a correction term, activating at time-steps t when the integral control state x_c(t) saturates, meaning that sat(x_c(t)) ≠ x_c(t). The input is not saturated when sat(x_c(t)) = x_c(t) and for these time-steps the term in (Iaw) involving E is zero and plays no role. Loosely speaking, at these times (Iaw) behaves as though there is no saturation, the resulting model is linear and Theorem 3.1 applies. Theorem 4.6 makes the previous assertion rigorous.
The feedback system (Iaw) with E = 0 was considered in Guiver et al. (2015) in the specific so-called single-input, single-output case (meaning m = p = 1), so that B = b and C = c^T are vectors. Here assumption (A1) is as before, and assumption (A2) reduces to G(1) > 0 (it suffices to take K = 1). However, in contrast to the situation in Guiver et al. (2015), saturating a multi-input (m > 1) control signal can be inherently destabilising, resulting in the desired set-point regulation objective not being achieved. Roughly, if E = 0 then the control signal may get 'stuck' in the saturating region, and the resulting failure is attributed to what is known as "actuator saturation" or "integrator windup" in the control engineering literature (Åström and Rundqwist 1989). Anti-windup control refers to the study of mechanisms to alleviate or remove windup in PI controllers and, owing to its importance in applications, is an extensively studied topic. The chronological bibliography of Bernstein and Michel (1995), already two decades old, contains some 250 references. We refer the reader to Tarbouriech and Turner (2009) for a more recent overview of anti-windup control. There are many possible mechanisms for choosing the matrix E that appears in (Iaw), also known as a static anti-windup component. The advantages of our choice of E in Theorem 4.6 and elsewhere are that it: (i) is straightforward to compute and thus implement; (ii) possesses demonstrable robustness to model uncertainty, and; (iii) can be extended to a class of infinite-dimensional systems.
For readers less familiar with (or indeed less interested in) anti-windup control, the key feature of the present discussion is that the term involving E in (Iaw) is crucial and should not be omitted. We reiterate that although our results are aimed at ecological models, they apply to any positive state linear system described by (2.2). As far as we know, the anti-windup method we propose and its proof are novel in a control theory context as well.
One approach to anti-windup control present in the literature determines the anti-windup component E via the solution of a set of certain linear matrix inequalities (LMIs); see, for example, Mulder et al. (2001) or Silva and Tarbouriech (2006). Although these LMIs can often be solved numerically and can result in other performance criteria being met (such as so-called "bumpless transfer"), they introduce another level of complexity for the modeller. Moreover, since they use Lyapunov-based arguments, they seemingly do not extend across to systems that have infinite-dimensional Banach spaces as state-spaces (thus precluding IPMs, for instance).
Remark 4.8 (i) Although when r(A) < 1 no F_1 or F_2 component is required to apply Theorem 4.6, the use of (PI1aw) or (PI2aw) often results in faster convergence than that of (Iaw), highlighted as issue (iv) in Sect. 2. Moreover, if F_1 ∈ R^{m×n}_+ is such that A − BF_1 ≥ 0, then Lemma 4.4 (b) implies that there is a larger choice of possible references achievable by (PI1aw) than by (Iaw), which thus encourages the use of PI control even in the case that r(A) < 1. Similar comments apply to F_2 for the (PI2aw) system. (ii) If K and G(1) are such that KG(1) is positive semi-definite then it can be shown that the conclusions of Theorem 4.6 hold for (Iaw) with E = 0, that is, with no anti-windup component. Although the choice K = G(1)^{−1} guarantees this condition, such a choice requires exact knowledge of G(1), and the requirement that KG(1) is positive semi-definite is very non-robust to parameter uncertainty. For this reason we have insisted on including the anti-windup component E.

As mentioned in the introduction, feedback control that preserves nonnegativity of the state and solves a non-zero state-regulation problem has been considered in Nersesov et al. (2004). The goals of that paper and ours here are similar, but there the authors work in continuous-time, and use a feedback derived from a constrained optimal control problem (as opposed to a low-gain integral controller) to steer the state to a prescribed non-zero equilibrium. They do not consider input saturation to avoid negative states but instead constrain the structure of the inputs. Their work builds on that of De Leenheer and Aeyels (2001). Roszak and Davison (2009) solve the continuous-time, nonnegative output regulation problem (also called the servomechanism problem, hence their title) using low-gain integral control. There, the authors determine the model parameter K in (I) using optimal control results, a different approach to ours.
Another key difference between that work and ours is that there the input is not saturated (that is, bounded) from above and thus, as we understand it, "integrator windup" is not an issue.

Robustness of low-gain PI control
The efficacy of the low-gain PI control systems considered so far is predicated on several modelling assumptions: (U1) the system of interest is accurately modelled by (2.2) and (4.1); (U2) there are no external signals or noises affecting the dynamics of the state x or the input u; (U3) there is no measurement or sampling error in y; (U4) the steady-state gain G(1) is known. In practice, all four of these assumptions are likely to be violated and thus here we quantify to what extent low-gain PI control is robust to failures of (U1)-(U4). By doing so we seek to describe how concepts from robust feedback control apply to sources of uncertainty that arise in ecological modelling. A more detailed discussion may be found in Guiver et al. (2015, Section 3.1); briefly, we address each source of uncertainty in turn in the subsections that follow.

Robustness to choice of model structure and model parameters
When modelling ecological processes, such as managed populations, there is often a plethora of models to choose from that all attempt to capture the same underlying dynamics. Within structured population models of the form (2.1) there are age- or size-based models, which partition the life cycle (perhaps a continuum of stages) into predetermined discrete stage-classes. These choices imply that there is choice, or indeed uncertainty, in A, B and C in (2.1), challenging (U1). The state dimension n may even be uncertain. Low-gain PI control is robust to this source of uncertainty in the sense that knowledge of A, B and C is not required to implement it. The measured variable y(t) is required, and it is assumed that y(t) = Cx(t) for some choice of C, but C itself is not needed. Rather, A, B and C are required to satisfy the assumptions (A1) and (A2). Assumption (A1) does not need A to be known, and simply means that the population of interest is in decline. Recall that when seeking to use PI control to reduce a growing population, assumption (A1) amounts to the requirement that the population can be stabilised (that is, made to decline) by state- or output-feedback; see (PI1) and the discussion below it. Assumption (A2) does require knowledge of G(1) to determine a suitable K (and E for (Iaw)), which may be determined from A, B and C, but may also be known by experiment or experience. As we explain in Sect. 4.4.3, an estimate of G(1) may be sufficient to determine a K such that K and G(1) together satisfy (A2). Finally, we comment that since assumptions (A1) and (A2) are necessary for low-gain integral control (as well as sufficient), we cannot allow any greater model uncertainty.

Robustness to external disturbances
External disturbances affecting the state, input and output may be included in the original model (2.2) by writing

x(t + 1) = Ax(t) + Bu(t) + d_1(t), y(t) = Cx(t) + d_2(t), t ∈ N_0, (4.6)

where d_1(t) ∈ R^n and d_2(t) ∈ R^p are typically unknown. In a population model d_1 may denote either a disturbance to the population, such as (unmodelled) immigration, emigration or predation, or an input error, meaning that the intended input u(t) is disturbed. Similarly, d_2 denotes some form of measurement or sampling error. The inclusion of d_1 and d_2 seeks to address the assumptions (U2) and (U3). A reasonably general framework is to assume that d_1 and d_2 in (4.6) are bounded, and of course are such that x and y remain nonnegative. We refer the reader to Eager et al. (2014) and the references therein for more information on the impacts of nonnegative disturbances on populations modelled by matrix PPMs. When only boundedness of d_1 and d_2 is assumed then we cannot in general expect the same convergence of the output y of the feedback system (4.6) connected with a low-gain PI controller as that exhibited by (Iaw), (PI1aw) or (PI2aw). The next result provides upper bounds on the difference of the state and output from their respective asymptotic limits in terms of the initial error and the maximum values of d_1 and d_2. The result is an (I)nput-to-(S)tate-(S)tability estimate and we refer the reader to Sontag (2008) for more background on ISS.
Proposition 4.9 Suppose that the low-gain integral control system (4.7) with bounded disturbances d_1 and d_2 satisfies (A1) and (A2), and choose E := gKG(1), where g > 0 is as in (Iaw). Then, there exists g* > 0 such that for all g ∈ (0, g*), all r as in (4.5) and all (x_0, x_c^0) ∈ R^n_+ × R^m_+, the deviation of the solution (x, x_c) from its disturbance-free asymptotic limit is bounded in terms of the initial error and the supremum norms of d_1 and d_2.

Low-gain PI control without saturation or positivity constraints is known to have the desirable property that convergent input disturbances d_1 = Bf_1, for some disturbance f_1, are rejected by the integral controller, meaning that the output still converges to the desired set-point. Meanwhile, convergent output disturbances d_2 result in asymptotic tracking of the output to the set-point offset by the limit of the disturbance. A convergent output disturbance includes constant disturbances, which may, for example, correspond to a systematic or persistent measurement error. The next corollary demonstrates that, broadly speaking, the same disturbance rejection and set-point offset properties hold for the low-gain integral control model (Iaw) with input saturation.
Corollary 4.10 Suppose that the low-gain integral control system with disturbances (4.7) satisfies (A1) and (A2), and choose E := gKG(1), where g > 0 is as in (Iaw). Suppose that f_1 and d_2 are convergent with respective limits f_1^∞ and d_2^∞. Then, there exists g* > 0 such that for all g ∈ (0, g*), all r ∈ R^m_+ satisfying a feasibility condition analogous to (4.5) (adjusted for f_1^∞ and d_2^∞), and all (x_0, x_c^0) ∈ R^n_+ × R^m_+, the input disturbance Bf_1 is asymptotically rejected and the output converges to the set-point offset by d_2^∞. The constant g* is independent of f_1 and d_2.
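These disturbance properties can be illustrated numerically. In the sketch below (all values hypothetical, not from the paper) a constant input disturbance f_1 is rejected, while a constant measurement error d_2 offsets the regulated true output:

```python
import numpy as np

# Disturbance sketch for (Iaw): the controller only sees the corrupted output.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.2]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
G1 = C @ np.linalg.solve(np.eye(3) - A, B)
K, g = np.linalg.inv(G1), 0.1
E = g * K @ G1
U = np.array([3.0, 3.0])
sat = lambda w: np.clip(w, 0.0, U)

r = G1 @ np.array([1.0, 0.8])
f1 = np.array([0.2, 0.1])           # constant input disturbance (rejected)
d2 = np.array([0.1, 0.0])           # constant measurement error (causes offset)
x, xc = np.zeros(3), np.zeros(2)
for _ in range(4000):
    u = sat(xc)
    y_meas = C @ x + d2             # corrupted measurement fed to the integrator
    x = A @ x + B @ (u + f1)        # state sees the input disturbance B f1
    xc = xc + g * (K @ (r - y_meas)) + E @ (u - xc)
print(C @ x + d2, r)   # measured output -> r, so true output -> r - d2
```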

Robustness to uncertainty in the steady-state gain G(1)
In the final part of our material on robustness with respect to various forms of uncertainty, we consider the situation where the steady-state gain matrix G(1) is not known precisely, meaning that (U4) is violated. Knowledge of G(1) is used in low-gain integral control and PI control in three separate situations: first, in determining K to satisfy (A2), which appears in the original integral control model (I), the PI models (PI1), (PI2) and, the focus of the present study, (Iaw). Second, G(1) is used in determining E, which appears in (Iaw). Both of the components K and E are required to ensure that low-gain PI control as presented is effective, as described in Sect. 4.3. Recall that the choice K = G(1)^{−1} satisfies (A2), and E = gKG(1) is sufficient to ensure that the conclusions of Theorem 4.6 (and Corollaries 4.7, 4.10, Proposition 4.9) hold. Third, knowledge of G(1) is used in Lemma 4.2 to help determine the set of trackable outputs with positive state, which in turn provides feasible set-points.
Uncertainty in G(1) typically arises from uncertainty in the parameters, or even the dimensions, of A, B or C. Throughout this section we shall assume that the unknown transfer function G in (3.2) can be decomposed as

G = Ĝ + Δ_G, (4.10)

where Ĝ is known and Δ_G is expected to be "small". More generally, in this section variables with hats shall always denote known quantities and capital deltas denote uncertain terms. Lemmas 4.11-4.13 below are technical preliminary results gathering sufficient conditions for the main result of the section, Corollary 4.15. This latter result states that if a known nominal estimate Ĝ is close to the unknown G, meaning that ‖Δ_G‖_∞ = ‖G − Ĝ‖_∞ is small, then basing the design of K and E on the nominal estimate Ĝ(1) of G(1) is sufficient for low-gain PI control to succeed.
We first demonstrate how the decomposition (4.10) arises from parametric uncertainty in A, B and C. We let ρ(A) = C\σ(A) denote the resolvent set of A (when A is a matrix, ρ(A) is the set of all complex numbers that are not eigenvalues of A).

Lemma 4.11
Suppose that (A, B, C) ∈ R^{n×n} × R^{n×m} × R^{m×n} for m, n ∈ N admit the decompositions A = Â + Δ_A, B = B̂ + Δ_B and C = Ĉ + Δ_C. Then the transfer function G admits the decomposition (4.11), defined for all z ∈ ρ(A) ∩ ρ(Â), which is of the form (4.10) with Ĝ = G_{ĈÂB̂} and Δ_G the sum of the remaining three terms on the right-hand side of (4.11).

Lemma 4.12
Suppose that G admits the decomposition (4.10) and that Ĝ(1) satisfies (4.12). If Q = I, the m × m identity matrix, then K and G(1) together satisfy (A2) provided that (4.13) holds. A sufficient condition for (4.13) is (4.14).

Lemma 4.13 Let X denote a bounded operator on a Hilbert space (such as a square matrix with real or complex entries), with −1 ∈ ρ(X), so that I + X is invertible. Then the conditions (a) ‖X‖ < 1/2, or; (b) ‖X‖ ≤ 1 and ‖(I − X)(I + X)^{−1}‖ ≤ 1; are sufficient for

‖(I + X)^{−1}X‖ < 1. (4.15)

Remark 4.14 An estimate of the form (4.15) appears as a condition on X in Corollary 4.15 below, hence the inclusion of sufficient conditions here. We comment that (a) and (b) do not imply one another, as X = −(1/4)I satisfies (a) but not (b), and X = I satisfies (b) but not (a).
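The two sufficient conditions of Lemma 4.13, and the examples of Remark 4.14, are straightforward to verify numerically (using the spectral norm for matrices):

```python
import numpy as np

# Check Lemma 4.13 and Remark 4.14 with the spectral (operator 2-) norm.
norm = lambda M: np.linalg.norm(M, 2)
I = np.eye(2)

def conclusion(X):
    # the conclusion (4.15): ||(I + X)^{-1} X|| < 1
    return norm(np.linalg.solve(I + X, X)) < 1

Xa = -0.25 * I          # satisfies (a): ||X|| = 1/4 < 1/2 ...
assert norm(Xa) < 0.5
assert norm((I - Xa) @ np.linalg.inv(I + Xa)) > 1   # ... but not (b)
assert conclusion(Xa)

Xb = I                  # satisfies (b): ||X|| = 1 and ||(I-X)(I+X)^{-1}|| = 0 ...
assert norm(Xb) <= 1 and norm((I - Xb) @ np.linalg.inv(I + Xb)) <= 1
assert not norm(Xb) < 0.5                            # ... but not (a)
assert conclusion(Xb)   # here ||(I + X)^{-1} X|| = 1/2 < 1
print("both sufficient conditions verified")
```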

Corollary 4.15
Suppose that (A, B, C) as in (4.1) satisfy (A1), that the associated transfer function G admits the decomposition (4.10), where Ĝ is known, and that K and Ĝ together satisfy (A2). Choose

E := gKĜ(1), (4.16)

where g > 0 is as in (Iaw). Then, there exists M* > 0 and g* > 0 (which in general depends on M*) such that for all Δ_G in (4.10) with ‖Δ_G‖_∞ ≤ M*, all g ∈ (0, g*), all r ∈ G(1)_+ as in (4.5) and all (x_0, x_c^0) ∈ R^n_+ × R^m_+, the solution (x, x_c) of (Iaw) satisfies x(t) ≥ 0 for each t ∈ N_0 and has the properties (a), (b) and (c) of Theorem 3.1.
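The following sketch illustrates the corollary with hypothetical numbers: K and E are designed from a perturbed estimate Ghat of G(1) only, yet set-point regulation is retained:

```python
import numpy as np

# Robustness sketch: design K, E from an imperfect estimate of G(1).
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.2]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
G1 = C @ np.linalg.solve(np.eye(3) - A, B)        # true gain (unknown in practice)
Ghat = G1 + np.array([[0.05, -0.03], [0.02, 0.04]])  # perturbed estimate

K = np.linalg.inv(Ghat)             # design based on the estimate only
g = 0.1
E = g * K @ Ghat                    # = g I, as in (4.16)
assert (np.linalg.eigvals(K @ G1).real > 0).all()   # (A2) survives the mismatch

U = np.array([3.0, 3.0])
sat = lambda w: np.clip(w, 0.0, U)
r = G1 @ np.array([1.0, 0.8])
x, xc = np.zeros(3), np.zeros(2)
for _ in range(4000):
    u = sat(xc)
    y = C @ x
    x = A @ x + B @ u
    xc = xc + g * (K @ (r - y)) + E @ (u - xc)
print(C @ x, r)   # output still converges to r
```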
Remark 4.16 Corollary 4.15 can easily be extended to the PI systems (PI1aw) or (PI2aw), considered in Corollary 4.7, by replacing A by A_1 or A_2, as appropriate.

Low-gain PI control with input saturation for a class of infinite-dimensional systems
We have so far focussed on solving the robust set-point regulation problem with multiple management goals by applying low-gain PI control in the situation where the underlying (ecological) model is assumed finite-dimensional. Abstractly, we have developed low-gain PI control with input saturation for discrete-time positive state linear systems. In this section we demonstrate that many of the results presented extend to a class of discrete-time, infinite-dimensional linear systems which includes the class of IPMs. IPMs were introduced by Easterling et al. (2000) (see also Ellner and Rees 2006; Rees and Ellner 2009, or Briggs et al. 2010) as a tool for population modelling where the n discrete age-, size- or stage-classes of a PPM are replaced by a continuous variable. As a concrete example, a shrub or tree population model may partition individuals according to a continuous variable denoting height or stem diameter. An IPM is a discrete-time linear system on the function space L^1(Ω) specified by the integral operator

(Ax)(ξ) = ∫_Ω k(ξ, ζ)x(ζ) dζ, ξ ∈ Ω,

for some nonnegative-valued kernel k, where, for simplicity say, Ω is the closure of some bounded set in R^n, n ∈ N. At each time-step t ∈ N_0, the state of an IPM is a function of the continuous variable ξ ∈ Ω.
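As an indication of how an IPM fits the present framework, the sketch below discretises a hypothetical Gaussian survival-and-growth kernel on Ω = [0, 1] by the midpoint rule, yielding a nonnegative matrix approximation of the positive operator A (all kernel parameters are illustrative):

```python
import numpy as np

# Midpoint-rule discretisation of a hypothetical IPM kernel on Omega = [0, 1].
n = 200
h = 1.0 / n
xi = (np.arange(n) + 0.5) * h          # midpoints of Omega

def kernel(xi_to, xi_from):
    # survival * growth: survivors of size xi_from move to sizes near xi_from + 0.1
    surv = 0.8
    return surv * np.exp(-((xi_to - xi_from - 0.1) ** 2) / (2 * 0.05 ** 2)) \
           / np.sqrt(2 * np.pi * 0.05 ** 2)

Agrid = h * kernel(xi[:, None], xi[None, :])   # matrix approximation of A
assert (Agrid >= 0).all()                      # positivity of the operator

# One time-step of the IPM, x(t+1) = A x(t), applied to an initial density.
x0 = np.exp(-((xi - 0.3) ** 2) / (2 * 0.1 ** 2))
x1 = Agrid @ x0
print(np.abs(np.linalg.eigvals(Agrid)).max())  # spectral radius, here < 1
```

Because the survival factor is below one, the column sums (and hence the spectral radius) are below one, so (A1) holds for this discretised operator.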
To formulate integral control in a possibly infinite-dimensional setting, let X denote an ordered real Banach space, so that X is equipped with a partial order ≤ (also ≥) that respects vector space addition and multiplication by nonnegative scalars. The positive cone C induced by (X, ≥) is the set of x ∈ X such that x ≥ 0; it is a closed, convex cone (so that if x, y ∈ C and α ≥ 0 then x + y, αx ∈ C) with the property that x, −x ∈ C implies that x = 0. For real Banach spaces X_1, X_2 with respective positive cones C_1, C_2, a bounded linear operator T : X_1 → X_2 is called positive if TC_1 ⊆ C_2. In words, T is positive if every positive element of X_1 is mapped to a positive element of X_2.
Example 4.17 (i) The situation considered throughout the manuscript thus far has taken X = R^n for n ∈ N, with partial order ≥ denoting usual componentwise nonnegativity, so that for x ∈ R^n, x ≥ 0 if x_k ≥ 0 for every k ∈ {1, 2, . . . , n}. As such, the positive cone of R^n with this partial ordering is the nonnegative orthant C = R^n_+. (ii) To model IPMs we choose X = L^1(Ω) with the partial ordering ≥ of almost everywhere pointwise inequality, that is, x ≥ 0 if x(ξ) ≥ 0 for almost every ξ ∈ Ω; the positive cone is then the set of almost everywhere nonnegative functions.

We now consider (2.2) where

A : X → X, B : R^m → X, C : X → R^m (4.21)

are bounded, positive, linear operators and X is as above. The state-space X may now be infinite-dimensional but, for simplicity, the input and output spaces are still assumed to be R^m. Since B and C are bounded and finite-rank, they can necessarily be written as

Bu = Σ_{i=1}^m u_ib_i and Cx = (c_1(x), c_2(x), . . . , c_m(x))^T, (4.22)

for some b_i ∈ X and c_j : X → R, linear functionals on X. Using the expression (4.22), B is positive if, and only if, b_i ∈ C for every i ∈ {1, 2, . . . , m}. Similarly, C is positive if, and only if, c_j is positive for every j ∈ {1, 2, . . . , m}, here meaning that c_j(C) ⊆ R_+. The low-gain integral control system with input saturation is still defined by (Iaw), with design parameters E, K ∈ R^{m×m}, g > 0 and x_c^0 ∈ R^m. Note that for each time-step t ∈ N_0, the integral controller state x_c(t) ∈ R^m is still finite-dimensional and thus readily computable. The expression (3.2) for the transfer function G is well-defined when A, B and C are as in (4.21), and consequently assumptions (A1) and (A2) are as before.
To include a (P)roportional feedback in the control law, as in (PI1aw) or (PI2aw), requires bounded linear operators

F_1 : X → R^m or F_2 : R^m → R^m, (4.24)

respectively. When F_1 and F_2 are bounded then so are

A_1 := A − BF_1 and A_2 := A − BF_2C, (4.25)

as compositions and differences of bounded operators.
The main result of this section demonstrates that the low-gain integral controller (Iaw) still achieves the robust set-point regulation problem in the more general case when (A, B, C) are as in (4.21). By noting that the PI system (PI1aw) with F 1 or F 2 as in (4.24) reduces to (Iaw) with A replaced by A 1 or A 2 given by (4.25), the next result includes both the state-and output-feedback cases.

Theorem 4.18 Assume that the low-gain integral control feedback system (Iaw), specified by positive operators (A, B, C) in (4.21), satisfies assumptions (A1) and (A2), and choose E := gKG(1) in (Iaw).
Then, there exists g * > 0 such that for all g ∈ (0, g * ), all r as in (4.5) and all (x 0 , x 0 c ) ∈ C × R m + , the solution (x, x c ) of (Iaw) has the properties (a), (b) and (c) of Theorem 3.1 and furthermore x(t) ∈ C for every t ∈ N 0 .
The robustness results Proposition 4.9, Corollary 4.10 and Corollary 4.15 also apply when (Iaw) is specified by positive operators (A, B, C) in (4.21).
The proofs of the above results are exactly the same as those of the correspondingly named earlier results; none of the arguments used there required that X is finite-dimensional.

Remark 4.19
The results of Sect. 4.1 on feasible nonnegative set-points translate to the situation when X is a real, partially ordered Banach space. Again, none of the proofs explicitly uses that X is finite-dimensional. However, assumption (H) should be replaced by: (H') Let X denote a real, partially ordered Banach space with positive cone C. Given the pair of bounded, linear, positive operators A : X → X, B : R^m → X, there exists a bounded, positive operator F : X → R^m such that, defining Â := A − BF, it follows that Â is positive and, for any v ∈ C and w ∈ R^m, if Âv + Bw ∈ C then w ∈ R^m_+. Importantly, however, the constructive characterisation Guiver et al. (2014, Lemma 2.1) does not hold in the general Banach space case, as it is truly a finite-dimensional result.

Examples
Example 5.1 Matrix projection models for the sustainable harvesting of two species of palm tree in Mexico are considered in Olmsted and Alvarez-Buylla (1995). We use a matrix PPM from that study for the palm species Coccothrinax readii to demonstrate how a potential harvesting and conservation strategy could be based on a low-gain PI control law. The projection matrix A is given in (5.1). The nine stages denote seedlings, saplings I and II, juveniles I-V, and adult trees, and the time-steps correspond to years. We refer the reader to Olmsted and Alvarez-Buylla (1995) for details on the phenology of C. readii.
The spectral radius of A in (5.1) is 1.0549 > 1, so that the uncontrolled population (2.1) is predicted to grow asymptotically. As with the pronghorn example in Sect. 2, we assume that we do not know the entire population distribution exactly at each time-step and again only have access to some part of the state. For simplicity, we consider the case where just two per time-step measurements are made and, correspondingly, we have access to two stages for replenishment. We assume that the seedling and adult tree stage-classes may be restocked and harvested, respectively, which determines the B matrix, and that we are able to measure the abundances of the final two stages, the largest juvenile trees and the adult trees, which determines C. The set-point regulation objective is to determine a feedback F_2 in (PI2aw) and a reference r such that the low-gain PI control system (PI2aw) drives the population to some non-zero level, and to determine the resulting adult tree harvest. The input u(t) is given by (5.2), where the component u_1(t) denotes the number of seedlings planted at time-step t ∈ N_0, and is desired to be nonnegative. Similarly, u_2(t) denotes the number of adult trees harvested at time-step t ∈ N_0, and should be negative. Indeed, we do not want to harvest seedlings or plant adult trees. Roughly speaking, the negative term −F_2 Cx on the right hand side of (5.2) determines the harvesting yield and the positive term sat(x_c) from the integral control law determines the replanting scheme. We require F_2 ∈ R^{2×2} satisfying (5.3). Then, for each r = [r_1, r_2]^T = G_{C,A_0,B}(1)v ∈ R^2_+ with v ∈ R^2_+, the asymptotic population distribution is obtained, with measured abundances x_8^* = r_1 and x_9^* = r_2.
First, we construct F_2 ∈ R^{2×2} to satisfy (5.3). By considering the product BF_2C, we seek to replace the ninth row of A by zero, which necessitates, and yields, the choice of F_2 in (5.7). Choosing F_2 as in (5.7) satisfies (5.3), from which we compute G(1). It remains to determine r, or equivalently v. In terms of components, the reference r = G(1)v is given by (5.9). Of the four quantities r_1, r_2, v_1 and v_2, two are free to be chosen, provided that 0 ≤ v_1 ≤ U_1 and 0 ≤ v_2 ≤ U_2, and the remaining two are determined by (5.9). Rewriting (5.5) in components gives (5.10). Therefore, from (5.10) we see that v_1 ≥ 0 is the asymptotic replanting level, and from (5.9) that v_2 = r_2 is the desired asymptotic adult tree abundance. For given v_1, v_2, the expression for u_2^∞ ≤ 0 in (5.10) determines the asymptotic number of adult trees harvested per time-step. The asymptotic abundance of the penultimate stage-class is r_1. These relations are summarised in Table 2.
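For readers who wish to experiment, the computation of G(1) and of feasible set-points r = G(1)v is straightforward to reproduce numerically. The following Python sketch uses a hypothetical three-stage projection matrix (the C. readii matrix of (5.1) is not reproduced here), so the matrices A, B, C and the vector v are illustrative assumptions only:

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative only; NOT the
# C. readii matrix from (5.1)).  Its spectral radius is below one.
A = np.array([[0.1, 0.0, 0.4],
              [0.4, 0.5, 0.0],
              [0.0, 0.3, 0.8]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])       # act on stages 1 and 3
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # measure stages 2 and 3

assert max(abs(np.linalg.eigvals(A))) < 1  # (A1): r(A) < 1

# Transfer function evaluated at z = 1: G(1) = C (I - A)^{-1} B
G1 = C @ np.linalg.solve(np.eye(3) - A, B)

# Any v >= 0 yields a feasible (trackable) set-point r = G(1) v (Lemma 4.2)
v = np.array([10.0, 5.0])
r = G1 @ v
```

Since r(A) < 1 and A, B, C are nonnegative, every entry of G(1) is nonnegative, so r inherits nonnegativity from v.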
For the following numerical simulation we suppose an initial population distribution with no adult trees. The results are plotted in Fig. 4. The set-point regulation objectives are achieved and the harvest of adult trees increases from zero to (almost) 40 per year, peaking at approximately 43 trees per year. Furthermore, although not specified as a management objective, the total tree abundance rises from 1400 to approximately 5100. We note that the resulting dynamics are rather slow; the time-steps here denote years. This is, we suspect, partly because of the admittedly somewhat limited control actions of only adding to the first stage-class and removing from the last. The uncontrolled dynamics themselves are slow: mathematically, the matrix A has entries close to one on the diagonal and very small entries on the subdiagonal. Biologically, the species C. readii is long lived; Olmsted and Alvarez-Buylla (1995) estimate the maximal life span as over 145 years, yet the model is a size-based model. That said, the speed of convergence could be increased by allowing more control actions and measurements and by adding a 'larger' proportional part F to the control law. In this case, the explanation of the roles of r and v related by r = G(1)v becomes more complicated. We have also not explored further the roles of the tuning parameters K and g, or of the initial controller state x_c^0, all of which can affect the transient dynamics of the model.
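The qualitative closed-loop behaviour described above can be reproduced with a short simulation. The sketch below is a plausible reading of the saturated integral control law with anti-windup (proportional part omitted), assuming the update x_c(t+1) = x_c(t) + gK(r − y(t)) − E(x_c(t) − sat(x_c(t))) with E = gKG(1) and u(t) = sat(x_c(t)); the three-stage matrices are hypothetical stand-ins, not the palm model:

```python
import numpy as np

# Hypothetical declining 3-stage model (stand-in for the palm PPM)
A = np.array([[0.1, 0.0, 0.4],
              [0.4, 0.5, 0.0],
              [0.0, 0.3, 0.8]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

U = np.array([20.0, 20.0])            # per time-step resource constraints
sat = lambda w: np.clip(w, 0.0, U)    # componentwise saturation

G1 = C @ np.linalg.solve(np.eye(3) - A, B)  # G(1) = C (I - A)^{-1} B
K = np.linalg.inv(G1)                 # so that K G(1) = I satisfies (A2)
g = 0.05                              # low gain
E = g * K @ G1                        # anti-windup gain, E = g K G(1)

v = np.array([10.0, 5.0])             # 0 <= v <= U, so r is feasible
r = G1 @ v                            # set-point r = G(1) v

x = np.zeros(3)                       # initial population (no stock)
xc = np.zeros(2)                      # initial controller state
for _ in range(20000):
    y = C @ x
    u = sat(xc)                                      # saturated input
    x = A @ x + B @ u                                # population update
    xc = xc + g * K @ (r - y) - E @ (xc - sat(xc))   # integral + anti-windup
```

After the transient, the measured output C x settles at the set-point r, and the state remains in the positive cone throughout, as the theory predicts for sufficiently small g.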
To demonstrate robustness of the PI controller, we now assume that the recruitment of the population is not fixed at 55.8, but is unknown and denoted by f. We have relegated proofs of the subsequent claims to "Appendix C". If f ≥ 0 is constant then, owing to the particular structure of this model and the uncertainty, the reference r is still tracked asymptotically. This is an example of convergent disturbance rejection, Corollary 4.10. Moreover, a calculation shows that (5.11) holds, and hence the relations (5.10) and (5.9) hold with γ_2 replaced by γ_1 f. The key interpretations, that v_1 is the asymptotic planting level and r_2 = v_2 is the asymptotic abundance of adult trees, hold as before and are thus independent of f. Figure 5 contains three simulations with randomly chosen, but positive, f. Here we have fixed v_1 and v_2 as before, so that now r_1 and the asymptotic harvest yield vary as f, and thus G(1), does. A more appropriate model may be to consider the situation where f is time-varying with values f(t), t ∈ N_0, the inclusion of which reflects environmental or demographic stochasticity. It can be demonstrated that the second output y_2, denoting adult trees, still converges to r_2. The population abundances x, the planting/harvesting quantities u and the abundance of the largest juvenile trees need not converge in general. However, the ISS estimate of Proposition 4.9 applies. Figure 5 contains a simulation where f(t) is drawn from a pseudo-random truncated normal distribution with mean 55.8 and variance 4. We note that, as predicted, the second output, the number of adult trees present, rejects the disturbances to the model and is the same across all simulations.
Example 5.2 We revisit the matrix projection model for pronghorn from Sect. 2 to demonstrate how low-gain PI control may be combined with other management strategies. The restocking strategy dictated by the low-gain PI controller (Iaw) solves the stated management problem, as demonstrated in Fig. 2. However, the asymptotic restocking levels are c. 200 female and c. 150 male neonates per year, respectively, to maintain a stable population with 120 prime females and 100 prime males. These restocking levels may be too large to implement practically. We suspect that they are so high because the modelled rate of neonate survival and transition to the (next) juvenile stage-class is very low: 0.059, in fact (below 6%). Recall that the uncontrolled population specified by (2.1) has asymptotic rate of decline 0.9222 < 1. We investigate the effect of improved neonate survival (of both sexes) p on the asymptotic growth rate of the controlled population. Appealing to the perturbation analysis of Hodgson et al. (2006, Theorem 3.3), the relationship between the perturbation to survival and the resulting asymptotic growth rate shown in Fig. 6 is obtained. The details are contained in "Appendix A". Although a perturbation of only 0.1180 is required to reach population stasis, note that this corresponds to an approximately 200% increase of current neonate survival, which may also be infeasible to implement. Therefore, to reach the same management objective described in Sect. 2, we explore the combination of the low-gain PI control model (Iaw) with a perturbation of 0.0590 (which is still 100% of current survival) to neonate survival. Practically, the latter management strategy corresponds to some environmental change. Note that the perturbation to survival alone leads to an asymptotic growth rate of 0.9638 < 1, so is not enough by itself to reverse the predicted asymptotic decline. Simulations of the combined management strategy are plotted in Fig. 7. The demonstrable difference between Figs. 2 and 7 is that in the latter the asymptotic restocking rates have fallen to c. 50 female and c. 25 male individuals per year, respectively. Finally, we note that, writing the perturbation to the pronghorn projection matrix A as A + D_1 pD_2 (with D_1 and D_2 given by (6.6)), it is possible to see how the predicted asymptotic level of restocking changes with the perturbation p (provided that r(A + D_1 pD_2) < 1). Indeed, according to Theorem 4.6 (a), u^∞ is given explicitly in terms of the model data.

Discussion
Low-gain PI control with input saturation has been reconsidered and extended to discrete-time, positive state linear systems where multiple outputs are regulated to desired, necessarily nonnegative, set-points. Our results hold both for finite-dimensional systems and for a class of infinite-dimensional systems. The motivation for the current study is twofold: first, to further explore the utility of feedback control in ecological management type problems, and it is in this context that we have posed much of the present material and our examples. The second purpose is to further develop the suite of robust feedback control tools for positive state systems, models that arise in a variety of other physically and biologically motivated scenarios. The present contribution is a sequel to Guiver et al. (2015), where we first considered low-gain PI control as a potential tool for ecological management. There, only a single (scalar) per time-step measurement or output of the to-be-controlled system was made, with a view to regulating to a single (scalar) set-point, and thus only a single per time-step control action was required. In other words, Guiver et al. (2015) considered robust regulation of a single management goal. Although conceptually very similar, additional mathematical difficulties arise in extending these results to the natural situation where several management objectives are specified (that is, multi-input, multi-output systems) and, additionally, in the presence of input saturation to reflect per time-step resource constraints. Specifically, two issues had to be overcome. First, in Sect. 4.1 we described the set of feasible set-points: candidate asymptotic outputs of a positive state linear system. Feasible set-points are subsequently used in our main results, Theorem 4.6 and Corollary 4.7, as the asymptotic limits of the output of a low-gain PI control system.
To summarise, Lemma 4.2 states that the set of trackable outputs with positive state includes the nonnegative linear span of the columns of G(1), the transfer function evaluated at one. The set of trackable outputs with positive state is enlarged by incorporating a proportional component to the feedback law, Lemma 4.4. Second, in Sect. 4.2 we addressed the problem of including input saturation and still achieving robust set-point regulation. We achieved this by appending a simple anti-windup mechanism (the term involving E in (Iaw)) in the integral controller and thereby preventing the destabilising phenomenon associated with input saturation in control theory known as "integrator windup", discussed in Sect. 4.3. Our main results are Theorem 4.6 and Corollary 4.7 which are low-gain PI control results for positive state systems and mirror the existing, well-known case recorded in Theorem 3.1.
The low-gain PI control system (Iaw) exhibits demonstrable robustness to certain sources of model uncertainty and disturbances, as described in Sect. 4.4. Such robustness is a hugely important aspect of feedback control, and a reason why population managers may wish to consider its utility in applications, as ecological models are often highly uncertain. To ensure, however, that (Iaw) is efficacious, a sufficiently accurate estimate of G(1) is required. A possible fruitful future avenue of research would be to investigate techniques for computing the matrix parameter E (which, recall, depends on G(1)) adaptively, so that E is the output of some dynamic or iterative process. Adaptive control techniques already exist that compute the low-gain parameter g > 0 adaptively, either in the scalar output case (Logemann and Ryan 2000), or in the multi-input, multi-output case but without input saturation (Ke et al. 2009). An adaptive scheme here would ideally determine a suitable E without requiring knowledge of G(1).
We comment that transfer functions are ubiquitous objects in control theory as they provide a so-called "frequency domain" description of (usually controlled) dynamic processes. Historically, the term frequency in an engineering context refers to the frequency of oscillation, such as of an electrical alternating current. Intuitively, and amongst other beneficial properties, the frequency domain provides an elegant description of the behaviour of dynamical systems driven by periodic signals and of how dynamical systems alter or modulate the phase and amplitude of an incoming periodic signal. Given that numerous physical and biological drivers are (at least roughly) periodic (such as daylight, rainfall or temperature), it is no surprise that a frequency domain approach to ecological modelling has recently been brought to an ecological audience (Greenman and Benton 2005; Worden et al. 2010). Transfer functions have also been employed in ecological modelling in the context of perturbation analysis in Hodgson and Townley (2004), Hodgson et al. (2006) and Stott et al. (2012), as we exploited in Example 5.2 for a modelled pronghorn population. Here the transfer function provides an analytic relationship between perturbations to a population's life histories and the resulting change to asymptotic growth rate and, in that sense, is a form of sensitivity analysis. We believe that the mature and well-studied language of systems and control theory has much to offer ecological modelling and management. Conversely, the continued study of ecology or ecosystems from a control theory perspective, particularly of processes that exhibit feedback structures or feedback-type behaviour, may, in the spirit of biomimicry, lead to novel concepts in control theory with other applications.
In closing, we reiterate the distinction between robust control and optimal control. Recall that in the former a control or input is designed to achieve some desired dynamic behaviour in spite of uncertainty or disturbances to the dynamics whilst, in the latter, a control or input is chosen to achieve some desired dynamic behaviour while also minimising a prescribed functional. Broadly speaking (as there are always exceptions), robust control is not optimal and optimal control is not robust. We have explored the use of feedback control for robust ecological management and have not addressed the subject of costs here. As we sought to emphasise in Guiver et al. (2015), inputs obtained from many classical optimal control results are not always robust to various forms of uncertainty. Since we believe that ecological models are naturally prone to uncertainty, and since the biological and ecological literature already contains numerous papers contributing to the theory and application of optimal control, we have instead focussed on further developing the set of robust feedback control tools for ecological management. We acknowledge the demands placed on population managers by limited resources, and the consequent desire to use those resources wisely. Certainly, more research is required in combining optimal control with robust control in the field of ecological management.

Appendix A: Model parameters used in examples
Pronghorn matrix PPM The matrix model for female pronghorn is based on Berger and Conner (2008), where stage-classes one to three denote female neonates, yearlings and prime adults, respectively. We have removed the fourth stage-class denoting senescent adults, as this stage does not contribute to the life-cycle and its inclusion results in a reducible matrix.
To include males in the model, we assume that they have the same vital rates as the females and that the sex-ratio is equal, which leads to the projection matrix in (2.1) and (2.2), where stages one to six now denote female neonates, yearlings and prime adults, and male neonates, yearlings and prime adults. The matrix A has r(A) = 0.9222 < 1, thus (A1) holds and the uncontrolled population is predicted to decline asymptotically. The modelling assumption that we are able to independently replenish both neonate stage-classes, and that we observe both adult stage-classes, determines B and C, and to meet the stated management objective the asymptotic input u^∞ in (6.3) is required. Since u^∞ ≥ 0, evidently r = G(1)u^∞ ∈ G(1)R^m_+ and hence r is trackable with positive state by Lemma 4.2. Obviously, non-integer numbers of individual pronghorn do not make sense; the numbers in (6.3) are an artefact of the non-integers appearing in A and would, in practice, of course be rounded to the nearest integer. To apply a low-gain integral control model (I) to the pronghorn PPM additionally requires that a matrix K, a small positive parameter g and an initial input u(0) = x_c^0 are specified. Recall that K and G(1) are required to have the property that every eigenvalue of the product KG(1) has positive real part [assumption (A2)]; we have chosen K for the simulations in Fig. 1 deliberately so that this property holds. When revisiting the pronghorn model in Example 5.2, we write a perturbation p to neonate survival as A + pD_1D_2, with D_1 and D_2 given in (6.6). Hodgson et al. (2006, Theorem 3.3) states that, for λ ∉ σ(A), λ ∈ σ(A + pD_1D_2) if, and only if, 1 ∈ σ(pD_2(λI − A)^{−1}D_1). When unravelled, the latter condition is equivalent to 1 = 90361 × 10^3 p / (1.25 × 10^8 λ^3 − 1.09 × 10^8 λ^2 − 5331299). (6.7) For p in the interval [0, 0.5), Eq. (6.7) is solved for λ (seeking the largest positive solution, which denotes r(A + pD_1D_2)) and plotted in Fig.
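The perturbation curve behind Fig. 6 can also be mimicked directly by scanning the perturbation size and computing spectral radii, rather than solving (6.7). The sketch below does this for a hypothetical three-stage matrix; A, D_1 and D_2 here are illustrative stand-ins, not the pronghorn data of (6.6):

```python
import numpy as np

# Hypothetical declining stage-structured matrix (illustrative only)
A = np.array([[0.0,   0.0, 0.9],
              [0.059, 0.6, 0.0],
              [0.0,   0.3, 0.85]])

# Structured perturbation to 'neonate survival':
# A + p D1 D2 adds p to the (2, 1) entry of A
D1 = np.array([[0.0], [1.0], [0.0]])
D2 = np.array([[1.0, 0.0, 0.0]])

spectral_radius = lambda M: max(abs(np.linalg.eigvals(M)))

ps = np.linspace(0.0, 0.5, 51)
radii = [spectral_radius(A + p * D1 @ D2) for p in ps]
```

For nonnegative matrices the dominant (Perron) eigenvalue is nondecreasing in the perturbation size, so the resulting curve rises monotonically and crosses one at the perturbation achieving population stasis.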
7 were conducted with the parameters stated above.

Example 5.1 continued For the robustness arguments we first replace the (1, 9)th entry of A in (5.1) by f > 0 and let f̂ = 55.8, so that the original A is denoted by Â. Therefore A = Â + δe_1e_9^T, where δ := f − f̂ and e_i denotes the ith standard basis vector, and the same relation holds between A_0 and Â_0. Appealing to the block structure of A_0, it follows that σ(A_0) = σ(Â_0) is independent of f. Consequently, A satisfies (A1) for every f > 0. Similarly, from (5.11) a calculation shows that the known choice of K is such that K and G(1) satisfy (A2) for every f > 0. We claim that the y_2 dynamics, which recall are those of the ninth stage-class, are independent of δ. To see this, we inspect the ninth component of the dynamics for the state x, which does not involve δ. Moreover, from the integral control update law, x_c(t), and thus y_2(t), converge as t → ∞ (the latter to r_2), independently of δ. The above argument holds if f = f(t) is time-varying. Writing out the whole state dynamics, if f(t) is constant then the disturbance term d(t) is convergent, and hence the disturbance to x(t) is rejected by the output, by Corollary 4.10. If f(t) is time-varying then the ISS estimates of Proposition 4.9 apply.

Appendix B: The Z-transform, transfer functions and convolutions
We collect further notation that shall be required for some of the proofs. First, for B a Banach space with norm ‖·‖ and p ∈ [1, ∞], we let ℓ^p = ℓ^p(N_0; B) denote the usual sequence space of B-valued sequences v with finite ℓ^p-norm. For each sequence v, t ∈ N_0 and p ∈ [1, ∞), the quantity ‖v‖_{ℓ^p(0,t)} denotes the ℓ^p-norm of v restricted to the index set {0, 1, . . . , t}, with an analogous definition for ‖v‖_{ℓ^∞(0,t)}. If H is a Hilbert space with norm induced by an inner-product, then ℓ^2(N_0; H) is itself a Hilbert space. For a sequence v ∈ ℓ^2, the Z-transform of v, denoted v̂, is an H-valued function of a complex variable given by v̂(z) = Σ_{t=0}^∞ v(t)z^{−t}. If v ∈ ℓ^2 then v̂ ∈ H^2 and, furthermore, the Parseval equivalence of norms holds. The above claims are well-known; see, for example, Staffans (2005, p. 699).
If r(A) < 1 and u ∈ ℓ^2 then applying the Z-transform to (2.2) and eliminating x̂(z) yields that ŷ(z) = G(z)û(z). For two sequences u, v we let u ∗ v denote the (discrete) convolution of u and v, with terms (u ∗ v)(t) = Σ_{s=0}^t u(t − s)v(s), and we record the fact that the Z-transform maps convolutions to products. We shall also require the following ℓ^2 and pointwise estimates for convolutions, respectively:

‖u ∗ v‖_{ℓ^2(0,t)} ≤ ‖u‖_{ℓ^1(0,t)} ‖v‖_{ℓ^2(0,t)}, (7.6)
|(u ∗ v)(t)| ≤ ‖u‖_{ℓ^2(0,t)} ‖v‖_{ℓ^2(0,t)}. (7.7)

A proof of (7.6) may be found in Desoer and Vidyasagar (1975, p. 244), and (7.7) follows from the Cauchy–Schwarz inequality (equivalently, the Hölder inequality with exponents p = q = 2). Let X denote a Banach space and let A : X → X, B : R^m → X and C : X → R^m denote bounded, linear operators with r(A) < 1. Then the function G (as defined in (3.2)) is equal to the Z-transform of the sequence h defined by h(0) = 0 and h(t) = CA^{t−1}B for t ≥ 1. (7.8) Since r(A) < 1, it follows that h ∈ ℓ^p(N_0; R^{m×m}) for every p ≥ 1. Furthermore, combining (7.2), (7.5) and (7.9), we obtain the crucial estimate, for u ∈ ℓ^2 and h as in (7.8), ‖h ∗ u‖_{ℓ^2} ≤ sup_{|z|=1} ‖G(z)‖ · ‖u‖_{ℓ^2}. (7.10) For any finitely non-zero sequence v, we have v = P_T v for some T ∈ N, where P_T is the truncation operator that sets to zero every term with index greater than T. Clearly, P_T v ∈ ℓ^2 for every T ∈ N, and applying the estimate (7.10) above yields the truncated version that we shall also require for the proofs in the following "Appendix C".
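The two convolution estimates (7.6) and (7.7) are easy to sanity-check numerically. The snippet below verifies the Young-type ℓ^2 bound and the pointwise Cauchy–Schwarz bound for randomly generated finite sequences; it is a numerical illustration only, not part of any proof:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
v = rng.standard_normal(50)

# (u * v)(t) = sum_s u(t - s) v(s); np.convolve returns the full convolution
w = np.convolve(u, v)

l1 = lambda s: np.sum(np.abs(s))
l2 = lambda s: np.sqrt(np.sum(s**2))

# (7.6): Young-type estimate  ||u * v||_2 <= ||u||_1 ||v||_2
young_ok = l2(w) <= l1(u) * l2(v) + 1e-12

# (7.7): pointwise estimate  |(u * v)(t)| <= ||u||_2 ||v||_2  (Cauchy-Schwarz)
pointwise_ok = np.all(np.abs(w) <= l2(u) * l2(v) + 1e-12)
```

Both flags are true for any choice of finite sequences, as the estimates are inequalities valid termwise and in norm.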

Appendix C: Proofs of results
Proof of Lemma 4.2 (a) Since by assumption r(A) < 1, the equality (I − A)^{−1} = Σ_{k=0}^∞ A^k holds (and the Neumann series converges absolutely) and thus, as A, B and C are nonnegative, so is G(1) = C(I − A)^{−1}B. (b) A useful ingredient in the following proof is that, with A = A_1 + BF, since B, F ≥ 0, it follows that A_1 ≤ A and so r(A_1) ≤ r(A) < 1 [by, for example, Berman and Plemmons (1994, p. 27)]. (8.1) Let u^+ ∈ R^m_+ be such that r = G_{C,A_1,B}(1)u^+ and consider the state-feedback input u = −Fx + u^+, (8.2) which, when inserted into (2.2), gives rise to the closed-loop system x(t + 1) = A_1x(t) + Bu^+. (8.3) Note that, as A_1, B, u^+ ≥ 0, it follows from (8.3) that x(t) ≥ 0 for each t ∈ N_0. Invoking (8.1) yields that x is convergent, with limit x^∞ = (I − A_1)^{−1}Bu^+. Consequently, the input u given by (8.2) is also convergent, as is the output y, which therefore satisfies y(t) → Cx^∞ = G_{C,A_1,B}(1)u^+ = r, whence r ∈ R^p_+ is trackable with positive state.
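The Neumann series identity used in part (a) is easily checked numerically: for a nonnegative A with r(A) < 1, the partial sums of Σ_{k≥0} A^k converge to (I − A)^{−1}, which is therefore nonnegative. A minimal check with an arbitrary illustrative matrix:

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # nonnegative, with spectral radius 0.5 < 1

assert max(abs(np.linalg.eigvals(A))) < 1

# Partial sums of the Neumann series sum_{k >= 0} A^k
S = np.zeros_like(A)
P = np.eye(2)
for _ in range(200):
    S += P          # add the current power A^k
    P = P @ A       # advance to A^{k+1}

inv = np.linalg.inv(np.eye(2) - A)
```

Each partial sum is nonnegative, so the limit (I − A)^{−1} inherits nonnegativity, which is exactly the property exploited in the proof.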
Proof of Proposition 4.3 It suffices to prove that the set of trackable outputs of (A, B, C) with positive state is contained in G_{C,A_1,B}(1)R^m_+, as the converse inclusion was established in Lemma 4.2 (b). Assume that r ∈ R^p_+ is trackable with positive state, so that there exists a convergent input u (not necessarily nonnegative) such that the state x(t) is nonnegative for each t ∈ N_0 and, furthermore, the output converges to r. Thus (8.5) holds for each t ∈ N_0, so that, by assumption (H), u(t) + Fx(t) ≥ 0. Furthermore, as u, and thus x, are convergent, so is u + Fx, and it follows that r = G_{C,A_1,B}(1) lim_{t→∞}(u(t) + Fx(t)) ∈ G_{C,A_1,B}(1)R^m_+. Proof of Lemma 4.4 (a) It is well-known [see, for example, Hodgson et al. (2006, Theorem 3.3)] that the relevant spectral equivalences hold. As r(A_1 + BF) = r(A) < 1, these equivalences yield that 0 ∉ σ(I − G_{F,A_1,B}(1)), proving the claim. (b) A calculation using the Sherman–Morrison–Woodbury formula [see, for example, Hager (1989)] gives (8.6), where we have used part (a) for the existence of the inverse of I − G_{F,A_1,B}(1). Claim (b) follows from (8.6) once we note that the Neumann series appearing in (8.6) is nonnegative.
Proof of Theorem 4.6 The choice of r in (4.5) ensures that there exists v ∈ R^m such that (8.7) holds; we note that v need not be unique. From its definition, the saturation function has the idempotent property that sat(sat(w)) = sat(w), ∀w ∈ R^m. (8.8) We define the shifted function s̃at, which from (8.8) satisfies s̃at(0) = 0. Introduce the shifted co-ordinates x̃ and x̃_c as in (8.10a) and (8.10b), where x^∞ is as in Theorem 4.6 (b). For notational convenience, we introduce the (so-called deadzone) nonlinearity Ψ which, it is routine to verify, satisfies the linear estimate (8.12). An elementary sequence of calculations shows that x̃ and x̃_c have dynamics which may be rewritten as (8.15). Our aim is to demonstrate that our choice of E = gKG(1) ∈ R^{p×m} in (4.4) ensures that zero is a globally asymptotically stable equilibrium of (8.15) for all sufficiently small, but positive, g. To that end, as in Theorem 3.1 (see Logemann and Townley 1997, Theorem 2.5, Remark 2.7), assumptions (A1) and (A2) (particularly the choice of K) imply that there exists ĝ > 0 such that r(Ã) < 1, ∀g ∈ (0, ĝ), (8.16) where Ã denotes the closed-loop state matrix of (8.15). For such g ∈ (0, ĝ), we consider the transfer function G_g of the triple (Ã, B̃, C̃). By using blockwise inversion and substituting our choice of E = gKG(1), it follows that G_g reduces to the expression in
(8.17). We seek to establish the following claim: there exist g* ∈ (0, ĝ) and ρ ∈ (0, 1) such that (8.18) holds for all g ∈ (0, g*). In what follows we let T denote the complex unit circle T = {z ∈ C : |z| = 1}. We note that for every g ∈ (0, ĝ) and z ∈ T, zI − Ã is invertible (as r(Ã) < 1), and hence by, for example, Zhang (2005, Theorem 1.2), it follows that the associated Schur complement is invertible as well. For z ∈ T we define the operator T(z). Fix ρ ∈ (0, 1) and consider the chain of equivalences culminating in (8.20), where superscript ∗ denotes the Hilbert space adjoint operator, and where we have used in the last equivalence that a bounded operator on a Hilbert space has the same operator norm as its adjoint. Expressing the right hand side of (8.20) in terms of the inner-product ⟨·, ·⟩ on C^m and using the decomposition (8.19) yields (8.21), where we have set v = (T(z))^{−∗}u, and noted that, as T(z) is bijective, v ∈ C^m ranges across all of C^m as u ∈ C^m does. We seek to establish that the right hand side of (8.21) holds which, written out in full, claims that there exist g* > 0 and ρ ∈ (0, 1) such that (8.22) holds for all g ∈ (0, g*), where, for w ∈ C, w̄ denotes its complex conjugate. The arguments that follow are based on those used in the proof of Logemann and Townley (1997, Theorem 2.5), although adapted for our purposes. Seeking a contradiction, suppose that the above claim is false. Then there exist sequences (g_n)_{n∈N} ⊆ (0, ∞), (z_n)_{n∈N} ⊆ T and (v_n)_{n∈N} ⊆ C^m such that g_n ↓ 0 as n → ∞, but

‖g_n(KG(z_n) − KG(1))^∗ v_n‖ > (1 − 1/n) ‖[(z_n − 1)I + g_n(KG(z_n))^∗] v_n‖, ∀n ∈ N. (8.23)

The inequality (8.23) necessitates that v_n ≠ 0 for each n ∈ N and so (by multiplying both sides of (8.23) by a positive constant if necessary) we may assume that ‖v_n‖ = 1, n ∈ N. (8.24) Arguing similarly, inequality (8.23) also necessitates that z_n ≠ 1 for each n ∈ N and, as z_n ∈ T, it follows that Re z_n < 1, ∀n ∈ N.
(8.25) Since the sequences (z_n)_{n∈N} ⊆ T and (v_n)_{n∈N} ⊆ C^m are both bounded, we may pass to a subsequence (not relabelled) along which both converge. We denote the limits of these sequences by z_∞ ∈ T and v_∞ ∈ C^m, respectively, and note that (z̄_n)_{n∈N} has limit z̄_∞. The equalities (8.24) imply that ‖v_∞‖ = 1 and so v_∞ ≠ 0.
Since (KG(z_n))_{n∈N} is bounded, taking the limit n → ∞ in (8.23) yields that z_∞ = 1 = z̄_∞. Dividing both sides of (8.23) by g_n > 0, we obtain, for n ∈ N, n ≥ 2, the estimate (8.26) for some constant Γ > 0. Therefore, the sequence ((z_n − 1)/g_n)_{n∈N} is bounded and hence has a convergent subsequence, which we pass to without relabelling, and we denote the limit by l_∞. Taking the limit n → ∞ in (8.26) yields (8.28). In particular, as v_∞ ≠ 0, we conclude from (8.28) that −l_∞ ∈ σ((KG(1))^∗) ⊆ C^+_0, since σ((KG(1))^∗) consists of the complex conjugates of the eigenvalues of KG(1), all of which have positive real part. Therefore, Re(−l_∞) ≥ 2α > 0 for some α > 0 and hence there exists N ∈ N such that Re((1 − z_n)/g_n) ≥ α > 0, n ∈ N, n ≥ N.
(8.29) Define ẑ_n = 1 + i Im z_n for n ∈ N and, using (8.25) and |z_n| = 1, we compute that (z_n − ẑ_n)/(1 − z_n) → 0 as n → ∞. (8.30) Now, for each n ∈ N, (1 − ẑ_n)/g_n = (1 − z_n)/g_n + ((z_n − ẑ_n)/(1 − z_n)) · ((1 − z_n)/g_n), and thus Re((1 − ẑ_n)/g_n) ≥ α/2 > 0 for all sufficiently large n, which is a contradiction, since by construction Re((1 − ẑ_n)/g_n) = 0, n ∈ N.
Invoking the estimate (8.18), we rearrange (8.33) to obtain (8.34). The bound (8.34) holds for every T ∈ N, and we hence conclude that x̃_c = C̃ξ ∈ ℓ^2, and thus claim (a) holds. From (8.15) it follows that x̃ has dynamics driven by an ℓ^2 signal whence, as r(A) < 1 and by (8.35), x̃(t) → 0 as t → ∞, proving parts (b) and (c).

Proof of Corollary 4.7
We only prove the result for (PI1aw), as the proof for (PI2aw) is very similar and is omitted. With F_1 chosen as in the statement of the result, it follows immediately from inspection of (PI1aw) that (PI1aw) specified by (A, B, C) is in fact an instance of (Iaw) specified by (A_1, B, C). By assumption, A_1 := A − BF_1 ∈ R^{n×n}_+, so that (A_1, B, C) satisfies (4.1), assumptions (A1) and (A2) hold for (A_1, B, C), and thus the result follows by applying Theorem 4.6 to (Iaw) specified by (A_1, B, C).
Proof of Proposition 4.9 The proof is based on that of Theorem 4.6. Introducing the shifted co-ordinates x̃ and x̃_c as in (8.10a) and (8.10b), respectively, the disturbed feedback system can be written as (8.39). The solution ξ of (8.39) can be expressed by the variation of parameters formula (8.40). From the proof of Theorem 4.6 there exists g* > 0 such that (8.41) holds for all g ∈ (0, g*), and applying C̃ to (8.42) produces (8.43). Introduce the sequences h_μ, a_μ, s(μ, ξ) and D_μ with respective terms as defined there. The property r(μÃ) < 1 from (8.41) implies that a_μ, h_μ ∈ ℓ^1 ∩ ℓ^2. We estimate (8.43) in a similar manner to (8.33), yielding (8.44), where c_1 = ‖a_μ‖_{ℓ^2} and c_2 = ‖h_μ‖_{ℓ^1}, and where we have used the convolution estimate (7.6), the linear estimate (8.12) and the definition of η. Rearranging (8.44) gives (8.45). Taking norms in (8.42) gives (8.46). The second and third terms on the right hand side of (8.46) are convolutions, which we bound from above using (7.7) to give (8.47), where c_5 := ‖a_μ‖_{ℓ^∞}. As r(μÃ) < 1 and B̃ is bounded, there exist constants c_6 and c_7 such that (8.48) holds, where we have inserted (8.45). Substituting into (8.47) produces η(t) ≤ c_8 η(0) + c_9c_{10} μ^t ‖D‖_{ℓ^∞(0,t−1)}, t ∈ N.
To acquire the estimate for ‖y(t) − r‖, we repeat the above calculation from (8.46), but now estimate y(t) − r = [C 0]ξ(t) instead of ξ(t). The proof is the same as above, as [C 0] is bounded. In summary, we have established (4.8) with γ = 1/μ ∈ (0, 1) and with constants M_0, M_1 and M_2 relabelled appropriately from the c_j constants.
Proof of Corollary 4.10 The proof borrows heavily from the proof of (4.8) in Proposition 4.9. For given f^∞_1 and d^∞_2, let r and u^+ be as in (4.9), and note that r = G(1)v for some v ∈ R^m_+ (not necessarily unique). Define the shifted co-ordinates x̃ and x̃_c. An elementary sequence of calculations shows that x̃ and x̃_c have dynamics given by (8.49), t ∈ N_0, which can be written as (8.39) with the terms in (8.50), t ∈ N_0. In deriving Proposition 4.9 we established the existence of a g* > 0 such that for all g ∈ (0, g*) and all initial states (x^0, x_c^0) ∈ R^n_+ × R^m_+, there exist μ > 1 and constants C_1, C_2 > 0 such that the solution ξ = (x − w^∞, x_c − sat(v)) of (8.39) satisfies the estimate ‖ξ(t)‖ ≤ C_1 μ^{−t} ‖ξ(0)‖ + C_2 ‖D‖_{ℓ^∞(0,t−1)}, t ∈ N. (8.51) Since D in (8.50) is convergent, it is bounded and hence the inequality (8.51) implies that ξ is bounded. A straightforward time-invariance argument yields that, for every T ∈ N_0, ‖ξ(t + T)‖ ≤ C_1 μ^{−t} ‖ξ(T)‖ + C_2 ‖D‖_{ℓ^∞(T,T+t−1)}, t ∈ N. The result now follows by multiplying (8.53) by C = Ĉ + ΔC and B = B̂ + ΔB on the left and right hand sides, respectively, expanding and collecting terms as suggested.
Proof of Lemma 4.12 The proof makes use of the complex stability radius developed by Hinrichsen and Pritchard (1986a, b). For given Q ∈ C^{m×m} with σ(Q) ⊆ C^+_0, clearly σ(−Q) ⊆ C^−_0, with C^−_0 the open left-half complex plane. For assumption (A2) we require that −KG(1) = −KĜ(1) − KΔG(1) = −Q − KΔG(1) also has spectrum contained in C^−_0. Viewing −KΔG(1) as a structured perturbation of −Q, Hinrichsen and Pritchard (1986b, Proposition 2.1) yields a sufficient condition which, noting that K = QĜ(1)^{−1}, is (4.12). In the case that Q = I we obtain the second ingredient in establishing (4.13). Here we have used that σ(−G(1)K) ⊆ C^−_0 if, and only if, σ(−KG(1)) ⊆ C^−_0, which follows from the fact that the non-zero eigenvalues of −KG(1) are precisely equal to those of −G(1)K. That (4.14) is sufficient for (4.13) follows immediately from the submultiplicativity of the matrix 2-norm. Proof of Lemma 4.13 If (a) holds then we simply estimate as required. We claim that (b) implies that X + X^∗ ≻ 0, where P ≻ 0 or 0 ≺ P both denote positive definiteness of P (as opposed to the usual P ≥ 0 or 0 ≤ P, which in this manuscript denote componentwise nonnegativity of P). Denote Y = (I − X)(I + X)^{−1}, and note that (b) implies that ‖Y‖_2 ≤ 1 or, in other words, I − Y^∗Y ⪰ 0. From here a computation establishes the claim. The inequality ‖X‖ ≤ 1 implies that ‖X^∗‖ ≤ 1 and thus, letting H denote the Hilbert space on which X is defined, the estimates (8.54) and (8.55) hold. We combine (8.54) and (8.55) and estimate for v ∈ H and ρ^2 ∈ (1/2, 1); taking u = (I + X)^∗v ∈ H yields that ⟨X^∗(I + X)^{−∗}u, X^∗(I + X)^{−∗}u⟩ ≤ ρ^2⟨u, u⟩, that is, ‖X^∗(I + X)^{−∗}u‖ ≤ ρ‖u‖. (8.56) Since v ∈ H, and hence u ∈ H, was arbitrary, we conclude from (8.56) that ‖X^∗(I + X)^{−∗}‖ ≤ ρ. The next lemma is a technical result that prepares the proof of Corollary 4.15. The lemma demonstrates that the assumptions of Corollary 4.15 are sufficient for the (unknown) transfer function G_g associated with the feedback system (Iaw) to satisfy (8.57) for all perturbations ΔG that are not too large in norm.
Once (8.57) is established, then the proof of Corollary 4.15 is identical to the latter part of the proof of Theorem 4.6.
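The eigenvalue condition underlying Lemma 4.12 can be illustrated numerically: if σ(−Q) lies in the open left-half plane and the structured perturbation is small enough in norm, the perturbed matrix remains Hurwitz. The sketch below takes Q = I (so K = Ĝ(1)^{−1}) and a random perturbation scaled so that ‖KΔG(1)‖_2 < 1; the matrices are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

Ghat1 = np.array([[2.0, 0.5],
                  [0.3, 1.5]])        # nominal transfer function value (illustrative)
K = np.linalg.inv(Ghat1)             # choice giving Q = K Ghat(1) = I

# Random perturbation DeltaG(1), scaled so that ||K DeltaG(1)||_2 = 0.9 < 1
Delta = rng.standard_normal((2, 2))
Delta *= 0.9 / np.linalg.norm(K @ Delta, 2)

# -K G(1) = -Q - K DeltaG(1) = -(I + K DeltaG(1))
M = -(np.eye(2) + K @ Delta)
eigs = np.linalg.eigvals(M)
```

Every eigenvalue of I + KΔG(1) lies within distance 0.9 of 1, so each eigenvalue of M has real part at most −0.1; the spectrum stays in the open left-half plane, as the stability radius argument guarantees.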
Therefore, Re(−l_∞) ≥ 2α > 0 for some α > 0, and hence there exists N ∈ N such that Re((1 − z_n)/g_n) ≥ α > 0, n ∈ N, n ≥ N. (8.69) However, z_n ∈ T and, if z_n = 1 for any n ∈ N with n ≥ N, then Re((1 − z_n)/g_n) = 0, contradicting the uniformly positive estimate (8.69). Therefore, it suffices to suppose that Re z_n < 1, ∀n ∈ N. (8.70) The proof now finishes identically to that of Theorem 4.6, arguing from the line after (8.29).

Proof of Corollary 4.15
The hypotheses of the corollary ensure that Lemma 6.1 applies and therefore, for g ∈ (0, g*), the estimate (8.57) (or (8.59)) holds. The proof of the corollary is now the same as that of Theorem 4.6, following from the paragraph preceding Eq. (8.32).