The previous chapter assumed that the intrinsic process, P, has a given unvarying form. The actual process may differ from the given form or may fluctuate over time. If a system is designed with respect to a particular form of P, then variation in P away from the assumed form may cause the system to become unstable.

We can take into account the potential variation in P by altering the optimal design problem. The new design problem includes enhanced stability guarantees against certain kinds of variation in P.

Variation in an intrinsic process is an inevitable aspect of design problems. In engineering, the process may differ from the assumed form because of limited information, variability in manufacturing, or fluctuating aspects of the environment.

In biology, a particular set of chemical reactions within an individual may vary stochastically over short time periods. That reaction set may also vary between individuals because of genetic and environmental fluctuations. In all cases, actual processes typically follow nonlinear, time-varying dynamics that often differ from the assumed form.

We may also have variation in the controller or other system processes. In general, how much variability can be tolerated before a stable system becomes unstable? In other words, how robust is a given system’s stability to perturbations?

Fig. 7.1 System uncertainty represented by a feedback loop. The transfer function, \(\varDelta \), describes an upper bound on the extent to which the actual system, \(\tilde{G}=G/(1-G\varDelta )\), deviates from the nominal system, G. Here, G may represent a process, a controller, or an entire feedback system

We cannot answer those questions for all types of systems and all types of perturbations. However, the \(\mathcal {H}_{\infty }\) norm introduced earlier provides insight for many problems. Recall that the \(\mathcal {H}_{\infty }\) norm is the peak gain in a Bode plot, which is a transfer function’s maximum gain over all frequencies of sinusoidal inputs. The small gain theorem provides an example application of the \(\mathcal {H}_{\infty }\) norm.

1 Small Gain Theorem

Suppose we have a stable system transfer function, G. That system may represent a process, a controller, or a complex cascade with various feedback loops. To express the mathematical form of G, we must know exactly the dynamical processes of the system.

How much may the system deviate from our assumptions about its dynamics and still remain stable? For example, if the uncertainties can be expressed by a positive feedback loop, as in Fig. 7.1, then we can analyze whether a particular system, G, is robustly stable against those uncertainties.

In Fig. 7.1, the stable transfer function, \(\varDelta \), may represent the upper bound on our uncertainty. The feedback loop shows how the nominal unperturbed system, G, responds to an input and becomes a new system, \(\tilde{G}\), that accounts for the perturbations. The system, \(\tilde{G}\), represents the entire loop shown in Fig. 7.1.

The small gain theorem states that the new system, \(\tilde{G}\), is stable if the product of the \(\mathcal {H}_{\infty }\) norms of the original system, G, and the perturbations, \(\varDelta \), is less than one

$$\begin{aligned} \vert \vert G \vert \vert _\infty \vert \vert \varDelta \vert \vert _\infty <1. \end{aligned}$$
(7.1)

Here, we interpret G as a given system with a known \(\mathcal {H}_{\infty }\) norm. By contrast, we assume that \(\varDelta \) represents the set of all stable systems that have an \(\mathcal {H}_{\infty }\) norm below some upper bound, \(\vert \vert \varDelta \vert \vert _\infty \). For the perturbed system, \(\tilde{G}\), to be stable, the upper bound for the \(\mathcal {H}_{\infty }\) norm of \(\varDelta \) must satisfy

$$\begin{aligned} \vert \vert \varDelta \vert \vert _\infty < \frac{1}{\vert \vert G \vert \vert _\infty }. \end{aligned}$$
(7.2)

If G is a system that we can design or control, then the smaller we can make \(\vert \vert G \vert \vert _\infty \), the greater the upper bound on uncertainty, \(\vert \vert \varDelta \vert \vert _\infty \), that can be tolerated by the perturbed system. Put another way, smaller \(\vert \vert G \vert \vert _\infty \) corresponds to greater robust stability.
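To make the condition concrete, here is a minimal numerical sketch of the small gain test in Eq. 7.1, assuming a hypothetical nominal system \(G(s)=1/(s+1)\) and a hypothetical uncertainty bound \(\varDelta (s)=0.5/(s+2)\); the \(\mathcal {H}_{\infty }\) norm of each stable transfer function is approximated as the peak gain over a dense frequency grid.

```python
import numpy as np

def hinf_norm(tf, w=np.logspace(-3, 3, 20000)):
    """Approximate the H-infinity norm as the peak of |tf(jw)| over a frequency grid."""
    return np.max(np.abs(tf(1j * w)))

# Hypothetical example systems (not from the text): a nominal system G and
# an upper bound Delta on the uncertainty, both stable.
G = lambda s: 1.0 / (s + 1.0)
Delta = lambda s: 0.5 / (s + 2.0)

product = hinf_norm(G) * hinf_norm(Delta)
print(f"||G||_inf * ||Delta||_inf = {product:.3f}")
print("small gain condition satisfied" if product < 1 else "small gain condition violated")
```

Because both peak gains occur at low frequency in this example (1 and 0.25), the product is 0.25, and Eq. 7.1 guarantees stability of the perturbed loop.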

A full discussion of the small gain theorem can be found in textbooks (e.g., Zhou and Doyle 1998; Liu and Yao 2016). I present a brief intuitive summary.

The positive feedback loop in Fig. 7.1 has transfer function

$$\begin{aligned} \tilde{G}=\frac{G}{1-G\varDelta }. \end{aligned}$$
(7.3)

We derive that result by the following steps. Assume that the input to G is \(w+\nu \), which is the sum of the external input, w, and the feedback input, \(\nu \). Thus, the system output is \(\eta =G(w+\nu )\).

We can write the feedback input as the output of the uncertainty process, \(\nu =\varDelta \eta \). Substituting into the system output expression, we have

$$\begin{aligned} \eta =G(w+\nu )=Gw+G\varDelta \eta . \end{aligned}$$

The new system transfer function is the ratio of its output to its external input, \(\tilde{G}=\eta /w\), which we can solve for to obtain Eq. 7.3.
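Written out, the algebra behind that step collects the \(\eta \) terms on one side

$$\begin{aligned} \eta -G\varDelta \eta =Gw \quad \Rightarrow \quad (1-G\varDelta )\,\eta =Gw \quad \Rightarrow \quad \tilde{G}=\frac{\eta }{w}=\frac{G}{1-G\varDelta }. \end{aligned}$$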

The new system, \(\tilde{G}\), is unstable if any eigenvalue has real part greater than or equal to zero, in which the eigenvalues are the values of s that are roots of the denominator, \(1-G(s)\varDelta (s)=0\).

Intuitively, we can see that \(\tilde{G}(s)\) blows up unstably if the denominator becomes zero at some input frequency, \(\omega \), for \(s=j\omega \). The denominator cannot reach zero as long as the product of the maximum magnitudes of \(G(j\omega )\) and \(\varDelta (j\omega )\) is less than one, as in Eq. 7.1. That condition expresses the key idea. The mathematical presentations in the textbooks show that Eq. 7.1 is necessary and sufficient for stability.

Reducing the \(\mathcal {H}_{\infty }\) norm of G increases its robustness with respect to stability. In Eq. 7.2, a smaller \(\vert \vert G \vert \vert _\infty \) corresponds to a larger upper bound on the perturbations that can be tolerated.

A lower maximum gain is also associated with a smaller response to perturbations, improving the robust performance of the system with respect to disturbances and noise. Thus, robust design methods often consider reduction of the \(\mathcal {H}_{\infty }\) norm.

2 Uncertainty: Distance Between Systems

Suppose we assume a nominal form for a process, P. We can design a controller, C, in a feedback loop to improve system stability and performance. If we design our controller for the process, P, then how robust is the feedback system to alternative forms of P?

The real process, \(P'\), may differ from P because of inherent stochasticity or because our simple model for P misspecifies the true underlying process.

What is the appropriate set of alternative forms to describe uncertainty with respect to P? Suppose we defined a distance between P and an alternative process, \(P'\). Then a set of alternatives could be specified as all processes, \(P'\), for which the distance from the nominal process, P, is less than some upper bound.

We will write the distance between two processes when measured at input frequency \(\omega \) as

$$\begin{aligned} \delta [P(j\omega ), P'(j\omega )] = \hbox {distance at frequency}\, \omega , \end{aligned}$$
(7.4)

for which \(\delta \) is defined below. The maximum distance between processes over all frequencies is

$$\begin{aligned} \delta _\nu (P,P') = \max _{\omega }\,\delta [P(j\omega ), P'(j\omega )], \end{aligned}$$
(7.5)

subject to conditions that define whether P and \(P'\) are comparable (Vinnicombe 2001; Qiu and Zhou 2013). This distance has values \(0\le \delta _\nu \le 1\), providing a standardized measure of separation.

To develop measures of distance, we focus on how perturbations may alter system stability. Suppose we start with a process, P, and controller, C, in a feedback system. How far can an alternative process, \(P'\), be from P and still maintain stability in the feedback loop with C? In other words, what is the stability margin of safety for a feedback system with P and C?

Robust control theory provides an extensive analysis of the distances between systems with respect to stability margins (Vinnicombe 2001; Zhou and Doyle 1998; Qiu and Zhou 2010, 2013). Here, I present a rough intuitive description of the key ideas.

For a negative feedback loop with P and C, the various input–output pathways all have transfer functions with denominator \(1+PC\), as in Eq. 6.1. These systems become unstable when the denominator goes to zero, which happens if \(P=-1/C\). Thus, the stability margin is the distance between P and \(-1/C\).

The values of these transfer functions, \(P(j\omega )\) and \(C(j\omega )\), vary with frequency, \(\omega \). The worst case with regard to stability occurs when P and \(-1/C\) are closest; that is, when the distance between these functions is a minimum with respect to varying frequency. Thus, we may define the stability margin as the minimum distance over frequency

$$\begin{aligned} b_{P, C}=\min _{\omega }\,\delta [P(j\omega ),-1/C(j\omega )]. \end{aligned}$$
(7.6)

Here is the key idea. Start with a nominal process, \(P_1\), and a controller, C. If an alternative or perturbed process, \(P_2\), is close to \(P_1\), then the stability margin for \(P_2\) should not be much worse than for \(P_1\).

In other words, a controller that stabilizes \(P_1\) should also stabilize all processes that are reasonably close to \(P_1\). Thus, by designing a good stability margin for \(P_1\), we guarantee robust stabilization for all processes sufficiently near \(P_1\).

We can express these ideas quantitatively, allowing the potential to design for a targeted level of robustness. For example,

$$\begin{aligned} b_{P_2,C}\ge b_{P_1,C}-\delta _\nu (P_1,P_2). \end{aligned}$$

Read this as: the guaranteed stability margin for the alternative process is at least as good as the stability margin for the nominal process minus the distance between the nominal and alternative processes. A small distance between processes, \(\delta _\nu \), guarantees that the alternative process is nearly as robustly stable as the original process.
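With illustrative numbers, if the nominal margin is \(b_{P_1,C}=0.4\) and the distance between processes is \(\delta _\nu (P_1,P_2)=0.1\), then the alternative process is guaranteed a stability margin of at least \(b_{P_2,C}\ge 0.3\).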

The definitions in this section depend on the distance measure, expressed as

$$\begin{aligned} \delta (c_1,c_2)=\frac{ \vert c_1-c_2 \vert }{\sqrt{1+|c_1|^2}\sqrt{1+|c_2|^2}}. \end{aligned}$$

Here, \(c_1\) and \(c_2\) are complex numbers. Transfer functions return complex numbers. Thus, we can use this function to evaluate \(\delta [P_1(j\omega ), P_2(j\omega )]\).
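As a minimal numerical sketch of Eqs. 7.4–7.6, the distance \(\delta \), its maximum over frequencies \(\delta _\nu \), and the stability margin \(b_{P, C}\) can all be approximated on a dense frequency grid. The example process, perturbed process, and controller below are hypothetical choices for illustration, and the additional comparability conditions behind \(\delta _\nu \) are not checked.

```python
import numpy as np

W = np.logspace(-3, 3, 20000)  # frequency grid (rad/s)

def delta(c1, c2):
    """Pointwise distance between two complex values (Eq. 7.4)."""
    return np.abs(c1 - c2) / (np.sqrt(1 + np.abs(c1)**2) * np.sqrt(1 + np.abs(c2)**2))

def delta_nu(P1, P2, w=W):
    """Maximum distance over frequencies (Eq. 7.5), comparability assumed."""
    s = 1j * w
    return np.max(delta(P1(s), P2(s)))

def b_margin(P, C, w=W):
    """Stability margin b_{P,C} (Eq. 7.6): minimum distance between P and -1/C."""
    s = 1j * w
    return np.min(delta(P(s), -1.0 / C(s)))

# Hypothetical example: a nominal process, a mildly perturbed process,
# and a unit-gain controller.
P1 = lambda s: 1.0 / (s + 1.0)
P2 = lambda s: 1.0 / (s + 1.2)
C = lambda s: np.ones_like(s)

print(f"delta_nu(P1, P2) = {delta_nu(P1, P2):.3f}")
print(f"b_(P1,C)         = {b_margin(P1, C):.3f}")
print(f"b_(P2,C)         = {b_margin(P2, C):.3f}")
```

The printed margins illustrate the bound above: the margin for \(P_2\) cannot fall below the margin for \(P_1\) by more than \(\delta _\nu (P_1,P_2)\).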

3 Robust Stability and Robust Performance

The stability margin \(b_{P, C}\) measures the amount by which P may be altered and still allow the system to remain stable. Note that \(b_{P, C}\) in Eq. 7.6 expresses a minimum value of \(\delta \) over all frequencies. Thus, we may also think of \(b_{P, C}\) as the inverse of the maximum value of \(1/\delta \) over all frequencies.

The maximum value of magnitude over all frequencies matches the definition of the \(\mathcal {H}_{\infty }\) norm, suggesting that maximizing the stability margin corresponds to minimizing some expression for an \(\mathcal {H}_{\infty }\) norm. Indeed, there is such an \(\mathcal {H}_{\infty }\) norm expression for \(b_{P, C}\). However, the particular form is beyond our scope. The point here is that robust stability via maximization of \(b_{P, C}\) falls within the \(\mathcal {H}_{\infty }\) norm theory, as in the small gain theorem.

Stability is just one aspect of design. Typically, a stable system must also meet other objectives, such as rejection of disturbance and noise perturbations. This section shows that increasing the stability margin has the associated benefit of improving a system’s rejection of disturbance and noise. Often, a design that targets reduction of the \(\mathcal {H}_{\infty }\) norm gains the benefits of an increased stability margin and better regulation through rejection of disturbance and noise.

The previous section on regulation showed that a feedback loop reduces its response to perturbations by lowering its various sensitivities, as in Eqs. 6.2 and 6.5. A feedback loop’s sensitivity is \(S=1/(1+PC)\) and its complementary sensitivity is \(T=PC/(1+PC)\).

Increasing the stability margin, \(b_{P, C}\), reduces a system’s overall sensitivity. We can see the relation between stability and sensitivity by rewriting the expression for \(b_{P, C}\) as

$$\begin{aligned} b_{P, C}=\Bigg [\max _{\omega }\,\sqrt{|S|^2+|CS|^2+|PS|^2+|T|^2}\Bigg ]^{-1}. \end{aligned}$$

This expression shows that increasing \(b_{P, C}\) reduces the total magnitude of the four key sensitivity measures for negative feedback loops.
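To see the connection with Eq. 7.6, note that all four sensitivities share the factor \(S=1/(1+PC)\), so for this single loop

$$\begin{aligned} |S|^2+|CS|^2+|PS|^2+|T|^2 = \frac{(1+|C|^2)(1+|P|^2)}{|1+PC|^2} = \frac{1}{\delta [P,-1/C]^2}, \end{aligned}$$

and the bracketed maximum of \(1/\delta \) over frequencies is the inverse of the minimum of \(\delta \), which recovers \(b_{P, C}\) in Eq. 7.6.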

Fig. 7.2 Comparison between the responses of two systems to a unit step input, \(r=1\). The blue curves show \(P_1\) and the gold curves show \(P_2\). a, b Systems in Eq. 7.7, with \(k=100\) and \(T=0.025\). The top plot shows the open-loop response for each system. The bottom plot shows the closed-loop feedback response with unit feedback, \(P/(1+P)\), in which the error signal into the system, P, is \(1-y\) for system output, y. c, d Open (top) and closed (bottom) loop responses for the systems in Eq. 7.8, with \(k=100\). Redrawn from Fig. 12.3 of Åström and Murray (2008), Princeton University Press

4 Examples of Distance and Stability

The measure, \(\delta _\nu (P_1,P_2)\), describes the distance between processes with respect to their response characteristics in a negative feedback loop. The idea is that \(P_1\) and \(P_2\) may have different response characteristics when by themselves in an open loop, yet have very similar responses in a feedback loop. Or \(P_1\) and \(P_2\) may have similar response characteristics when by themselves, yet have very different responses in a feedback loop.

Thus, we cannot simply use the response characteristics among a set of alternative systems to understand how variations in a process influence stability or performance. Instead, we must use a measure, such as \(\delta _\nu \), that expresses how variations in a process affect feedback loop characteristics.

This section presents two examples from Sect. 12.1 of Åström and Murray (2008). In the first case, the following two systems have very similar response characteristics by themselves in an open loop, yet have very different responses in a closed feedback loop

$$\begin{aligned} P_1=\frac{k}{s+1}, \qquad P_2=\frac{k}{(s+1)(Ts+1)^2}, \end{aligned}$$
(7.7)

when evaluated at \(k=100\) and \(T=0.025\), as shown in Fig. 7.2a, b. The distance between these systems is \(\delta _\nu (P_1,P_2)=0.89\). That large distance corresponds to the very different response characteristics of the two systems when in a closed feedback loop. (Åström and Murray (2008) report a value of \(\delta _\nu =0.98\). The reason for the discrepancy is not clear. See the supplemental Mathematica code for my calculations, derivations, and graphics here and throughout the book.)

In the second case, the following two systems have very different response characteristics by themselves in an open loop, yet have very similar responses in a closed feedback loop

$$\begin{aligned} P_1=\frac{k}{s+1}, \qquad P_2=\frac{k}{s-1}, \end{aligned}$$
(7.8)

when evaluated at \(k=100\), as shown in Fig. 7.2c, d. The distance between these systems is \(\delta _\nu (P_1,P_2)=0.02\). That small distance corresponds to the very similar response characteristics of the two systems when in a closed feedback loop.
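These two distances are straightforward to check numerically. Below is a minimal sketch, assuming the pointwise maximum of the distance measure from the previous section over a dense frequency grid is an adequate stand-in for \(\delta _\nu \) (the comparability conditions are not checked); the computed values should land near the 0.89 and 0.02 reported above.

```python
import numpy as np

def delta(c1, c2):
    """Pointwise distance between two complex values (Eq. 7.4)."""
    return np.abs(c1 - c2) / (np.sqrt(1 + np.abs(c1)**2) * np.sqrt(1 + np.abs(c2)**2))

def delta_nu(P1, P2, w=np.logspace(-3, 4, 40000)):
    """Maximum pointwise distance over a frequency grid (Eq. 7.5)."""
    s = 1j * w
    return np.max(delta(P1(s), P2(s)))

k, T = 100.0, 0.025

# Eq. 7.7 pair: similar open-loop step responses, very different closed-loop responses.
P1a = lambda s: k / (s + 1)
P2a = lambda s: k / ((s + 1) * (T * s + 1)**2)
print(f"Eq. 7.7 pair: delta_nu = {delta_nu(P1a, P2a):.2f}")

# Eq. 7.8 pair: different open-loop responses (P2 is unstable), similar closed-loop responses.
P1b = lambda s: k / (s + 1)
P2b = lambda s: k / (s - 1)
print(f"Eq. 7.8 pair: delta_nu = {delta_nu(P1b, P2b):.2f}")
```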

5 Controller Design for Robust Stabilization

The measure \(b_{P, C}\) describes the stability margin for a feedback loop with process P and controller C. A larger margin means that the system remains robustly stable to variant processes, \(P'\), with greater distance from the nominal process, P. In other words, a larger margin corresponds to robust stability against a broader range of uncertainty.

For a given process, we can often calculate the controller that provides the greatest stability margin. That optimal controller minimizes an \(\mathcal {H}_{\infty }\) norm, so in this case we may consider controller design to be an \(\mathcal {H}_{\infty }\) optimization method.

Often, we also wish to keep the \(\mathcal {H}_2\) norm small. Minimizing that norm improves a system’s regulation by reducing response to perturbations. Jointly optimizing the stability margin and rejection of disturbances leads to mixed \(\mathcal {H}_{\infty }\) and \(\mathcal {H}_2\) design.

Mixed \(\mathcal {H}_{\infty }\) and \(\mathcal {H}_2\) optimization is an active area of research (Chen and Zhou 2001; Chang 2017). Here, I briefly summarize an example presented in Qiu and Zhou (2013). That article provides an algorithm for mixed optimization that can be applied to other systems.

Qiu and Zhou (2013) start with the process, \(P=1/s^2\). They consider three cases. First, what controller provides the minimum \(\mathcal {H}_{\infty }\) norm and associated maximum stability margin, b, while ignoring the \(\mathcal {H}_2\) norm? Second, what controller provides the minimum \(\mathcal {H}_2\) norm, while ignoring the stability margin and \(\mathcal {H}_{\infty }\) norm? Third, what controller optimizes a combination of the \(\mathcal {H}_{\infty }\) and \(\mathcal {H}_2\) norms?

For the first case, the controller

$$\begin{aligned} C(s)=\frac{ \Big (1+\sqrt{2}\Big )s+1}{s+1+\sqrt{2}} \end{aligned}$$

has the maximum stability margin

$$\begin{aligned} b^*_{P, C}=\Big (4+2\sqrt{2}\Big )^{-1/2}=0.38. \end{aligned}$$

The cost associated with the \(\mathcal {H}_2\) norm from Eq. 6.5 is \(\mathcal {J}=\infty \), because the sensitivity function CS has nonzero gain at infinite frequency.

For the second case, the controller

$$\begin{aligned} C(s)=\frac{2\sqrt{2}s+1}{s^2+2\sqrt{2}s+4} \end{aligned}$$

has the minimum \(\mathcal {H}_2\) cost, \(\mathcal {J}^*=6\sqrt{2}=8.49\), with associated stability margin \(b_{P, C}=0.24\). This controller and associated cost match the earlier example of \(\mathcal {H}_2\) norm minimization in Eq. 6.7 with \(\mu =1\).

For the third case, we constrain the minimum stability margin to be at least \(b_{P, C}>1/\sqrt{10}=0.316\), and then find the controller that minimizes the \(\mathcal {H}_2\) norm cost subject to the minimum stability margin constraint, yielding the controller

$$\begin{aligned} C(s)=\frac{2.5456s+1}{0.28s^2+1.5274s+2.88}, \end{aligned}$$

which has the cost \(\mathcal {J}=13.9\) and stability margin \(b_{P, C}=0.327\).
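The three margins can be checked numerically with the same machinery. Below is a minimal sketch, assuming \(b_{P, C}\) can be approximated by the minimum of the distance measure between \(P(j\omega )\) and \(-1/C(j\omega )\) over a dense frequency grid, as in Eq. 7.6; the results should land near the values 0.38, 0.24, and 0.327 quoted above.

```python
import numpy as np

def delta(c1, c2):
    """Pointwise distance between two complex values (Eq. 7.4)."""
    return np.abs(c1 - c2) / (np.sqrt(1 + np.abs(c1)**2) * np.sqrt(1 + np.abs(c2)**2))

def b_margin(P, C, w=np.logspace(-4, 4, 80000)):
    """Stability margin b_{P,C} (Eq. 7.6), approximated on a frequency grid."""
    s = 1j * w
    return np.min(delta(P(s), -1.0 / C(s)))

P = lambda s: 1.0 / s**2
r2 = np.sqrt(2.0)

C1 = lambda s: ((1 + r2) * s + 1) / (s + 1 + r2)                     # case 1: max stability margin
C2 = lambda s: (2 * r2 * s + 1) / (s**2 + 2 * r2 * s + 4)            # case 2: min H2 cost
C3 = lambda s: (2.5456 * s + 1) / (0.28 * s**2 + 1.5274 * s + 2.88)  # case 3: mixed design

for name, C in [("case 1 (H-infinity)", C1), ("case 2 (H2)", C2), ("case 3 (mixed)", C3)]:
    print(f"{name}: b_PC ~= {b_margin(P, C):.3f}")
```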

In these examples, a larger stability margin corresponds to a greater \(\mathcal {H}_2\) cost. That relation illustrates the tradeoff between robust stability and performance measured by the rejection of disturbance and noise perturbations.