# Stability and Performance of Complex Systems Affected by Parametric Uncertainty

**DOI:** https://doi.org/10.1007/978-1-4471-5102-9_137-1

## Abstract

Uncertainty is an inherent feature of all real-life complex systems. It can be described in different forms; we focus on the parametric description. The simplest results on stability of linear systems under parametric uncertainty are the Kharitonov theorem, edge theorem, and graphical tests. More advanced results include sufficient conditions for robust stability with matrix uncertainty, LMI tools, and randomized methods. Similar approaches are used for robust control synthesis, where performance issues are crucial.

## Keywords

Linear systems · Parametric uncertainty and robustness · Robust stability · Kharitonov theorem · Tsypkin–Polyak plot · Edge theorem · Matrix uncertainty · Randomized methods · Quadratic stability · Robust and optimal design

## Introduction

Mathematical models for systems and control are often unsatisfactory due to the incompleteness of the parameter data. For instance, the ideas of off-line optimal control can only be applied to real systems if all the parameters, exogenous perturbations, state equations, etc. are known precisely. Moreover, feedback control also requires detailed information which is not available in most cases. For example, to drive a car with four-wheel control, the controller should be aware of the total weight, location of the center of gravity, weather conditions, and highway properties, as well as many other data which may not be known. In that respect, even such a relatively simple real-life system can be considered a *complex* one; in such circumstances, control under uncertainty is a highly important issue.

The focus in this article is on *parametric uncertainty*; other types of uncertainty can be treated in more general models of robustness. This topic became particularly popular in the control community in the mid-to-late 1980s; by and large, the results of this activity have been summarized in the monographs (Ackermann 1993; Barmish 1994; Bhattacharyya et al. 1995).

We start with problems of stability of polynomials with uncertain parameters and present the simplest robust stability results for this case together with the most important machinery. Next, we consider stability analysis for the matrix uncertainty; most of the results are just sufficient conditions. We present some useful tools for the analysis, such as the LMI technique and randomized methods. Robust control under parametric uncertainty is the next step; we briefly discuss several problem formulations for this case.

## Stability of Linear Systems Subject to Parametric Uncertainty

Consider the linear time-invariant system

\(\dot{x} = Ax,\quad x(0) = x_{0},\)   (1)

where \(x(t) \in {\mathbb{R}}^{n}\) is the state vector, *x*_{0} is an arbitrary finite initial condition, and \(A \in {\mathbb{R}}^{n\times n}\) is the state matrix. The system is stable (i.e., no matter what *x*_{0} is, the solutions tend to zero as *t* → *∞*) if and only if all eigenvalues *λ*_{ i } of the matrix *A* have negative real parts:

\(\mathrm{Re}\,\lambda _{i}(A) < 0,\quad i = 1,\ldots ,n.\)   (2)

In that case, *A* is said to be a *Hurwitz* matrix. If the matrix is known precisely, checking condition (2) is immediate. For instance, one might compute the characteristic polynomial

\(p(s) =\det (sI - A)\)   (3)

of *A* (here, *I* is the identity matrix) and use any of the stability tests (e.g., the Routh algorithm, Routh–Hurwitz test, and graphical tests such as the Mikhailov plot or Hermite–Biehler theorem); see Gantmacher (2000). Alternatively, the eigenvalues can be computed directly using currently available software, such as Matlab.
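As a quick numerical illustration of condition (2) and the characteristic polynomial (3), the following sketch (using NumPy, with hypothetical example matrices) checks the eigenvalues and recovers the polynomial coefficients:

```python
import numpy as np

def is_hurwitz(A):
    """Condition (2): all eigenvalues of A must have negative real parts."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Illustrative stable matrix: companion form of s^2 + 3s + 2 = (s+1)(s+2)
A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])
# Coefficients of the characteristic polynomial det(sI - A), highest degree first
char_poly = np.poly(A_stable)

# Illustrative unstable matrix: companion form of s^2 + s - 2 = (s+2)(s-1)
A_unstable = np.array([[0.0, 1.0], [2.0, -1.0]])
```

Either route works: test the roots' real parts directly, or feed `char_poly` into a Routh-type tabulation.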

Now assume that our knowledge of the matrix *A* is incomplete; for instance, it can depend on the (real) parameters \(q = (q_{1},\ldots ,q_{m})\) which take arbitrary values within the given intervals:

\(A = A(q),\quad \underline{q}_{i} \leq q_{i} \leq \overline{q}_{i},\quad i = 1,\ldots ,m.\)   (4)

Stability then has to be verified for all admissible values of the parameters; this is the *robust stability problem*, i.e., the goal is to check if condition (2) holds for *all matrices* in the family (4).

The two main components of any robust stability setup are the *feasible set* \(\mathcal{Q}\subset {\mathbb{R}}^{\ell}\), in which the uncertain parameters are allowed to take their values (usually a ball in some norm; e.g., the box as in (4)), and the *uncertainty structure*, which defines the functional dependence of the coefficients on the uncertain parameters. Of most interest are affine and multiaffine dependence; typically, more general situations are hard to handle.

### Simple Solutions

In some cases, the robust stability problem admits a simple solution. Perhaps the most striking example is the so-called Kharitonov theorem (Kharitonov 1978); also see Barmish (1994), where this seminal result is referred to as a *spark* because of its transparency and elegance.

Consider the *interval polynomial family*

\(p(s) = q_{0} + q_{1}s +\ldots +q_{n}{s}^{n},\quad q_{i} \in [\,\underline{q}_{i},\;\overline{q}_{i}\,],\)   (5)

where the coefficients *q*_{ i } are allowed to take values in the respective intervals *independently of each other*, and distinguish the following four elements in this family:

\(p_{1}(s) =\underline{q}_{0} +\underline{q}_{1}s + \overline{q}_{2}{s}^{2} + \overline{q}_{3}{s}^{3} +\underline{q}_{4}{s}^{4} +\ldots\)

\(p_{2}(s) = \overline{q}_{0} + \overline{q}_{1}s +\underline{q}_{2}{s}^{2} +\underline{q}_{3}{s}^{3} + \overline{q}_{4}{s}^{4} +\ldots\)

\(p_{3}(s) =\underline{q}_{0} + \overline{q}_{1}s + \overline{q}_{2}{s}^{2} +\underline{q}_{3}{s}^{3} +\underline{q}_{4}{s}^{4} +\ldots\)

\(p_{4}(s) = \overline{q}_{0} +\underline{q}_{1}s +\underline{q}_{2}{s}^{2} + \overline{q}_{3}{s}^{3} + \overline{q}_{4}{s}^{4} +\ldots\)

**Theorem (Kharitonov 1978).** *The interval family (5) is robustly stable if and only if the four Kharitonov polynomials*, \(p_{1},p_{2},p_{3},\mbox{ and }p_{4}\), *are Hurwitz.*
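A minimal computational sketch of the test (with a hypothetical degree-2 interval family; the coefficient patterns follow the standard Kharitonov construction):

```python
import numpy as np

def kharitonov_polynomials(lo, hi):
    """Build the four Kharitonov polynomials of the interval family (5).
    lo[i], hi[i] bound the coefficient q_i of s^i, i = 0..n.
    Bound patterns, starting from q_0 and repeating with period 4:
    low-low-high-high, high-high-low-low, low-high-high-low, high-low-low-high."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    return [[hi[i] if pat[i % 4] else lo[i] for i in range(len(lo))]
            for pat in patterns]

def is_hurwitz_poly(coeffs):
    """coeffs[i] multiplies s^i; np.roots expects highest degree first."""
    return bool(np.all(np.roots(coeffs[::-1]).real < 0))

# Hypothetical family: q_0 in [1,2], q_1 in [3,4], q_2 fixed to 1
lo, hi = [1.0, 3.0, 1.0], [2.0, 4.0, 1.0]
robust = all(is_hurwitz_poly(p) for p in kharitonov_polynomials(lo, hi))
```

Only four polynomial stability checks are needed, no matter how fine the box (5) is.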

A simple and transparent proof of this result can be obtained using the *value set concept* (Zadeh and Desoer 1963) and the *zero exclusion* principle (Frazer and Duncan 1929), two general tools which form the basis of many results in the area of robust stability. We illustrate these concepts via robust stability of polynomials.

For a family of polynomials \(p(s,q)\), \(q \in \mathcal{Q}\), consider its *value set* \(\mathcal{V}(\omega )\), which is, by definition, the set on the complex plane obtained by fixing the argument *s* to be *jω* for a certain value of *ω* and letting the uncertain parameter vector *q* sweep the feasible domain. By the zero exclusion principle, if the family contains at least one stable member and its degree does not drop, the whole family is robustly stable if and only if the value set excludes the origin for all \(\omega \geq 0\).
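The idea can be sketched by Monte Carlo sampling of the uncertainty box (illustrative bounds below; for interval families the exact value set is in fact a rectangle, but random sampling conveys the principle without that special structure):

```python
import numpy as np

rng = np.random.default_rng(0)

def value_set(lo, hi, omega, samples=2000):
    """Sample the value set V(omega): evaluate p(j*omega, q) as the
    coefficient vector q sweeps the box; lo[i], hi[i] bound the
    coefficient of s^i."""
    powers = (1j * omega) ** np.arange(len(lo))
    q = rng.uniform(lo, hi, size=(samples, len(lo)))
    return q @ powers          # complex points of the sampled value set

# Zero exclusion: the family stays robustly stable while the value set
# never touches the origin as omega sweeps a frequency grid.
lo, hi = [1.0, 3.0, 1.0], [2.0, 4.0, 1.0]
excluded = all(np.min(np.abs(value_set(lo, hi, w))) > 1e-9
               for w in np.linspace(0.0, 5.0, 50))
```

In practice the frequency grid must be fine enough, and a degree-invariance check completes the argument.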

The Kharitonov theorem provides a yes-or-no answer; often a quantitative characterization is of interest, the *robust stability margin*. For simplicity, we define this quantity for the case of the interval family (5). Namely, introduce the *nominal polynomial* *p*_{0}(*s*) with coefficients \(q_{i}^{0} = (\underline{q}_{i} + \overline{q}_{i})/2\) and the *scaling factors* \(\alpha _{i} = (\overline{q}_{i} -\underline{q}_{i})/2\), and consider the scaled family with coefficients \(q_{i} \in [\,q_{i}^{0} - r\alpha _{i},\;q_{i}^{0} + r\alpha _{i}\,]\). The margin *r*_{max} is defined as the largest value of *r* for which this family remains robustly stable.

Another drawback of the Kharitonov result is its inapplicability to the discrete-time case (Schur stability of polynomials).

The *Tsypkin–Polyak plot* (Tsypkin and Polyak 1991) offers a graphical alternative. With the nominal coefficients \(q_{i}^{0}\) and scaling factors *α*_{ i } introduced above, a frequency-domain curve \(z(\omega ) = x(\omega ) + jy(\omega )\) is built from the nominal polynomial; the family (5) is robustly stable if and only if (i) \(q_{0}^{0} > \alpha _{0}\) and \(q_{n}^{0} > \alpha _{n}\), and (ii) as *ω* changes from zero to infinity, the curve *z*(*ω*) goes consecutively through *n* quadrants in the counterclockwise direction and does not intersect the unit square with the vertices (±1, ± *j*). Unlike the Kharitonov theorem, with this test, the robust stability margin of family (5) can be determined as the size of the maximal square inscribed in the curve *z*(*ω*); see Fig. 2. Moreover, with minor modifications, this test applies to *dependent uncertainty structures*, where the coefficient vector \(q = {(q_{0},\ldots ,q_{n})}^{\top }\) is confined to a ball in the *ℓ*_{ p }-norm rather than to a *box* as in (5).

On top of that, the Tsypkin–Polyak plot can be built for discrete-time systems, which do not admit any counterpart of the Kharitonov theorem.

Consider now the *affine polynomial family* of the form

\(p(s,q) = p_{0}(s) +\sum _{i=1}^{m}q_{i}\,p_{i}(s),\)   (9)

where the *p*_{ i } are given polynomials and the *q*_{ i }s are the uncertain parameters (clearly, they can be scaled to take values in the segment [−1, 1]). The famous *edge theorem* (Bartlett et al. 1988) claims that checking the robust stability of such a family is equivalent to checking the *edges* of the uncertainty box, i.e., the points \(q \in {\mathbb{R}}^{m}\) with all but one of the components fixed to ±1, while the "free" coordinate varies in [−1, 1].
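A brute-force sketch of the edge test (hypothetical polynomials *p*_{0}, *p*_{1}, *p*_{2}; each edge is simply gridded here, whereas exact methods use a segment criterion rather than sampling):

```python
import itertools
import numpy as np

def hurwitz(c):
    """c[i] multiplies s^i; np.roots wants highest degree first."""
    return bool(np.all(np.roots(c[::-1]).real < 0))

def edges_stable(p0, ps, grid=50):
    """Grid-check the affine family p0(s) + sum_i q_i p_i(s), q in [-1,1]^m,
    along every edge of the box: one free coordinate, the rest at +-1."""
    p0 = np.asarray(p0, float)
    ps = [np.asarray(p, float) for p in ps]
    m = len(ps)
    for free in range(m):
        others = [i for i in range(m) if i != free]
        for corner in itertools.product([-1.0, 1.0], repeat=m - 1):
            base = p0 + sum(c * ps[i] for c, i in zip(corner, others))
            for t in np.linspace(-1.0, 1.0, grid):
                if not hurwitz(base + t * ps[free]):
                    return False
    return True

# Hypothetical family: (s^2 + 2s + 2) + q1 * 0.1 + q2 * 0.1s
stable = edges_stable([2.0, 2.0, 1.0], [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
```

The theorem's content is exactly that these one-dimensional sweeps suffice: the interior of the box need not be examined.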

### Complex Solutions

Obviously, the affine model (9) covers just a small part of problems with parametric uncertainty. Closed-form solutions cannot be obtained in the general case; however, many important classes of systems can be analyzed efficiently.

In engineering practice, the *block diagram description* of systems is often more convenient than differential equations of the form (1). The blocks are associated with typical elements such as amplifiers, integrators, lag elements, and oscillators, which are connected in a certain circuit. In this case, transfer functions are the most adequate tool for dealing with such systems. For instance, the transfer function of the lag element is given by

\(W(s) = \frac{1}{Ts + 1},\)

where *T* is the *time constant* of the element. In terms of differential equations, this means that the input *u*(*t*) of a block and its output *x*(*t*) satisfy the equation \(T\dot{x} + x = u\).

Consider the cascade connection of *m* such elements with uncertain time constants

\(T_{i} \in [\,\underline{T}_{i},\;\overline{T}_{i}\,],\quad i = 1,\ldots ,m;\)   (10)

the transfer function of the open-loop system with *gain* *k* is known to have the form

\(W(s) = \frac{k}{\prod _{i=1}^{m}(T_{i}s + 1)}.\)   (11)

The characteristic polynomial of the closed-loop system then depends *multilinearly* on the uncertain parameters *T*_{ i } (cf. the linear dependence in (9)), making the problem much more complicated.
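The multilinear dependence is easy to see by expanding the unit-feedback characteristic polynomial \(\prod _{i}(T_{i}s + 1) + k\); a sketch with illustrative time constants:

```python
import numpy as np

def closed_loop_charpoly(T, k):
    """Characteristic polynomial prod_i (T_i s + 1) + k of the unit-feedback
    loop around W(s) = k / prod_i (T_i s + 1); highest degree first."""
    poly = np.array([1.0])
    for Ti in T:
        poly = np.polymul(poly, [Ti, 1.0])   # multiply by (T_i s + 1)
    poly[-1] += k                            # the gain enters the s^0 term
    return poly

# Each coefficient is a sum of products of *distinct* T_i, i.e., multilinear
# in the uncertain parameters (illustrative values):
p = closed_loop_charpoly([2.0, 0.5, 1.0], k=4.0)
```

For example, the \(s^{m-1}\) coefficient is \(\sum _{i}\prod _{j\neq i}T_{j}\): linear in each *T*_{ i } separately, but not jointly linear.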

The solution of the problem above was obtained in Kiselev et al. (1997) for many important special cases; the closely related problem of finding the “critical gain” (the maximal value of *k* retaining the robust stability) was also addressed.

Using a similar technique, closed-form solutions can be obtained for a number of related problems, such as robust sector stability, robust stability of distributed systems, and robust *D*-decomposition, to name just a few.

### Difficult Problems: Possible Approaches

In spite of the apparent progress obtained in the area of parametric robustness, the list of unsolved problems is still quite large. Moreover, some of the formulations were shown to be NP-hard (Nemirovskii 1994), making it unlikely that efficient solution methods will ever be found for them.

A notorious example is the robust stability of an *interval matrix*. Specifically, assume that the entries *a*_{ ij } of the matrix *A* in (1) are interval numbers, \(a_{ij} \in [\,\underline{a}_{ij},\;\overline{a}_{ij}\,]\); no analogue of the Kharitonov theorem holds in this case, and checking robust stability of such a family is NP-hard.

However, a change in the statement of the problem often allows for simple and elegant solutions. We mention three fruitful reformulations.

*In the first approach*, the uncertain parameters are assumed to have random rather than deterministic nature; for instance, they are assumed to be uniformly distributed over the respective intervals of uncertainty. We next specify an acceptable tolerance ε, say ε = 0.01, and check if the resulting random family of polynomials is stable with probability no less than (1 − ε); see Tempo et al. (2013) for a comprehensive exposition of such a *randomized approach to robustness*.

In many of the NP-hard robustness problems, such a reformulation leads to exact or approximate solutions. Moreover, the randomized approach has several attractive properties even in situations where a deterministic solution is available. Indeed, the deterministic statements of robustness problems are minimax; hence, the answer is dictated by the "worst" element in the family, whereas these critical values of the uncertain parameters are rather unlikely to occur. Therefore, by neglecting a small risk of violation of stability, the admissible domains of variation of the parameters may be considerably extended. This effect is known as the *probabilistic enhancement of robustness margins*; it is particularly tangible when the number of parameters is large. Another attractive property of the randomized approach is its low computational complexity, which grows only slowly with the number of uncertain parameters.
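A sketch of the randomized idea (hypothetical interval family of monic cubics; chosen so that the family is *not* robustly stable in the worst case, yet is stable with high probability):

```python
import numpy as np

rng = np.random.default_rng(1)

def stability_probability(lo, hi, samples=5000):
    """Monte Carlo estimate of P(p(s,q) is Hurwitz) for coefficients q_i
    drawn uniformly and independently from [lo_i, hi_i]."""
    hits = 0
    for _ in range(samples):
        q = rng.uniform(lo, hi)
        hits += np.all(np.roots(q[::-1]).real < 0)  # q[i] multiplies s^i
    return hits / samples

# Hypothetical cubic s^3 + q2 s^2 + q1 s + q0: Hurwitz iff q1*q2 > q0.
# With q0 in [1,2], q1 in [0.1,3], q2 in [1,2], the worst case fails,
# but the estimated stability probability is still well above one half.
prob = stability_probability([1.0, 0.1, 1.0, 1.0], [2.0, 3.0, 2.0, 1.0])
```

Accepting a modest risk ε thus certifies a parameter box far larger than the deterministic (worst-case) margin would allow.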

To illustrate, let us turn back to problem (10)–(11) and use the value set approach. In this problem, the value set can be built efficiently.

Assume the *T*_{ i } are independent random variables uniformly distributed over the respective segments (10), and consider the random variable

\(\eta (\omega ) =\sum _{i=1}^{m}\ln (1 + j\omega T_{i}).\)

For *m* large, its behavior obeys the central limit theorem, so the probability that *η* belongs to the respective *confidence ellipse* \(\mathcal{E} = \mathcal{E}(\omega )\) is close to unity. In other words, we obtain a *probabilistic predictor* \(\mathcal{G}(\omega )\) of the value set \(\mathcal{V}(\omega )\); it is the shifted set of points of the form \({e}^{z},z \in \mathcal{E}\subset \mathbb{C}\). The predictor \(\mathcal{G}(\omega )\) constitutes a small portion of the deterministic value set \(\mathcal{V}(\omega )\), yielding the probabilistic enhancement of the robustness margin.

Note also that the computation of \(\mathcal{E}\) and \({e}^{\mathcal{E}}\) is nearly trivial and, in contrast to the construction of the true value set \(\mathcal{V}\), its complexity does not grow with *m*.

*The second approach* to solving "hard" problems in robust stability relates to the notion of *superstability* (Polyak and Shcherbakov 2002). The matrix *A* of system (1) (and the system itself) is said to be superstable if its entries \(a_{ij},\;i,j = 1,\ldots ,n\), satisfy the relations

\(a_{ii} < 0,\quad -a_{ii} -\sum _{j\neq i}\vert a_{ij}\vert > 0,\quad i = 1,\ldots ,n.\)

A superstable matrix is Hurwitz, and the norm ∥*x*∥_{ ∞ } is a Lyapunov function for the system. Since the condition of superstability is formulated in terms of linear inequalities on the entries of *A*, checking robust superstability of affine (and, in particular, interval) matrix families is immediate. A similar situation holds for so-called positive systems.
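Checking superstability amounts to evaluating *n* linear inequalities; a sketch with an illustrative matrix (the margin σ below follows the inequalities above):

```python
import numpy as np

def superstability_margin(A):
    """sigma(A) = min_i ( -a_ii - sum_{j != i} |a_ij| ).
    A is superstable iff sigma(A) > 0: each diagonal entry is negative and
    dominates the absolute off-diagonal row sum."""
    A = np.asarray(A, float)
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return float(np.min(-np.diag(A) - off))

# Illustrative superstable matrix
A_super = [[-3.0, 1.0, 0.5],
           [ 0.2, -2.0, 0.3],
           [ 0.0, 1.0, -1.5]]
sigma = superstability_margin(A_super)   # positive => superstable
```

Because σ is concave and piecewise linear in the entries of *A*, its worst case over an interval family is attained at an explicit extreme matrix, so robust superstability reduces to linear inequalities (or a small linear program).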

*The third approach* to robustness analysis relates to *quadratic stability*; see Leitmann (1979) and Boyd et al. (1994). Namely, a family of systems is said to be *robustly quadratically stable* if it possesses a common quadratic Lyapunov function \(V (x) = {x}^{\top }Px\) with positive definite matrix *P*. In other words, an uncertain family of matrices *A*(*q*), \(q \in \mathcal{Q}\), has to satisfy the following set of matrix Lyapunov-type inequalities:

\({A}^{\top }(q)P + PA(q) \prec 0\ \ \mbox{for all}\ q \in \mathcal{Q},\qquad P = {P}^{\top }\succ 0.\)

The inequality above is referred to as a *linear matrix inequality* (LMI), Boyd et al. (1994); there exist both efficient numerical methods for solving such inequalities (*interior point methods*) and various software, e.g., Matlab. This approach can be directly applied at least in the following two cases: (i) the set \(\mathcal{Q}\) contains a finite number of points and (ii) \(\mathcal{Q}\) is a polyhedron and the dependence *A*(*q*) is affine. In the general setup or in the high-dimensional problems, randomized methods can be employed.
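For case (i), a finite set of vertex matrices, a heuristic numerical sketch is possible without a dedicated SDP solver: solve the Lyapunov equation for the average of (hypothetical) vertices and verify the Lyapunov inequality at each vertex. An interior-point LMI solver would instead search for *P* directly and is the method of choice in practice.

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve A^T P + P A = -Q by vectorization (column-major vec):
    (I kron A^T + A^T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    L = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(L, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)                 # symmetrize against round-off

def common_lyapunov_candidate(vertices):
    """Heuristic: take P from the mean vertex, then test
    A_v^T P + P A_v < 0 at every vertex of the polytope."""
    A_mean = np.mean(vertices, axis=0)
    P = lyapunov_solve(A_mean, np.eye(A_mean.shape[0]))
    ok = all(np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)
             for A in vertices)
    return P, ok

# Hypothetical polytope vertices
verts = [np.array([[-2.0, 0.5], [0.0, -1.0]]),
         np.array([[-2.0, -0.5], [0.2, -1.0]])]
P, quad_stable = common_lyapunov_candidate(verts)
```

If the test fails, nothing is concluded: a common *P* may still exist, which is exactly why the LMI formulation and its solvers matter.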

Finding the *quadratic robust stability margin* (by analogy with the stability margin, this is the maximum span of the feasible set \(\mathcal{Q}\) that allows for the existence of the common Lyapunov function) in this problem is also possible; it reduces to the minimization of a linear function over the solutions of a similar LMI.

Note that the approaches based on superstability and quadratic stability provide only sufficient conditions for robustness.

## Robust Control

So far, our primary interest has been in assessing the robust stability of a closed-loop system with an already synthesized linear feedback. A more important problem is to *design* a controller that makes the closed-loop system robustly stable and guarantees a certain *robust performance* of the system.

### Robust Stabilization

Consider system (1) with control, \(\dot{x} = A(q)x + B(q)u\); the problem of *robust stabilization* consists in finding the linear static state feedback

\(u = Kx\)

such that the closed-loop system \(\dot{x} = \left (A(q) + B(q)K\right )x\) is robustly stable. Similarly, *output* robustly stabilizing controllers can be considered in situations where only the linear output *y* = *Cx* of the system is available, but not the complete state vector *x*.

If the number of controller parameters to be tuned is small (which is the case for PI or PID controllers), then the design can be accomplished using the *D-decomposition technique*.

In the general formulation, the problem of robust design is complicated; it can, however, be addressed with the use of randomized methods, Tempo et al. (2013). Other plausible approaches include superstability and quadratic stability; respectively, the problem reduces to solving linear programs or linear matrix inequalities in the coefficients of the controller.
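A sketch of such a randomized design in the spirit of Tempo et al. (2013): sample candidate gains *K* and keep the first one that makes every (hypothetical) vertex closed loop Hurwitz. Note that vertex stability does not in general certify the whole matrix polytope; a full method would add a verification step.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_stabilizer(A_list, B, trials=2000, radius=5.0):
    """Randomized search for a static state feedback u = Kx that makes
    A + B K Hurwitz at every given vertex matrix A."""
    n, m = B.shape
    for _ in range(trials):
        K = rng.uniform(-radius, radius, size=(m, n))
        if all(np.all(np.linalg.eigvals(A + B @ K).real < 0) for A in A_list):
            return K
    return None   # no stabilizer found within the sampling budget

# Hypothetical double integrator with an uncertain damping entry
A_verts = [np.array([[0.0, 1.0], [0.0, a22]]) for a22 in (-0.5, 0.5)]
B = np.array([[0.0], [1.0]])
K = random_stabilizer(A_verts, B)
```

The appeal is that each trial costs only a few eigenvalue computations, regardless of how the uncertainty enters the closed loop.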

### Robust Performance

Needless to say, the robust stabilization problem is not the only one in the area of optimal control. As a rule, a certain cost function (say, an integral quadratic one) is involved, and its desired value should be guaranteed for all admissible values of the uncertain parameters; note that robust stability is a necessary condition for such a guaranteed estimate to exist. Problems of this sort can often be cast in the form of LMIs which must be satisfied for all admissible values of the parameters. Such robust LMIs can be solved either directly or using the various randomized techniques presented in Tempo et al. (2013).

## Conclusions

In spite of the considerable progress attained in the parametric robustness of complex systems, this topic is still a vivid and active research area. To date, randomization, superstability, and quadratic stability present the most efficient and diverse tools for the analysis and design of systems affected by parametric uncertainty.

## Bibliography

- Ackermann J (1993) Robust control: systems with uncertain physical parameters. Springer, London
- Barmish BR (1994) New tools in robustness of linear systems. Macmillan, New York
- Bartlett AC, Hollot CV, Lin H (1988) Root locations of an entire polytope of polynomials: it suffices to check the edges. Math Control Sig Syst 1(1):61–71
- Bhattacharyya SP, Chapellat H, Keel LH (1995) Robust control: the parametric approach. Prentice Hall, Upper Saddle River
- Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
- Frazer RA, Duncan WJ (1929) On the criteria for the stability of small motions. Proc R Soc Lond A 124(795):642–654
- Gantmacher FR (2000) The theory of matrices. AMS, Providence
- Kharitonov VL (1978) Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differentsial'nye Uravneniya 14:2086–2088
- Kiselev ON, Le Hung Lan, Polyak BT (1997) Frequency responses under parametric uncertainty. Autom Remote Control 58(Pt. 2, 4):645–661
- Leitmann G (1979) Guaranteed asymptotic stability for some linear systems with bounded uncertainties. J Dyn Syst Measure Control 101(3):212–216
- Nemirovskii AS (1994) Several NP-hard problems arising in robust stability analysis. Math Control Sig Syst 6(1):99–105
- Polyak BT, Shcherbakov PS (2002) Superstable linear control systems. I. Analysis; II. Design. Autom Remote Control 63(8):1239–1254; 63(11):1745–1763
- Tempo R, Calafiore G, Dabbene F (2013) Randomized algorithms for analysis and control of uncertain systems, with applications, 2nd edn. Springer, London
- Tsypkin YZ, Polyak BT (1991) Frequency domain criteria for *l*^{p}-robust stability of continuous systems. IEEE Trans Autom Control 36(12):1464–1469
- Zadeh LA, Desoer CA (1963) Linear system theory – a state space approach. McGraw-Hill, New York