Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

Stability and Performance of Complex Systems Affected by Parametric Uncertainty

  • Boris Polyak
  • Pavel Shcherbakov
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_137-1


Uncertainty is an inherent feature of all real-life complex systems. It can be described in different forms; we focus on the parametric description. The simplest results on stability of linear systems under parametric uncertainty are the Kharitonov theorem, edge theorem, and graphical tests. More advanced results include sufficient conditions for robust stability with matrix uncertainty, LMI tools, and randomized methods. Similar approaches are used for robust control synthesis, where performance issues are crucial.


Linear systems · Parametric uncertainty and robustness · Robust stability · Kharitonov theorem · Tsypkin–Polyak plot · Edge theorem · Matrix uncertainty · Randomized methods · Quadratic stability · Robust and optimal design


Mathematical models for systems and control are often unsatisfactory due to the incompleteness of the parameter data. For instance, the ideas of off-line optimal control can only be applied to real systems if all the parameters, exogenous perturbations, state equations, etc. are known precisely. Moreover, feedback control also requires detailed information that is not available in most cases. For example, to drive a car with four-wheel control, the controller should be aware of the total weight, the location of the center of gravity, weather conditions, and highway properties, as well as many other data that may not be known. In that respect, even such a relatively simple real-life system can be considered a complex one; in such circumstances, control under uncertainty is a highly important issue.

The focus in this article is on parametric uncertainty; other types of uncertainty can be treated in more general models of robustness. This topic became particularly popular in the control community in the mid-to-late 1980s; by and large, the results of this activity are summarized in the monographs Ackermann (1993), Barmish (1994), and Bhattacharyya et al. (1995).

We start with problems of stability of polynomials with uncertain parameters and present the simplest robust stability results for this case together with the most important machinery. Next, we consider stability analysis for the matrix uncertainty; most of the results are just sufficient conditions. We present some useful tools for the analysis, such as the LMI technique and randomized methods. Robust control under parametric uncertainty is the next step; we briefly discuss several problem formulations for this case.

Stability of Linear Systems Subject to Parametric Uncertainty

Consider the closed-loop linear, time invariant continuous time state space system
$$\dot{x} = Ax,\qquad x(0) = x_{0},$$
where \(x(t) \in {\mathbb{R}}^{n}\) is the state vector, x 0 is an arbitrary finite initial condition, and \(A \in {\mathbb{R}}^{n\times n}\) is the state matrix. The system is stable (i.e., no matter what x 0 is, the solutions tend to zero as t → ∞) if and only if all eigenvalues λ i of the matrix A have negative real parts:
$$\mathrm{Re}\lambda _{i} < 0,\qquad i = 1,\ldots ,n,$$
in which case, A is said to be a Hurwitz matrix. If A is known precisely, checking condition (2) is immediate. For instance, one might compute the characteristic polynomial
$$p(s) =\det (sI - A) = a_{0} + a_{1}s +\ldots +a_{n-1}{s}^{n-1} + {s}^{n}$$
of A (here, I is the identity matrix) and use any of the stability tests (e.g., the Routh algorithm, Routh–Hurwitz test, and graphical tests such as the Mikhailov plot or Hermite–Biehler theorem), see Gantmacher (2000). Alternatively, the eigenvalues can be directly computed using the currently available software, such as Matlab.
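As a quick illustration, a stability check of this kind is a one-liner with standard numerical tools. The sketch below uses Python with NumPy (the function name is ours, not from the article):

```python
import numpy as np

def is_hurwitz(A):
    """Check condition (2): every eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Companion matrix of p(s) = s^2 + 3s + 2 = (s + 1)(s + 2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
print(is_hurwitz(A))  # True: the eigenvalues are -1 and -2
```

For a precisely known matrix this settles the question; the difficulties discussed next arise only when A is uncertain.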
However, things get complicated if the knowledge of the matrix A is incomplete; for instance, it can depend on the (real) parameters \(q = (q_{1},\ldots ,q_{m})\) which take arbitrary values within the given intervals:
$$A = A(q),\qquad \underline{q}_{i} \leq q_{i} \leq \overline{q}_{i},\quad i = 1,\ldots ,m.$$
In that case, we arrive at the robust stability problem; i.e., the goal is to check if condition (2) holds for all matrices in the family (4).

The two main components of any robust stability setup are the feasible set \(\mathcal{Q}\subset {\mathbb{R}}^{m}\), in which the uncertain parameters are allowed to take their values (usually a ball in some norm, e.g., the box in (4)), and the uncertainty structure, which defines the functional dependence of the coefficients on the uncertain parameters. Of most interest are the affine and multiaffine dependencies; more general situations are typically hard to handle.

Simple Solutions

In some cases, the robust stability problem admits a simple solution. Perhaps the most striking example is the so-called Kharitonov theorem (Kharitonov 1978); also see Barmish (1994), where this seminal result is referred to as a spark because of its transparency and elegance.

Namely, consider the interval polynomial family
$$\displaystyle\begin{array}{rcl} \mathcal{P}& =& \{p(s) = q_{0} + q_{1}s +\ldots +q_{n}{s}^{n}, \\ & & \ \ \ \ \ \ \underline{q}_{i} \leq q_{i} \leq \overline{q}_{i},\quad i = 0,\ldots ,n\}, \end{array}$$
where the coefficients q i are allowed to take values in the respective intervals independently of each other. Distinguish the following four polynomials in this family:
$$\displaystyle\begin{array}{rcl} p_{1}(s)& =& \underline{q}_{0} + \underline{q}_{1}s + \overline{q}_{2}{s}^{2} + \overline{q}_{3}{s}^{3} +\ldots \\ p_{2}(s)& =& \underline{q}_{0} + \overline{q}_{1}s + \overline{q}_{2}{s}^{2} + \underline{q}_{3}{s}^{3} +\ldots \\ p_{3}(s)& =& \overline{q}_{0} + \overline{q}_{1}s + \underline{q}_{2}{s}^{2} + \underline{q}_{3}{s}^{3} +\ldots \\ p_{4}(s)& =& \overline{q}_{0} + \underline{q}_{1}s + \underline{q}_{2}{s}^{2} + \overline{q}_{3}{s}^{3}+\ldots \\ \end{array}$$
By the Kharitonov theorem, the interval family (5) is robustly stable (i.e., all polynomials in (5) are Hurwitz having all roots with negative real parts) if and only if the four Kharitonov polynomials, \(p_{1},p_{2},p_{3},\mbox{ and }p_{4}\), are Hurwitz.
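The theorem translates directly into a finite test. The following sketch (Python with NumPy; function names are ours) builds the four Kharitonov polynomials from the coefficient bounds and checks each for the Hurwitz property via its roots:

```python
import numpy as np

# Bound patterns for p1..p4, repeating with period 4 over the coefficient index:
# "L" takes the lower bound of q_i, "H" the upper bound
KHARITONOV_PATTERNS = ["LLHH", "LHHL", "HHLL", "HLLH"]

def kharitonov_polys(lo, hi):
    """The four Kharitonov polynomials (ascending coefficients q_0..q_n)
    of the interval family lo[i] <= q_i <= hi[i]."""
    return [
        [lo[i] if pat[i % 4] == "L" else hi[i] for i in range(len(lo))]
        for pat in KHARITONOV_PATTERNS
    ]

def is_hurwitz_poly(c):
    """Hurwitz test via roots; c holds ascending coefficients
    (numpy expects descending order, hence the reversal)."""
    return bool(np.all(np.roots(c[::-1]).real < 0))

def interval_family_robustly_stable(lo, hi):
    return all(is_hurwitz_poly(p) for p in kharitonov_polys(lo, hi))
```

For instance, the degree-2 family with bounds [1, 2] × [2, 3] × [1, 2] is robustly stable (all coefficients stay positive), while the degree-3 family with all lower bounds 1 and upper bounds (3, 2, 2, 3) fails the test at one of the four vertices.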

A simple and transparent proof of this result can be obtained using the value set concept (Zadeh and Desoer 1963) and the zero exclusion principle (Frazer and Duncan 1929), two general tools that underlie many results in the area of robust stability. We illustrate these concepts via robust stability of polynomials.

Given the uncertain polynomial family
$$\mathcal{P}(s,Q) =\{ p(s,q),\quad q \in \mathcal{Q}\},$$
the set
$$\mathcal{V}(\omega ) =\{ p(j\omega ,q):\;\;q \in \mathcal{Q}\}$$
is referred to as the value set: by definition, it is the set in the complex plane obtained by fixing the argument s to jω for a given value of ω ≥ 0 and letting the uncertain parameter vector q sweep the feasible domain.
The zero exclusion principle states that, under certain regularity requirements, the uncertain polynomial family is robustly stable if and only if it contains a stable element and the following condition holds:
$$0\notin \mathcal{V}(\omega )\quad \forall \;\omega \geq 0.$$
To use this machinery, one has to be able to compute the value set efficiently and check condition (6). For the interval family (5), the value set can be shown to be a rectangle with edges parallel to the coordinate axes, whose vertices are the values of the four Kharitonov polynomials; see Fig. 1.

Elegant as it is, the Kharitonov theorem is not free of drawbacks. First of all, it is not capable of determining the maximal lengths of the uncertainty intervals that retain robust stability. This relates to the important notion of the robust stability margin; for simplicity, we define this quantity for the interval family (5). Namely, introduce the nominal polynomial p 0(s) with coefficients
$$q_{i}^{0} = (\overline{q}_{ i} + \underline{q}_{i})/2,$$
and the scaling factors
$$\alpha _{i} = (\overline{q}_{i} -\underline{q}_{i})/2$$
for the deviations of the coefficients. Then the robust stability margin r max is defined as follows:
$$r_{\max } =\sup \{ r:\; p(s,q)\ \text{is stable}\;\;\forall \,q_{i}:\ \vert q_{i} - q_{i}^{0}\vert \leq r\alpha _{i},\quad i = 0,\ldots ,n\}.$$
Fig. 1

The Kharitonov rectangular value set
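Although the Kharitonov theorem does not give r max in closed form, the margin is easy to estimate numerically: for each fixed r the scaled family is again an interval family, so its stability is resolved by the four Kharitonov polynomials, and bisection over r does the rest. A sketch in Python with NumPy (function names are ours; the nominal polynomial is assumed stable):

```python
import numpy as np

PATTERNS = ["LLHH", "LHHL", "HHLL", "HLLH"]  # bound patterns of p1..p4

def is_hurwitz_poly(c):
    """c holds ascending coefficients q_0..q_n."""
    return bool(np.all(np.roots(c[::-1]).real < 0))

def kharitonov_stable(q0, alpha, r):
    """Robust stability of the family |q_i - q0_i| <= r*alpha_i,
    decided via the four Kharitonov polynomials."""
    lo = [q - r * a for q, a in zip(q0, alpha)]
    hi = [q + r * a for q, a in zip(q0, alpha)]
    return all(
        is_hurwitz_poly([lo[i] if pat[i % 4] == "L" else hi[i]
                         for i in range(len(q0))])
        for pat in PATTERNS
    )

def stability_margin(q0, alpha, r_hi=10.0, iters=60):
    """Bisection estimate of r_max for the interval family around q0."""
    r_lo = 0.0
    for _ in range(iters):
        r = 0.5 * (r_lo + r_hi)
        if kharitonov_stable(q0, alpha, r):
            r_lo = r
        else:
            r_hi = r
    return r_lo
```

For the nominal polynomial 2 + 3s + 2s² + s³ with unit scaling factors, a hand computation with the Routh condition gives r max = 0.5, which the bisection reproduces.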

Another drawback of the Kharitonov result is its inapplicability to the discrete-time case (Schur stability of polynomials).

A more flexible graphical test for robust stability uses the so-called Tsypkin–Polyak plot (Tsypkin and Polyak 1991), which is defined as the parametric curve on the complex plane:
$$z(\omega ) = x(\omega ) + jy(\omega ),\,j = \sqrt{-1};\quad 0 \leq \omega < \infty ,$$
$$x(\omega ) = \frac{q_{0}^{0} - q_{2}^{0}{\omega }^{2}+\ldots } {\alpha _{0} +\alpha _{2}{\omega }^{2}+\ldots } \,,\quad y(\omega ) = \frac{q_{1}^{0} - q_{3}^{0}{\omega }^{2}+\ldots } {\alpha _{1} +\alpha _{3}{\omega }^{2}+\ldots } \,.$$
Then, by the Tsypkin–Polyak criterion, the polynomial family (5) is robustly stable if and only if the following conditions hold: (i) q 0 0 > α 0, q n 0 > α n , and (ii) as ω increases from zero to infinity, the curve z(ω) passes consecutively through n quadrants in the counterclockwise direction and does not intersect the unit square with vertices ( ± 1, ± j). Unlike the Kharitonov theorem, this test also determines the robust stability margin of family (5) as the size of the maximal square inscribed in the curve z(ω); see Fig. 2. Moreover, with minor modifications, the test applies to dependent uncertainty structures where the coefficient vector \(q = {(q_{0},\ldots ,q_{n})}^{\top }\) is confined to a ball in the ℓ p -norm rather than to a box as in (5).
Fig. 2

The Tsypkin–Polyak plot

On top of that, the Tsypkin–Polyak plot can be built for discrete-time systems which do not admit any counterparts of the Kharitonov theorem.
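The curve itself is straightforward to sample. The sketch below (Python with NumPy; the series in the numerators and denominators are truncated at the polynomial degree, and the function name is ours) computes points z(ω) from the nominal coefficients and scaling factors:

```python
import numpy as np

def tp_point(q0, alpha, w):
    """One point z(w) = x(w) + j*y(w) of the Tsypkin-Polyak curve for the
    interval family with nominal (midpoint) coefficients q0, ascending
    order, and half-widths alpha."""
    n = len(q0)
    # even-index coefficients enter x(w), odd-index coefficients enter y(w),
    # with alternating signs in the numerators
    num_x = sum((-1) ** k * q0[i] * w ** (2 * k) for k, i in enumerate(range(0, n, 2)))
    den_x = sum(alpha[i] * w ** (2 * k) for k, i in enumerate(range(0, n, 2)))
    num_y = sum((-1) ** k * q0[i] * w ** (2 * k) for k, i in enumerate(range(1, n, 2)))
    den_y = sum(alpha[i] * w ** (2 * k) for k, i in enumerate(range(1, n, 2)))
    return complex(num_x / den_x, num_y / den_y)

# Curve sample for the nominal polynomial 2 + 3s + 2s^2 + s^3, unit half-widths
curve = [tp_point([2, 3, 2, 1], [1, 1, 1, 1], w) for w in np.linspace(0.01, 10, 500)]
```

The sampled curve can then be plotted and checked against the unit square and the quadrant-traversal condition; the inscribed-square size gives the margin.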

It is fair to say that interval polynomial families are an idealization, since the coefficients of the characteristic polynomial can hardly be thought of as the physical parameters of a real-world system. As a step towards more realistic formulations, consider the affine polynomial family of the form
$$p(s) = p_{0}(s) +\displaystyle\sum _{ i=1}^{m}q_{ i}p_{i}(s),\quad \vert q_{i}\vert \leq 1,\quad i = 1,\ldots ,m,$$
where p i are the given polynomials and the q i s are the uncertain parameters (clearly, they can be scaled to take values in the segment [ − 1, 1]). The famous edge theorem (Bartlett et al. 1988) claims that checking the robust stability of such a family is equivalent to checking the edges of the uncertainty box, i.e., the points \(q \in {\mathbb{R}}^{m}\) with all but one components being fixed to ± 1, while the “free” coordinate varies in [ − 1, 1].
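In computations, the edge test is often approximated by gridding each edge of the box. The sketch below (Python with NumPy; the grid density is our tunable assumption, not part of the theorem) checks family (9) in this way:

```python
import itertools
import numpy as np

def edges_stable(p0, ps, grid=50):
    """Edge-theorem check for p0(s) + sum_i q_i p_i(s), |q_i| <= 1.
    Polynomials are ascending coefficient lists of equal length.
    Each edge of the box is sampled on a grid (an approximation of
    the exact edge test)."""
    p0 = np.asarray(p0, float)
    ps = np.asarray(ps, float)          # shape (m, n+1)
    m = ps.shape[0]
    ts = np.linspace(-1.0, 1.0, grid)
    for free in range(m):               # the coordinate that varies along the edge
        fixed = [i for i in range(m) if i != free]
        for signs in itertools.product([-1.0, 1.0], repeat=m - 1):
            base = p0 + sum(s * ps[i] for s, i in zip(signs, fixed))
            for t in ts:
                c = base + t * ps[free]
                if not np.all(np.roots(c[::-1]).real < 0):
                    return False
    return True
```

By the edge theorem, passing the (exact) edge test is equivalent to robust stability of the whole box; the grid only trades exactness for simplicity.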

Complex Solutions

Obviously, the affine model (9) covers just a small part of problems with parametric uncertainty. Closed-form solutions cannot be obtained in the general case; however, many important classes of systems can be analyzed efficiently.

In engineering practice, a block-diagram description of systems is often more convenient than differential equations of the form (1). The blocks are associated with typical elements such as amplifiers, integrators, lag elements, and oscillators, which are connected in a certain circuit. In this case, transfer functions are the most adequate tool for dealing with such systems. For instance, the transfer function of the lag element is given by
$$W(s) = 1/(Ts + 1),$$
where the scalar T is the time constant of the element. In terms of differential equations, this means that the input u(t) of a block and its output x(t) satisfy the equation \(T\dot{x} + x = u\).
Assume now we have a set of m cascade connected elements with uncertain time constants
$$\underline{T}_{i} \leq T_{i} \leq \overline{T}_{i},\quad i = 1,\ldots ,m,$$
with known lower and upper bounds. The characteristic polynomial of such a connection embraced by the feedback with gain k is known to have the form
$$p(s) = k + (1 + T_{1}s)\cdots (1 + T_{m}s).$$
Hence, the robust stability problem reduces to checking if all polynomials (11) with constraints (10) are Hurwitz. Note that the coefficients of such a polynomial depend multilinearly on the uncertain parameters T i (cf. linear dependence in (9)), making the problem much more complicated.
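The family (11) is easy to sample numerically. The sketch below (Python with NumPy; names are ours) expands the product into polynomial coefficients and performs a Monte Carlo check over the box (10), including its vertices; note that because of the multilinear dependence, checking vertices alone is not conclusive, so a sampling test of this kind gives only a necessary condition:

```python
import itertools
import numpy as np

def cascade_poly(k, Ts):
    """Ascending coefficients of p(s) = k + prod_i (1 + T_i s)."""
    c = np.array([1.0])
    for T in Ts:
        c = np.convolve(c, [1.0, T])    # multiply by (1 + T s), ascending order
    c[0] += k
    return c

def sampled_robust_check(k, lo, hi, n_samples=2000, rng=None):
    """Monte Carlo necessary check of robust stability for T_i in [lo_i, hi_i].
    All vertices are included; random interior points guard against failures
    that the multilinear dependence can hide inside the box."""
    rng = np.random.default_rng(rng)
    points = [np.array(v) for v in itertools.product(*zip(lo, hi))]
    points += [rng.uniform(lo, hi) for _ in range(n_samples)]
    return all(np.all(np.roots(cascade_poly(k, T)[::-1]).real < 0) for T in points)
```

A failure of this test certifies non-robustness; a pass is only probabilistic evidence, which connects to the randomized methods discussed below.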

The solution of the problem above was obtained in Kiselev et al. (1997) for many important special cases; the closely related problem of finding the “critical gain” (the maximal value of k retaining the robust stability) was also addressed.

Using similar techniques, closed-form solutions can be obtained for a number of related problems, such as robust sector stability, robust stability of distributed systems, and robust D-decomposition, to name just a few.

Difficult Problems: Possible Approaches

In spite of the considerable progress in the area of parametric robustness, the list of unsolved problems is still quite large. Moreover, some of the formulations were shown to be NP-hard, so that efficient solution methods are unlikely ever to be found.

One such fundamental problem is the robust stability of an interval matrix. Specifically, assume that the entries a ij of the matrix A in (1) are interval numbers
$$\underline{a}_{ij} \leq a_{ij} \leq \overline{a}_{ij},\qquad i,j = 1,\ldots ,n;$$
the problem is to check if the interval matrix is robustly stable, i.e., if the eigenvalues of all matrices in this family have negative real parts. Numerous attempts to prove a Kharitonov-like theorem for matrices have failed, and the NP-hardness results of Nemirovskii (1994) showed that such generalizations are not possible. It was also shown that the edge theorem is not valid for matrix families. Other NP-hard problems in robustness include the analysis of systems with interval delays, parallel connection of uncertain blocks, problem (11)–(10) with nested segments \([\underline{T}_{i},\overline{T}_{i}]\), and others.

However, a change in the statement of the problem often allows for simple and elegant solutions. We mention three fruitful reformulations.

In the first approach, the uncertain parameters are assumed to be random rather than deterministic; for instance, uniformly distributed over the respective intervals of uncertainty. We then specify an acceptable tolerance ε, say ε = 0.01, and check if the resulting random family of polynomials is stable with probability no less than 1 − ε; see Tempo et al. (2013) for a comprehensive exposition of this randomized approach to robustness.

In many NP-hard robustness problems, such a reformulation leads to exact or approximate solutions. Moreover, the randomized approach has several attractive properties even when a deterministic solution is available. Indeed, the deterministic statements of robustness problems are minimax; hence, the answer is dictated by the "worst" element in the family, whereas these critical values of the uncertain parameters are rather unlikely to occur. Therefore, by accepting a small risk of instability, the admissible domains of variation of the parameters may be considerably extended. This effect is known as the probabilistic enhancement of robustness margins; it is particularly tangible when the number of parameters is large. Another attractive property of the randomized approach is its low computational complexity, which grows only slowly with the number of uncertain parameters.
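The basic randomized computation is a plain Monte Carlo estimate of the stability probability. A minimal sketch in Python with NumPy (the sampling interface and names are ours); for the degree-3 interval family below, the Routh condition q 1 q 2 > q 0 q 3 makes the exact probability available for comparison:

```python
import numpy as np

def stability_probability(sample_poly, n_samples=10000, rng=None):
    """Monte Carlo estimate of P(polynomial is Hurwitz); sample_poly(rng)
    must return ascending coefficients of one random realization."""
    rng = np.random.default_rng(rng)
    hits = sum(
        np.all(np.roots(sample_poly(rng)[::-1]).real < 0) for _ in range(n_samples)
    )
    return hits / n_samples

# Family 1 + q1 s + q2 s^2 + s^3 with q1, q2 uniform on [0.5, 1.5]:
# Hurwitz iff q1*q2 > 1, and the exact probability is about 0.44.
def sample(rng):
    q1, q2 = rng.uniform(0.5, 1.5, size=2)
    return np.array([1.0, q1, q2, 1.0])

p_hat = stability_probability(sample, n_samples=5000, rng=0)
```

The family is far from being robustly stable in the deterministic sense, yet the estimate quantifies exactly how much of the box is "safe"; sample-size bounds guaranteeing a prescribed accuracy are given in Tempo et al. (2013).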

To illustrate, let us return to problem (11)–(10) and use the value set approach; for this problem, the value set can be built efficiently.

Assume now that the parameters T i are independent random variables uniformly distributed over the respective segments (10) and consider the random variable
$$\eta =\eta (\omega ) =\log (p(j\omega ) - k) =\displaystyle\sum _{ i=1}^{m}\log (1 + j\omega T_{ i}).$$
The right-hand side of the last relation is the sum of independent complex-valued random variables; for large m, its behavior obeys the central limit theorem, so that the probability that η belongs to the respective confidence ellipse \(\mathcal{E} = \mathcal{E}(\omega )\) is close to unity. In other words, we have
$$p(j\omega ) \approx k + {e}^{\mathcal{E}}\doteq\mathcal{G}(\omega ),$$
and the set \(\mathcal{G}(\omega )\) is referred to as a probabilistic predictor of the value set \(\mathcal{V}(\omega )\); it is the shifted set of points of the form \({e}^{z},z \in \mathcal{E}\subset \mathbb{C}\). The predictor \(\mathcal{G}(\omega )\) constitutes a small portion of the deterministic value set \(\mathcal{V}(\omega )\), yielding the probabilistic enhancement of the robustness margin.

Note also that the computation of \(\mathcal{E}\) and \({e}^{\mathcal{E}}\) is nearly trivial and, in contrast to the construction of the true value set \(\mathcal{V}\), its complexity does not grow with m.

The second approach to solving “hard” problems in robust stability relates to the notion of superstability (Polyak and Shcherbakov 2002). The matrix A of system (1) (and the system itself) is said to be superstable, if its entries \(a_{ij},\;i,j = 1,\ldots ,n\), satisfy the relations
$$a_{ii} < 0,\qquad \min _{i}(-a_{ii} -\displaystyle\sum _{j\neq i}\vert a_{ij}\vert ) =\sigma > 0.$$
The following estimate holds for the solutions of the superstable system (1):
$$\|x(t)\|_{\infty }\leq \| x(0)\|_{\infty }{e}^{-\sigma t},$$
i.e., the system is stable, and the (nonsmooth) function ∥ x ∥  is a Lyapunov function for it. Since the condition of superstability is formulated in terms of linear inequalities on the entries of A, checking robust superstability of affine (and, in particular, interval) matrix families is immediate. A similar situation holds for the so-called positive systems.
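Because the superstability conditions are explicit inequalities on the entries, they can be checked directly, with no eigenvalue computation. A minimal sketch in Python with NumPy (function names are ours):

```python
import numpy as np

def superstability_margin(A):
    """sigma = min_i ( -a_ii - sum_{j != i} |a_ij| ); A is superstable
    iff its diagonal entries are negative and sigma > 0."""
    A = np.asarray(A, float)
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))  # sum of |a_ij|, j != i
    return float(np.min(-np.diag(A) - off))

def is_superstable(A):
    return bool(np.all(np.diag(A) < 0)) and superstability_margin(A) > 0

A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
# sigma = min(3 - 1, 2 - 0.5) = 1.5, so ||x(t)||_inf <= ||x(0)||_inf * exp(-1.5 t)
```

For an interval matrix family, the worst case of each linear inequality is attained at an explicit vertex, which is why robust superstability is immediate to check.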
The third approach to robustness analysis relates to quadratic stability, Leitmann (1979) and Boyd et al. (1994). Namely, a family of systems is said to be robustly quadratically stable if it possesses a common quadratic Lyapunov function \(V (x) = {x}^{\top }Px\) with positive definite matrix P. In other words, an uncertain family of matrices A(q), \(q \in \mathcal{Q}\) has to satisfy the following set of the matrix Lyapunov-type inequalities:
$$A(q)P + PA{(q)}^{\top }\prec 0,\quad q \in \mathcal{Q},\quad P \succ 0,$$
where the symbols \(\prec ,\succ \) stand for the sign-definiteness of a matrix.

The inequality above is referred to as a linear matrix inequality (LMI), Boyd et al. (1994); there exist both efficient numerical methods for solving such inequalities (interior point methods) and various software, e.g., Matlab. This approach can be directly applied at least in the following two cases: (i) the set \(\mathcal{Q}\) contains a finite number of points and (ii) \(\mathcal{Q}\) is a polyhedron and the dependence A(q) is affine. In the general setup or in the high-dimensional problems, randomized methods can be employed.
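For case (ii), affine dependence on a polytope, it suffices to satisfy the LMI at the vertices. Short of a full SDP solver, one can try a simple heuristic: take P from the Lyapunov equation of the average vertex and verify the inequality at every vertex. A sketch in Python, assuming SciPy is available (the function name and the heuristic choice of P are ours; failure of this check is inconclusive, since an LMI solver may still find a common P):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def common_lyapunov_candidate(vertices):
    """Heuristic quadratic-stability check for conv(vertices): solve
    A_avg P + P A_avg^T = -I for the average vertex, then verify
    A_k P + P A_k^T < 0 at every vertex. Returns P on success, None otherwise."""
    A_avg = sum(vertices) / len(vertices)
    P = solve_continuous_lyapunov(A_avg, -np.eye(A_avg.shape[0]))
    if np.any(np.linalg.eigvalsh(P) <= 0):
        return None                      # average vertex not even stable
    for A in vertices:
        M = A @ P + P @ A.T
        if np.any(np.linalg.eigvalsh(M) >= 0):
            return None                  # LMI violated at this vertex
    return P
```

A returned P certifies quadratic (hence robust) stability of the whole polytope, since the Lyapunov inequality is affine in A and so holds on all convex combinations of the vertices.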

Finding the quadratic robust stability margin (by analogy with the stability margin, this is the maximum span of the feasible set \(\mathcal{Q}\) that allows for the existence of the common Lyapunov function) in this problem is also possible; it reduces to the minimization of a linear function over the solutions of a similar LMI.

Note that the approaches based on superstability and quadratic stability provide only sufficient conditions for robustness.

Robust Control

So far, our primary interest has been in assessing the robust stability of a closed-loop system with an already designed linear feedback. A more important problem is to design a controller that makes the closed-loop system robustly stable and guarantees a certain robust performance of the system.

Robust Stabilization

Let the linear system
$$\dot{x} = A(q)x + Bu$$
depend on the vector \(q \in \mathcal{Q}\) of uncertain parameters. In the simplest form, the problem of robust stabilization consists in finding the linear static state feedback
$$u = Kx$$
that guarantees the robust stability of the closed-loop system. Alternatively, static or dynamic output robustly stabilizing controllers can be considered in the situations where only the linear output y = Cx of the system is available, but not the complete state vector x.

If the number of controller parameters to be tuned is small (which is the case for PI or PID controllers), then the design can be accomplished using the D-decomposition technique.

In the general formulation, the problem of robust design is complicated; it can, however, be addressed with the use of randomized methods, Tempo et al. (2013). Other plausible approaches include superstability and quadratic stability; respectively, the problem reduces to solving linear programs or linear matrix inequalities in the coefficients of the controller.
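In the simplest randomized scheme, one draws candidate gains K at random and accepts the first one that stabilizes the closed loop on a sample of uncertain parameters. The sketch below (Python with NumPy; the interface, the uniform sampling of K, and the sample sizes are our illustrative assumptions, and the resulting certificate is only probabilistic, see Tempo et al. 2013 for rigorous sample bounds):

```python
import numpy as np

def randomized_stabilizer(A_of_q, B, sample_q, n_K=500, n_q=200, scale=5.0, rng=None):
    """Randomized search for a static gain u = Kx: draw random K and keep
    the first one for which A(q) + B K is Hurwitz at every sampled q."""
    rng = np.random.default_rng(rng)
    n, m = B.shape
    qs = [sample_q(rng) for _ in range(n_q)]
    for _ in range(n_K):
        K = rng.uniform(-scale, scale, size=(m, n))
        if all(np.all(np.linalg.eigvals(A_of_q(q) + B @ K).real < 0) for q in qs):
            return K
    return None
```

For the uncertain double integrator A(q) = [[0, 1], [q, 0]], q ∈ [−1, 1], with B = [[0], [1]], any gain with K[0,0] < −1 and K[0,1] < 0 works, so the random search succeeds quickly.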

Robust Performance

Needless to say, robust stabilization is not the only problem in the area of optimal control. As a rule, a certain cost function is involved (say, an integral quadratic one), and its desired value should be guaranteed for all admissible values of the uncertain parameters; robust stability is a necessary condition for such a guaranteed estimate to exist. Problems of this sort can often be cast in the form of LMIs which must be satisfied for all admissible values of the parameters. Such robust LMIs can be solved either directly or using the randomized techniques presented in Tempo et al. (2013).


In spite of the considerable progress attained in the parametric robustness of complex systems, this topic remains a vivid and active research area. To date, randomization, superstability, and quadratic stability offer the most efficient and diverse tools for the analysis and design of systems affected by parametric uncertainty.



  1. Ackermann J (1993) Robust control: systems with uncertain physical parameters. Springer, London
  2. Barmish BR (1994) New tools in robustness of linear systems. Macmillan, New York
  3. Bartlett AC, Hollot CV, Lin H (1988) Root locations of an entire polytope of polynomials: it suffices to check the edges. Math Control Sig Syst 1(1):61–71
  4. Bhattacharyya SP, Chapellat H, Keel LH (1995) Robust control: the parametric approach. Prentice Hall, Upper Saddle River
  5. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
  6. Frazer RA, Duncan WJ (1929) On the criteria for the stability of small motions. Proc R Soc Lond A 124(795):642–654
  7. Gantmacher FR (2000) The theory of matrices. AMS, Providence
  8. Kharitonov VL (1978) Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differentsial'nye Uravneniya 14:2086–2088
  9. Kiselev ON, Le Hung Lan, Polyak BT (1997) Frequency responses under parametric uncertainty. Autom Remote Control 58(Pt. 2, 4):645–661
  10. Leitmann G (1979) Guaranteed asymptotic stability for some linear systems with bounded uncertainties. J Dyn Syst Measure Control 101(3):212–216
  11. Nemirovskii AS (1994) Several NP-hard problems arising in robust stability analysis. Math Control Sig Syst 6(1):99–105
  12. Polyak BT, Shcherbakov PS (2002) Superstable linear control systems. I. Analysis; II. Design. Autom Remote Control 63(8):1239–1254; 63(11):1745–1763
  13. Tempo R, Calafiore G, Dabbene F (2013) Randomized algorithms for analysis and control of uncertain systems, with applications, 2nd edn. Springer, London
  14. Tsypkin YZ, Polyak BT (1991) Frequency domain criteria for l p-robust stability of continuous systems. IEEE Trans Autom Control 36(12):1464–1469
  15. Zadeh LA, Desoer CA (1963) Linear system theory – a state space approach. McGraw-Hill, New York

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Institute of Control Science, Moscow, Russia