# Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

# Stability: Lyapunov, Linear Systems

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_194-1

## Abstract

The notion of stability makes it possible to study the qualitative behavior of dynamical systems, in particular the behavior of trajectories close to an equilibrium point or to a motion. The notion of stability that we discuss was introduced in 1892 by the Russian mathematician A.M. Lyapunov in his doctoral thesis; hence, it is often referred to as Lyapunov stability. In this entry we discuss and characterize Lyapunov stability for linear systems.

## Keywords

Linear systems · Equilibrium points · Motions · Stability · Eigenvalues

## Introduction

Consider a linear, time-invariant, finite-dimensional system, i.e., a system described by equations of the form
$$\begin{array}{ccl} \sigma x& =&Ax + Bu, \\ y & =&Cx + Du, \end{array}$$
(1)
with $x(t) \in \mathbb{R}^{n}$, $u(t) \in \mathbb{R}^{m}$, $y(t) \in \mathbb{R}^{p}$ and $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, and $D \in \mathbb{R}^{p\times m}$ constant matrices. In Eq. (1) $\sigma x(t)$ stands for $\dot{x}(t)$ if the system is continuous-time and for $x(t+1)$ if the system is discrete-time. Since the system is time-invariant, it is assumed, without loss of generality, that all signals are defined for t ≥ 0: if the system is continuous-time, then $t \in \mathbb{R}^{+}$, the set of non-negative real numbers, whereas if the system is discrete-time, then $t \in \mathbb{Z}^{+}$, the set of non-negative integers. For ease of notation, the argument "t" is dropped whenever this does not cause confusion, and the notation t ≥ 0 denotes either $\mathbb{R}^{+}$ or $\mathbb{Z}^{+}$. Finally, we use either $x(t, x(0), u)$ or $x(t)$ to denote the solution of the first of equations (1) at a given time t ≥ 0, with initial condition $x(0)$ and input signal u. The former is used when it is important to keep track of the initial state and the external input u; the latter is used whenever there is no such need.

### Definition 1 (Equilibrium).

Consider the system (1). Assume the input u is constant, i.e., $u(t) = u_{0}$ for all t ≥ 0 and for some constant $u_{0}$. A state $x_{e}$ is an equilibrium of the system associated with the input $u_{0}$ if $x_{e} = x(t, x_{e}, u_{0})$, for all t ≥ 0.

### Proposition 1 (Equilibria of linear systems).

Consider the system (1) and assume $u(t) = u_{0}$, for all t, where $u_{0}$ is a constant vector. Then the following hold.
• If $u_{0} = 0$, then the origin is an equilibrium.

• For continuous-time systems, if A is invertible, for any $u_{0}$ there is a unique equilibrium $$x_{e} = -{A}^{-1}Bu_{0}$$. If A is not invertible, the system has either infinitely many equilibria or no equilibria.

• For discrete-time systems, if I − A is invertible, for any $u_{0}$ there is a unique equilibrium $$x_{e} = {(I - A)}^{-1}Bu_{0}$$. If I − A is not invertible, the system has either infinitely many equilibria or no equilibria.
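The continuous-time bullet can be checked numerically. The sketch below (a minimal illustration using an arbitrary 2 × 2 example system, not one taken from the text) computes $x_e = -A^{-1}Bu_0$ and verifies that $Ax_e + Bu_0 = 0$, i.e., that $x_e$ is indeed a fixed point of the dynamics:

```python
# Numerical check of Proposition 1 (illustrative sketch; the matrices
# below are arbitrary examples, not taken from the text).

def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Continuous time: x_e = -A^{-1} B u0 (A invertible).
A = [[-1.0, 0.0], [0.0, -2.0]]
B = [[1.0], [1.0]]
u0 = 3.0
Bu0 = [B[0][0] * u0, B[1][0] * u0]
xe = [-v for v in matvec(inv2(A), Bu0)]

# Verify A x_e + B u0 = 0, i.e., x_e is an equilibrium.
residual = [matvec(A, xe)[i] + Bu0[i] for i in range(2)]
print(xe)        # [3.0, 1.5]
print(residual)  # [0.0, 0.0]
```

For the discrete-time case the same check applies with $(I - A)^{-1}Bu_0$ and the residual $Ax_e + Bu_0 - x_e$.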

### Proposition 2.

Consider the continuous-time, time-invariant, linear system
$$\begin{array}{ccl} \dot{x}& =&Ax + Bu,\\ y&=&Cx + Du,\end{array}$$
and the initial condition $x(0) = x_{0}$. Then, for all t ≥ 0,
$$x(t) = {e}^{At}x_{ 0} +\displaystyle\int _{ 0}^{t}{e}^{A(t-\tau )}Bu(\tau )d\tau$$
(2)
and
$$y(t) = C{e}^{At}x_{ 0} +\displaystyle\int _{ 0}^{t}C{e}^{A(t-\tau )}Bu(\tau )d\tau + Du(t).$$
(3)

### Proposition 3.

Consider the discrete-time, time-invariant, linear system (to simplify the notation we use ${x}^{+}(t)$ to denote x(t + 1) and we drop the argument t)
$$\begin{array}{ccl} {x}^{+} & =&Ax + Bu, \\ y & =&Cx + Du,\end{array}$$
and the initial condition $x(0) = x_{0}$. Then, for all t ≥ 0,
$$x(t) = {A}^{t}x_{ 0} +\displaystyle\sum _{ i=0}^{t-1}{A}^{t-1-i}Bu(i)$$
(4)
and
$$y(t) = C{A}^{t}x_{ 0} +\displaystyle\sum _{ i=0}^{t-1}C{A}^{t-1-i}Bu(i) + Du(t).$$
(5)
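The closed-form expression (4) can be cross-checked against direct iteration of the state equation. The sketch below uses arbitrary illustrative matrices and an arbitrary input sequence (none of these values come from the text):

```python
# Cross-check of the discrete-time formula (4): iterate
# x(t+1) = A x(t) + B u(t) and compare with
# x(t) = A^t x0 + sum_{i=0}^{t-1} A^{t-1-i} B u(i).

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(len(N)))
             for c in range(len(N[0]))] for r in range(len(M))]

def matpow(M, t):
    P = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(t):
        P = matmul(P, M)
    return P

A = [[0.5, 1.0], [0.0, 0.3]]
B = [[0.0], [1.0]]
x0 = [1.0, -1.0]
u = [1.0, 2.0, -1.0, 0.5, 0.0]   # input sequence u(0), ..., u(4)
T = 5

# Direct iteration of x(t+1) = A x(t) + B u(t).
x = x0
for t in range(T):
    Ax = matvec(A, x)
    x = [Ax[i] + B[i][0] * u[t] for i in range(2)]

# Closed-form expression (4) at t = T.
xf = matvec(matpow(A, T), x0)
for i in range(T):
    term = matvec(matpow(A, T - 1 - i), [B[0][0] * u[i], B[1][0] * u[i]])
    xf = [xf[j] + term[j] for j in range(2)]

print(max(abs(x[j] - xf[j]) for j in range(2)))  # ~0.0 (up to rounding)
```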

## Definitions

In this section we provide some notions and definitions which are applicable to general dynamical systems.

### Definition 2 (Lyapunov stability).

Consider the system (1) with $u(t) = u_{0}$, for all t ≥ 0 and for some constant $u_{0}$. Let $x_{e}$ be an equilibrium point. The equilibrium is stable (in the sense of Lyapunov) if for every ε > 0 there exists a δ = δ(ε) > 0 such that $\|x(0) - x_{e}\| < \delta$ implies $\|x(t) - x_{e}\| < \epsilon$, for all t ≥ 0, where the notation $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^{n}$.

In stability theory the quantity $x(0) - x_{e}$ is called the initial perturbation, and x(t) is called the perturbed evolution. The definition of stability can therefore be interpreted as follows: an equilibrium point $x_{e}$ is stable if, no matter how small a tolerable deviation ε we select, there exists a (possibly small) neighborhood of $x_{e}$ such that all initial conditions in this neighborhood yield trajectories that remain within the tolerable deviation.

The property of stability dictates a condition on the evolution of the system for all t ≥ 0. Note, however, that the definition of stability does not require the perturbed evolution to converge asymptotically, that is, for $t \to \infty$, to $x_{e}$. This convergence property is very important in applications, since it characterizes the situation in which the perturbed evolution not only remains close to the unperturbed evolution but also converges to it. To capture this property we introduce a new definition.

### Definition 3 (Asymptotic stability).

Consider the system (1) with $u(t) = u_{0}$, for all t ≥ 0 and for some constant $u_{0}$. Let $x_{e}$ be an equilibrium point. The equilibrium is asymptotically stable if it is stable and if there exists a constant $\delta_{a} > 0$ such that $\|x(0) - x_{e}\| < \delta_{a}$ implies $\lim_{t\rightarrow\infty}\|x(t) - x_{e}\| = 0$.

In summary, an equilibrium point is asymptotically stable if it is stable, and whenever the initial perturbation is inside a certain neighborhood of x e , the perturbed evolution converges, asymptotically, to the equilibrium point, which is thus said to be attractive. From a physical point of view, this means that all sufficiently small initial perturbations give rise to effects which can be a priori bounded (stability) and which vanish asymptotically (attractivity).

It is important to highlight that, in general, attractivity does not imply stability: it is possible to have an equilibrium of a system which is not stable (i.e., it is unstable), yet for all initial perturbations, the perturbed evolution converges to the equilibrium. This however is not the case for linear systems, as discussed in section “Stability of Linear Systems”. We conclude the section with two simple examples illustrating the notions that have been introduced.

### Example 1.

Consider the discrete-time system $${x}^{+} = -x,$$ with $x(t) \in \mathbb{R}$. This system has a unique equilibrium at $x_{e} = 0$. Note that, for any initial condition $x_{0} \in \mathbb{R}$,
$$x(2t - 1) = -x_{0},\qquad \qquad x(2t) = x_{0},$$
for all integers t ≥ 1. This implies that the equilibrium is stable, but not attractive.

### Example 2.

Consider the continuous-time system
$$\dot{x}_{1} =\omega x_{2},\qquad \dot{x}_{2} = -\omega x_{1},$$
with ω a positive constant. The system has a unique equilibrium at $x_{e} = 0$. This equilibrium is stable, but not attractive. To see this note that, along the trajectories of the system, $x_{1}\dot{x}_{1} + x_{2}\dot{x}_{2} = 0$, and this implies that, along the trajectories of the system, $x_{1}^{2}(t) + x_{2}^{2}(t)$ is constant, i.e., $x_{1}^{2}(t) + x_{2}^{2}(t) = x_{1}^{2}(0) + x_{2}^{2}(0)$. Therefore, the state of the system remains on the circle centered at the origin and with radius $\sqrt{x_{1}^{2}(0) + x_{2}^{2}(0)}$, for all t ≥ 0: the condition for stability holds with δ(ε) = ε.
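The invariance of $x_1^2(t) + x_2^2(t)$ can be verified numerically from the exact solution of this system, which is a rotation of the initial condition. The sketch below uses arbitrary illustrative values of ω and x(0):

```python
# Example 2, checked numerically: the exact solution of the oscillator
# is a rotation, so x1^2 + x2^2 is conserved along trajectories.
import math

omega = 2.0
x10, x20 = 0.6, -0.8   # arbitrary initial condition

def solution(t):
    # Exact solution of x1' = omega * x2, x2' = -omega * x1.
    x1 = x10 * math.cos(omega * t) + x20 * math.sin(omega * t)
    x2 = -x10 * math.sin(omega * t) + x20 * math.cos(omega * t)
    return x1, x2

r0 = x10 ** 2 + x20 ** 2
for t in [0.0, 0.7, 3.1, 10.0]:
    x1, x2 = solution(t)
    # The squared norm stays at its initial value: stability with delta = eps.
    print(abs(x1 ** 2 + x2 ** 2 - r0) < 1e-9)  # True
```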

### Definition 4 (Global asymptotic stability).

Consider the system (1) with $u(t) = u_{0}$, for all t ≥ 0 and for some constant $u_{0}$. Let $x_{e}$ be an equilibrium point. The equilibrium is globally asymptotically stable if it is stable and if, for all x(0), $\lim_{t\rightarrow\infty}\|x(t) - x_{e}\| = 0$.

The property of (global) asymptotic stability can be strengthened by imposing conditions on the speed of convergence of $\|x(t) - x_{e}\|$.

### Definition 5 (Exponential stability).

Consider the system (1) with $u(t) = u_{0}$, for all t ≥ 0 and for some constant $u_{0}$. Let $x_{e}$ be an equilibrium point. The equilibrium is exponentially stable if there exists λ > 0, in the case of continuous-time systems, or 0 < λ < 1, in the case of discrete-time systems, such that for all ε > 0 there exists a δ = δ(ε) > 0 such that $\|x(0) - x_{e}\| < \delta$ implies $\|x(t) - x_{e}\| < \epsilon\, {e}^{-\lambda t}$, in the case of continuous-time systems, and $\|x(t) - x_{e}\| < \epsilon\,{\lambda}^{t}$, in the case of discrete-time systems, for all t ≥ 0.

### Definition 6 (Stability of motion).

Consider the system (1). Let
$$\mathcal{M} =\{ (t,x(t)) \in T \times {\mathbb{R}}^{n}\},$$
with $x(t) = x(t, x_{0}, u)$, for given $x_{0}$ and u, and $T = \mathbb{R}^{+}$, in the case of continuous-time systems, or $T = \mathbb{Z}^{+}$, in the case of discrete-time systems, be a motion. The motion is stable if for every ε > 0 there exists a δ = δ(ε) > 0 such that $\|x(0) - x_{0}\| < \delta$ implies
$$\|x(t,x(0),u) - x(t,x_{0},u)\| <\epsilon ,$$
(6)
for all t ≥ 0.

The notion of stability of a motion is substantially the same as the notion of stability of an equilibrium. The key point is the time parametrization: a motion is stable if, for small initial perturbations, the perturbed evolution remains close, at every fixed t ≥ 0, to the unperturbed evolution. Closeness of the perturbed and unperturbed trajectories, regarded as curves, does not by itself imply stability of the motion: the trajectories may be close but traversed with different timings, so that condition (6) is violated for some t ≥ 0.

## Stability of Linear Systems

The notion of stability relies on the knowledge of the trajectories of the system. As a result, even if this notion is very elegant and useful in applications, it is in general hard to assess stability of an equilibrium or of a motion. There are, however, classes of systems for which it is possible to give stability conditions without relying upon the knowledge of the trajectories. Linear systems belong to one such class. In this section we study the stability properties of linear systems, and we show that, because of the linear structure, it is possible to assess the properties of stability and attractivity in a simple way. To begin with, we recall some properties of linear systems.

### Proposition 4.

Consider a linear, time-invariant system. (Asymptotic) stability of one motion implies (asymptotic) stability of all motions. In particular, (asymptotic) stability of any motion implies and is implied by (asymptotic) stability of the equilibrium x e = 0.

The above statement, together with the result in Proposition 1, implies the following important properties.

### Proposition 5.

If the origin of a linear system is asymptotically stable, then, necessarily, the origin is the only equilibrium of the system for u = 0. Moreover, asymptotic stability of the zero equilibrium is always global. Finally, asymptotic stability implies exponential stability.

The above discussion shows that the stability properties of a motion (e.g., an equilibrium) of a linear system are inherited by all motions of the system. Moreover, for linear systems, local properties are always global properties. This means that, with some abuse of terminology, we can attribute the stability properties to the linear system itself; for example, we say that a linear system is stable to mean that all its motions are stable. Stability properties of a linear, time-invariant system are therefore properties of the free evolution of its state: for this class of systems, it is possible to obtain simple stability tests.

### Proposition 6.

A linear, time-invariant system is stable if and only if $\|{e}^{At}\| \leq k$, for continuous-time systems, or $\|{A}^{t}\| \leq k$, for discrete-time systems, for all t ≥ 0 and for some k > 0. It is asymptotically stable if and only if $\lim_{t\rightarrow\infty}{e}^{At} = 0$, for continuous-time systems, or $\lim_{t\rightarrow\infty}{A}^{t} = 0$, for discrete-time systems.

To state the next result we need to define the geometric multiplicity of an eigenvalue. To this end we recall a few facts. Consider a matrix $A \in \mathbb{R}^{n\times n}$ and a polynomial p(λ). The polynomial p(λ) is a zeroing polynomial for A if p(A) = 0. Note that, by the Cayley-Hamilton theorem, the characteristic polynomial of A is a zeroing polynomial for A. Among all zeroing polynomials there is a unique monic polynomial $p_{M}(\lambda)$ of smallest degree. This polynomial is called the minimal polynomial of A. Note that the minimal polynomial of A is a divisor of the characteristic polynomial of A. If A has r ≤ n distinct eigenvalues $\lambda_{1},\ldots,\lambda_{r}$, then
$$p_{M}(\lambda ) = {(\lambda -\lambda _{1})}^{m_{1} }{(\lambda -\lambda _{2})}^{m_{2} }\cdots {(\lambda -\lambda _{r})}^{m_{r} },$$
where the number $m_{i}$ denotes, by definition, the geometric multiplicity of $\lambda_{i}$, for i = 1, …, r. This means that the geometric multiplicity of $\lambda_{i}$ equals the multiplicity of $\lambda_{i}$ as a root of $p_{M}(\lambda)$. Recall, finally, that the multiplicity of $\lambda_{i}$ as a root of the characteristic polynomial is called the algebraic multiplicity.
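As a small illustration of these notions, consider the Jordan block A = [[2, 1], [0, 2]] (an arbitrary example): its characteristic polynomial is (λ − 2)², and since A − 2I ≠ 0 while (A − 2I)² = 0, the minimal polynomial is also (λ − 2)², so the multiplicity $m_1$ equals 2. Had A been 2I instead, A − 2I would vanish and the minimal polynomial would be λ − 2, giving $m_1 = 1$. The sketch below verifies the Jordan-block case:

```python
# Minimal-polynomial sketch: for the Jordan block A = [[2, 1], [0, 2]],
# (A - 2I) is nonzero but (A - 2I)^2 = 0, so the minimal polynomial
# is (lambda - 2)^2 and the multiplicity m_1 is 2.

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

A = [[2.0, 1.0], [0.0, 2.0]]
# N = A - 2I
N = [[A[r][c] - (2.0 if r == c else 0.0) for c in range(2)] for r in range(2)]
print(N)             # [[0.0, 1.0], [0.0, 0.0]]  -> (A - 2I) is nonzero
print(matmul(N, N))  # [[0.0, 0.0], [0.0, 0.0]]  -> (A - 2I)^2 = 0
```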

### Proposition 7.

The equilibrium x e = 0 of a linear, time-invariant system is stable if and only if the following conditions hold.
• In the case of continuous-time systems, the eigenvalues of A with geometric multiplicity equal to one have non-positive real part, and the eigenvalues of A with geometric multiplicity larger than one have negative real part.

• In the case of discrete-time systems, the eigenvalues of A with geometric multiplicity equal to one have modulus not larger than one, and the eigenvalues of A with geometric multiplicity larger than one have modulus smaller than one.

Proof Let $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$, with r ≥ 1, be the distinct eigenvalues of A, i.e., the distinct roots of the characteristic polynomial of A. Then
$${e}^{At} =\displaystyle\sum _{ i=1}^{r}\displaystyle\sum _{ k=1}^{m_{i} }R_{ik} \frac{{t}^{k-1}} {(k - 1)!}{e}^{\lambda _{i}t},$$
for some matrices $R_{ik}$, where $m_{i}$ is the geometric multiplicity of the eigenvalue $\lambda_{i}$. This matrix is bounded if and only if the conditions in the statement hold. Similarly,
$${A}^{t} =\displaystyle\sum _{ i=1}^{r}\displaystyle\sum _{ k=1}^{m_{i} }R_{ik} \frac{{t}^{k-1}} {(k - 1)!}\lambda _{i}^{t-k+1},$$
for some matrices $R_{ik}$, and this is bounded if and only if the conditions in the statement hold. $$\vartriangleleft$$
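The role of the geometric multiplicity in Proposition 7 can be seen on two arbitrary 2 × 2 discrete-time examples: a rotation matrix, whose eigenvalues ± i have modulus one and multiplicity one in the minimal polynomial, and a Jordan block at the eigenvalue 1, where the multiplicity is two and the term in $t\,\lambda^{t-1}$ makes the powers grow:

```python
# Illustration of Proposition 7 (discrete time). A1 has eigenvalues +/- i
# (modulus 1, multiplicity one): A1^t stays bounded. A2 is a Jordan block
# at 1 (multiplicity two): A2^t = [[1, t], [0, 1]] grows without bound.

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def matpow(M, t):
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        P = matmul(P, M)
    return P

def maxabs(M):
    return max(abs(e) for row in M for e in row)

A1 = [[0.0, 1.0], [-1.0, 0.0]]  # rotation: eigenvalues +/- i
A2 = [[1.0, 1.0], [0.0, 1.0]]   # Jordan block at eigenvalue 1

print(maxabs(matpow(A1, 1000)))  # 1.0: bounded, the origin is stable
print(maxabs(matpow(A2, 1000)))  # 1000.0: unbounded, the origin is unstable
```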

### Proposition 8.

The equilibrium x e = 0 of a linear, time-invariant system is asymptotically stable if and only if the following conditions hold.
• In the case of continuous-time systems, the eigenvalues of A have negative real part.

• In the case of discrete-time systems, the eigenvalues of A have modulus smaller than one.

Proof The proof is similar to that of the previous proposition, once it is noted that, for the considered class of systems and as stated in Proposition 6, asymptotic stability implies and is implied by boundedness and convergence to zero of $e^{At}$ or $A^{t}$. $$\vartriangleleft$$
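Proposition 8 can likewise be illustrated on an arbitrary discrete-time example: a Jordan block with both eigenvalues equal to 0.9. The powers contain terms of the form $t\,0.9^{t-1}$, which still decay, so $A^t \to 0$, in agreement with the eigenvalue test:

```python
# Illustration of Proposition 8 (discrete time): A has the double
# eigenvalue 0.9 with a Jordan block, so A^t contains t * 0.9^{t-1};
# both 0.9^t and t * 0.9^{t-1} vanish, hence A^t -> 0.

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def matpow(M, t):
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        P = matmul(P, M)
    return P

A = [[0.9, 1.0], [0.0, 0.9]]
for t in [10, 100, 500]:
    print(max(abs(e) for row in matpow(A, t) for e in row))
# The printed values shrink toward zero as t grows.
```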

### Remark 1.

For linear, time-varying systems, i.e., systems described by equations of the form
$$\begin{array}{ccl} \sigma x& =&A(t)x + B(t)u,\\ y&=&C(t)x + D(t)u, \end{array}$$
it is possible to provide stability conditions in the spirit of the boundedness and convergence conditions in Proposition 6. These require the definition of a matrix, the so-called state-transition matrix, which describes the free evolution of the state of the system. It is, however, not possible to provide conditions in terms of eigenvalues of the matrix A(t) similar to the conditions in Propositions 7 and 8.

We conclude this discussion with an alternative characterization of asymptotic stability in terms of linear matrix inequalities.

### Proposition 9.

The equilibrium x e = 0 of a linear, time-invariant system is asymptotically stable if and only if the following conditions hold.
• In the case of continuous-time systems, there exists a symmetric positive definite matrix $P = {P}^{\top}$ such that $${A}^{\top}P + PA < 0.$$

• In the case of discrete-time systems, there exists a symmetric positive definite matrix $P = {P}^{\top}$ such that $${A}^{\top}PA - P < 0.$$
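The continuous-time condition can be checked on a small example. In the sketch below, both the stable matrix A (eigenvalues −1 and −2) and the matrix P, computed by hand so that $A^{\top}P + PA = -I$, are arbitrary illustrative choices; the code verifies positive definiteness of P and negative definiteness of $A^{\top}P + PA$ via Sylvester's criterion on the leading principal minors:

```python
# Verification of a Lyapunov LMI certificate for a 2x2 example.

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def transpose(M):
    return [[M[c][r] for c in range(2)] for r in range(2)]

def addm(M, N):
    return [[M[r][c] + N[r][c] for c in range(2)] for r in range(2)]

A = [[0.0, 1.0], [-2.0, -3.0]]    # eigenvalues -1 and -2: stable
P = [[1.25, 0.25], [0.25, 0.25]]  # hand-computed: A'P + PA = -I

Q = addm(matmul(transpose(A), P), matmul(P, A))
print(Q)  # [[-1.0, 0.0], [0.0, -1.0]]

# P > 0 and Q < 0 via Sylvester's criterion (leading principal minors).
assert P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0
assert -Q[0][0] > 0 and Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0] > 0
print("LMI certificate verified")
```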

To complete our discussion we stress that stability properties are invariant with respect to changes in coordinates in the state space.

### Corollary 1.

Consider a linear, time-invariant system and assume it is (asymptotically) stable. Then any representation obtained by means of a change of coordinates of the form $$x(t) = L\hat{x}(t)$$ , with L constant and invertible, is (asymptotically) stable.

Proof The proof is based on the observation that the change of coordinates transforms the matrix A into $\tilde{A} = {L}^{-1}AL$ and that the matrices A and $\tilde{A}$ are similar; in particular, they have the same characteristic and minimal polynomials. $$\vartriangleleft$$
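For 2 × 2 matrices the invariance is easy to see concretely: the characteristic polynomial is $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A)$, and both trace and determinant are preserved under similarity. The sketch below checks this on arbitrary illustrative choices of A and L:

```python
# Corollary 1, checked on an example: L^{-1} A L has the same trace and
# determinant as A, hence the same characteristic polynomial and the
# same eigenvalues, so the stability tests are unaffected.

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[-1.0, 2.0], [0.0, -3.0]]   # eigenvalues -1 and -3
L = [[1.0, 1.0], [0.0, 1.0]]     # invertible change of coordinates
At = matmul(inv2(L), matmul(A, L))

tr = lambda M: M[0][0] + M[1][1]
dt = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(tr(A), tr(At))  # -4.0 -4.0
print(dt(A), dt(At))  # 3.0 3.0
```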

## Summary and Future Directions

The property of Lyapunov stability is instrumental in characterizing the qualitative behavior of dynamical systems. For linear, time-invariant systems, this property can be studied on the basis of the location, and multiplicity, of the eigenvalues of the matrix A. The property of Lyapunov stability can be studied for more general classes of systems, including nonlinear systems, distributed parameter systems, and hybrid systems, to which the basic definitions given in this entry apply.

## Recommended Reading

Classical references on Lyapunov stability theory and on stability theory for linear systems are given below.

## Bibliography

1. Antsaklis PJ, Michel AN (2007) A linear systems primer. Birkhäuser, Boston
2. Brockett RW (1970) Finite dimensional linear systems. Wiley, London
3. Hahn W (1967) Stability of motion. Springer, New York
4. Khalil HK (2002) Nonlinear systems, 3rd edn. Prentice-Hall, Upper Saddle River
5. Lyapunov AM (1992) The general problem of the stability of motion. Taylor & Francis, London
6. Trentelman HL, Stoorvogel AA, Hautus MLJ (2001) Control theory for linear systems. Springer, London
7. Zadeh LA, Desoer CA (1963) Linear system theory. McGraw-Hill, New York

## Copyright information

© Springer-Verlag London 2014

## Authors and Affiliations

1. Department of Electrical and Electronic Engineering, Imperial College London, London, UK