Stability: Lyapunov, Linear Systems
Abstract
The notion of stability makes it possible to study the qualitative behavior of dynamical systems, in particular the behavior of trajectories close to an equilibrium point or to a motion. The notion of stability that we discuss was introduced in 1892 by the Russian mathematician A.M. Lyapunov in his doctoral thesis; hence, it is often referred to as Lyapunov stability. In this entry we discuss and characterize Lyapunov stability for linear systems.
Keywords
Linear systems; Equilibrium points; Motions; Stability; Eigenvalues
Introduction
Definition 1 (Equilibrium).
Consider the system (1). Assume the input u is constant, i.e., u(t) = u _{0} for all t ≥ 0 and for some constant u _{0}. A state x _{ e } is an equilibrium of the system associated with the input u _{0} if x _{ e } = x(t, x _{ e }, u _{0}) for all t ≥ 0.
Proposition 1 (Equilibria of linear systems).

If u _{0} = 0 then the origin is an equilibrium.

For continuous-time systems, if A is invertible, for any u _{0} there is a unique equilibrium \(x_{e} = -{A}^{-1}Bu_{0}\). If A is not invertible, the system has either infinitely many equilibria or none.

For discrete-time systems, if I − A is invertible, for any u _{0} there is a unique equilibrium \(x_{e} = {(I - A)}^{-1}Bu_{0}\). If I − A is not invertible, the system has either infinitely many equilibria or none.
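The equilibrium formulas of Proposition 1 can be checked numerically. The sketch below uses NumPy on system matrices of our own choosing (they do not appear in this entry): a continuous-time system \(\dot{x} = Ax + Bu\) and a discrete-time system x(t + 1) = Fx(t) + Gu(t).

```python
import numpy as np

# Hypothetical system matrices (illustrative, not from the article).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
u0 = np.array([[2.0]])

# Continuous time: 0 = A x_e + B u_0  =>  x_e = -A^{-1} B u_0 (A invertible).
xe_ct = -np.linalg.solve(A, B @ u0)

F = np.array([[0.5, 0.0], [0.0, 0.25]])
G = B
# Discrete time: x_e = F x_e + G u_0  =>  x_e = (I - F)^{-1} G u_0.
xe_dt = np.linalg.solve(np.eye(2) - F, G @ u0)

print(np.allclose(A @ xe_ct + B @ u0, 0))      # x_e is a fixed point (CT)
print(np.allclose(F @ xe_dt + G @ u0, xe_dt))  # x_e is a fixed point (DT)
```

Using `np.linalg.solve` rather than forming the inverse explicitly is the standard numerically preferable choice.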
Proposition 2.
Proposition 3.
Definitions
In this section we provide some notions and definitions which are applicable to general dynamical systems.
Definition 2 (Lyapunov stability).
Consider the system (1) with u(t) = u _{0}, for all t ≥ 0 and for some constant u _{0}. Let x _{ e } be an equilibrium point. The equilibrium is stable (in the sense of Lyapunov) if for every ε > 0 there exists a δ = δ(ε) > 0 such that \(\|x(0) - x_{e}\| <\delta\) implies \(\|x(t) - x_{e}\| <\epsilon\) for all t ≥ 0, where the notation \(\|\cdot\|\) denotes the Euclidean norm in \({\mathbb{R}}^{n}\).
In stability theory the quantity x(0) − x _{ e } is called the initial perturbation, and x(t) is called the perturbed evolution. The definition of stability can therefore be interpreted as follows. An equilibrium point x _{ e } is stable if, no matter how small a tolerable deviation ε we select, there exists a (possibly small) neighborhood of the equilibrium x _{ e } such that all initial conditions in this neighborhood yield trajectories that remain within the tolerable deviation.
The property of stability dictates a condition on the evolution of the system for all t ≥ 0. Note, however, that in the definition of stability we have not requested that the perturbed evolution converge asymptotically, that is, for t → ∞, to x _{ e }. This convergence property is very important in applications, as it characterizes the situation in which the perturbed evolution not only remains close to the unperturbed evolution but also converges to it. To capture this property we introduce a new definition.
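The distinction between stability and convergence can be illustrated with a classical example of our own choosing (it is not taken from this entry): the harmonic oscillator \(\dot{x} = Ax\) with A = [[0, 1], [−1, 0]]. Since A² = −I, the transition matrix is a rotation, \(e^{At} = [[\cos t, \sin t], [-\sin t, \cos t]]\), so every trajectory stays on the circle of radius \(\|x(0)\|\): the origin is stable but not attractive.

```python
import numpy as np

# Harmonic oscillator x' = A x, A = [[0, 1], [-1, 0]]: e^{At} is a rotation.
def transition(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

x0 = np.array([0.1, 0.0])
norms = [np.linalg.norm(transition(t) @ x0) for t in np.linspace(0.0, 20.0, 50)]

print(max(norms))  # stays at ||x(0)||: stability holds with delta = epsilon
print(min(norms))  # but never shrinks: the origin is not attractive
```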
Definition 3 (Asymptotic stability).
Consider the system (1) with u(t) = u _{0}, for all t ≥ 0 and for some constant u _{0}. Let x _{ e } be an equilibrium point. The equilibrium is asymptotically stable if it is stable and if there exists a constant δ _{ a } > 0 such that \(\|x(0) - x_{e}\| <\delta _{a}\) implies \(\lim _{t\rightarrow \infty }\|x(t) - x_{e}\| = 0\).
In summary, an equilibrium point is asymptotically stable if it is stable, and whenever the initial perturbation is inside a certain neighborhood of x _{ e }, the perturbed evolution converges, asymptotically, to the equilibrium point, which is thus said to be attractive. From a physical point of view, this means that all sufficiently small initial perturbations give rise to effects which can be a priori bounded (stability) and which vanish asymptotically (attractivity).
It is important to highlight that, in general, attractivity does not imply stability: it is possible to have an equilibrium of a system which is not stable (i.e., it is unstable), yet for all initial perturbations, the perturbed evolution converges to the equilibrium. This however is not the case for linear systems, as discussed in section “Stability of Linear Systems”. We conclude the section with two simple examples illustrating the notions that have been introduced.
Example 1.
Example 2.
Definition 4 (Global asymptotic stability).
Consider the system (1) with u(t) = u _{0}, for all t ≥ 0 and for some constant u _{0}. Let x _{ e } be an equilibrium point. The equilibrium is globally asymptotically stable if it is stable and if, for all x(0), \(\lim _{t\rightarrow \infty }\|x(t) - x_{e}\| = 0\).
The property of (global) asymptotic stability can be strengthened by imposing conditions on the convergence speed of \(\|x(t) - x_{e}\|\).
Definition 5 (Exponential stability).
Consider the system (1) with u(t) = u _{0}, for all t ≥ 0 and for some constant u _{0}. Let x _{ e } be an equilibrium point. The equilibrium is exponentially stable if there exists λ > 0, in the case of continuous-time systems, and 0 < λ < 1 in the case of discrete-time systems, such that for all ε > 0 there exists a δ = δ(ε) > 0 such that \(\|x(0) - x_{e}\| <\delta\) implies \(\|x(t) - x_{e}\| <\epsilon {e}^{-\lambda t}\), in the case of continuous-time systems, and \(\|x(t) - x_{e}\| <\epsilon {\lambda }^{t}\), in the case of discrete-time systems, for all t ≥ 0.
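The exponential bounds of Definition 5 can be verified numerically on toy scalar systems of our own choosing: \(\dot{x} = -2x\), whose solution is \(x(t) = x(0)e^{-2t}\), so the continuous-time bound holds with λ = 2 and δ = ε; and x(t + 1) = 0.5 x(t), whose solution is \(x(t) = x(0)\,0.5^{t}\), so the discrete-time bound holds with λ = 0.5.

```python
import numpy as np

# Toy systems (not from the article): x' = -2x and x(t+1) = 0.5 x(t).
x0 = 0.3
eps = 2 * abs(x0)          # any eps > |x(0)| works, taking delta = eps
lam_ct, lam_dt = 2.0, 0.5

# Continuous time: |x(t)| = |x0| e^{-2t} < eps e^{-lambda t} for all t >= 0.
ct_ok = all(abs(x0) * np.exp(-lam_ct * t) < eps * np.exp(-lam_ct * t)
            for t in np.linspace(0.0, 10.0, 101))
# Discrete time: |x(t)| = |x0| 0.5^t < eps 0.5^t for all t >= 0.
dt_ok = all(abs(x0) * lam_dt**t < eps * lam_dt**t for t in range(20))

print(ct_ok, dt_ok)
```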
Definition 6 (Stability of motion).
The notion of stability of a motion is substantially the same as the notion of stability of an equilibrium. The important point is that the time-parametrization matters, i.e., a motion is stable if, for small initial perturbations, the perturbed evolution is close, for any fixed t ≥ 0, to the unperturbed evolution. It does not follow that if the perturbed and unperturbed trajectories are close, then the motion is stable: the trajectories may be close but followed with different timing, which means that for some t ≥ 0 condition (6) may be violated.
Stability of Linear Systems
The notion of stability relies on the knowledge of the trajectories of the system. As a result, even if this notion is very elegant and useful in applications, it is in general hard to assess stability of an equilibrium or of a motion. There are, however, classes of systems for which it is possible to give stability conditions without relying upon the knowledge of the trajectories. Linear systems belong to one such class. In this section we study the stability properties of linear systems, and we show that, because of the linear structure, it is possible to assess the properties of stability and attractivity in a simple way. To begin with, we recall some properties of linear systems.
Proposition 4.
Consider a linear, time-invariant system. (Asymptotic) stability of one motion implies (asymptotic) stability of all motions. In particular, (asymptotic) stability of any motion implies and is implied by (asymptotic) stability of the equilibrium x _{e} = 0.
The above statement, together with the result in Proposition 1, implies the following important properties.
Proposition 5.
If the origin of a linear system is asymptotically stable, then, necessarily, the origin is the only equilibrium of the system for u = 0. Moreover, asymptotic stability of the zero equilibrium is always global. Finally, asymptotic stability implies exponential stability.
The above discussion shows that the stability properties of a motion (e.g., an equilibrium) of a linear system are inherited by all motions of the system. Moreover, for linear systems, local properties are always global properties. This means that, with some abuse of terminology, we can attribute the stability properties to the linear system itself: for example, we say that a linear system is stable to mean that all its motions are stable. Stability properties of a linear, time-invariant system are therefore properties of the free evolution of its state, and for this class of systems it is possible to obtain simple stability tests.
Proposition 6.
Proposition 7.

In the case of continuous-time systems, the eigenvalues of A with multiplicity, as roots of the minimal polynomial of A, equal to one have nonpositive real part, and the eigenvalues of A with multiplicity larger than one have negative real part.

In the case of discrete-time systems, the eigenvalues of A with multiplicity, as roots of the minimal polynomial of A, equal to one have modulus not larger than one, and the eigenvalues of A with multiplicity larger than one have modulus smaller than one.
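The role of the multiplicity in the minimal polynomial can be seen on two matrices of our own choosing. Both have the single eigenvalue 0 (nonpositive real part), but they differ in its multiplicity as a root of the minimal polynomial, and only one of the two origins is stable. Since both matrices satisfy A² = 0, the matrix exponential is exactly \(e^{At} = I + At\).

```python
import numpy as np

A1 = np.array([[0.0, 1.0], [0.0, 0.0]])  # minimal polynomial s^2: mult. 2
A2 = np.zeros((2, 2))                    # minimal polynomial s:   mult. 1

# Both matrices are nilpotent with A @ A = 0, so exp(A t) = I + A t exactly.
def expm_degree_two_nilpotent(A, t):
    return np.eye(2) + A * t

big_t = 100.0
n1 = np.linalg.norm(expm_degree_two_nilpotent(A1, big_t))  # grows like t
n2 = np.linalg.norm(expm_degree_two_nilpotent(A2, big_t))  # constant

print(n1 > big_t)                    # e^{A1 t} unbounded: origin unstable
print(abs(n2 - np.sqrt(2)) < 1e-12)  # e^{A2 t} = I bounded: origin stable
```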
Proposition 8.

In the case of continuous-time systems, the eigenvalues of A have negative real part.

In the case of discrete-time systems, the eigenvalues of A have modulus smaller than one.
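The eigenvalue tests of Proposition 8 translate directly into code. The sketch below, on example matrices of our own choosing (they do not appear in this entry), checks the continuous-time (Hurwitz) and discrete-time (Schur) conditions with NumPy.

```python
import numpy as np

def is_hurwitz(A):
    """Continuous-time test: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def is_schur(A):
    """Discrete-time test: all eigenvalues strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

A_ct = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
A_dt = np.array([[0.5, 1.0], [0.0, -0.25]])   # eigenvalues 0.5 and -0.25
print(is_hurwitz(A_ct), is_schur(A_dt))
```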
Proof The proof is similar to that of the previous proposition, once it is noted that, for the considered class of systems and as stated in Proposition 6, asymptotic stability implies and is implied by boundedness and convergence of \(e^{At}\) or \(A^{t}\). \(\vartriangleleft \)
Remark 1.
We conclude this discussion with an alternative characterization of asymptotic stability in terms of linear matrix inequalities.
Proposition 9.

In the case of continuous-time systems, there exists a symmetric positive definite matrix P = P ^{′} such that A ^{′} P + PA < 0.

In the case of discrete-time systems, there exists a symmetric positive definite matrix P = P ^{′} such that A ^{′} PA − P < 0.
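A solution P of the Lyapunov inequalities above can be computed by solving the corresponding Lyapunov equations. The sketch below uses SciPy's Lyapunov-equation solvers on stable matrices of our own choosing: for an asymptotically stable A, the equation A′P + PA = −Q (respectively A′PA − P = −Q) with Q > 0 has a unique symmetric solution P > 0.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

A_ct = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (Hurwitz)
A_dt = np.array([[0.5, 0.1], [0.0, -0.25]])   # eigenvalues inside unit circle
Q = np.eye(2)

# scipy solves A X + X A^H = Q; passing A' and -Q yields A'P + PA = -Q.
P_ct = solve_continuous_lyapunov(A_ct.T, -Q)
# scipy solves A X A^H - X + Q = 0; passing A' yields A'PA - P = -Q.
P_dt = solve_discrete_lyapunov(A_dt.T, Q)

print(np.all(np.linalg.eigvalsh(P_ct) > 0))  # P positive definite (CT)
print(np.all(np.linalg.eigvalsh(P_dt) > 0))  # P positive definite (DT)
```

Conversely, if such a P > 0 is found, the quadratic form x′Px serves as a Lyapunov function certifying asymptotic stability.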
To complete our discussion we stress that stability properties are invariant with respect to changes in coordinates in the state space.
Corollary 1.
Consider a linear, time-invariant system and assume it is (asymptotically) stable. Then any representation obtained by means of a change of coordinates of the form \(x(t) = L\hat{x}(t)\), with L constant and invertible, is (asymptotically) stable.
Proof The proof is based on the observation that the change of coordinates transforms the matrix A into \(\tilde{A} = {L}^{-1}AL\) and that the matrices A and \(\tilde{A}\) are similar, that is, they have the same characteristic and minimal polynomials. \(\vartriangleleft \)
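The invariance stated in Corollary 1 is easy to confirm numerically with matrices of our own choosing: a change of coordinates \(x = L\hat{x}\) replaces A by \(L^{-1}AL\), and the eigenvalues, hence the stability properties, are unchanged.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz: eigenvalues -1, -2
L = np.array([[1.0, 2.0], [0.0, 1.0]])    # any invertible L
A_tilde = np.linalg.solve(L, A @ L)       # L^{-1} A L without forming L^{-1}

eig = np.sort(np.linalg.eigvals(A))
eig_tilde = np.sort(np.linalg.eigvals(A_tilde))
print(np.allclose(eig, eig_tilde))
```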
Summary and Future Directions
The property of Lyapunov stability is instrumental to characterize the qualitative behavior of dynamical systems. For linear, time-invariant systems, this property can be studied on the basis of the location, and multiplicity, of the eigenvalues of the matrix A. The property of Lyapunov stability can be studied for more general classes of systems, including nonlinear systems, distributed parameter systems, and hybrid systems, to which the basic definitions given in this article apply.
Cross-References
Feedback Stabilization of Nonlinear Systems
Linear Systems: Continuous-Time, Time-Invariant State Variable Descriptions
Linear Systems: Continuous-Time, Time-Varying State Variable Descriptions
Linear Systems: Discrete-Time, Time-Invariant State Variable Descriptions
Linear Systems: Discrete-Time, Time-Varying State Variable Descriptions
Lyapunov Methods in Power System Stability
Power System Voltage Stability
Stability and Performance of Complex Systems Affected by Parametric Uncertainty
Recommended Reading
Classical references on Lyapunov stability theory and on stability theory for linear systems are given below.
Bibliography
Antsaklis PJ, Michel AN (2007) A linear systems primer. Birkhäuser, Boston
Brockett RW (1970) Finite dimensional linear systems. Wiley, London
Hahn W (1967) Stability of motion. Springer, New York
Khalil HK (2002) Nonlinear systems, 3rd edn. Prentice-Hall, Upper Saddle River
Lyapunov AM (1992) The general problem of the stability of motion. Taylor & Francis, London
Trentelman HL, Stoorvogel AA, Hautus MLJ (2001) Control theory for linear systems. Springer, London
Zadeh LA, Desoer CA (1963) Linear system theory. McGraw-Hill, New York