Encyclopedia of Systems and Control

Living Edition
Editors: John Baillieul, Tariq Samad

Adaptive Control for Linear Time-Invariant Systems

  • Petros A. Ioannou
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_111-1


Adaptive control of linear time-invariant (LTI) systems deals with the control of LTI systems whose parameters are constant but otherwise completely unknown. In some cases, conservative norm bounds on the region of the parameter space in which the unknown parameters lie are also assumed to be known. In general, adaptive control deals with LTI plants that cannot be controlled with fixed-gain controllers, i.e., nonadaptive control methods, because their parameters, even though assumed constant for design and analysis purposes, may change over time in an unpredictable manner.

Most of the adaptive control approaches for LTI systems use the so-called certainty equivalence principle, where a control law motivated by the known-parameter case is combined with an adaptive law for estimating the unknown parameters online. The control law can be associated with different control objectives and the adaptive law with different parameter estimation techniques. These combinations give rise to a wide class of adaptive control schemes. The two popular control objectives that led to a wide range of adaptive control schemes are model reference adaptive control (MRAC) and adaptive pole placement control (APPC). In MRAC, the control objective is for the plant output to track the output of a reference model, designed to represent the desired properties of the plant, for any reference input signal. APPC is more general and is based on control laws whose objective is to place the poles of the closed loop at desired locations chosen based on performance requirements.

Another class of adaptive controllers for LTI systems, which involves ideas from MRAC and APPC, is based on multiple models, search methods, and switching logic. In this class of schemes, the unknown parameter space is partitioned into smaller subsets. For each subset, a parameter estimator, a stabilizing controller, or a combination of the two is designed. The problem then is to identify which subset of the parameter space the unknown plant model belongs to and/or which controller is a stabilizing one and meets the control objective. A switching logic is designed, based on different considerations, to identify the most appropriate plant model or controller from the list of candidate plant models and/or controllers. In this entry, we briefly describe the above approaches to adaptive control for LTI systems.



Model Reference Adaptive Control

In model reference control (MRC), the desired plant behavior is described by a reference model, which is simply an LTI system with a transfer function \(W_{m}(s)\) driven by a reference input. The controller transfer function \(C(s{,\theta }^{{\ast}})\), where \({\theta }^{{\ast}}\) is a vector with the coefficients of \(C(s)\), is then developed so that the closed-loop plant has a transfer function equal to \(W_{m}(s)\). This transfer function matching guarantees that the plant will match the reference model response for any reference input signal. In this case the plant transfer function \(G_{p}(s,\theta _{p}^{{\ast}})\), where \(\theta _{p}^{{\ast}}\) is a vector with all the coefficients of \(G_{p}(s)\), together with the controller transfer function \(C(s{,\theta }^{{\ast}})\) should lead to a closed-loop transfer function from the reference input r to the plant output \(y_{p}\) that is equal to \(W_{m}(s)\), i.e.,
$$\frac{y_{p}(s)} {r(s)} = W_{m}(s) = \frac{y_{m}(s)} {r(s)} ,\qquad (1)$$
where \(y_{m}\) is the output of the reference model. For this transfer function matching to be possible, \(G_{p}(s)\) and \(W_{m}(s)\) have to satisfy certain assumptions. These assumptions enable the calculation of the controller parameter vector \({\theta }^{{\ast}}\) as
$${ \theta }^{{\ast}} = F(\theta _{ p}^{{\ast}}),\qquad (2)$$
where F is a function of the plant parameters \(\theta _{p}^{{\ast}}\), so that the matching equation (1) is satisfied. The function in (2) has a special form in the case of MRC that allows the design of both direct and indirect MRAC. For more general classes of controller structures this is not possible, as the function F is nonlinear. The transfer function matching guarantees that the tracking error \(e_{1} = y_{p} - y_{m}\) converges to zero for any given reference input signal r. If the plant parameter vector \(\theta _{p}^{{\ast}}\) is known, then the controller parameters \({\theta }^{{\ast}}\) can be calculated using (2), and the controller \(C(s{,\theta }^{{\ast}})\) can be implemented. We are considering the case where \(\theta _{p}^{{\ast}}\) is unknown. In this case, the use of the certainty equivalence (CE) approach (Astrom and Wittenmark 1995; Egardt 1979; Ioannou and Fidan 2006; Ioannou and Kokotovic 1983; Ioannou and Sun 1996; Landau et al. 1998; Morse 1996; Landau 1979; Narendra and Annaswamy 1989; Narendra and Balakrishnan 1997; Sastry and Bodson 1989; Stefanovic and Safonov 2011; Tao 2003), where the unknown parameters are replaced with their estimates, leads to the adaptive control scheme referred to as indirect MRAC, shown in Fig. 1a. The unknown plant parameter vector \(\theta _{p}^{{\ast}}\) is estimated at each time t; its estimate, denoted by \(\theta _{p}(t)\), is generated by an online parameter estimator referred to as an adaptive law. The plant parameter estimate \(\theta _{p}(t)\) at each time t is then used to calculate the controller parameter vector \(\theta (t) = F(\theta _{p}(t))\) used in the controller \(C(s,\theta )\). This class of MRAC is called indirect MRAC because the controller parameters are not updated directly but are calculated at each time t from the estimated plant parameters. Another way of designing MRAC schemes is to parameterize the plant transfer function in terms of the desired controller parameter vector \({\theta }^{{\ast}}\). This is possible in the MRC case, because the structure of the MRC law is such that we can use (2) to write
$$ \theta _{p}^{{\ast}} = {F}^{-1}{(\theta }^{{\ast}}),\qquad (3) $$
where \({F}^{-1}\) is the inverse of the mapping \(F(\cdot )\), and then express \(G_{p}(s,\theta _{p}^{{\ast}}) = G_{p}(s,{F}^{-1}{(\theta }^{{\ast}})) =\bar{ G}_{p}(s{,\theta }^{{\ast}})\). The adaptive law for estimating \({\theta }^{{\ast}}\) online can now be developed by using \(y_{p} =\bar{ G}_{p}(s{,\theta }^{{\ast}})u_{p}\) to obtain a parametric model that is appropriate for estimating the controller vector \({\theta }^{{\ast}}\) as the unknown parameter vector. The MRAC scheme can then be developed using the CE approach, as shown in Fig. 1b. In this case, the controller parameter vector \(\theta (t)\) is updated directly without any intermediate calculations, and for this reason the scheme is called direct MRAC.
Fig. 1

Structure of (a) indirect MRAC, (b) direct MRAC
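As a simple illustration of the maps in (2) and (3), consider a hypothetical first-order example (not taken from the entry): the plant \(G_{p}(s,\theta _{p}^{{\ast}}) = \frac{b}{s+a}\) with \(\theta _{p}^{{\ast}} = {[b,a]}^{T}\), b > 0, the reference model \(W_{m}(s) = \frac{b_{m}}{s+a_{m}}\), and the MRC law \(u_{p} =\theta _{1}^{{\ast}}r -\theta _{2}^{{\ast}}y_{p}\). Matching the closed-loop transfer function to \(W_{m}(s)\) gives
$$\frac{b\theta _{1}^{{\ast}}} {s + a + b\theta _{2}^{{\ast}}} = \frac{b_{m}} {s + a_{m}}\;\;\Rightarrow \;\;{\theta }^{{\ast}} = F(\theta _{p}^{{\ast}}) = {\left [\frac{b_{m}} {b} ,\ \frac{a_{m} - a} {b} \right ]}^{T},$$
and the inverse map \({F}^{-1}\) recovers \(b = b_{m}/\theta _{1}^{{\ast}}\) and \(a = a_{m} - b_{m}\theta _{2}^{{\ast}}/\theta _{1}^{{\ast}}\), which exists whenever \(\theta _{1}^{{\ast}}\neq 0\). It is this invertible structure that makes the direct parametrization possible.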

The division of MRAC into indirect and direct is, in general, unique to MRC structures; it is possible because the maps in (2) and (3) exist, which is a direct consequence of the control objective and of the assumptions the plant and reference model are required to satisfy for the control law to exist. These assumptions are summarized below:
  • Plant Assumptions: \(G_{p}(s)\) is minimum phase, i.e., has stable zeros; its relative degree \({n}^{{\ast}}\) = number of poles − number of zeros is known; and an upper bound n on its order is also known. In addition, the sign of its high-frequency gain is known, even though this assumption can be relaxed at the cost of additional complexity.

  • Reference Model Assumptions: \(W_{m}(s)\) has stable poles and zeros, its relative degree is equal to the relative degree \({n}^{{\ast}}\) of the plant, and its order is less than or equal to the order n assumed for the plant.

The above assumptions are also used to meet the control objective in the case of known parameters; therefore, the minimum phase and relative degree assumptions are characteristics of the control objective and do not arise because of adaptive control considerations. The relative degree matching is used to avoid the need to differentiate signals in the control law. The minimum phase assumption comes from the fact that the only way for the control law to force the closed-loop plant transfer function to be equal to that of the reference model is to cancel the zeros of the plant using feedback and replace them with those of the reference model using a feedforward term. Such zero-pole cancelations are possible only if the zeros are stable, i.e., the plant is minimum phase; otherwise stability cannot be guaranteed for nonzero initial conditions and/or inexact cancelations.

The design of MRAC in Fig. 1 has additional variations depending on how the adaptive law is designed. If the reference model is chosen to be strictly positive real (SPR), which limits its transfer function and that of the plant to have relative degree 1, the derivation of the adaptive law and the stability analysis are fairly straightforward, and for this reason this class of MRAC schemes attracted a lot of interest. When the relative degree is 2, the design becomes more complex: in order to use the SPR property, the CE control law has to be modified by adding an extra nonlinear term. The stability analysis remains simple, as a single Lyapunov function can be used to establish stability. As the relative degree increases further, the design complexity increases by requiring the addition of more nonlinear terms in the CE control law (Ioannou and Fidan 2006; Ioannou and Sun 1996). The simplicity of using a single Lyapunov function for the stability analysis remains, however. This approach covers both direct and indirect MRAC and leads to adaptive laws which contain no normalization signals (Ioannou and Fidan 2006; Ioannou and Sun 1996). A more straightforward design approach is based on the CE principle, which separates the control design from the parameter estimation part and leads to a much wider class of MRAC schemes, which can be direct or indirect. In this case, the adaptive laws need to be normalized for stability, and the analysis is far more complicated than in the approach based on SPR with no normalization. An example of such a direct MRAC scheme, for the case where the sign of the high-frequency gain is known and assumed to be positive for both plant and reference model, is given below:
  • Control law:
    $$u_{p} =\theta _{ 1}^{T}(t) \frac{\alpha (s)} {\Lambda (s)}u_{p} +\theta _{ 2}^{T}(t) \frac{\alpha (s)} {\Lambda (s)}y_{p} +\theta _{3}(t)y_{p} + c_{0}(t)r {=\theta }^{T}(t)\omega ,\qquad (4)$$
    where \(\alpha (s)\triangleq \alpha _{n-2}(s) = {[{s}^{n-2},{s}^{n-3},\ldots ,s,1]}^{T}\) for n ≥ 2, \(\alpha (s) \triangleq 0\) for n = 1, and Λ(s) is a monic polynomial with stable roots and degree n − 1 that has the numerator of \(W_{m}(s)\) as a factor.
  • Adaptive law:
    $$\dot{\theta }= \Gamma \epsilon \phi ,\qquad (5)$$
    where Γ is a positive definite matrix referred to as the adaptive gain and \(\dot{\rho }=\gamma \epsilon \xi\), \(\epsilon = \frac{e_{1}-\rho \xi } {m_{s}^{2}}\), \(m_{s}^{2} = 1 +{\phi }^{T}\phi + u_{f}^{2}\), \(\xi ={\theta }^{T}\phi + u_{f}\), \(\phi = -W_{m}(s)\omega\), and \(u_{f} = W_{m}(s)u_{p}\).

The stability properties of the above direct MRAC scheme, which are typical for all classes of MRAC, are the following (Ioannou and Fidan 2006; Ioannou and Sun 1996): (i) all signals in the closed-loop plant are bounded, and the tracking error \(e_{1}\) converges to zero asymptotically; and (ii) if the plant transfer function contains no zero-pole cancelations and r is sufficiently rich of order 2n, i.e., it contains at least n distinct frequencies, then the parameter error \(\vert \tilde{\theta }\vert = \vert \theta -{\theta }^{{\ast}}\vert \) and the tracking error \(e_{1}\) converge to zero exponentially fast.
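As a minimal numerical sketch of these ideas, consider the relative-degree-1, SPR case with the unnormalized Lyapunov-based law \(\dot{\theta }= -\gamma e_{1}\omega \,\mathrm{sign}(b)\). The first-order plant, gains, and reference input below are illustrative assumptions, not taken from the entry:

```python
import numpy as np

# Hypothetical first-order plant dy/dt = -a*y + b*u with a, b unknown to the
# controller (the values below are used only to simulate the plant); b > 0 is
# the known sign of the high-frequency gain.
a, b = 1.0, 2.0              # plant (stable here, for a gentle transient)
a_m, b_m = 3.0, 3.0          # SPR reference model dy_m/dt = -a_m*y_m + b_m*r
gamma, dt, T = 5.0, 1e-3, 20.0

y, y_m = 0.0, 0.0
theta = np.zeros(2)          # adapts toward theta* = [b_m/b, (a - a_m)/b]
e_hist = []
for k in range(int(T / dt)):
    t = k * dt
    r = np.sin(t) + np.sin(3.0 * t)        # sufficiently rich reference input
    omega = np.array([r, y])               # regressor [r, y_p]
    u = theta @ omega                      # certainty-equivalence control law
    e1 = y - y_m                           # tracking error
    theta += dt * (-gamma * e1 * omega)    # Lyapunov-based law, sign(b) = +1
    y += dt * (-a * y + b * u)             # Euler step of the plant
    y_m += dt * (-a_m * y_m + b_m * r)     # Euler step of the reference model
    e_hist.append(e1)
```

With the two-frequency (sufficiently rich) input, the tracking error decays and the parameter estimates approach \({\theta }^{{\ast}}\), consistent with properties (i) and (ii) above.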

Additional details on MRAC are presented in chapter 116, which is solely devoted to MRAC.

Adaptive Pole Placement Control

Let us consider the SISO LTI plant:
$$y_{p} = G_{p}(s)u_{p},\;\;G_{p}(s) = \frac{Z_{p}(s)} {R_{p}(s)},\qquad (6)$$
where \(G_{p}(s)\) is proper and \(R_{p}(s)\) is a monic polynomial. The control objective is to choose the plant input \(u_{p}\) so that the closed-loop poles are assigned to the roots of a given monic Hurwitz polynomial \({A}^{{\ast}}(s)\), and \(y_{p}\) is required to follow a certain class of reference signals \(y_{m}\) assumed to satisfy \(Q_{m}(s)y_{m} = 0\), where \(Q_{m}(s)\), known as the internal model of \(y_{m}\), is designed to have all roots in Re{s} ≤ 0 with no repeated roots on the jω-axis. The polynomial \({A}^{{\ast}}(s)\), referred to as the desired closed-loop characteristic polynomial, is chosen based on the closed-loop performance requirements. To meet the control objective, we make the following assumptions about the plant:

P1. \(G_{p}(s)\) is strictly proper with known degree, \(R_{p}(s)\) is a monic polynomial whose degree n is known, and \(Q_{m}(s)Z_{p}(s)\) and \(R_{p}(s)\) are coprime.

Assumption P1 allows \(Z_{p}\) and \(R_{p}\) to be non-Hurwitz, in contrast to the MRAC case where \(Z_{p}\) is required to be Hurwitz.

The design of the APPC scheme is based on the CE principle. The plant parameters are estimated at each time t and used to calculate the controller parameters that meet the control objective for the estimated plant, as follows. Using (6), the plant equation can be expressed in a form convenient for parameter estimation via the model (Goodwin and Sin 1984; Ioannou and Fidan 2006; Ioannou and Sun 1996):
$$z =\theta _{ p}^{{\ast}T}\phi ,\qquad (7)$$
where \(z = \frac{{s}^{n}} {\Lambda _{p}(s)}y_{p}\), \(\theta _{p}^{{\ast}} = {[{\theta _{b}^{{\ast}}}^{T},{\theta _{a}^{{\ast}}}^{T}]}^{T}\), \(\phi = {[\frac{\alpha _{n-1}^{T}(s)} {\Lambda _{p}(s)} u_{p},-\frac{\alpha _{n-1}^{T}(s)} {\Lambda _{p}(s)} y_{p}]}^{T}\), \(\alpha _{n-1}(s) = {[{s}^{n-1},\ldots ,s,1]}^{T}\), \(\theta _{a}^{{\ast}} = {[a_{n-1},\ldots ,a_{0}]}^{T}\), \(\theta _{b}^{{\ast}} = {[b_{n-1},\ldots ,b_{0}]}^{T}\), and \(\Lambda _{p}(s)\) is a Hurwitz monic design polynomial. As an example of a parameter estimation algorithm, we consider the gradient algorithm
$$\dot{\theta }_{p} = \Gamma \epsilon \phi ,\;\;\epsilon = \frac{z -\theta _{p}^{T}\phi } {m_{s}^{2}} ,\;\;m_{s}^{2} = 1 +{\phi }^{T}\phi ,\qquad (8)$$
where \(\Gamma ={ \Gamma }^{T} > 0\) is the adaptive gain and \(\theta _{p} = {[\hat{b}_{n-1},\ldots ,\hat{b}_{0},\hat{a}_{n-1},\ldots ,\hat{a}_{0}]}^{T}\) is the vector of estimated plant parameters, which can be used to form the estimated plant polynomials \(\hat{R}_{p}(s,t) = {s}^{n} +\hat{ a}_{n-1}(t){s}^{n-1} +\ldots +\hat{a}_{1}(t)s +\hat{ a}_{0}(t)\) and \(\hat{Z}_{p}(s,t) =\hat{ b}_{n-1}(t){s}^{n-1} +\ldots +\hat{b}_{1}(t)s +\hat{ b}_{0}(t)\) of \(R_{p}(s)\) and \(Z_{p}(s)\), respectively, at each time t. The adaptive control law is given by
$$u_{p} = \left (\Lambda (s) -\hat{ L}(s,t)Q_{m}(s)\right ) \frac{1} {\Lambda (s)}u_{p} -\hat{ P}(s,t) \frac{1} {\Lambda (s)}(y_{p} - y_{m}),\qquad (9)$$
where \(\hat{L}(s,t)\) and \(\hat{P}(s,t)\) are obtained by solving the polynomial equation \(\hat{L}(s,t) \cdot {Q}_{m}(s) \cdot \hat{ R}_{p}(s,t) +\hat{ P}(s,t) \cdot \hat{ Z}_{p}(s,t) = {A}^{{\ast}}(s)\) at each time t. The operation X(s, t) ⋅ Y (s, t) denotes a multiplication of polynomials in which s is simply treated as a variable. The existence and uniqueness of \(\hat{L}(s,t)\) and \(\hat{P}(s,t)\) are guaranteed provided \(\hat{R}_{p}(s,t) \cdot {Q}_{m}(s)\) and \(\hat{Z}_{p}(s,t)\) are coprime at each frozen time t. The adaptive laws that generate the coefficients of \(\hat{R}_{p}(s,t)\) and \(\hat{Z}_{p}(s,t)\) cannot guarantee this property, which means that at certain points in time the solution \(\hat{L}(s,t)\), \(\hat{P}(s,t)\) may not exist. This problem is known as the stabilizability problem in indirect APPC, and further modifications are needed in order to handle it (Goodwin and Sin 1984; Ioannou and Fidan 2006; Ioannou and Sun 1996). Assuming that the stabilizability condition holds at each time t, it can be shown (Goodwin and Sin 1984; Ioannou and Fidan 2006; Ioannou and Sun 1996) that all signals are bounded and the tracking error converges to zero with time. Other indirect adaptive pole placement control schemes include adaptive linear quadratic control (Ioannou and Fidan 2006; Ioannou and Sun 1996). In principle, any nonadaptive control scheme can be made adaptive by replacing the unknown parameters with their estimates in the calculation of the controller parameters. The design of direct APPC schemes is not possible in general, as the map between the plant and controller parameters is nonlinear, and the plant parameters cannot be expressed as a convenient function of the controller parameters. This prevents the parametrization of the plant transfer function with respect to the controller parameters, as is done in the case of MRC.
In special cases where such a parametrization is possible, such as in MRAC, which can be viewed as a special case of APPC, the design of direct APPC is possible. The chapters on Adaptive Control, Overview; Robust Adaptive Control; and History of Adaptive Control provide additional information regarding MRAC and APPC.
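The two frozen-time computations above, the gradient parameter update and the polynomial equation defining \(\hat{L}(s,t)\) and \(\hat{P}(s,t)\), can be sketched numerically. The example below is a hypothetical first-order case (the plant parameters, \(Q_{m}(s) = s\) for a step reference, and \({A}^{{\ast}}(s) = {(s+2)}^{2}\) are illustrative assumptions, not from the entry); the polynomial equation is solved as a Sylvester-type linear system in the unknown coefficients:

```python
import numpy as np

def solve_diophantine(A, B, C):
    """Solve A(s)X(s) + B(s)Y(s) = C(s) for X, Y; coefficient arrays are
    highest power first, A and B are assumed coprime, deg X = deg C - deg A,
    deg Y = deg A - 1 (so the system below is square)."""
    nA, nB, nC = len(A) - 1, len(B) - 1, len(C) - 1
    nX, nY = nC - nA, nA - 1
    M = np.zeros((nC + 1, nX + nY + 2))
    for i in range(nX + 1):                       # columns for X coefficients
        M[i:i + nA + 1, i] = A
    pad = nC - nB - nY                            # row offset for B(s)*Y(s)
    for j in range(nY + 1):                       # columns for Y coefficients
        M[pad + j:pad + j + nB + 1, nX + 1 + j] = B
    sol = np.linalg.solve(M, np.asarray(C, float))
    return sol[:nX + 1], sol[nX + 1:]

# (i) gradient estimation of theta_p* = [b, a] from synthetic data z = theta_p*^T phi;
# the sinusoidal phi is a persistently exciting stand-in for the filtered regressor.
theta_true = np.array([2.0, 1.0])                 # hypothetical plant b/(s+a)
theta_p = np.zeros(2)
Gamma, dt = 10.0 * np.eye(2), 1e-3
for k in range(100_000):
    t = k * dt
    phi = np.array([np.sin(t), np.sin(2.3 * t)])
    z = theta_true @ phi
    eps = (z - theta_p @ phi) / (1.0 + phi @ phi) # normalized estimation error
    theta_p += dt * (Gamma @ (eps * phi))         # Euler step of the gradient law
b_hat, a_hat = theta_p

# (ii) frozen-time controller calculation: Rp_hat = s + a_hat, Zp_hat = b_hat,
# Qm(s) = s (internal model of a step), desired A_star(s) = (s + 2)^2.
A = np.convolve([1.0, 0.0], [1.0, a_hat])         # Qm(s) * Rp_hat(s)
L_hat, P_hat = solve_diophantine(A, [b_hat], [1.0, 4.0, 4.0])
```

For the true values a = 1, b = 2, the exact solution is \(\hat{L} = 1\) and \(\hat{P}(s) = 1.5s + 2\), so the computed coefficients can be checked directly.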

Search Methods, Multiple Models, and Switching Schemes

One of the drawbacks of APPC is the stabilizability condition, which requires the estimated plant at each time t to satisfy the detectability and stabilizability condition necessary for the controller parameters to exist. Since the adaptive law cannot guarantee such a property, an approach emerged that involves the precalculation of a set of controllers based on a partitioning of the plant parameter space. The problem then becomes one of identifying which of the controllers is the most appropriate. The switching to the "best" possible controller could be based on some logic driven by a cost index, multiple estimation models, or other techniques (Fekri et al. 2007; Hespanha et al. 2003; Kuipers and Ioannou 2010; Morse 1996; Narendra and Balakrishnan 1997; Stefanovic and Safonov 2011). One of the drawbacks of this approach is that it is difficult, if possible at all, to find a finite set of stabilizing controllers that covers the whole unknown parameter space, especially for high-order plants. Even if such a set is found, its dimension may be so large that it becomes impractical. Another drawback, present in all adaptive schemes, is that in the absence of persistently exciting signals, which guarantee that the input/output data carry sufficient information about the unknown plant parameters, there is no guarantee that the controller the scheme converges to is indeed a stabilizing one. In other words, if switching is disengaged or the adaptive law is switched off, there is no guarantee that a small disturbance will not drive the corresponding LTI scheme unstable. Nevertheless, these techniques allow the incorporation of well-established robust control techniques in designing the set of candidate controllers a priori. The problem is that if the plant parameters change in a way not accounted for a priori, no controller from the set may be stabilizing, leading to an unstable system. More details can be found in chapter 119 on switching adaptive control.
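A toy sketch of cost-index-driven switching (the candidate parameter vectors, forgetting factor, hysteresis margin, and noise level below are illustrative assumptions, not a specific published scheme): each candidate model keeps a running cost of its prediction error, and the supervisor selects the controller paired with the minimum-cost model, using hysteresis to prevent chattering:

```python
import numpy as np

# Three candidate plant models (parameter vectors) covering a partitioned
# parameter space; the unknown plant happens to lie near candidate 1.
rng = np.random.default_rng(1)
candidates = [np.array([1.0, 0.5]), np.array([2.0, -1.0]), np.array([0.5, 1.5])]
theta_true = np.array([2.0, -1.0])
J = np.zeros(len(candidates))                 # running prediction-error costs
lam, h = 0.99, 0.1                            # forgetting factor, hysteresis margin
active = 0                                    # index of the active model/controller
for k in range(2000):
    phi = rng.standard_normal(2)              # regressor built from plant I/O data
    y = theta_true @ phi + 0.01 * rng.standard_normal()
    e = np.array([y - th @ phi for th in candidates])
    J = lam * J + e**2                        # forgetting-factor cost update
    best = int(np.argmin(J))
    if J[best] < J[active] - h:               # switch only with a hysteresis margin
        active = best
```

The hysteresis test is what keeps the supervisor from toggling between two models with nearly equal costs, the same consideration that motivates hysteresis-based supervisory switching in Hespanha et al. (2003).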

Robust Adaptive Control

The MRAC and APPC schemes presented above are designed for LTI systems. Due to the adaptive law, the closed-loop system is no longer LTI but nonlinear and time varying. It has been shown using simple examples that the pure integral action of the adaptive law can cause parameter drift in the presence of small disturbances and/or unmodeled dynamics (Ioannou and Fidan 2006; Ioannou and Kokotovic 1983; Ioannou and Sun 1996), which can then excite the unmodeled dynamics and lead to instability. Modifications to counteract these possible instabilities led to the field of robust adaptive control, whose focus is to modify the adaptive law in order to guarantee robustness with respect to disturbances, unmodeled dynamics, time-varying parameters, classes of nonlinearities, etc., using techniques such as normalizing signals, projection, and fixed and switching sigma modification. More details on this topic can be found in chapter 118 on robust adaptive control.
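A scalar sketch of the drift phenomenon and of the fixed sigma-modification fix (all numbers are hypothetical, chosen only to expose the mechanism): with excitation that fades over time and a small bounded disturbance, the pure integral gradient law drifts slowly without bound, while the sigma-modified law stays bounded:

```python
import numpy as np

# Estimate theta* = 1 from y = theta* * phi + d, where d is a bounded
# disturbance and the excitation phi decays with time.
gamma, sigma, d = 10.0, 0.1, 0.5
theta_star = 1.0
theta_pure, theta_sig = 0.0, 0.0
dt = 0.01
for k in range(100_000):                       # simulate out to t = 1000
    t = k * dt
    phi = 1.0 / (1.0 + t)                      # excitation fading to zero
    y = theta_star * phi + d                   # measurement with disturbance
    ms2 = 1.0 + phi * phi                      # normalizing signal
    # pure integral law: theta_dot = gamma * eps * phi  -> slow drift
    eps = (y - theta_pure * phi) / ms2
    theta_pure += dt * gamma * eps * phi
    # fixed sigma-modification: theta_dot = gamma * (eps * phi - sigma * theta)
    eps_s = (y - theta_sig * phi) / ms2
    theta_sig += dt * gamma * (eps_s * phi - sigma * theta_sig)
```

The pure-integral estimate keeps growing (roughly logarithmically) long after the data stop being informative, while the sigma-modified estimate remains of the order of the true parameter; fixed sigma does introduce a bias, which is one motivation for the switching-sigma and projection variants mentioned above.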



  1. Astrom K, Wittenmark B (1995) Adaptive control. Addison-Wesley, Reading
  2. Egardt B (1979) Stability of adaptive controllers. Springer, New York
  3. Fekri S, Athans M, Pascoal A (2007) Robust multiple model adaptive control (RMMAC): a case study. Int J Adapt Control Signal Process 21(1):1–30
  4. Goodwin G, Sin K (1984) Adaptive filtering prediction and control. Prentice-Hall, Englewood Cliffs
  5. Hespanha JP, Liberzon D, Morse A (2003) Hysteresis-based switching algorithms for supervisory control of uncertain systems. Automatica 39(2):263–272
  6. Ioannou P, Fidan B (2006) Adaptive control tutorial. SIAM, Philadelphia
  7. Ioannou P, Kokotovic P (1983) Adaptive systems with reduced models. Springer, Berlin/New York
  8. Ioannou P, Sun J (1996) Robust adaptive control. Prentice-Hall, Upper Saddle River
  9. Kuipers M, Ioannou P (2010) Multiple model adaptive control with mixing. IEEE Trans Autom Control 55(8):1822–1836
  10. Landau Y (1979) Adaptive control: the model reference approach. Marcel Dekker, New York
  11. Landau I, Lozano R, M'Saad M (1998) Adaptive control. Springer, New York
  12. Morse A (1996) Supervisory control of families of linear set-point controllers part I: exact matching. IEEE Trans Autom Control 41(10):1413–1431
  13. Narendra K, Annaswamy A (1989) Stable adaptive systems. Prentice Hall, Englewood Cliffs
  14. Narendra K, Balakrishnan J (1997) Adaptive control using multiple models. IEEE Trans Autom Control 42(2):171–187
  15. Sastry S, Bodson M (1989) Adaptive control: stability, convergence and robustness. Prentice Hall, Englewood Cliffs
  16. Stefanovic M, Safonov M (2011) Safe adaptive control: data-driven stability analysis and robust synthesis. Lecture notes in control and information sciences, vol 405. Springer, Berlin
  17. Tao G (2003) Adaptive control design and analysis. Wiley-Interscience, Hoboken

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  • Petros A. Ioannou
    University of Southern California, Los Angeles, USA