
Pole placement adaptive control with persistent jumps in the plant parameters

  • Original Article
  • Published: Mathematics of Control, Signals, and Systems

Abstract

In this paper we consider the problem of adaptively stabilizing, and providing step tracking for, an uncertain linear time-varying system. We propose an adaptive pole placement controller which solves the problem for a single-input single-output plant whose parameters switch at a moderate rate among the elements of a compact set. The output feedback controller incorporates an integrator, and its action emulates the behaviour of a pole placement state feedback compensator; the controller is periodic and mildly nonlinear, and it is easy to implement, tolerant of noise, and tolerant of a degree of unmodelled dynamics.


Figures 1–5 (omitted)


Notes

  1. We have placed a zero at the top of the last vector on the RHS of the above equation to more easily state and use the forthcoming Lemma 1.

  2. This step is not essential: the goal is to provide a small amount of delay to simplify the construction of the state space representation of the controller which is asserted to exist in the upcoming Lemma 3.

References

  1. Annaswamy AM, Narendra KS (1989) Adaptive control of a first order plant with a time-varying parameter. In: American control conference, pp 975–980

  2. Blanchini F, Casagrande D, Miani S, Viaro U (2010) Stable LPV realization of parametric transfer functions and its application to gain-scheduling control design. IEEE Trans Autom Control 55(10):2271–2281

  3. Baglietto M, Battistelli G, Tesi P (2013) Stabilization and tracking for switching linear systems under unknown switching sequences. Syst Control Lett 62:11–21

  4. Byrnes CI, Willems JC (1984) Adaptive stabilization of multivariable linear systems. In: Proceedings of the IEEE 23rd conference on decision and control, pp 1574–1577

  5. Dimogianopoulos D, Lozano R (2001) Adaptive control for linear slowly time-varying systems using direct least-squares estimation. Automatica 37(2):251–256

  6. Fidan B, Zhang Y, Ioannou P (2005) Adaptive control of a class of slowly time-varying systems with modelling uncertainties. IEEE Trans Autom Control 50(6):915–920

  7. Goodwin GC, Ramadge PJ, Caines PE (1980) Discrete-time multivariable adaptive control. IEEE Trans Autom Control AC-25:449–456

  8. Goodwin GC, Sin KS (1984) Adaptive filtering prediction and control. Prentice-Hall, Englewood Cliffs

  9. Hackl CM, Hopfe N, Ilchmann A, Mueller M, Trenn S (2013) Funnel control for systems with relative degree two. SIAM J Control Optim 51(2):965–995

  10. Hespanha JP, Morse AS (1999) Stability of switched systems with average dwell-time. In: Proceedings of the IEEE conference on decision and control, Phoenix, AZ, pp 2655–2660

  11. Hocherman J, Kulkarni SR, Ramadge PJ (1998) Controller switching based on output prediction errors. IEEE Trans Autom Control 43(5):596–607

  12. Hespanha JP, Liberzon D, Morse AS (2003) Overcoming the limitations of adaptive control by means of logic-based switching. Syst Control Lett 49:49–65

  13. Ilchmann A, Ryan EP, Trenn S (2005) Tracking control: performance funnels and prescribed transient behaviour. Syst Control Lett 55:655–670

  14. Ioannou PA, Sun J (1996) Robust adaptive control. Prentice-Hall, Englewood Cliffs

  15. Kreisselmeier G (1986) Adaptive control of a class of slowly time-varying plants. Syst Control Lett 8(2):97–103

  16. Liberzon D, Morse AS (1999) Basic problems in stability and design of switched systems. IEEE Control Syst Mag 19(5):59–70

  17. Ljung L, Soderstrom T (1982) Theory and practice of recursive identification. MIT Press, Cambridge

  18. Morse AS (1980) Global stability of parameter-adaptive control systems. IEEE Trans Autom Control AC-25:433–439

  19. Morse AS (1996) Supervisory control of families of linear set-point controllers—part 1: exact matching. IEEE Trans Autom Control AC-41:1413–1431

  20. Morse AS (1997) Supervisory control of families of linear set-point controllers—part 2: robustness. IEEE Trans Autom Control AC-42:1500–1515

  21. Miller DE (2003) A new approach to model reference adaptive control. IEEE Trans Autom Control AC-48:743–757

  22. Miller DE (2006) Near optimal LQR performance for a compact set of plants. IEEE Trans Autom Control 51(9):1423–1439

  23. Miller DE, Davison EJ (1991) An adaptive controller which provides an arbitrarily good transient and steady-state response. IEEE Trans Autom Control 36:68–81

  24. Middleton RH, Goodwin GC (1988) Adaptive control of time-varying linear systems. IEEE Trans Autom Control 33(2):150–155

  25. Miller DE, Mansouri N (2010) Model reference adaptive control using simultaneous probing, estimation, and control. IEEE Trans Autom Control 55(9):2014–2029

  26. Miller DE, Rossi M (2001) Simultaneous stabilization with near optimal LQR performance. IEEE Trans Autom Control AC-46:1543–1555

  27. Marino R, Tomei P (2003) Adaptive control of linear time-varying systems. Automatica 39:651–659

  28. Miller DE, Vale JR (2010) Pole placement adaptive control with persistent jumps in the plant parameters. In: Proceedings of the 2010 IEEE conference on decision and control, Atlanta, GA

  29. Miller DE, Vale JR (2012) Pole placement adaptive control with persistent jumps in the plant parameters. Technical Report UW-ECE-2012-06, Department of Electrical and Computer Engineering, University of Waterloo, Canada (available at the National Library of Canada)

  30. Narendra KS, Balakrishnan J (1997) Adaptive control using multiple models. IEEE Trans Autom Control AC-42:171–187

  31. Narendra KS, Han Z (2012) A new approach to adaptive control using multiple models. Int J Adapt Control Signal Process 26:778–799

  32. Narendra KS, Lin YH, Valavani LS (1980) Stable adaptive controller design, part II: proof of stability. IEEE Trans Autom Control AC-25:440–448

  33. Rudin W (1973) Functional analysis. McGraw-Hill, New York

  34. Tsakalis KS, Ioannou PA (1989) Adaptive control of linear time-varying plants: a new model reference controller structure. IEEE Trans Autom Control 34(10):1038–1046

  35. Tsakalis KS, Ioannou PA (1990) A new indirect adaptive control scheme for time-varying plants. IEEE Trans Autom Control 35(6):697–705

  36. Tian Z, Narendra KS (2009) Adaptive control of linear periodic systems. In: 2009 American control conference, St. Louis

  37. Voulgaris PG, Dahleh MA, Valavani LS (1994) Robust adaptive control: a slowly time-varying approach. Automatica 30(9):1455–1461

  38. Vu L, Liberzon D (2011) Switching adaptive control of uncertain time-varying plants. IEEE Trans Autom Control 56(1):27–42

  39. Vale JR, Miller DE (2011) Step tracking in the presence of persistent plant changes. IEEE Trans Autom Control, pp 43–58


Author information


Corresponding author

Correspondence to Daniel E. Miller.

Additional information

This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

Electronic supplementary material


Supplementary material 1 (PDF 312 KB)

Appendix

To prove the KEL, we make use of Lemma 1 (i) of [22], which we restate here in our notation. Before proceeding, let us define

$$\begin{aligned} {\tilde{p}} (p) := \left[ \begin{array}{cccc} 0 \;\; {\bar{C}}_p{\bar{B}} \;\; {\bar{C}}_p{\bar{A}}_p{{\bar{B}}} \;\; \cdots \;\; {\bar{C}}_p{\bar{A}}_p^{m-1} {{\bar{B}}} \end{array} \right] ^\mathrm{T}. \end{aligned}$$

Lemma 5

Let \({\bar{h}} \in (0,1)\) and \(m \in \mathbf{N}\). There exists a constant \(\gamma > 0\) so that for every \({t_s}\in \mathbf{R},{\bar{x}}_0 \in \mathbf{R}^{n+1},h \in ( 0, {\bar{h}}),{\bar{\nu }} \in \mathbf{R}\), and \(p \in \mathcal{P }\), if \(w=0\) and

$$\begin{aligned} \sigma (t) = p,\quad \nu (t) = {\bar{\nu }},\quad t \in [{t_s},{t_s}+mh], \end{aligned}$$

then the solution of (9) satisfies

$$\begin{aligned}&\Vert H_m (h) ^{-1} S_m^{-1} \mathcal{E }_m ({t_s}) - \mathcal{O }_m ( {\bar{C}}_p, {\bar{A}}_p ) {\tilde{x}} ({t_s}) - \tilde{p} (p) {\bar{\nu }} \Vert \le \gamma h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }} \Vert ),\\&\Vert {\tilde{x}}(t) - {\tilde{x}} ({t_s}) \Vert \le \gamma h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }} \Vert ),\quad t \in [ {t_s}, {t_s}+ mh]. \end{aligned}$$

Proof of Lemma 1

Fix \({\bar{h}} \in (0,1)\) and \(\rho >0\). Let \({t_s}\in \mathbf{R},{\bar{x}} ({t_s}) \in \mathbf{R}^{n+1}\), \(h \in ( 0, {\bar{h}}),{\bar{\nu }}_1, {\bar{\nu }}_2 \in \mathbf{R},w \in {PC_{\infty }}\), \(y_\mathrm{ref}\in {\mathcal{CF}}\), and \(p \in \mathcal{P }\) be arbitrary, and let \(\sigma (t) \) and \(\nu (t)\) satisfy (15). Solving the differential equation (9) (recall that \({\tilde{x}} (t)={\bar{x}} (t)-L_p^{-1}{\bar{B}} y_\mathrm{ref} (t)\)) yields

$$\begin{aligned} {\tilde{x}}(t)&= e^{{\bar{A}}_p( t - {t_s})} {\tilde{x}} ({t_s}) + \int _{{t_s}}^t e^{{\bar{A}}_p( t - \tau )} {\bar{B}} ({\bar{\nu }}_1 + \rho {\bar{\nu }}_2 ) \, d \tau \\&+\underbrace{ \int _{{t_s}}^t e^{{\bar{A}}_p( t - \tau )} {\bar{B}} w_u ( \tau ) \, d \tau }_{=: \eta _1 (t)}, \quad t \in [ {t_s}, {t_s}+ mh ], \\ {\tilde{x}}(t)&= e^{{\bar{A}}_p( t - {t_s}- mh )} {\tilde{x}} ({t_s}+mh) + \int _{{t_s}+mh}^t e^{{\bar{A}}_p(t-\tau )}{\bar{B}} ({\bar{\nu }}_1 - \rho {\bar{\nu }}_2 ) \, d \tau \\&+\underbrace{ \int _{{t_s}+mh}^t e^{{\bar{A}}_p( t - \tau )} {\bar{B}} w_u ( \tau ) \, d \tau }_{=: \eta _1 (t)},\quad t\in [{t_s}+mh,{t_s}+2mh]. \end{aligned}$$

From (9) we see that the explicit effect of the noise on \(e\) in each of \([{t_s}, {t_s}+ mh )\) and \([ {t_s}+ mh, {t_s}+ 2mh )\) is exactly

$$\begin{aligned} \eta _2 (t) := {\bar{C}}_p \eta _1 (t) - w_y (t). \end{aligned}$$

We then define the sampled version of \(\eta _2\) (to correspond with the definition of \(\mathcal{E }_m \)) via

$$\begin{aligned} N_m ({t_s}):= \left[ \begin{array}{cccc} \eta _2 ( {t_s}) \;\; \eta _2 ( {t_s}+ h ) \;\; \cdots \;\; \eta _2( {t_s}+ mh ) \end{array} \right] ^\mathrm{T}. \end{aligned}$$

From Lemma 5 above, there exists a constant \(\gamma _1>1\) which is independent of \({t_s}\in \mathbf{R},{\bar{x}} ({t_s}) \in \mathbf{R}^{n+1}\), \(h \in ( 0, {\bar{h}} ),{\bar{\nu }}\in \mathbf{R},w_u, w_y \in {PC_{\infty }}\), \(y_\mathrm{ref} \in {\mathcal{CF}}\), and \(p \in \mathcal{P }\) so that

$$\begin{aligned}&\Vert H_m (h)^{-1} S_m ^{-1} \mathcal{E }_m ({t_s})-{\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{\tilde{x}} ({t_s}) - \tilde{p} (p)( {\bar{\nu }}_1 + \rho {\bar{\nu }}_2) \Vert \nonumber \\&\quad \le \gamma _1 h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \rho \Vert {\bar{\nu }}_2 \Vert ) + \Vert H_m (h)^{-1} S_m ^{-1} {N}_m ( {t_s}) \Vert ,\qquad \end{aligned}$$
(39)
$$\begin{aligned}&\Vert H_m (h)^{-1} S_m ^{-1} \mathcal{E }_m ({t_s}+mh) - {\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{\tilde{x}} ({t_s}+mh) - \tilde{p} (p) ( {\bar{\nu }}_1 - \rho {\bar{\nu }}_2 ) \Vert \nonumber \\&\quad \le \gamma _1 h ( \Vert {\tilde{x}} ({t_s}+mh) \Vert + \Vert {\bar{\nu }}_1 \Vert + \rho \Vert {\bar{\nu }}_2 \Vert ) + \Vert H_m (h)^{-1} S_m ^{-1} {N}_m ({t_s}+mh ) \Vert \qquad \end{aligned}$$
(40)

as well as

$$\begin{aligned} \Vert {\tilde{x}} (t) - {\tilde{x}} ({t_s}) \Vert&\le \gamma _1 h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \rho \Vert {\bar{\nu }}_2 \Vert ) +\Vert \eta _1 (t) \Vert , \;\; t \in [{t_s}, {t_s}+ mh ) \\ \Vert {\tilde{x}} (t) - {\tilde{x}} ({t_s}+mh) \Vert&\le \gamma _1 h ( \Vert {\tilde{x}} ({t_s}+mh ) \Vert + \Vert {\bar{\nu }}_1 \Vert + \rho \Vert {\bar{\nu }}_2 \Vert ) +\Vert {\eta _1 (t)} \Vert ,\\&\quad t \in [ {t_s}+ mh, {t_s}+ 2mh ). \end{aligned}$$

Now there clearly exists a constant \(\gamma _2 \ge \gamma _1\) so that for \(h \in (0, {\bar{h}})\):

$$\begin{aligned} \Vert \eta _ 1 (t) \Vert&\le \gamma _2 h \Vert w \Vert _{\infty }, \\ \Vert N_m (t) \Vert + \Vert N_m (t+mh) \Vert&\le \gamma _2 \Vert w \Vert _{\infty }, \quad t \in [ {t_s}, {t_s}+2 mh ). \end{aligned}$$

This implies, in turn, that there exists a constant \(\gamma _3 \ge \gamma _2\) so that for \(h \in (0, {\bar{h}})\):

$$\begin{aligned}&\Vert {\tilde{x}} (t) - {\tilde{x}} ({t_s}) \Vert \le \gamma _3 h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \Vert {\bar{\nu }}_2 \Vert + \Vert w \Vert _{\infty } ),\,\, t \in [ {t_s}, {t_s}+ mh ),\\&\Vert {\tilde{x}} (t) \!-\! {\tilde{x}} ({t_s}\!+\! mh ) \Vert \le \gamma _3 h ( \Vert {\tilde{x}} ({t_s}) \Vert \!+\! \Vert {\bar{\nu }}_1 \Vert \!+\! \Vert {\bar{\nu }}_2 \Vert \!+\! \Vert w \Vert _{\infty } ),\,\, t \!\in \! [ {t_s}\!+\! mh, {t_s}\!+\! 2 mh ), \end{aligned}$$

which can be combined to yield

$$\begin{aligned} \Vert {\tilde{x}} (t) \!-\! {\tilde{x}} ( {t_s}) \Vert \le 2 \gamma _3 h ( \Vert {\tilde{x}} ( {t_s}) \Vert \!+\! \Vert {\bar{\nu }}_1 \Vert + \Vert {\bar{\nu }}_2 \Vert + \Vert w \Vert _{\infty } ),\quad t \in [ {t_s}, {t_s}+ 2mh ].\qquad \end{aligned}$$
(41)

This means that there exists a constant \(\gamma _4 \ge \gamma _3\) so that (39) and (40) can be rewritten as

$$\begin{aligned}&\Vert H_m (h)^{-1} S_m ^{-1} \mathcal{E }_m ( {t_s}) - {\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{\tilde{x}} ( {t_s}) - \tilde{p} (p) ( {\bar{\nu }}_1 + \rho {\bar{\nu }}_2 ) \Vert \nonumber \\&\quad \le \gamma _4 h ( \Vert {\tilde{x}} ( {t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \Vert {\bar{\nu }} _2 \Vert ) + \gamma _4 ( 1+ \Vert H_m (h)^{-1} \Vert \times \Vert S_m ^{-1} \Vert )\Vert w \Vert _{\infty } \nonumber , \\&\Vert H_m (h)^{-1} S_m ^{-1} \mathcal{E }_m ({t_s}+mh) - {\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{\tilde{x}} ( {t_s}+mh) - \tilde{p} (p) ( {\bar{\nu }}_1 - \rho {\bar{\nu }}_2 ) \Vert \nonumber \\&\quad \le \gamma _4 h ( \Vert {\tilde{x}} ({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \Vert {\bar{\nu }} _2 \Vert ) + \gamma _4 ( 1+ \Vert H_m (h)^{-1} \Vert \times \Vert S_m ^{-1} \Vert ) \Vert w \Vert _{\infty } \end{aligned}$$

for \(h \in (0, {\bar{h}})\). At this point, we simply need to combine these two inequalities with suitable weights to obtain the desired results. Specifically, if we take \(\frac{\rho -1}{2 \rho }\) times the first inequality and \(\frac{\rho +1 }{2 \rho }\) times the second inequality with \({\bar{\nu }}_1={\bar{\nu }}_2\), then we obtain

$$\begin{aligned}&\Vert H_{m} (h) ^{-1} S_{m}^{-1} \left[ \frac{\rho -1}{2 \rho } \mathcal{E }_{m} ( {t_s}) + \frac{\rho + 1 }{2 \rho } \mathcal{E }_{m} ( {t_s}+mh ) \right] - {\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{{\tilde{x}}}({t_s}) \Vert \\&\quad \le 2 \gamma _4 \frac{ \rho +1}{ \rho } h ( \Vert {{\tilde{x}}}({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert ) + \frac{ \rho +1}{ \rho } \gamma _4 (1 + \Vert H_m (h)^{-1} \Vert \times \Vert S_m ^{-1} \Vert ) \Vert w \Vert _{\infty }; \end{aligned}$$

if we take the difference between the two entities premultiplied by \( \frac{1}{2 \rho } W\) and use (41), then it follows that there exists a constant \(\gamma _5 \) so that

$$\begin{aligned}&\left\| \frac{1}{2 \rho } W H_m (h) ^{-1} S_m^{-1} [ \mathcal{E }_m ({t_s}) - \mathcal{E }_m ({t_s}+mh )] - {{\bar{p}}} {\bar{\nu }}_2 \right\| \\&\quad \le \gamma _5 h ( \Vert {{\tilde{x}}}({t_s}) \Vert + \Vert {\bar{\nu }}_1 \Vert + \Vert {\bar{\nu }}_2 \Vert ) + \gamma _5 (1+ \Vert H_m (h)^{-1} \Vert \times \Vert S_m^{-1} \Vert ) \Vert w \Vert _{\infty }. \end{aligned}$$
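The reason these particular weights work: their sum is \(\frac{\rho -1}{2 \rho } + \frac{\rho +1}{2 \rho } = 1\), so the \({\mathcal{O}_{m}( {\bar{C}}_p, {\bar{A}}_p )}{\tilde{x}}\) terms are preserved, while the probing terms combine as

$$\begin{aligned} \frac{\rho -1}{2 \rho } \, \tilde{p} (p) ( {\bar{\nu }}_1 + \rho {\bar{\nu }}_2 ) + \frac{\rho +1}{2 \rho } \, \tilde{p} (p) ( {\bar{\nu }}_1 - \rho {\bar{\nu }}_2 ) = \tilde{p} (p) ( {\bar{\nu }}_1 - {\bar{\nu }}_2 ), \end{aligned}$$

which vanishes since \({\bar{\nu }}_1 = {\bar{\nu }}_2\); the difference of the two sampled quantities, in contrast, isolates \(2 \rho \, \tilde{p} (p) {\bar{\nu }}_2\), from which the factor \(\frac{1}{2 \rho }\) and the map \(W\) recover \({{\bar{p}}} {\bar{\nu }}_2\).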

The definitions of \(\gamma \) and \(\phi (h) \) are now quite clear. \(\square \)

Proof of Lemma 2

First of all, define \(\underline{d} := \inf \{ | d( {{\bar{p}}}(p) ) | : p \in \mathcal{P } \}\) and \({\bar{d}} := \sup \{ | d( {{\bar{p}}}(p) ) | : p \in \mathcal{P } \} \); due to Assumption 5, both are non-zero. If \(d( {{\bar{p}}}(p)) >0\) for all \(p \in \mathcal{P }\), then we can simply choose \(c_0 \in ( - 2/ {\bar{d}}, 0 )\) and \(c_1 =0\); if \(d( {{\bar{p}}}(p) ) < 0\) for all \(p \in \mathcal{P }\), then we can simply choose \(c_0 \in (0, 2/ {\bar{d}} )\) and \(c_1=0\); last of all, if \(d ( {{\bar{p}}}(p))\) takes both positive and negative signs as \(p\) ranges over \(\mathcal{P }\), we can choose \(c_0=0\) and \(c_1 \in (-2/{\bar{d}}^2,0)\). \(\square \)
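Remark: these choices are exactly what is needed to place the gain \(1 + c_0 d( {{\bar{p}}}(p)) + c_1 d( {{\bar{p}}}(p))^2\), which appears later in the proof of Theorem 1, inside \((-1,1)\) uniformly in \(p\). For instance, in the mixed-sign case,

$$\begin{aligned} c_1 \in (-2/{\bar{d}}^2,0) \;\; \Rightarrow \;\; -2 \le -\frac{2}{{\bar{d}}^2} \, d( {{\bar{p}}}(p))^2 < c_1 \, d( {{\bar{p}}}(p))^2 \le c_1 \, \underline{d}^2 < 0, \end{aligned}$$

so \(1 + c_1 d( {{\bar{p}}}(p))^2 \in (-1,1)\), uniformly in \(p\), since \(0 < \underline{d} \le | d( {{\bar{p}}}(p)) | \le {\bar{d}}\); the single-sign cases are checked the same way.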

Proof of Lemma 3

Since the controller is periodic of period \(T\), it is enough to look at what happens on the interval \([0, T)\). The state \(z_1\) has dimension \({\ell } \) and \(z_{1_i} \) stores a copy of \(e((i-1)h),\,i=1,2,\ldots ,\ell \). Adopting the natural notation of \(e_i \in \mathbf{R}^{\ell }\) to denote the \(i\)th standard basis vector, we set

$$\begin{aligned} (F_{11},G_{1}) (k) = \left\{ \begin{array}{ll} (0,e_1) &{}\quad k=0 \\ (I, e_{k+1} ) &{}\quad k=1,\ldots , \ell -1 ; \\ \end{array} \right. \end{aligned}$$

\((F_{11},G_{1})\) are periodic of period \(\ell \) (here \(T = \ell h\)). It is easy to verify that for \(j=1,\ldots , \ell \):

$$\begin{aligned} z_{1_j} [i] = \left\{ \begin{array}{ll} 0 &{}\quad 1 \le i \le j-1 \\ e((j-1)h) &{}\quad j \le i \le \ell . \end{array} \right. \end{aligned}$$
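As a check on this bookkeeping, the following is a minimal pure-Python sketch (the function name is hypothetical) of the \(z_1\) recursion \(z_1[k+1] = F_{11}(k) z_1[k] + G_1(k) e(kh)\) over one period:

```python
def shift_register_states(e_samples):
    """Simulate the z1 update over one period of length l = len(e_samples).

    (F11, G1)(0) = (0, e_1) clears the register and stores e(0) in slot 1;
    (F11, G1)(k) = (I, e_{k+1}) for k >= 1 keeps the stored samples and
    writes e(kh) into slot k+1.
    """
    l = len(e_samples)
    z1 = [0.0] * l
    history = []
    for k in range(l):
        if k == 0:
            z1 = [0.0] * l          # F11(0) = 0: reset the register
        z1 = list(z1)               # F11(k) = I for k >= 1: carry old samples
        z1[k] = e_samples[k]        # G1(k) = e_{k+1}: store e(kh) in slot k+1
        history.append(list(z1))
    return history
```

For example, `shift_register_states([5.0, 7.0, 9.0])` returns `[[5.0, 0.0, 0.0], [5.0, 7.0, 0.0], [5.0, 7.0, 9.0]]`, matching the table above: after step \(i\), slots \(1,\ldots ,i\) hold \(e(0), e(h), \ldots , e((i-1)h)\).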

The state \(z_2\) has dimension one and stores a copy of \(\hat{\nu }^0 (0)\). If we initialize \(z_2 [0] = \hat{\nu }^0 (0) \) and set \(\alpha _i (j) =0,\, j=0,1,\ldots , \ell -2 \) and \(i=1,\ldots ,\ell \), then we have

$$\begin{aligned} z_2 [j] = \hat{\nu }^0 (0),\quad j=0,1,\ldots ,\ell -1. \end{aligned}$$
(42)

To prove that \(f\) has the desired form we must define \(f(z[ \ell -1], e [ ( \ell -1 ) h], \ell -1 )\) so that \(z_2 [ \ell ] = \hat{\nu }^0 (T)\).

Using the above structure, it is easy to verify that \({\bar{z}} [j] := \left[ \begin{array}{c} z [j] \\ e (jh) \end{array} \right] \) contains all elements of \( \{ e(0), e(h),\ldots , e(jh), \hat{\nu }^0 (0) \}\) for \(j =0,1,\ldots ,\ell -1\). Hence, any linear combination of these elements can be obtained by multiplying \(\bar{z} [j]\) by a row vector of length \(\ell +1\). In particular,

  (i) \(\psi _m (kh)\) can be written as a linear combination of the elements of \(\bar{z} [j]\) for \(j =k+2m,\ldots ,\ell -1\).

  (ii) \(\bar{\psi }_m (kh)\) can be written as a weighted sum of the elements of \(\bar{z} [j]\) for \(j =k+2m,\ldots ,\ell -1\).

  (iii) From (27) and (ii) above it follows that each element of \(\hat{\xi } (0)\) is the argmin of two weighted sums of elements of \(\bar{z} [j]\) for \(j=4m,\ldots , \ell -1\).

  (iv) Choose \(\tilde{n}_i \in \mathbf{N}\) so that \(T_{i} = \tilde{n}_i h,\,i=1,\ldots , q+1\). From (29), we see that for \(i=1,\ldots ,q\), each element of \(\hbox {Est} [ \xi (kT) \otimes ^{i} {{\bar{p}}}]\) can be written as the argmin of two weighted sums of the elements of \(\bar{z} [j]\) for \(j= \tilde{n}_{i+1},\ldots ,\ell -1\).

  (v) At \(t= T_{q+1} = \tilde{n}_{q+1} h\), one can form \(\hat{\phi } (0)\), which is a weighted sum of the elements of \(\hbox {Est} [ \xi (kT) \otimes ^{i} {{\bar{p}}}] \in \mathbf{R}^{n_i} = \mathbf{R}^{{{\bar{m}}}^i (n+2)},\,i=0,1,\ldots , q\). In all, these \(q+1\) terms contain \(\tilde{m}\) elements. From (iii) and (iv), this means that \(\hat{\phi } (0)\) can be written as the weighted sum of \(\tilde{m}\) terms, each of which is the argmin of two weighted sums of the elements of \(\bar{z} [j]\) for \(j= \tilde{n}_{i+1},\ldots , \ell -1\).

  (vi) Choose \(\bar{\tilde{n}}_i \in \mathbf{N}\) so that \({\bar{T}}_i = \bar{\tilde{n}}_i h\) for \(i=1,\ldots , {{\bar{q}}}+1\). From (31) and part (i), we see that for \(i=1,\ldots ,{{\bar{q}}}\), each element of \(\hbox {Est} [ \hat{\phi }_0 (0) \otimes ^i {{\bar{p}}}]\) can be written as the argmin of two weighted sums of the elements of \(\bar{z} [j]\) for \(j=\bar{\tilde{n}}_{i+1},\ldots , \ell -1\).

  (vii) At \(t = {\bar{T}}_{{{\bar{q}}}+1 }\), we can form \(\hat{\phi }_1 (0)\): it is a weighted sum of the elements of \(\text{ Est } [ \hat{\phi }_0 (0) \otimes ^i {{\bar{p}}}] \in \mathbf{R}^{{{\bar{n}}}_i} = \mathbf{R}^{{{\bar{m}}}^i},\,i=0,1,\ldots , {{\bar{q}}}\). In all, there are \({\bar{q}}+1\) terms containing \(\bar{\tilde{m}}\) elements. This means that \(c_0 \hat{\phi }_0 (0) + c_1 \hat{\phi }_1 (0) \) can be written as weighted sums of \(\tilde{m} + \bar{\tilde{m}}\) terms, each of which is the argmin of two weighted sums of the elements of \(\bar{z} [\ell -1]\). Since \(\ell > \tilde{m} + \bar{\tilde{m}}\), we see that we can choose \(\alpha _i [ \ell -1],\,i=1,\ldots , \ell \), so that the last element of \(f\) has the correct form and \(z_2 [ \ell ] = \hat{\nu }^0 (T)\).

The form of the output Eq. (35) can be proven using the structure of \(\nu \) and the above observations. \(\square \)

Proof of Lemma 4 (KCL)

Let \(k \in \mathbf{Z}^+\), \({\bar{x}} (kT) \in \mathbf{R}^{n+1},\hat{\nu }^0 (kT) \in \mathbf{R}\), \(w \in {PC}_{\infty },y_\mathrm{ref} \in \mathcal{C F}\), \(T \in ( 0,T_s/3)\) and \(\sigma \in \Sigma _{T_s}\) be arbitrary; notice that \(\sigma (t)\) has at most one discontinuity on each interval of length \(T\). Define the probing signal which sits on top of the nominal signal:

$$\begin{aligned} \tilde{\nu } (t):= \nu (t) - \hat{\nu }^0 (kT),\quad t \in [ kT, (k+1)T),\ k \in \mathbf{Z}^+, \end{aligned}$$

as well as its scaled version \({\bar{\nu }}_2 (t) := \frac{1}{\rho } \tilde{ \nu } (t) \). Define \(\bar{\ell } := {\bar{T}}_{{\bar{q}}+1}/(4 {\bar{h}})\).

With this notation in hand, now let us interpret the KEL. We somewhat arbitrarily set \({\bar{h}} = 0.5\); then there exists a constant \(\gamma _1 \ge 1\) and a function \({f}_1 :(0,0.5) \rightarrow \mathbf{R}^+\) so that for all \(i \in \{ 0,1,\ldots , 2 \bar{\ell }-1 \}\), if \(\sigma (t)\) is constant on \([kT + 2i {\bar{h}}, kT + 4 i {\bar{h}} ]\), then we have

$$\begin{aligned}&\Vert \psi _m ( kT + 2 i {\bar{h}} ) - {\bar{p}} ( \sigma ( kT + 2 i {{\bar{h}}})) {\bar{\nu }}_2 (kT + 2 i {\bar{h}} ) \Vert \nonumber \\&\quad \le \gamma _1 T ( \Vert {\tilde{x}} (kT + 2 i {\bar{h}})\Vert +\Vert \hat{\nu }^0 (kT) \Vert + \Vert {\bar{\nu }}_2 ( kT + 2 i {\bar{h}})\Vert )+ {f}_1 (T) \Vert w \Vert _{\infty },\qquad \end{aligned}$$
(43)

and using the fact that probing has a special structure on \([kT, kT +4 {\bar{h}})\), we have

$$\begin{aligned}&\Vert \bar{\psi }_m ( kT + 2 i {\bar{h}} ) - \mathcal{O }_n ( {\bar{C}}_p, {\bar{A}}_p ) {\tilde{x}} (kT+2i {\bar{h}}) \Vert \nonumber \\&\quad \le \gamma _1 T ( \Vert {\tilde{x}} (kT + 2 i {\bar{h}} ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert ) + {f}_1 (T) \Vert w \Vert _{\infty },\quad i=0, 1. \end{aligned}$$
(44)

Now we develop a crude bound on \(|\nu (t)|\) and \(\Vert {\tilde{x}}(t)\Vert \). Since we carry out each estimation twice and take the smaller of the two, at least one of them will satisfy (43) or (44), as appropriate, so we can obtain a bound on the size of the estimate. First of all, because of the iterative probing carried out by the controller, it follows that there exists a constant \(\gamma _2\) so that

$$\begin{aligned}&\Vert \hbox {Est} [ \hat{\xi } (kT) \otimes ^{i} {{\bar{p}}}] \Vert \le \gamma _2 \Vert \hbox {Est} [ \hat{\xi } (kT) \otimes ^{i-1} {{\bar{p}}}] \Vert \\&\quad + \gamma _1 h \left( \sup _{\tau \in [kT, (k+1)T) } \Vert {\tilde{x}} ( \tau ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + \Vert \hbox {Est} [ \hat{\xi } (kT) \otimes ^{i-1} {{\bar{p}}}] \Vert \right) \\&\quad + {f}_1 (T) \Vert w \Vert _{\infty },\quad i=1,\ldots ,q. \end{aligned}$$

Solving iteratively, we see that there exists a constant \(\gamma _3\) so that

$$\begin{aligned} \Vert \hbox {Est} [\hat{\xi }(kT) \otimes ^{i} {{\bar{p}}}] \Vert&\le \gamma _3 \left[ \sup _{\tau \in [kT, (k+1)T) } \Vert {\tilde{x}} ( \tau ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + {f}_1 (T) \Vert w \Vert _{\infty }\right] ,\nonumber \\&\qquad i=0,1,\ldots ,q. \end{aligned}$$
(45)

Since \(\hat{\phi }_0 (kT) \) is used as a probing signal in Step (iii) of the controller implementation, it follows that there exists a constant \(\gamma _4\) so that

$$\begin{aligned}&\Vert \hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^{i} {{\bar{p}}}] \Vert \le \gamma _4 \Vert \hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^{i-1} {{\bar{p}}}] \Vert \\&\quad + \gamma _1 h \left( \sup _{\tau \in [kT, (k+1)T) } \Vert {\tilde{x}} ( \tau ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + \Vert \hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^{i-1} {{\bar{p}}}] \Vert \right) \\&\quad + {f}_1 (T) \Vert w \Vert _{\infty },\quad i=1,\ldots ,{\bar{q}}. \end{aligned}$$

Solving iteratively, we see that there exists a constant \(\gamma _5\) so that

$$\begin{aligned} \Vert \hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^{i} {{\bar{p}}}] \Vert&\le \gamma _5 \left[ \sup _{\tau \in [kT, (k+1)T) } \Vert {\tilde{x}} ( \tau ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + {f}_1 (T) \Vert w \Vert _{\infty }\right] ,\nonumber \\&\qquad i=0,1,\ldots ,{\bar{q}}. \end{aligned}$$
(46)

By examining the probing signal \(\tilde{\nu }\) and using the above bounds on \(\text{ Est } [ \hat{\xi } (kT) \otimes ^i {{\bar{p}}}]\) and \(\hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^i {{\bar{p}}}]\), we see that there exists a constant \(\gamma _6\) so that

$$\begin{aligned} \Vert \nu (t)\Vert&\le \gamma _6 \left( \sup _{\tau \in [kT, (k+1)T)}\Vert {\tilde{x}} ( \tau ) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + {f}_1 (T) \Vert w \Vert _{\infty }\right) ,\nonumber \\&\quad t \in [kT, (k+1)T). \end{aligned}$$
(47)

If we now examine the differential equation for \({\bar{x}}\) and use the fact that \({\bar{x}}\) and \({\tilde{x}}\) are closely related, then with \(f_2 (T):= 1 + f_1 (T)\) it is easily proven that there exists a constant \(\gamma _{7}\) so that

$$\begin{aligned} \Vert {\bar{x}} (t) - {\bar{x}} (kT) \Vert&\le \gamma _{7} T ( \Vert {\bar{x}} (kT) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + \Vert y_\mathrm{ref} \Vert _{\infty } ) + \gamma _{7} {f}_2 (T) \Vert w \Vert _{\infty },\nonumber \\&\quad t \in [kT, (k+1)T), \end{aligned}$$
(48)

for small \(T\), which yields the first inequality (36). By examining the differential equation for \({\bar{x}}\), we can use the above bounds on \(\nu (t)\) and \({\bar{x}} (t)\) to provide the second inequality (37) which holds when \(\chi (k,T) = 1\). We can then obtain a crude bound on \(\hat{\nu }^0 ((k+1)T)\) which we can easily leverage to obtain the last inequality (38) when \(\chi (k,T)=1\).

Now we consider the case in which \(\chi (k,T) = 0\): assume that \(\sigma (t)\) is constant on \([kT, (k+1)T]\); we use \(p\) to represent the value of \(\sigma (t)\) on this interval and \({\bar{p}}\) to represent \({\bar{p}} ( \sigma (kT))\). We start by using (48) to prove that \(\Vert {\tilde{x}}(t)-{\tilde{x}}(kT) \Vert \) is small for \(t \in [kT, (k+1)T)\). We leverage this to prove that \(\Vert \text{ Est } [ {\xi } (kT) \otimes ^{i} {{\bar{p}}}] - {{\bar{p}}}\text{ Est } [ {\xi } (kT) \otimes ^{i-1} {{\bar{p}}}] \Vert \) is small and that there exists a constant \(\gamma _{8}\) so that for small \(T\):

$$\begin{aligned} \Vert \hbox {Est} [ {\xi } (kT) \otimes ^{i} {{\bar{p}}}] - [ {\xi } (kT) \otimes ^{i} {{\bar{p}}}] \Vert \!\le \! \gamma _{8} T ( \Vert {\tilde{x}} (kT) \Vert + \Vert \hat{\nu }^0 (kT) \Vert ) \!+\! \gamma _{8} {f}_2 (T) \Vert w \Vert _{\infty } \nonumber \\ \end{aligned}$$
(49)

for \(i=0,\ldots , q\), which means that there exists a constant \(\gamma _{9}\) so that for small \(T\):

$$\begin{aligned} | \hat{\phi }_0 (kT) - \phi _0 (kT) | \le \gamma _{9} T ( \Vert {\tilde{x}} (kT) \Vert + \Vert \hat{\nu }^0 (kT) \Vert ) + \gamma _{9} {f}_2 (T) \Vert w \Vert _{\infty }. \end{aligned}$$
(50)

At this point we can apply the same logic to analyse the estimation carried out on \([kT + T_{q+1}, kT + {\bar{T}}_{{\bar{q}}+1} )\). We end up proving that \(\Vert \hbox {Est} [ \hat{\phi }_0 (kT) \otimes ^{i} {{\bar{p}}}] - [ \hat{\phi }_0 (kT) \otimes ^{i} {{\bar{p}}}] \Vert \) is small and that there exists a constant \(\gamma _{10}\) so that for small \(T\):

$$\begin{aligned} | \hat{\phi }_1 (kT) - \phi _1 (kT) | \le \gamma _{10} T ( \Vert {\tilde{x}} (kT) \Vert + \Vert \hat{\nu }^0 (kT) \Vert ) + \gamma _{10} {f}_2 (T) \Vert w \Vert _{\infty }. \end{aligned}$$
(51)

If we combine (50), (51), the update law (33), and the definitions of \(\phi _0\) and \(\phi _1\), we conclude that there exists a constant \(\gamma _{11}\) so that for small \(T\):

$$\begin{aligned}&| \hat{\nu }^0 ((k+1)T) - \hat{\nu }^0 (kT) - [ c_0 + c_1 d ( {{\bar{p}}})] \phi _0 (kT) |\\&\quad \le \gamma _{11} T (\Vert {\bar{x}} (kT) \Vert + \Vert \hat{\nu }^0 (kT) \Vert + \Vert y_\mathrm{ref} \Vert _{\infty } ) + \gamma _{11} {f}_2 (T) \Vert w \Vert _{\infty }, \end{aligned}$$

which is the inequality (38) for the case of \(\chi (k,T) = 0\).

It remains to examine the state equation, which we can rewrite as

$$\begin{aligned} \dot{{\bar{x}}} = {\bar{A}}_p {\bar{x}} + {\bar{B}} \nu + \bar{E}_p w_u = \tilde{A}_p {\bar{x}} + {\bar{B}} [ \nu - {\bar{F}}_p {\tilde{x}} ] + G_p y_\mathrm{ref} + \bar{E}_p w_u. \end{aligned}$$
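The second equality can be checked directly. Assuming \(\tilde{A}_p := {\bar{A}}_p + {\bar{B}} {\bar{F}}_p\) denotes the closed-loop matrix (a plausible reading of the notation, not stated in this excerpt) and recalling that \({\bar{x}} - {\tilde{x}} = L_p^{-1} {\bar{B}} y_\mathrm{ref}\),

$$\begin{aligned} {\bar{A}}_p {\bar{x}} + {\bar{B}} \nu = \tilde{A}_p {\bar{x}} + {\bar{B}} [ \nu - {\bar{F}}_p {\tilde{x}} ] - {\bar{B}} {\bar{F}}_p ( {\bar{x}} - {\tilde{x}} ) = \tilde{A}_p {\bar{x}} + {\bar{B}} [ \nu - {\bar{F}}_p {\tilde{x}} ] - {\bar{B}} {\bar{F}}_p L_p^{-1} {\bar{B}} \, y_\mathrm{ref}, \end{aligned}$$

which identifies \(G_p\) with \(- {\bar{B}} {\bar{F}}_p L_p^{-1} {\bar{B}}\) under this sign convention.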

If we analyse this carefully and use the fact that \(\nu (t) - {\bar{F}}_p {\tilde{x}} (t)\) has rich structure on \([kT, (k+1)T)\), then we obtain the inequality (37) for the case of \(\chi (k,T) = 0\). \(\square \)

Proof of Theorem 1

Let \( {\bar{x}}(0 )\in \mathbf{R}^{n+1},\hat{\nu }^0 (0)\in \mathbf{R}\), \(T \in ( 0, T_s/3),w \in {PC}_{\infty },y_\mathrm{ref} \in \mathcal{C F}\) and \(\sigma \in \Sigma _{T_s}\) be arbitrary.

Step 1: Defining critical times and equations

In this proof we will be applying the KCL, and we will be interested in ascertaining those intervals of the form \([kT, (k+1)T],k \in \mathbf{Z}^+\), on which \(\sigma \) is constant. Since \(\sigma \) is continuous from the right, the interval \([kT, (k+1)T]\) will be problematic if there is a discontinuity at some point in \((kT, (k+1)T]\). Associated with each \(i\in \mathbf{N}\), there is one such problematic interval for which \(t_i\in (n_i (T) T, n_i (T) T + T ]\); hence, we define

$$\begin{aligned} n_i(T):=\left\{ \begin{array}{ll} \hbox {int} \left[ \frac{t_i}{T}\right] -1 &{}\quad \text{ if } \frac{t_i}{T}\text { is an integer,} \\ \hbox {int} \left[ \frac{t_i}{T}\right] &{}\quad \text{ otherwise. } \end{array} \right. \end{aligned}$$

For convenience we define \(n_0 (T) = 0\). Since we have restricted \(T \) to be less than \(T_s\), it is clear that \(n_{i+1} (T) > n_i (T),\, i \in \mathbf{N}\), though it could very well be that \(n_1 (T) = n_0 (T)\).
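In code, the definition of \(n_i(T)\) amounts to the following (a sketch; the function name is hypothetical, and \(\hbox {int} [\cdot ]\) denotes the integer part):

```python
import math

def jump_interval_index(t_i, T):
    """Index n_i(T) such that the jump time t_i lies in (n_i*T, n_i*T + T].

    Here t_i > 0 is a discontinuity time of sigma and T > 0 is the period.
    """
    r = t_i / T
    if r == int(r):            # t_i / T is an integer: back up one interval
        return int(r) - 1
    return math.floor(r)       # otherwise int[t_i / T], since t_i, T > 0
```

For example, with \(T = 1\): a jump at \(t_i = 2\) gives \(n_i = 1\) (so that \(t_i \in (T, 2T]\)), while \(t_i = 2.5\) gives \(n_i = 2\).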

The proposed controller yields \(\nu \); since the objective is to make this close to \(F_{\sigma } {\tilde{x}}\), this yields a corresponding closed loop system which we write as

$$\begin{aligned} \dot{ {\bar{x}}}&= {\bar{A}}_{\sigma } {\bar{x}} + {\bar{B}} \nu + {\bar{E}}_{\sigma } w_u = \tilde{A}_{\sigma } {\bar{x}} +{G}_{\sigma } y_\mathrm{ref}+\bar{E}_{ \sigma } w_u + {\bar{B}} [ \nu - F_{\sigma } {\tilde{x}}]. \end{aligned}$$

The goal of the controller is to make \(\nu ( t ) - {\bar{F}}_{\sigma (t)} {\tilde{x}} (t)\) small, so that \({\bar{x}} (t) \approx {\bar{x}}^0 (t) \). On the interval \([kT, (k+1)T)\), a good approximation to this is \(\hat{\eta } (kT) := \hat{\nu }^0 (kT) - {\bar{F}}_{\sigma (kT)} {\tilde{x}} (kT)\).

Step 2: Use the KCL to construct bounds on \({\bar{x}} (kT)\) and \(\hat{\eta } (kT)\)

To proceed, we define \( \chi : \mathbf{Z}^+\times \mathbf{R}^+\rightarrow \{0,1\}\) as in the statement of the KCL; hence, \(\chi (j,T)\) equals 1 at exactly those \(j\) for which \((jT,(j+1)T]\) contains an element of \(\{ t_i: i\in \mathbf{Z}^+ \}.\) Applying the KCL and converting the bound on \(\Vert \hat{\nu }^0 (kT) \Vert \) to one on \(\Vert \hat{\eta } (kT)\Vert \), we conclude that there exists a constant \(\gamma _1\) and function \(\phi _1 : \mathbf{R}^+ \rightarrow \mathbf{R}^+\) so that for small \(T\):

$$\begin{aligned}&| \hat{\eta } [ (k+1)T] - [ 1 + c_0 d({\bar{p}} ( \sigma (kT))) + c_1 d({\bar{p}} ( \sigma (kT)))^2] \hat{\eta } (kT) |\nonumber \\&\quad \le \gamma _1 (T + \chi (k,T)) (\Vert {\bar{x}} (kT)\Vert +|\hat{\eta }(kT)| + \Vert y_\mathrm{ref} \Vert _{\infty } )+ \gamma _1 \phi _1 (T) \Vert w \Vert _{\infty }, \;\; k \ge 0, \qquad \end{aligned}$$
(52)
$$\begin{aligned}&\Vert {\bar{x}}((k+1)T) - { \Phi _{\sigma } }((k+1)T, kT ) {\bar{x}} (kT)\nonumber \\&\qquad - \int _{kT}^{(k+1)T} { \Phi _{\sigma } }((k+1)T, \tau ) [ {G}_{\sigma ( \tau ) } y_\mathrm{ref} + \bar{E}_{ \sigma ( \tau )} w_u ( \tau ) + {\bar{B}} \hat{\eta } (kT) ] d \tau \Vert \nonumber \\&\quad \le \gamma _1 ( T^2 + T \chi (k,T)) (\Vert {\bar{x}} (kT)\Vert +|\hat{\eta }(kT)| + \Vert y_\mathrm{ref} \Vert _{\infty } ) + \gamma _1 \phi _1 (T) \Vert w \Vert _{\infty },\quad k \ge 0.\qquad \quad \quad \end{aligned}$$
(53)

Defining \(\phi _2 (T) := 1 + \phi _1 (T)\) and using the bound on \(\Vert \Phi _{\sigma } (t, \tau ) \Vert \) given in Proposition 1, we see that there exists a constant \(\gamma _2 \ge 1\) so that for small \(T\):

$$\begin{aligned} \Vert {\bar{x}} (kT) \Vert&\le \gamma _2 e^{{\lambda _1}kT} \Vert {{\bar{x}}}(0 ) \Vert + \sum _{j=0}^{k-1} \gamma _2 e^{{\lambda _1}(k-j-1)T} \{ T \Vert y_\mathrm{ref} \Vert _{\infty } + T \Vert \hat{\eta } (jT) \Vert + \phi _2 (T) \Vert w \Vert _{\infty } \} \nonumber \\&+\sum _{j=0}^{k-1} \gamma _2 e^{{\lambda _1}(k-j-1)T} ( T^2 + T \chi (j,T)) \Vert {{\bar{x}}}(jT) \Vert ,\quad k \ge 0, \end{aligned}$$
(54)
$$\begin{aligned} \Vert \hat{\eta }(kT)\Vert&\le {\varepsilon }^{k} \Vert \hat{\eta }(0 ) \Vert + \sum _{j=0}^{k-1} {\varepsilon }^{k-1-j} \gamma _2 \{ [ T + \chi (j,T) ] \nonumber \\&\times \,[\Vert {\bar{x}} (jT)\Vert +|\hat{\eta }(jT)| + \Vert y_\mathrm{ref} \Vert _{\infty }] + \phi _2 (T) \Vert w \Vert _{\infty }\},\quad k \ge 0. \end{aligned}$$
(55)

Step 3: Convert difference inequalities to difference equation bounds

The above two inequalities are hard to handle directly, so we define two new difference equations:

$$\begin{aligned} \psi [(k+1)T]&= e^{\lambda _1 T} \psi (kT) + T \Vert y_\mathrm{ref} \Vert _{\infty }+T\xi (kT) + \phi _2 (T) \Vert w \Vert _{\infty } \nonumber \\&+(T^2+T \chi (k,T)) \gamma _2 \psi (kT),\quad \psi (0) = \Vert {{\bar{x}}}(0) \Vert ,\ k \ge 0, \end{aligned}$$
(56)
$$\begin{aligned} \xi [ (k+1)T ]&= {\varepsilon }\xi (kT) + \gamma _2 \{ ( T + \chi (k,T)) ( \gamma _2 \psi (kT) + \xi (kT)\nonumber \\&+ \Vert y_\mathrm{ref} \Vert _{\infty } ) + \phi _2 (T) \Vert w \Vert _{\infty } \}, \, \xi (0) = | \hat{\eta } (0) |,\ k \ge 0. \end{aligned}$$
(57)

A straightforward induction proves the following:

Claim 1

We have that \(\Vert {{\bar{x}}}(kT) \Vert \le \gamma _2 \psi (kT)\) and \(\Vert \hat{\eta } (kT) \Vert \le \xi (kT)\) for \(k \ge 0\).

We can now combine the two difference equations (56) and (57) into a state-space form:

$$\begin{aligned}&\!\!\!\! \left[ \begin{array}{c} \psi [ (k+1)T] \\ \xi [ (k+1)T] \end{array} \right] = \underbrace{ \left[ \begin{array}{cc} e^{\lambda _1 T}+ \gamma _2 T^2 &{} T \\ \gamma _2^2 T &{} {\varepsilon }+ \gamma _2 T \end{array} \right] }_{=: Q(T)} \left[ \begin{array}{c} \psi (kT) \\ \xi (kT) \end{array} \right] + \left[ \begin{array}{c} T \\ \gamma _2 T \end{array} \right] \Vert y_\mathrm{ref} \Vert _{\infty }\nonumber \\&+\,\chi (k,T) \left[ \begin{array}{c} \gamma _2 T \\ \gamma _2^2 \end{array} \right] \psi (kT) \!+\! \chi (k,T) \left[ \begin{array}{c} 0 \\ \gamma _2 \end{array} \right] ( \xi (kT) \!+\! \Vert y_\mathrm{ref} \Vert _{\infty } ) \!+\! \left[ \begin{array}{c} 1 \\ \gamma _2 \end{array} \right] \phi _2 (T) \Vert w \Vert _{\infty }. \nonumber \\ \end{aligned}$$
(58)
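The scalar recursions (56)–(57) and the state-space form (58) are algebraically identical; the following sketch (with illustrative parameter values, which are assumptions and not taken from the paper) verifies this step by step:

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the paper):
lam1, eps, gam2, T = -1.0, 0.5, 2.0, 0.05
phi2_w = 0.3          # stands in for phi_2(T) * ||w||_inf
yref = 1.0            # stands in for ||y_ref||_inf

Q = np.array([[np.exp(lam1 * T) + gam2 * T**2, T],
              [gam2**2 * T, eps + gam2 * T]])

def scalar_step(psi, xi, chi):
    """One step of the scalar recursions (56)-(57)."""
    psi_next = (np.exp(lam1 * T) * psi + T * yref + T * xi + phi2_w
                + (T**2 + T * chi) * gam2 * psi)
    xi_next = (eps * xi
               + gam2 * ((T + chi) * (gam2 * psi + xi + yref) + phi2_w))
    return psi_next, xi_next

def matrix_step(v, chi):
    """One step of the state-space form (58), v = (psi, xi)."""
    return (Q @ v
            + np.array([T, gam2 * T]) * yref
            + chi * np.array([gam2 * T, gam2**2]) * v[0]
            + chi * np.array([0.0, gam2]) * (v[1] + yref)
            + np.array([1.0, gam2]) * phi2_w)

v = np.array([1.0, 0.5])                      # (psi(0), xi(0))
for chi in [0, 0, 1, 0, 1, 0]:                # an arbitrary jump pattern
    v_scalar = scalar_step(v[0], v[1], chi)
    v = matrix_step(v, chi)
    assert np.allclose(v, v_scalar)           # the two forms agree
```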

Claim 2

With \(e_i \in \mathbf{R}^2\) the \(i\)th unit vector, there exists a constant \(\gamma _3\) so that, for small \(T\):

$$\begin{aligned}&\Vert Q(T)^k\Vert \!\le \!\gamma _3 e^{\lambda _2 kT},\,\, \Vert e_1^T Q(T)^k e_1 \Vert \!\le \! (1 \!+\! \gamma _3 T) e^{\lambda _2 kT}, \Vert e_1^T Q(T)^k e_2 \Vert \le \gamma _3 T e^{\lambda _2 kT}, \nonumber \\ \end{aligned}$$
(59)
$$\begin{aligned}&\Vert e_2^T Q(T)^k e_1\Vert \le \gamma _3 T e^{\lambda _2 kT},\,\, \Vert e_2^T Q(T)^k e_2 \Vert \le \left( \frac{{\varepsilon }+1}{2}\right) ^k+\gamma _3 T e^{\lambda _2 kT}, \quad k \ge 0. \nonumber \\ \end{aligned}$$
(60)

Proof

Define \({\bar{Q}}(T):=e^{-\lambda _2 T}Q(T)\); it is easy to prove that, for small \(T\), the matrix \(I-{\bar{Q}}(T)^T{\bar{Q}}(T)\) is positive definite, so the induced Euclidean norm of \({\bar{Q}}(T)\) is less than one. However, all norms are equivalent on \(\mathbf{R}^{2 \times 2}\), so there exists a constant \({\gamma }_4 > 0\) so that for small \(T\), \(\Vert {Q}(T)^k \Vert \le {\gamma }_4 e^{\lambda _2 kT}\), \(k \ge 0\). To prove the other bounds, consider the difference equation \(\rho [k+1] = Q(T) \rho [k]\). If we analyse the difference equation for \(\rho _2\) and use the first bound of (59), then we obtain the two bounds of (60). If we analyse the difference equation for \(\rho _1\) and use (60), then we obtain the second and third bounds of (59). \(\square \)
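A numerical check of this argument, for one illustrative set of constants (assumptions chosen so that \(\lambda _1< \lambda _2 < 0\) and \({\varepsilon }\in (0,1)\), not values from the paper): the induced Euclidean norm of \({\bar{Q}}(T)\) is below one for small \(T\), and hence \(\Vert Q(T)^k \Vert \le e^{\lambda _2 kT}\) in that norm.

```python
import numpy as np

# Illustrative constants (assumptions): lam1 < lam2 < 0, eps in (0, 1).
lam1, lam2, eps, gam2 = -1.0, -0.2, 0.5, 2.0

for T in [0.01, 0.005, 0.001]:
    Q = np.array([[np.exp(lam1 * T) + gam2 * T**2, T],
                  [gam2**2 * T, eps + gam2 * T]])
    Qbar = np.exp(-lam2 * T) * Q
    # For small T the induced Euclidean norm of Qbar(T) is below one ...
    assert np.linalg.norm(Qbar, 2) < 1.0
    # ... which by submultiplicativity gives ||Q(T)^k|| <= exp(lam2 k T).
    P = np.eye(2)
    for k in range(1, 201):
        P = P @ Q
        assert np.linalg.norm(P, 2) <= np.exp(lam2 * k * T)
```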

We now use Claim 2 to construct a bound on the behaviour at the sample points.

Step 4: Analyse the difference equations and obtain bounds on \(\Vert {\bar{x}} (kT) \Vert \) and \(\Vert \hat{\eta } (kT) \Vert \)

Claim 3

There exist constants \({T}_1>0\) and \(\gamma _5 > 0\) and a function \(\phi _3 : \mathbf{R}^+ \rightarrow \mathbf{R}^+\) so that for every \(\sigma \in \Sigma _{T_s},{\bar{x}}_0 \in \mathbf{R}^{n+1}\), and \(T \in (0, {T}_1 )\), we have

$$\begin{aligned}&\Vert {{\bar{x}}} (kT) \Vert \le \gamma _{5} e^{\lambda _3 k T} ( \Vert {\bar{x}} (0) \Vert + \Vert \hat{\eta } (0) \Vert ) + \gamma _{5} \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _{5} {\phi }_3 (T) \Vert w \Vert _{\infty }, \end{aligned}$$
(61)
$$\begin{aligned}&\Vert \hat{\eta } (kT) \Vert \le \gamma _{5} e^{\lambda _3 k T} ( \Vert {\bar{x}} (0) \Vert + \Vert \hat{\eta } (0) \Vert ) + \gamma _{5} \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _{5} {\phi }_3 (T) \Vert w \Vert _{\infty },\quad k \ge 0,\qquad \quad \end{aligned}$$
(62)
$$\begin{aligned}&\Vert \hat{\eta } (kT) \Vert \le \gamma _{5} \left[ \left( \frac{1 + {\varepsilon }}{2} \right) ^{k - n_i (T)} + T e^{\lambda _3 ( k - n_i (T))T} \right] e^{ \lambda _3 n_i (T)T } ( \Vert {\bar{x}} (0) \Vert + \Vert \hat{\eta } (0) \Vert ) \nonumber \\&\quad +\gamma _{5} \left[ \left( \frac{1 + {\varepsilon }}{2}\right) ^{k - n_i (T)} + T\right] \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _{5} {\phi }_3 (T) \Vert w \Vert _{\infty }, \; \nonumber \\&\qquad \qquad n_i (T) \le k \le n_{i+1} (T),\quad i \in \mathbf{Z}^+. \end{aligned}$$
(63)

Proof of Claim 3

Analysing (58) and considering the cases of \(i=0\) and \(i\in \mathbf{N}\) separately yields

$$\begin{aligned}&\left[ \begin{array}{c} \psi (kT) \\ \xi (kT) \end{array} \right] =Q(T)^{k - n_i (T)} \left[ \begin{array}{c} \psi (n_i(T)T) \\ \xi (n_i(T)T) \end{array} \right] \\&\qquad + \sum _{j=0}^{k-n_i (T)-1} Q(T)^j \left[ \left[ \begin{array}{c} T \\ \gamma _2 T \end{array} \right] \Vert y_\mathrm{ref} \Vert _{\infty } + \left[ \begin{array}{c} 1 \\ \gamma _2 \end{array} \right] \phi _2 (T) \Vert w \Vert _{\infty } \right] \\&\qquad +Q(T)^{k - n_i (T) -1 } \left[ \left[ \begin{array}{c} \gamma _2 T \\ \gamma _2^2 \end{array} \right] \psi (n_i(T)T) + \left[ \begin{array}{c} 0 \\ \gamma _2 \end{array} \right] ( \xi (n_i(T)T) + \Vert y_\mathrm{ref} \Vert _{\infty } ) \right] ,\\&\qquad n_i (T) \le k \le n_{i+1} (T),\quad i \in \mathbf{Z}^+. \end{aligned}$$

If we use Claim 2 and carefully analyse each of the three terms, then we can conclude nice bounds on \(\psi \) and \(\xi \), which we then convert to bounds on \({\bar{x}}\) and \( \hat{\eta } \) using Claim 1. \(\square \)

If we combine (61) and (62) then we immediately obtain the desired bound on \(\Vert \hat{\nu }^0 (kT)\Vert \).

Step 5: Examine \({\bar{x}}^0 - {\bar{x}}\) at the sample points

Define \({\hat{x}}(t):={\bar{x}}^0(t)-{\bar{x}}(t)\). If we combine the solutions for \({\bar{x}}^0 (t)\) and \({\bar{x}} (t)\) given in (11) and (53), respectively, then from Proposition 1 there exists a constant \(\gamma _{6}\) so that for every \(\sigma \in \Sigma _{T_s}\), for small \(T\):

$$\begin{aligned}&\Vert {\hat{x}} (kT) \Vert \le \sum _{j=0}^{k-1} \gamma _{6} e^{ \lambda _1 ( k-1-j) T} T \Vert \hat{\eta } (jT) \Vert \nonumber \\&\quad +\sum ^{k-1}_{j=0} \gamma _{6} e^{\lambda _1 (k-1-j)T} [ (T^2 + T \chi (j,T)) (||{{\bar{x}}}(jT)||+ ||\hat{\eta }(jT)||\nonumber \\&\qquad +||y_\mathrm{ref}||_{\infty }) + \phi _1(T) \Vert w \Vert _{\infty } ]. \end{aligned}$$
(64)

Using straightforward analysis to obtain a good bound on the second term on the RHS, and exploiting the intricate bound on \(\hat{\eta }\) given in (63) together with the properties of convolution to bound the first term on the RHS, we conclude that there exists a constant \(\gamma _7\) so that for small \(T\):

$$\begin{aligned} \Vert {\hat{x}} (kT) \Vert \!\le \! \gamma _7 T [ e^{\lambda _4 kT} ( \Vert {\bar{x}} (0) \Vert \!+\! \Vert \hat{\eta } (0) \Vert ) \!+\! \Vert y_\mathrm{ref} \Vert _{\infty } ] + \left( \frac{\gamma _7}{T}\right) {\phi }_3 (T) \Vert w \Vert _{\infty },\quad k \ge 0. \nonumber \\ \end{aligned}$$
(65)
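The key elementary fact behind the \(O(T)\) factor in (65) is that the convolution of the decaying exponential \(e^{\lambda _1 (k-1-j)T}\) with a term of size \(T\) stays bounded uniformly in \(k\): the geometric sum satisfies \(\sum _{j=0}^{k-1} e^{\lambda _1 (k-1-j)T}\,T \le T/(1-e^{\lambda _1 T}) \le e^{|\lambda _1|T}/|\lambda _1|\). A minimal numerical check (with illustrative values of \(\lambda _1\) and \(T\), which are assumptions):

```python
import math

# Illustrative values (assumptions): lam1 < 0, small T.
lam1, T = -1.0, 0.01

for k in [1, 10, 100, 1000]:
    s = sum(math.exp(lam1 * (k - 1 - j) * T) * T for j in range(k))
    bound = T / (1.0 - math.exp(lam1 * T))       # full geometric series
    assert s <= bound + 1e-12                    # uniform in k
    # bound = O(1/|lam1|): 1 - e^{-x} >= x e^{-x} for x >= 0
    assert bound <= math.exp(-lam1 * T) / (-lam1)
```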

Step 6: Examine the inter-sample behaviour of \({\bar{x}}^0 - {\bar{x}}\)

Here we leverage the fact that \(\nu \) and \(\nu ^0\) are well behaved between \(t=kT\) and \(t=(k+1)T\) to extend (65) to prove that there exists a constant \(\gamma _{8}\) so that for small \(T\):

$$\begin{aligned} \Vert {\hat{x}} (t) \Vert \!\le \! \gamma _{8} T [ e^{\lambda _4 t} ( \Vert {\bar{x}} (0) \Vert \!+\! \Vert \hat{\nu }^0 (0) \Vert ) \!+\! \Vert y_\mathrm{ref} \Vert _{\infty } ] \!+\! \gamma _{8} \left( 1 \!+\! \frac{1}{T} \phi _3 (T)\right) \Vert w \Vert _{\infty },\quad t \ge 0. \end{aligned}$$

This provides the desired bound on \(\Vert {\bar{x}}^0 (t) - {\bar{x}} (t) \Vert \), which completes the proof. \(\square \)

Proof of Theorem 2

Fix \(\delta >0\) and let \({\bar{x}}(0)={\bar{x}}_0\in \mathbf{R}^{n+1},z_0\in \mathbf{R}^l\), \(\sigma \in \Sigma _{T_s},w\in PC_{\infty },y_\mathrm{ref} \in \mathcal{C }\mathcal{F },T \in (0, T_s/3)\) and \(t_s = k_0 h\) be arbitrary. Lemma 3 states that the sampled-data controller described by (26)–(33) has a representation of the form (8) with desirable properties. We now consider the controller consisting of this latter representation together with (3).

First suppose that \({t_s}=k_0 h=0\), i.e. \(k_0 = 0\). Recall that \({\bar{x}}^0\) and \(y^0\) are the closed-loop extended state and output, respectively, when the ideal control law (10) is applied; by examining the corresponding closed-loop system it follows that (12) holds, which means that there exists a constant \(\gamma _1\) so that

$$\begin{aligned} \Vert {\bar{x}}^0 (t) \Vert + \Vert \nu ^0 (t) \Vert&\le \gamma _1 e^{ \lambda _1 t} \Vert {\bar{x}}_0 \Vert + \gamma _1 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _1 \Vert w \Vert _{\infty },\quad t \ge 0. \end{aligned}$$
(66)

From Theorem 1 there exist constants \({\bar{T}} >0\) and \(\gamma _2 > 0\) and a function \(\phi : \mathbf{R}^+ \rightarrow \mathbf{R}^+\) so that for every \(\sigma \in \Sigma _{T_s}\) and \(T \in (0,{\bar{T}})\), we have

$$\begin{aligned} \Vert {\bar{x}}(t) - {\bar{x}}^0 (t) \Vert&\le \gamma _2 T e^{ \lambda _4 t} (\Vert {\bar{x}}_0 \Vert + \Vert \hat{\nu }^0 (0) \Vert ) + \gamma _2 T \Vert y_\mathrm{ref} \Vert _{\infty } + \phi (T) \Vert w \Vert _{\infty },\quad t \ge 0,\\ \Vert \hat{\nu }^0 (kT) \Vert&\le \gamma _2 e^{ \lambda _4 t} (\Vert {\bar{x}}_0\Vert +\Vert \hat{\nu }^0(0)\Vert )+\gamma _2 \Vert y_\mathrm{ref} \Vert _{\infty }+\phi (T) \Vert w \Vert _{\infty },\quad k \ge 0. \end{aligned}$$

Define \(\tilde{T}:= \min \{ {\bar{T}}, \delta /\gamma _2 \}\), and fix \(T \in ( 0, \tilde{T} )\). Define \(\gamma _3 := \max \{ \gamma _2, \phi (T) \}\), so that

$$\begin{aligned} \Vert {\bar{x}}(t) - {\bar{x}}^0 (t) \Vert&\le \delta e^{ \lambda _4 t} (\Vert {\bar{x}}_0 \Vert + \Vert \hat{\nu }^0 (0) \Vert ) + \delta \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _3 \Vert w \Vert _{\infty },\,\, t \ge 0,\qquad \end{aligned}$$
(67)
$$\begin{aligned} \Vert \hat{\nu }^0 (kT) \Vert&\le \gamma _3 e^{ \lambda _4 kT} (\Vert {\bar{x}}_0 \Vert + \Vert \hat{\nu }^0 (0) \Vert ) + \gamma _3 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _3 \Vert w \Vert _{\infty },\,\, k \ge 0.\qquad \quad \end{aligned}$$
(68)

Since \(\Vert \hat{\nu }^0 (0) \Vert \le \Vert z(0) \Vert \), it follows that for this choice of \(T\) the proposed controller provides the desired bound on \(\Vert {\bar{x}} (t) - {\bar{x}}^0 (t) \Vert \).

It remains to prove that the controller provides stability. As above, first suppose that \({t_s}=k_0 h=0\). If we combine (66) and (67), we see that

$$\begin{aligned} \Vert {\bar{x}} (t) \Vert \!\le \! ( \gamma _1 \!+\! \delta ) e^{ \lambda _4 t} (\Vert {\bar{x}}_0 \Vert \!+\! \Vert \hat{\nu }^0 (0) \Vert ) \!+\! ( \gamma _1 + \delta ) \Vert y_\mathrm{ref} \Vert _{\infty } \!+\! ( \gamma _1 + \gamma _3 ) \Vert w \Vert _{\infty }, \,\, t \ge 0. \nonumber \\ \end{aligned}$$
(69)

Now let us examine the controller state. From Lemma 3(iii) and (68) we see that

$$\begin{aligned} \Vert z_2 [k] \Vert \le (\gamma _3 e^{- \lambda _4 \ell T} ) e^{\lambda _4 kT} (\Vert {\bar{x}}_0 \Vert + \Vert \hat{\nu }^0 (0) \Vert ) + \gamma _3 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _3 \Vert w \Vert _{\infty },\quad k \ge 0. \end{aligned}$$

From Lemma 3(ii) we have that the first sub-system of the controller is deadbeat; since this first sub-system is linear and driven by \(e(t) = {\bar{C}}_{\sigma (t)} {\bar{x}} (t) + y_\mathrm{ref} (t) - w_y (t) \), it follows that \(z_1 [k]\) is a moving average of a fixed number of samples of \(e\); if we combine this with the bound on \({\bar{x}} (t)\) given in (69), it follows that there exists a constant \(\gamma _4 \) so that

$$\begin{aligned} \Vert z_1 [k] \Vert \le \gamma _4 e^{\lambda _4 kT} (\Vert {\bar{x}}_0 \Vert + \Vert \hat{\nu }^0 (0) \Vert ) + \gamma _4 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _4 \Vert w \Vert _{\infty },\quad k \ge 0. \end{aligned}$$
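The deadbeat/moving-average argument can be illustrated with a toy nilpotent system (an illustrative shift register, not the controller's actual first sub-system): with \(A^m = 0\), the state is a finite moving average of the last \(m\) inputs, so a bound on \(e\) yields a bound on \(z_1\).

```python
import numpy as np

# 3x3 nilpotent shift matrix (A @ A @ A == 0) driven from the first state:
A = np.diag([1.0, 1.0], -1)
B = np.array([1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
e = rng.standard_normal(20)          # stand-in for samples of e(t)
z = np.zeros(3)
for k in range(20):
    z = A @ z + B * e[k]

# After the transient, the state holds exactly the last three inputs ...
assert np.allclose(z, [e[19], e[18], e[17]])
# ... so ||z|| is bounded by a fixed multiple of the input bound:
assert np.linalg.norm(z) <= np.sqrt(3) * np.abs(e).max()
```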

Using the definition of \(x_\mathrm{sd} \), it follows immediately that there exists a constant \(\gamma _5 >0\) so that

$$\begin{aligned} \Vert x_\mathrm{sd} (t) \Vert&\le \gamma _5 e^{\lambda _4 t} \Vert x_\mathrm{sd} (0) \Vert + \gamma _5 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _5 \Vert w \Vert _{\infty },\quad t \ge 0. \end{aligned}$$

Now we turn to the case when \({t_s}=k_0h >0\). Since the controller is periodic with period \(T\), it follows that if our starting time is an integer multiple of \(T\), say \({t_s}=jT\), then from above

$$\begin{aligned} \Vert {x}_\mathrm{sd} (t) \Vert \le \gamma _5 e^{\lambda _4 (t- {t_s})} \Vert x_\mathrm{sd} ({t_s}) \Vert + \gamma _5 \Vert y_\mathrm{ref} \Vert _{\infty } + \gamma _5 \Vert w \Vert _{\infty },\quad t \ge {t_s}= jT. \end{aligned}$$

Now suppose that \({t_s}=k_0 h > 0\) does not have this property. It follows from the form of the nonlinear functions \(f\) and \(g\) given in (34) and (35) that they are globally Lipschitz continuous functions of the first two arguments \(z[j]\) and \(e(jh)\). Given that the plant-integrator combination is linear, this means that there is a constant \(\gamma _6\) so that, for every \(k_0 \in \mathbf{N}\), we have

$$\begin{aligned} \Vert {x}_\mathrm{sd} (t) \Vert \le \gamma _6 ( \Vert x_\mathrm{sd} (k_0 h ) \Vert +\Vert y_\mathrm{ref} \Vert _{\infty } + \Vert w \Vert _{\infty }),\quad t \in [k_0 h, k_0 h + T ]. \end{aligned}$$

Since the interval \([k_0 h, k_0 h + T ]\) contains an integer multiple of \(T\), it follows that

$$\begin{aligned} \Vert {x}_\mathrm{sd} (t) \Vert&\le ( \gamma _5 \gamma _6 e^{- \lambda _4 T_s} ) e^{ \lambda _4 ( t - {t_s}) } \Vert x_\mathrm{sd} ({t_s}) \Vert + ( \gamma _5 + \gamma _6 ) \Vert y_\mathrm{ref} \Vert _{\infty }\\&\quad +(\gamma _5+ \gamma _6 ) \Vert w \Vert _{\infty },\quad t \ge {t_s}. \end{aligned}$$

We conclude that the controller stabilizes \(\mathcal{P }_{T_s}\). \(\square \)


Cite this article

Miller, D.E., Vale, J.R. Pole placement adaptive control with persistent jumps in the plant parameters. Math. Control Signals Syst. 26, 177–214 (2014). https://doi.org/10.1007/s00498-013-0115-5
