
Exponentially Stable Adaptive Control. Part II. Switched Systems

ROBUST, ADAPTIVE, AND NETWORK CONTROL

Abstract

An adaptive state-feedback control system for a class of linear systems with piecewise-constant unknown parameters is proposed. The solution ensures global exponential stability of the closed-loop system under the condition that the regressor is finitely exciting after each parameter switch, and requires neither knowledge of the plant input matrix nor the switching time instants. The obtained theoretical results are corroborated by numerical simulations.


REFERENCES

  1. Ioannou, P. and Sun, J., Robust Adaptive Control, New York: Dover, 2013.


  2. Narendra, K.S. and Annaswamy, A.M., Stable Adaptive Systems, Courier Corporation, 2012.


  3. Tao, G., Adaptive Control Design and Analysis, John Wiley & Sons, 2003.


  4. Narendra, K.S., Hierarchical Adaptive Control of Rapidly Time-Varying Systems Using Multiple Models, Control Complex Syst., Butterworth-Heinemann, 2016, pp. 33–66.


  5. Chowdhary, G.V. and Johnson, E.N., Theory and Flight-Test Validation of A Concurrent-Learning Adaptive Controller, J. Guid. Control Dyn., 2011, vol. 34, no. 2, pp. 592–607.


  6. Pan, Y., Aranovskiy, S., Bobtsov, A., and Yu, H., Efficient Learning from Adaptive Control under Sufficient Excitation, Int. J. Robust Nonlinear Control, 2019, vol. 29, no. 10, pp. 3111–3124.


  7. Lee, H.I., Shin, H.S., and Tsourdos, A., Concurrent Learning Adaptive Control with Directional Forgetting, IEEE Trans. Automat. Control, 2019, vol. 64, no. 12, pp. 5164–5170.


  8. Jenkins, B.M., Annaswamy, A.M., Lavretsky, E., and Gibson, T.E., Convergence Properties of Adaptive Systems and The Definition of Exponential Stability, SIAM J. Control Optimiz., 2018, vol. 56, no. 4, pp. 2463–2484.


  9. Ortega, R., Nikiforov, V., and Gerasimov, D., On Modified Parameter Estimators for Identification and Adaptive Control. A Unified Framework and Some New Schemes, Annual Reviews in Control, 2020, vol. 50, pp. 278–293.


  10. Glushchenko, A., Petrov, V., and Lastochkin, K., Regression Filtration with Resetting to Provide Exponential Convergence of MRAC for Plants with Jump Change of Unknown Parameters, IEEE Trans. Automat. Control, 2022, pp. 1–8. Early Access.


  11. Kersting, S., Adaptive Identification and Control of Uncertain Systems with Switching, PhD Thesis, Technische Universität München, 2018. https://mediatum.ub.tum.de/doc/1377055/1377055.pdf. Accessed March 15, 2022.

  12. Sang, Q. and Tao, G., Adaptive Control of Piecewise Linear Systems: The State Tracking Case, IEEE Trans. on Automat. Control, 2011, vol. 57, no. 2, pp. 522–528.


  13. Sang, Q. and Tao, G., Adaptive Control of Piecewise Linear Systems with Applications to NASA GTM, Proc. Amer. Control Conf., 2011, pp. 1157–1162.

  14. Sang, Q. and Tao, G., Adaptive Control of Piecewise Linear Systems with Output Feedback for Output Tracking, Conf. Dec. Control, 2012, pp. 5422–5427.

  15. Sang, Q. and Tao, G., Adaptive Control of Piecewise Linear Systems with State Feedback for Output Tracking, Asian J. Control, 2013, vol. 15, no. 4, pp. 933–943.


  16. Liberzon, D., Switching in Systems and Control, Boston: Birkhauser, 2003.


  17. De La Torre, G., Chowdhary, G., and Johnson, E.N., Concurrent learning adaptive control for linear switched systems, Amer. Control Conf., 2013, pp. 854–859.

  18. Goldar, S.N., Yazdani, M., and Sinafar, B., Concurrent Learning Based Finite-Time Parameter Estimation in Adaptive Control of Uncertain Switched Nonlinear Systems, J. Control, Automat. Electr. Syst., 2017, vol. 28, no. 4, pp. 444–456.


  19. Wu, C., Huang, X., Niu, B., and Xie, X.J., Concurrent Learning-Based Global Exponential Tracking Control of Uncertain Switched Systems with Mode-Dependent Average Dwell Time, IEEE Access, 2018, vol. 6, pp. 39086–39095.


  20. Wu, C., Li, J., Niu, B., and Huang, X., Switched Concurrent Learning Adaptive Control of Switched Systems with Nonlinear Matched Uncertainties, IEEE Access, 2020, vol. 8, pp. 33560–33573.


  21. Liu, T. and Buss, M., Indirect Model Reference Adaptive Control of Piecewise Affine Systems with Concurrent Learning, IFAC-PapersOnLine, 2020, vol. 53, no. 2, pp. 1924–1929.


  22. Du, Y., Liu, F., Qiu, J., and Buss, M., Online Identification of Piecewise Affine Systems Using Integral Concurrent Learning, IEEE Trans. Circuits Syst. I: Reg. Papers, 2021, vol. 68, no. 10, pp. 4324–4336.

  23. Du, Y., Liu, F., Qiu, J., and Buss, M., A Novel Recursive Approach for Online Identification of Continuous-Time Switched Nonlinear Systems, Int. J. Robust Nonlinear Control, 2021, pp. 1–20.

  24. Narendra, K.S. and Balakrishnan, J., Adaptive Control Using Multiple Models, IEEE Trans. Automat. Control, 1997, vol. 42, no. 2, pp. 171–187.

  25. Glushchenko, A., Lastochkin, K., and Petrov, V., Exponentially Stable Adaptive Control. Part I. Time-Invariant Plants, Autom. Remote Control, 2022, vol. 83, no. 4, pp. 548–578.

  26. Glushchenko, A. and Lastochkin, K., Unknown Piecewise Constant Parameters Identification with Exponential Rate of Convergence, Int. J. Adap. Control Signal Proc., 2023, vol. 37, no. 1, pp. 315–346.

  27. Glushchenko, A. and Lastochkin, K., Exponentially Stable Adaptive Optimal Control of Uncertain LTI Systems, preprint arXiv:2205.02913; 2022, pp. 1–37.

  28. Glushchenko, A. and Lastochkin, K., Exponentially Convergent Direct Adaptive Pole Placement Control of Plants with Unmatched Uncertainty under FE Condition, IEEE Control Syst. Letters, 2022, vol. 6, pp. 2527–2532.

  29. Wang, L., Ortega, R., Bobtsov, A., Romero, J., and Yi, B., Identifiability Implies Robust, Globally Exponentially Convergent On-Line Parameter Estimation: Application to Model Reference Adaptive Control, preprint arXiv:2108.08436; 2021, pp. 1–16.

  30. Hakem, A., Cocquempot, V., and Pekpe, K., Switching Time Estimation and Active Mode Recognition Using a Data Projection Method, Int. J. Appl. Math. Comput. Sci., 2016, vol. 26, no. 4, pp. 827–840.

Download references

Funding

This research was financially supported in part by the Grants Council of the President of the Russian Federation (project MD-1787.2022.4).

Author information


Correspondence to A. I. Glushchenko or K. A. Lastochkin.

Additional information

This paper was recommended for publication by A.A. Bobtsov, a member of the Editorial Board

APPENDIX

Proof of Proposition 1. The proof of the exponential stability of ξ(t) is divided into two steps. The first one is to show that \(\tilde {\theta }\)(t) converges to zero exponentially regardless of the boundedness of eref(t) and ω(t). Using this result, the second step is to show the convergence of eref(t).

Step 1. The equation for \(\tilde {\theta }\)(t) = \(\hat {\theta }\)(t) – θ(t) obtained from (3.1) is solved:

$$\tilde {\theta }(t) = \phi \left( {t,\,\,t_{0}^{ + }} \right)\tilde {\theta }\left( {t_{0}^{ + }} \right) - \int\limits_{t_{0}^{ + }}^t {\phi (t,\,\,\tau )\sum\limits_{q = 1}^i {\Delta _{q}^{\theta }\delta \left( {\tau - t_{q}^{ + }} \right)d\tau ,} } $$
(A.1)

where ϕ(t, τ) = \({{e}^{{ - \int_\tau ^t {{{\gamma }_{1}}d\tau } }}}\).
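
Since γ1 is treated as a constant gain here (cf. (A.6)), this transition function reduces to an ordinary exponential and obeys the composition and inversion rules used in (A.3) and in the identity that follows it; spelled out,

$$\phi(t,\tau) = e^{-\gamma_1(t-\tau)},\qquad \phi(t,t_0) = \phi(t,\tau)\,\phi(\tau,t_0),\qquad \phi^{-1}(t,\tau) = \phi(\tau,t).$$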

Using the sifting property of the Dirac delta function:

$$\int\limits_{t_{0}^{ + }}^t {f(\tau )\delta \left( {\tau - t_{q}^{ + }} \right)d\tau = f\left( {t_{q}^{ + }} \right)h\left( {t - t_{q}^{ + }} \right),\,\,\forall f(t),} $$
(A.2)

it is obtained from (A.1):

$$\begin{gathered} \left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;\phi \left( {t,\,\,t_{0}^{ + }} \right)\left\| {\tilde {\theta }\left( {t_{0}^{ + }} \right)} \right\| + \sum\limits_{q = 1}^i {\phi \left( {t,\,\,t_{q}^{ + }} \right)\left\| {\Delta _{q}^{\theta }} \right\|h\left( {t - t_{q}^{ + }} \right)} \\ = \underbrace {\left( {\left\| {\tilde {\theta }\left( {t_{0}^{ + }} \right)} \right\| + \sum\limits_{q = 1}^i {\phi \left( {t_{0}^{ + },\,\,t_{q}^{ + }} \right)\left\| {\Delta _{q}^{\theta }} \right\|h\left( {t - t_{q}^{ + }} \right)} } \right)}_{\beta (t)}\phi \left( {t,\,\,t_{0}^{ + }} \right), \\ \end{gathered} $$
(A.3)

where ϕ(\(t_{0}^{ + }\), \(t_{q}^{ + }\)) = ϕ–1(\(t_{q}^{ + }\), \(t_{0}^{ + }\)) = ϕ–1(t, \(t_{0}^{ + }\))ϕ(t, \(t_{q}^{ + }\)) = ϕ(\(t_{0}^{ + }\), t)ϕ(t, \(t_{q}^{ + }\)).
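
In passing from (A.1) to the first line of (A.3), the sifting property (A.2) is applied termwise with f(τ) = ϕ(t, τ)\(\Delta _{q}^{\theta }\), so that each switch contributes

$$\int\limits_{t_0^+}^{t}\phi(t,\tau)\,\Delta_q^{\theta}\,\delta\big(\tau - t_q^+\big)d\tau = \phi\big(t,t_q^+\big)\Delta_q^{\theta}\,h\big(t - t_q^+\big),$$

after which norms are taken and the common factor ϕ(t, \(t_{0}^{ + }\)) is extracted using the composition rule above.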

To prove the exponential stability of \(\tilde {\theta }\)(t), it remains to show that β(t) is bounded. If the number of parameter switches is finite, i.e., i \(\leqslant \) imax < ∞, then, as:

(a) when i is finite, the time instants \(t_{i}^{ + }\) are also finite (we do not consider the case of switches at infinite time: \(\forall \)i \(t_{i}^{ + }\) ≠ ∞),

(b) ϕ(\(t_{0}^{ + }\), \(t_{q}^{ + }\)) is bounded when \(t_{q}^{ + }\) is finite,

we have the following upper bound:

$$\beta (t)\;\leqslant \;\left\| {\tilde {\theta }\left( {t_{0}^{ + }} \right)} \right\| + \sum\limits_{q = 1}^{{{i}_{{\max }}}} {\phi \left( {t_{0}^{ + },\,\,t_{q}^{ + }} \right)\left\| {\Delta _{q}^{\theta }} \right\|h\left( {t - t_{q}^{ + }} \right) = {{\beta }_{{\max }}}.} $$
(A.4)

If \(\forall \)q ∈ \(\mathbb{N}\) ||\(\Delta _{q}^{\theta }\)|| \(\leqslant \) cqϕ(\(t_{q}^{ + }\), \(t_{0}^{ + }\)), cq > cq + 1, then even in the case of an unbounded i it holds that:

$$\beta (t)\;\leqslant \;\left\| {\tilde {\theta }\left( {t_{0}^{ + }} \right)} \right\| + \sum\limits_{q = 1}^i {{{c}_{q}}h\left( {t - t_{q}^{ + }} \right) = {{\beta }_{{\max }}}.} $$
(A.5)

The series in (A.5) has terms of constant sign, and all its partial sums are bounded owing to the monotonicity 0 < cq + 1 < cq; therefore, \(\sum\nolimits_{q = 1}^\infty {{{c}_{q}}} \)h(t – \(t_{q}^{ + }\)) < ∞, which results in β(t) \(\leqslant \) βmax.
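
As a concrete illustration of this condition (an example of an admissible sequence only, not a requirement of the proof), one may take geometrically decaying jump bounds cq = c1ρq–1 with some 0 < ρ < 1; then

$$\sum\limits_{q=1}^{\infty}c_q\,h\big(t - t_q^+\big) \leqslant \sum\limits_{q=1}^{\infty}c_1\rho^{\,q-1} = \frac{c_1}{1-\rho} < \infty,$$

so β(t) stays bounded even if the parameters switch infinitely many times.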

It immediately follows from the boundedness of (A.4) or (A.5) that:

$$\left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;{{\beta }_{{\max }}}\phi \left( {t,\,\,t_{0}^{ + }} \right) = {{\beta }_{{\max }}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} < {{\beta }_{{\max }}}.$$
(A.6)

The next aim is to analyze the behavior of the tracking error eref(t).

Step 2. The following quadratic form is introduced:

$$\begin{gathered} {{V}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} + \frac{{2a_{0}^{2}}}{{{{\gamma }_{1}}}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}},\quad H = {\text{blockdiag}}\left\{ {P,\,\,\frac{{2a_{0}^{2}}}{{{{\gamma }_{1}}}}} \right\}, \\ \underbrace {{{\lambda }_{{\min }}}(H)}_{{{\lambda }_{m}}}{\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}}\;\leqslant \;V({\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{||}})\;\leqslant \;\underbrace {{{\lambda }_{{\max }}}(H)}_{{{\lambda }_{M}}}{\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}}, \\ \end{gathered} $$
(A.7)

where \({{\bar {e}}_{{{\text{ref}}}}}(t) = {{\left[ {e_{{{\text{ref}}}}^{{\text{T}}}(t)\,\,{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + }} \right)}}}} \right]}^{{\text{T}}}}\), a0 > 0, and P is a solution of the below-given set of equations when K = In×n:

$$A_{{{\text{ref}}}}^{{\text{T}}}P + P{{A}_{{{\text{ref}}}}} = - Q{{Q}^{{\text{T}}}} - \mu P,\quad P{{I}_{{n \times n}}} = QK,$$
$${{K}^{{\text{T}}}}K = D + {{D}^{{\text{T}}}},$$

which is equivalent to the Riccati equation \(A_{{{\text{ref}}}}^{{\text{T}}}\)P + PAref + PPT + μP = 0n×n.
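
A brief verification of this equivalence: using the orthogonality \(KK^{\text{T}} = K^{\text{T}}K = I_{n \times n}\) invoked before (A.9) below, the second equation gives \(Q = PK^{-1} = PK^{\text{T}}\), and therefore

$$QQ^{\text{T}} = PK^{\text{T}}KP^{\text{T}} = PP^{\text{T}},\qquad A_{\text{ref}}^{\text{T}}P + PA_{\text{ref}} = -PP^{\text{T}} - \mu P,$$

which is exactly the stated Riccati form.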

The derivative of (A.7) is written as:

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}\left( {A_{{{\text{ref}}}}^{{\text{T}}}P + P{{A}_{{{\text{ref}}}}}} \right){{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + 2e_{{{\text{ref}}}}^{{\text{T}}}P{{I}_{n}}{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega \\ = - \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} - e_{{{\text{ref}}}}^{{\text{T}}}Q{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + {\text{tr}}\left( {2{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega e_{{{\text{ref}}}}^{{\text{T}}}QK} \right). \\ \end{gathered} $$
(A.8)

As \(KK^{\text{T}} = K^{\text{T}}K = I_{n \times n}\), Eq. (A.8) is rewritten as:

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{{\text{ref}}}}}}}} = - \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} - e_{{{\text{ref}}}}^{{\text{T}}}QK{{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}} + {\text{tr}}\left( {2{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega e_{{{\text{ref}}}}^{{\text{T}}}QK} \right) \\ = - \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + {\text{tr}}\left( { - {{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}}e_{{{\text{ref}}}}^{{\text{T}}}QK + 2{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega e_{{{\text{ref}}}}^{{\text{T}}}QK} \right). \\ \end{gathered} $$
(A.9)

Completing the square

$$\begin{gathered} {{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}}e_{{{\text{ref}}}}^{{\text{T}}}QK - 2{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega e_{{{\text{ref}}}}^{{\text{T}}}QK + {{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega {{\omega }^{{\text{T}}}}\tilde {\theta }B_{i}^{{\text{T}}} \\ = \left( {{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega - {{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}}} \right){{\left( {{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega - {{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}}} \right)}^{{\text{T}}}}\; \geqslant \;0, \\ \end{gathered} $$
(A.10)

we have:

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{{\text{ref}}}}}}}} = - \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} \\ + \,\,{\text{tr}}\left( { - {{K}^{{\text{T}}}}{{Q}^{{\text{T}}}}{{e}_{{{\text{ref}}}}}e_{{{\text{ref}}}}^{{\text{T}}}QK + 2{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega e_{{{\text{ref}}}}^{{\text{T}}}QK \pm {{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega {{\omega }^{{\text{T}}}}\tilde {\theta }B_{i}^{{\text{T}}}} \right) \\ \leqslant - {\kern 1pt} \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + {\text{tr}}\left( {{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega {{\omega }^{{\text{T}}}}\tilde {\theta }B_{i}^{{\text{T}}}} \right) \\ \leqslant - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + b_{{\max }}^{2}{{\lambda }_{{\max }}}(\omega {{\omega }^{{\text{T}}}}){\text{||}}\tilde {\theta }{\text{|}}{{{\text{|}}}^{2}} \\ \leqslant - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + b_{{\max }}^{2}\beta _{{\max }}^{2}{{\lambda }_{{\max }}}(\omega {{\omega }^{{\text{T}}}}){{\phi }^{2}}\left( {t,t_{0}^{ + }} \right) \\ \leqslant - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + b_{{\max }}^{2}\beta _{{\max }}^{2}{{\lambda }_{{\max }}}(\omega {{\omega }^{{\text{T}}}}){{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}}, \\ \end{gathered} $$
(A.11)

where \(\forall \)i ∈ \(\mathbb{N}\) ||Bi|| \(\leqslant \) bmax follows from the fact that the pair (Ai, Bi) is controllable.
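
For clarity, the passage from the trace term to the scalar bound in the last lines of (A.11) relies only on the submultiplicativity of the norm, the bound ||Bi|| \(\leqslant \) bmax, and the fact that ωωT is a rank-one matrix, so that ||ω||2 = λmax(ωωT):

$$\operatorname{tr}\big(B_i\tilde\theta^{\text{T}}\omega\omega^{\text{T}}\tilde\theta B_i^{\text{T}}\big) = \big\|B_i\tilde\theta^{\text{T}}\omega\big\|^2 \leqslant \|B_i\|^2\|\tilde\theta\|^2\|\omega\|^2 \leqslant b_{\max}^2\lambda_{\max}(\omega\omega^{\text{T}})\|\tilde\theta\|^2.$$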

The exponential vanishing of the third term of (A.11) is required to ensure the exponential stability of the tracking error eref(t), which, in its turn, requires:

$$\chi (t) = {{\lambda }_{{\max }}}\left( {\omega (t){{\omega }^{{\text{T}}}}(t)} \right){{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}}\,\leqslant \,{{\chi }_{{{\text{UB}}}}},$$
(A.12)

where χUB > 0.

The growth rate of  λmax(ω(tT(t)) is estimated via introduction of \({{L}_{{{{e}_{{{\text{ref}}}}}}}}\) = \(e_{{{\text{ref}}}}^{{\text{T}}}\)Peref:

$$\begin{gathered} {{{\dot {L}}}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}(A_{{{\text{ref}}}}^{{\text{T}}}P + P{{A}_{{{\text{ref}}}}}){{e}_{{{\text{ref}}}}} + 2e_{{{\text{ref}}}}^{{\text{T}}}P{{B}_{i}}{{{\tilde {\theta }}}^{{\text{T}}}}\omega \hfill \\ \,\,\,\,\,\,\,\,\leqslant - {\kern 1pt} \mu e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} + 2e_{{{\text{ref}}}}^{{\text{T}}}P{{B}_{i}}{{{\tilde {K}}}_{x}}x + 2e_{{{\text{ref}}}}^{{\text{T}}}P{{B}_{i}}{{{\tilde {K}}}_{r}}r \hfill \\ \,\,\,\,\,\,\,\,\leqslant - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{\text{||}}{{e}_{{{\text{ref}}}}}{\text{||}}\,{\text{||}}\tilde {\theta }{\text{||}}\,{\text{||}}x{\text{||}} \hfill \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\kern 1pt} {\text{ + 2}}{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{\text{||}}{{e}_{{{\text{ref}}}}}{\text{||}}\,{\text{||}}\tilde {\theta }{\text{||}}{{r}_{{\max }}} \hfill \\ \,\,\,\,\,\,\,\,\leqslant - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}}{\text{||}}\tilde {\theta }{\text{||}} \hfill \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\kern 1pt} {\text{ + 2}}{{\lambda }_{{\max }}}(P){{b}_{{\max }}}(x_{{{\text{ref}}}}^{{{\text{UB}}}} + {{r}_{{\max }}}){\text{||}}{{e}_{{{\text{ref}}}}}{\text{||}}\,{\text{||}}\tilde {\theta }{\text{||}} \hfill \\ \,\,\,\,\,\,\,\,\leqslant \,( - \mu {{\lambda }_{{\max }}}(P) + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{\text{||}}\tilde {\theta }{\text{||}}){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} \hfill \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\kern 1pt} + {\text{ }}2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}(x_{{{\text{ref}}}}^{{{\text{UB}}}} + {{r}_{{\max }}}){\text{||}}{{e}_{{{\text{ref}}}}}{\text{||}}\,{\text{||}}\tilde {\theta }{\text{||}}, \hfill \\ \end{gathered} $$
(A.13)

where ||xref(t)|| \(\leqslant \) \(x_{{{\text{ref}}}}^{{{\text{UB}}}}\) is an upper bound on the norm of the reference model state.

As the error \(\tilde {\theta }\)(t) is bounded, considering the conservative case, it is obtained from (A.13) that:

$${{\dot {L}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \;{{c}_{1}}{\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} + 2{{c}_{2}}{\text{||}}{{e}_{{{\text{ref}}}}}{\text{||}},$$
(A.14)

where

$${{c}_{1}} = - \mu {{\lambda }_{{\min }}}(P) + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{{\beta }_{{\max }}} > 0,$$
$${{c}_{2}} = {{\lambda }_{{\max }}}(P){{b}_{{\max }}}{{\beta }_{{\max }}}(x_{{{\text{ref}}}}^{{{\text{UB}}}} + {{r}_{{\max }}}).$$

Applying Young’s inequality ab \(\leqslant \) \(\frac{1}{2}\)a2 + \(\frac{1}{2}\)b2, we have from (A.14) that:

$${{\dot {L}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \;\left( {{{c}_{1}} + 2c_{2}^{2}} \right){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} + 0.5\;\leqslant \;\left( {{{c}_{1}} + 2c_{2}^{2}} \right){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} + 1 = \frac{{{{c}_{1}} + 2c_{2}^{2}}}{{{{\lambda }_{{\max }}}(P)}}{{L}_{{{{e}_{{{\text{ref}}}}}}}} + 1.$$
(A.15)
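
The step from (A.14) to (A.15) is this inequality applied with a = 2c2||eref|| and b = 1:

$$2c_2\|e_{\text{ref}}\| = \big(2c_2\|e_{\text{ref}}\|\big)\cdot 1 \leqslant 2c_2^2\|e_{\text{ref}}\|^2 + \tfrac{1}{2},$$

which, added to c1||eref||2, gives the first bound in (A.15).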

Equation (A.15) is solved using

$$\begin{gathered} {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{|}}{{{\text{|}}}^{2}}\;\leqslant \;{{L}_{{{{e}_{{{\text{ref}}}}}}}}(t),\quad {{L}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \;{{\lambda }_{{\max }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{|}}{{{\text{|}}}^{2}}: \\ {\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{||}}\;\leqslant \;\sqrt {\frac{{{{\lambda }_{{\max }}}(P)}}{{{{\lambda }_{{\min }}}(P)}}} {{e}^{{\frac{{{{c}_{1}} + 2c_{2}^{2}}}{{2{{\lambda }_{{\max }}}(P)}}\left( {t - t_{0}^{ + }} \right)}}}\left\| {{{e}_{{{\text{ref}}}}}\left( {t_{0}^{ + }} \right)} \right\| + \sqrt {\frac{{{{\lambda }_{{\max }}}(P){{e}^{{\frac{{{{c}_{1}} + 2c_{2}^{2}}}{{{{\lambda }_{{\max }}}(P)}}\left( {t - t_{0}^{ + }} \right)}}}}}{{{{\lambda }_{{\min }}}(P)({{c}_{1}} + 2c_{2}^{2})}}} . \\ \end{gathered} $$
(A.16)

Therefore, the growth rate of x(t) does not exceed an exponential one, and thus, as r(t) is bounded, it holds that:

$$\begin{gathered} {{\lambda }_{{\max }}}\left( {\omega (t){{\omega }^{{\text{T}}}}(t)} \right) = {\text{tr}}\left( {\omega (t){{\omega }^{{\text{T}}}}(t)} \right) = \sum\limits_{i = 1}^n {x_{i}^{2}(t)} \\ + \,\sum\limits_{i = 1}^m {r_{i}^{2}(t)\;\leqslant \;{{{\bar {c}}}_{0}}} {{e}^{{{{{\bar {c}}}_{1}}\left( {t - t_{0}^{ + }} \right)}}},\quad {{{\bar {c}}}_{0}} > 0,\quad {{{\bar {c}}}_{1}} > 0. \\ \end{gathered} $$
(A.17)
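
Substituting this estimate into the requirement (A.12) makes the phrase “sufficiently large γ1” concrete (the threshold below is one admissible choice):

$$\chi(t) \leqslant \bar c_0\,e^{\bar c_1\left(t - t_0^+\right)}e^{-\gamma_1\left(t - t_0^+\right)} = \bar c_0\,e^{-(\gamma_1 - \bar c_1)\left(t - t_0^+\right)} \leqslant \bar c_0\quad \text{for all }\gamma_1 \geqslant \bar c_1,$$

so that one may take, e.g., χUB = \(\bar c_0\).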

Hence (A.12) holds if γ1 > 0 is sufficiently large. Equation (A.12) is then used in (A.11) to have:

$${{\dot {V}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \; - \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + a_{0}^{2}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} = - {{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}},$$
(A.18)

where

$$a_{0}^{2} = b_{{\max }}^{2}\beta _{{\max }}^{2}{{\chi }_{{{\text{UB}}}}},\quad {{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}} = \min \left\{ {\frac{{\mu {{\lambda }_{{\min }}}(P)}}{{{{\lambda }_{{\max }}}(P)}},\,\,\frac{{{{\gamma }_{1}}}}{2}} \right\}.$$
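
The final estimate in (A.18) follows by matching each term of the obtained bound with the corresponding summand of the quadratic form (A.7):

$$-\mu\lambda_{\min}(P)\|e_{\text{ref}}\|^2 \leqslant -\frac{\mu\lambda_{\min}(P)}{\lambda_{\max}(P)}e_{\text{ref}}^{\text{T}}Pe_{\text{ref}},\qquad -a_0^2e^{-\gamma_1\left(t - t_0^+\right)} = -\frac{\gamma_1}{2}\cdot\frac{2a_0^2}{\gamma_1}e^{-\gamma_1\left(t - t_0^+\right)},$$

so both summands of (A.7) decay at least at the rate given by the minimum above, which yields the final estimate in (A.18).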

The differential inequality (A.18) is solved to write:

$${{V}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \;{{e}^{{ - {{{\bar {\eta }}}_{{{{e}_{{{\text{ref}}}}}}}}\left( {t - t_{0}^{ + }} \right)}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}\left( {t_{0}^{ + }} \right).$$
(A.19)

Therefore, the tracking error eref(t) converges exponentially to zero:

$${\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{||}}\;\leqslant \;\sqrt {\frac{{{{\lambda }_{M}}}}{{{{\lambda }_{m}}}}} \left\| {{{e}_{{{\text{ref}}}}}\left( {t_{0}^{ + }} \right)} \right\|{{e}^{{ - {{\eta }_{{{{e}_{{{\text{ref}}}}}}}}\left( {t - t_{0}^{ + }} \right)}}},$$
(A.20)

where

$${{\eta }_{{{{e}_{{{\text{ref}}}}}}}} = \frac{1}{2}{{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}}.$$

Having combined (A.20) and (A.6), it is written:

$${\text{||}}\xi (t){\text{||}}\;\leqslant \;\max \left\{ {\sqrt {\frac{{{{\lambda }_{M}}}}{{{{\lambda }_{m}}}}} \left\| {{{e}_{{{\text{ref}}}}}\left( {t_{0}^{ + }} \right)} \right\|,{{\beta }_{{\max }}}} \right\}{{e}^{{ - {{\eta }_{{{{e}_{{{\text{ref}}}}}}}}\left( {t - t_{0}^{ + }} \right)}}},$$
(A.21)

which completes the proof of Proposition 1.

Proof of Proposition 2. The expression x(t) – l\(\bar {x}\)(t) is differentiated:

$$\dot {x}(t) - l\,\dot {\bar {x}}(t) = - l(x(t) - l\,\bar {x}(t)) + {{\vartheta }^{{\text{T}}}}(t)\Phi (t).$$
(A.22)

The differential Eq. (A.22) is solved to obtain:

$$\begin{gathered} x(t) - l\,\bar {x}(t) = {{e}^{{ - l\left( {t - \hat {t}_{i}^{ + }} \right)}}}x\left( {\hat {t}_{i}^{ + }} \right) + \int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{\vartheta }^{{\text{T}}}}(\tau )\Phi (\tau )d\tau \pm {{\vartheta }^{{\text{T}}}}(t)\bar {\Phi }(t)} \\ = {{{\bar {\vartheta }}}^{{\text{T}}}}(t)\bar {\varphi }(t) + \int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{\vartheta }^{{\text{T}}}}(\tau )\Phi (\tau )d\tau - {{\vartheta }^{{\text{T}}}}(t)\bar {\Phi }(t),} \\ \end{gathered} $$
(A.23)

where \({{\bar {\vartheta }}^{{\text{T}}}}\)(t) = [Ai Bi x(\(\hat {t}_{i}^{ + }\))] ∈ Rn × (n + m + 1).

Having applied (4.2) to the left- and right-hand sides of (A.23), it is obtained:

$$\begin{gathered} \forall t\; \geqslant \;t_{0}^{ + }\,\,\,{{{\bar {z}}}_{n}}(t) = {{n}_{s}}(t)\left[ {x(t) - l\bar {x}(t)} \right] = {{{\bar {\vartheta }}}^{{\text{T}}}}(t){{{\bar {\varphi }}}_{n}}(t) + {{{\bar {\varepsilon }}}_{0}}(t), \\ {{{\bar {\varepsilon }}}_{0}}(t) = {{n}_{s}}(t)\left( {\int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}} {{\vartheta }^{{\text{T}}}}(\tau )\Phi (\tau )d\tau - {{\vartheta }^{{\text{T}}}}(t)\bar {\Phi }(t)} \right), \\ \end{gathered} $$
(A.24)

where \({{\bar {z}}_{n}}\)(t) ∈ Rn, \({{\bar {\varphi }}_{n}}\)(t) ∈ Rn+m+1, \({{\bar {\varepsilon }}_{0}}\)(t) ∈ Rn.

Considering (4.4), z(t) is multiplied by adj {φ(t)} to write:

$$\begin{gathered} \,\,\,\,\,\,\,\,\,Y(t){\text{:}}\, = {\text{adj}}{\kern 1pt} \{ \varphi (t)\} \left( {z(t) \pm \varphi (t)\bar {\vartheta }(t)} \right) = \Delta (t)\bar {\vartheta }(t) + {{{\bar {\varepsilon }}}_{1}}(t), \\ {\text{adj}}{\kern 1pt} \{ \varphi (t)\} \varphi (t) = \det \{ \varphi (t)\} {{I}_{{(n + m + 1) \times (n + m + 1)}}} = \Delta (t){{I}_{{(n + m + 1) \times (n + m + 1)}}}, \\ {{{\bar {\varepsilon }}}_{1}}(t) = {\text{adj}}{\kern 1pt} \{ \varphi (t)\} \left( {z(t) - \varphi (t)\bar {\vartheta }(t)} \right),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \\ \end{gathered} $$
(A.25)

where Y(t) ∈ R(n+m+1)×n, Δ(t) ∈ R, \({{\bar {\varepsilon }}_{1}}\)(t) ∈ R(n+m+1)×n.
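
Here the adjugate plays the role of an inverse up to a scalar factor: it removes φ(t) from the regression without requiring its invertibility. As a low-dimensional illustration (a 2 × 2 example only, not tied to the actual dimension of φ(t)):

$$\operatorname{adj}\left[ {\begin{array}{*{20}{c}} a&b \\ c&d \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} d&{ - b} \\ { - c}&a \end{array}} \right],\qquad \operatorname{adj}\{\varphi(t)\}\varphi(t) = \det\{\varphi(t)\}\,I,$$

so the multiplication in (A.25) turns the matrix regressor φ(t) into the scalar regressor Δ(t) = det {φ(t)}.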

Owing to Δ(t) ∈ R, the elimination (4.5) allows one to obtain from (A.25) that:

$$\begin{gathered} {{z}_{A}}(t) = {{Y}^{{\text{T}}}}(t)\mathfrak{L} = \Delta (t){{A}_{i}} + \bar {\varepsilon }_{1}^{{\text{T}}}(t)\mathfrak{L}, \\ {{z}_{B}}(t) = {{Y}^{{\text{T}}}}(t){{\mathfrak{e}}_{{n + m + 1}}} = \Delta (t){{B}_{i}} + \bar {\varepsilon }_{1}^{{\text{T}}}(t){{\mathfrak{e}}_{{n + m + 1}}}, \\ \mathfrak{L} = {{\left[ {\begin{array}{*{20}{c}} {{{I}_{{n \times n}}}}&{{{0}_{{n \times (m + 1)}}}} \end{array}} \right]}^{{\text{T}}}} \in {{R}^{{(n + m + 1) \times n}}}, \\ {{\mathfrak{e}}_{{n + m + 1}}} = {{\left[ {\begin{array}{*{20}{c}} {{{0}_{{m \times n}}}}&{{{I}_{{m \times m}}}}&{{{0}_{{m \times 1}}}} \end{array}} \right]}^{{\text{T}}}} \in {{R}^{{(n + m + 1) \times m}}}, \\ \end{gathered} $$
(A.26)

where zA(t) ∈ Rn×n, zB(t) ∈ Rn×m.

Each equation from (2.7) is left-multiplied by adj {\(z_{B}^{{\text{T}}}\)(t)zB(t)}\(z_{B}^{{\text{T}}}\)(t)Δ(t). Considering (A.26), Eqs. (4.5) are substituted into the result of the multiplication, and the obtained equations are combined to have:

$$\begin{gathered} \mathcal{Y}(t) = \mathcal{M}(t)\theta (t) + d(t), \\ \mathcal{Y}(t){\kern 1pt} :\, = \left[ \begin{gathered} {\text{adj}}\left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}z_{B}^{{\text{T}}}(t)(\Delta (t){{A}_{{{\text{ref}}}}} - {{z}_{A}}(t)) \\ {\text{adj}}\left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}z_{B}^{{\text{T}}}(t)\Delta (t){{B}_{{{\text{ref}}}}} \\ \end{gathered} \right], \\ \end{gathered} $$
$${\text{adj}}\left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}z_{B}^{{\text{T}}}(t){{z}_{B}}(t) = \det \left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}{{I}_{{m \times m}}} = \,\mathcal{M}(t){{I}_{{m \times m}}},$$
(A.27)
$$d(t){\kern 1pt} \,\,: = \left[ \begin{gathered} {\text{adj}}\left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}z_{B}^{{\text{T}}}(t)\left( {\bar {\varepsilon }_{1}^{{\text{T}}}(t)\mathfrak{L} + \bar {\varepsilon }_{1}^{{\text{T}}}(t){{\mathfrak{e}}_{{n + m + 1}}}K_{i}^{x}} \right) \\ {\text{adj}}\left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\}z_{B}^{{\text{T}}}(t)\bar {\varepsilon }_{1}^{{\text{T}}}(t){{\mathfrak{e}}_{{n + m + 1}}}K_{i}^{r} \\ \end{gathered} \right],$$

where \(\mathcal{Y}\)(t) ∈ R(n + m) × n, \(\mathcal{M}\)(t) ∈ R, d(t) ∈ R(n + m) × n.
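
The same device is used once more: left-multiplication by adj {\(z_{B}^{{\text{T}}}\)(t)zB(t)}\(z_{B}^{{\text{T}}}\)(t) acts as a pseudo-inverse of zB(t) scaled by the scalar \(\mathcal{M}\)(t). In the single-input case m = 1 (given only as an illustrative special case) this reduces to

$$\operatorname{adj}\big\{z_B^{\text{T}}(t)z_B(t)\big\} = 1,\qquad \mathcal{M}(t) = \det\big\{z_B^{\text{T}}(t)z_B(t)\big\} = \|z_B(t)\|^2,$$

so the left-multiplication extracts the sought matrices up to the scalar factor \(\mathcal{M}\)(t) without an explicit matrix inversion.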

Considering (A.27), Eq. (4.7a) is solved to have the following expression:

$$\begin{gathered} \Upsilon (t) = \int\limits_{t_{0}^{ + }}^t {{{e}^{{\int_{t_{0}^{ + }}^\tau {kd\tau } }}}} \mathcal{M}(\tau )\theta (\tau )d\tau + \int\limits_{t_{0}^{ + }}^t {{{e}^{{\int_{t_{0}^{ + }}^\tau {kd\tau } }}}} d(\tau )d\tau \pm \Omega (t)\theta (t) = \Omega (t)\theta (t) + w(t), \\ w(t) = \Upsilon (t) - \Omega (t)\theta (t), \\ \end{gathered} $$
(A.28)

which proves that (4.8) can be obtained using the procedures (4.1)–(4.7).

To prove the statement (a), Eq. (4.7b) is solved over both [\(\hat {t}_{i}^{ + }\); \(t_{i}^{ + }\) + Ti] and [\(t_{i}^{ + }\) + Ti; \(\hat {t}_{{i + 1}}^{ + }\)]:

$$\begin{gathered} \forall t \in \left[ {\hat {t}_{i}^{ + };t_{i}^{ + } + {{T}_{i}}} \right]\quad \Omega (t) = {{\phi }^{{{{k}_{0}}}}}\left( {t,\,\,t_{i}^{ + }} \right)\Omega \left( {\hat {t}_{i}^{ + }} \right) + \int\limits_{\hat {t}_{i}^{ + }}^t {{{\phi }^{{{{k}_{0}}}}}(t,\tau )\mathcal{M}(\tau )d\tau ,} \\ \forall t \in \left[ {t_{i}^{ + } + {{T}_{i}};\hat {t}_{{i + 1}}^{ + }} \right]\quad \Omega (t) = {{\phi }^{{{{k}_{0}}}}}\left( {t,\,\,t_{i}^{ + } + {{T}_{i}}} \right)\Omega \left( {t_{i}^{ + } + {{T}_{i}}} \right) + \int\limits_{t_{i}^{ + } + {{T}_{i}}}^t {{{\phi }^{{{{k}_{0}}}}}(t,\tau )\mathcal{M}(\tau )d\tau .} \\ \end{gathered} $$
(A.29)

It is proved in [26] (up to notation) that if Φ(t) ∈ FE and \(\hat {t}_{i}^{ + }\) \( \geqslant \) \(t_{i}^{ + }\), then \(\forall \)t ∈ [\(t_{i}^{ + }\) + Ti; \(\hat {t}_{{i + 1}}^{ + }\)) it holds that ΔUB \( \geqslant \) Δ(t) \( \geqslant \) ΔLB > 0. Then the following holds for the regressor \(\mathcal{M}\)(t) over the time ranges considered in (A.29):

$$\begin{gathered} \forall t \in \left[ {\hat {t}_{i}^{ + };t_{i}^{ + } + {{T}_{i}}} \right]\,\,\,\mathcal{M}(t) = \det \left\{ {z_{B}^{{\text{T}}}(t){{z}_{B}}(t)} \right\} = {{\Delta }^{m}}(t)\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\} \equiv 0, \hfill \\ \forall t \in \left[ {t_{i}^{ + } + {{T}_{i}};\hat {t}_{{i + 1}}^{ + }} \right]\,\,\,\Delta _{{{\text{UB}}}}^{m}\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\}\; \geqslant \;\mathcal{M}(t)\; \geqslant \;\Delta _{{LB}}^{m}\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\} > 0. \hfill \\ \end{gathered} $$
(A.30)

Having substituted (A.30) into (A.29) and considered 0 \(\leqslant \) ϕ(t, τ) \(\leqslant \) 1, the bounds for Ω(t) are obtained:

$$\begin{gathered} \forall t \in \left[ {\hat {t}_{0}^{ + };\,\,t_{0}^{ + } + {{T}_{0}}} \right]\quad \Omega (t) \equiv 0, \\ \forall i\; \geqslant \;1\,\,\forall t \in \left[ {\hat {t}_{i}^{ + };\,\,t_{i}^{ + } + {{T}_{i}}} \right]\quad \Omega \left( {\hat {t}_{i}^{ + }} \right)\; \geqslant \;\Omega (t)\; \geqslant \;{{\phi }^{{{{k}_{0}}}}}\left( {t_{i}^{ + } + {{T}_{i}},\,\,\hat {t}_{i}^{ + }} \right)\Omega \left( {\hat {t}_{i}^{ + }} \right) > 0, \\ \forall t \in \left[ {t_{i}^{ + } + {{T}_{i}};\,\,\hat {t}_{{i + 1}}^{ + }} \right]\quad \Omega \left( {t_{i}^{ + } + {{T}_{i}}} \right) + \left( {\hat {t}_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{i}}} \right)\Delta _{{{\text{UB}}}}^{m}{\text{det}}\left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\}\,\,\,\,\,\,\,\, \\ \geqslant \;\Omega (t)\; \geqslant \;{{\phi }^{{{{k}_{0}}}}}\left( {\hat {t}_{{i + 1}}^{ + },\,\,t_{i}^{ + } + {{T}_{i}}} \right)\left( {\Omega \left( {t_{i}^{ + } + {{T}_{i}}} \right)} \right. \\ \,\left. { + \left( {\hat {t}_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{i}}} \right)\Delta _{{LB}}^{m}\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\}} \right) > 0. \\ \end{gathered} $$
(A.31)

From (A.31) we have:

$$\forall t\; \geqslant \;t_{0}^{ + } + {{T}_{0}}\quad {{\Omega }_{{{\text{UB}}}}}\; \geqslant \;\Omega (t)\; \geqslant \;{{\Omega }_{{{\text{LB}}}}} > 0,$$
$${{\Omega }_{{{\text{LB}}}}} = \mathop {\max }\limits_{\forall i\; \geqslant \;1} \left\{ \begin{gathered} {{\phi }^{{{{k}_{0}}}}}\left( {\hat {t}_{{i + 1}}^{ + },\,\,t_{i}^{ + } + {{T}_{i}}} \right)\left( {\Omega \left( {t_{i}^{ + } + {{T}_{i}}} \right)} \right. \hfill \\ \left. { + \left( {\hat {t}_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{i}}} \right)\Delta _{{{\text{LB}}}}^{m}\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\}} \right), \hfill \\ {{\phi }^{{{{k}_{0}}}}}\left( {t_{i}^{ + } + {{T}_{i}},\,\,\hat {t}_{i}^{ + }} \right)\Omega \left( {\hat {t}_{i}^{ + }} \right) \hfill \\ \end{gathered} \right\},$$
(A.32)
$${{\Omega }_{{{\text{UB}}}}} = \mathop {\max }\limits_{\forall i\; \geqslant \;1} \left\{ {\Omega \left( {\hat {t}_{i}^{ + }} \right),\,\,\Omega \left( {t_{i}^{ + } + {{T}_{i}}} \right) + \left( {\hat {t}_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{i}}} \right)\Delta _{{{\text{UB}}}}^{m}\det \left\{ {B_{i}^{{\text{T}}}{{B}_{i}}} \right\}} \right\},$$

which completes the proof of the statement (a).

To prove statement (b), the disturbance w(t) is differentiated:

$$\begin{gathered} \dot {w}(t) = \dot {\Upsilon }(t) - \dot {\Omega }(t)\theta (t) - \Omega (t)\dot {\theta }(t) \hfill \\ \,\,\,\,\,\,\,\, = - k(\Upsilon (t) - \mathcal{Y}(t)) + k(\Omega (t) - \mathcal{M}(t))\theta (t) - \Omega (t)\dot {\theta }(t) \hfill \\ \,\,\,\,\,\,\,\, = - k(\Upsilon (t) - \mathcal{M}(t)\theta (t) - d(t)) + k(\Omega (t) - \mathcal{M}(t))\theta (t) - \Omega (t)\dot {\theta }(t) \hfill \\ \,\,\,\,\,\,\,\, = - k(\Upsilon (t) - \Omega (t)\theta (t)) - \Omega (t)\dot {\theta }(t) + kd(t) \hfill \\ \,\,\,\,\,\,\,\, = - kw(t) - \Omega (t)\dot {\theta }(t) + kd(t),\quad w\left( {t_{0}^{ + }} \right) = {{0}_{{(n + m) \times m}}}. \hfill \\ \end{gathered} $$
(A.33)

The next aim is to show that the identical equality d(t) ≡ 0 holds when \(\tilde {t}_{i}^{ + }\) = 0. It follows from the definition (A.27) that \({{\bar {\varepsilon }}_{1}}\)(t) ≡ 0 \( \Leftrightarrow \) d(t) ≡ 0. Let it be assumed that \(\forall \)i ∈ \(\mathbb{N}\) \(\hat {t}_{i}^{ + }\) \( \geqslant \) \(t_{i}^{ + }\), then the definition of \({{\bar {\varepsilon }}_{1}}\)(t) is obtained over the time ranges [\(\hat {t}_{i}^{ + }\); \(t_{{i + 1}}^{ + }\)) and [\(t_{i}^{ + }\); \(\hat {t}_{i}^{ + }\)):

$$\begin{gathered} \forall t \in \left[ {\left. {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right)} \right.\quad \vartheta (t) = {{\vartheta }_{i}} \\ \Updownarrow \\ {{{\bar {\varepsilon }}}_{1}}(t) = {\text{adj}}{\kern 1pt} \{ \varphi (t)\} \int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - \int\limits_{\hat {t}_{i}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {z}_{n}^{{\text{T}}}(\tau )d\tau - \Delta (t){{{\bar {\vartheta }}}_{i}}} \\ = {\text{adj}}\{ \varphi (t)\} \left( {\int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - \int\limits_{\hat {t}_{i}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varphi }_{n}^{{\text{T}}}(\tau )d\tau {{{\bar {\vartheta }}}_{i}} + \int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - \int\limits_{\hat {t}_{i}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varepsilon }_{0}^{{\text{T}}}(\tau )d\tau } } } \right) \\ - \,\Delta (t){{{\bar {\vartheta }}}_{i}} = \Delta (t){{{\bar {\vartheta }}}_{i}} - \Delta (t){{{\bar {\vartheta }}}_{i}} + \int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - \int\limits_{\hat {t}_{i}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varepsilon }_{0}^{{\text{T}}}(\tau )d\tau = {{0}_{{(n + m + 1) \times n}}}.} \\ \end{gathered} $$
(A.34)

At the same time:

$$\begin{matrix} \forall t\in \left[ \left. t_{i-1}^{+};\,\,t_{i}^{+} \right) \right.\quad \vartheta (t)={{\vartheta }_{i-1}};\,\,\forall t\in \left[ \left. t_{i}^{+};\,\,\hat{t}_{i}^{+} \right) \right.\,\,\,\,\vartheta (t)={{\vartheta }_{i}} \\ \Updownarrow \\ \forall t\in \left[ \left. t_{i}^{+};\,\,\hat{t}_{i}^{+} \right) \right.,{{{\bar{\varepsilon }}}_{1}}(t)=\text{adj}\{\varphi (t)\}\int\limits_{\hat{t}_{i}^{+}}^{t}{{{e}^{-\int\limits_{\hat{t}_{i}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{z}_{n}^{\text{T}}(\tau )d\tau -\Delta (t){{{\bar{\vartheta }}}_{i}}} \\ =\text{adj}\{\varphi (t)\}\left( \int\limits_{\hat{t}_{i-1}^{+}}^{t_{i}^{+}}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varphi }_{n}^{\text{T}}(\tau )d\tau {{{\bar{\vartheta }}}_{i-1}}+\int\limits_{\hat{t}_{i-1}^{+}}^{t}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varepsilon }_{0}^{\text{T}}(\tau )d\tau {{{\bar{\vartheta }}}_{i}}}} \right) \\ =\text{adj}\{\varphi (t)\}\left( \pm \int\limits_{\hat{t}_{i-1}^{+}}^{t_{i}^{+}}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varphi }_{n}^{\text{T}}(\tau )d\tau {{{\bar{\vartheta }}}_{i}}+\int\limits_{\hat{t}_{i-1}^{+}}^{t}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varepsilon }_{0}^{\text{T}}(\tau )d\tau }} \right)-\Delta (t){{{\bar{\vartheta }}}_{i}} \\ =\text{adj}\{\varphi (t)\}\left( \int\limits_{\hat{t}_{i-1}^{+}}^{t_{i}^{+}}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varphi }_{n}^{\text{T}}(\tau )d\tau \left( {{{\bar{\vartheta }}}_{i-1}}-{{{\bar{\vartheta }}}_{i}} \right)+\int\limits_{\hat{t}_{i-1}^{+}}^{t}{{{e}^{-\int\limits_{\hat{t}_{i-1}^{+}}^{\tau }{\sigma ds}}}{{{\bar{\varphi }}}_{n}}(\tau )\bar{\varepsilon }_{0}^{\text{T}}(\tau t)d\tau }} \right). \\ \end{matrix}$$
(A.35)

Having combined (A.34) and (A.35), it is written that:

$${{\bar {\varepsilon }}_{1}}(t){\text{:}}\, = \left\{ \begin{gathered} {\text{adj}}{\kern 1pt} \{ \varphi (t)\} \left( {\int\limits_{\hat {t}_{{i - 1}}^{ + }}^{t_{i}^{ + }} {{{e}^{{ - \int\limits_{\hat {t}_{{i - 1}}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varphi }_{n}^{{\text{T}}}(\tau )d\tau \left( {{{{\bar {\vartheta }}}_{{i - 1}}} - {{{\bar {\vartheta }}}_{i}}} \right)} } \right. \hfill \\ \left. { + \int\limits_{\hat {t}_{{i - 1}}^{ + }}^t {{{e}^{{ - \int\limits_{\hat {t}_{{i - 1}}^{ + }}^\tau {\sigma ds} }}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varepsilon }_{0}^{{\text{T}}}(\tau )d\tau } } \right),\quad i > 0,\quad \forall t \in \left[ {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right) \hfill \\ {{0}_{{(n + m + 1) \times n}}},\quad \forall t \in \left[ {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right), \hfill \\ \end{gathered} \right.$$
(A.36)

from which it follows that \({{\bar {\varepsilon }}_{1}}\)(t) ≡ 0 when \(\tilde {t}_{i}^{ + }\) = 0, and consequently that d(t) ≡ 0.

Using (A.2) and considering d(t) ≡ 0, Eq. (A.33) is solved:

$$\begin{gathered} w(t) = - \int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {{{\phi }^{{{{k}_{0}}}}}(t,\,\,\tau )\Omega (\tau )\sum\limits_{q = 1}^i {\Delta _{q}^{\theta }\delta \left( {\tau - t_{q}^{ + }} \right)d\tau } } \\ = - \sum\limits_{q = 1}^i {{{\phi }^{{{{k}_{0}}}}}\left( {t,\,\,t_{q}^{ + }} \right)} \Omega \left( {t_{q}^{ + }} \right)\Delta _{q}^{\theta }h\left( {t - t_{q}^{ + }} \right) \\ = \left( { - \sum\limits_{q = 1}^i {{{\phi }^{{{{k}_{0}}}}}\left( {t_{0}^{ + } + {{T}_{0}},\,\,t_{q}^{ + }} \right)\Omega \left( {t_{q}^{ + }} \right)\Delta _{q}^{\theta }h\left( {t - t_{q}^{ + }} \right)} } \right){{\phi }^{{{{k}_{0}}}}}\left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right). \\ \end{gathered} $$
(A.37)

It should be noted that, owing to Assumption 2, there are no switches over [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0), so the summation in (A.37) is from q = 1 to i.

If the number of switches is finite: i \(\leqslant \) imax < ∞, then, as:

(a) finite i means that time instants \(t_{i}^{ + }\) are also finite (we do not consider the case of switches at infinite time: \(\forall \)i \(t_{i}^{ + }\) ≠ ∞);

(b) \(\forall \)q ∈ \(\mathbb{N}\) \({{\phi }^{{{{k}_{0}}}}}\)(\(t_{0}^{ + }\) + T0, \(t_{q}^{ + }\)) is finite in case \(t_{q}^{ + }\) is finite,

(c) k0 \( \geqslant \) 1,

the following upper bound holds:

$$\begin{gathered} {\text{||}}w(t{\text{)||}}\;\leqslant \;\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\sum\limits_{q = 1}^{{{i}_{{\max }}}} {{{\phi }^{{{{k}_{0}}}}}\left( {t_{0}^{ + } + {{T}_{0}},\,\,t_{q}^{ + }} \right){{\Omega }_{{{\text{UB}}}}}\left\| {\Delta _{q}^{\theta }} \right\|h\left( {t - t_{q}^{ + }} \right)} \\ = {{w}_{{\max }}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\;\leqslant \;{{w}_{{\max }}}. \\ \end{gathered} $$
(A.38)

If \(\forall \)q ∈ \(\mathbb{N}\) ||\(\Delta _{q}^{\theta }\)|| \(\leqslant \) cq\({{\phi }^{{{{k}_{0}}}}}\)(\(t_{q}^{ + }\), \(t_{0}^{ + }\)), cq > cq+1, then we have from (A.37) that:

$${\text{||}}w(t){\text{||}}\;\leqslant \;{{\phi }^{{{{k}_{0}}}}}\left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right){{\Omega }_{{{\text{UB}}}}}{{\phi }^{{{{k}_{0}}}}}\left( {t_{0}^{ + } + {{T}_{0}},\,\,t_{0}^{ + }} \right)\sum\limits_{q = 1}^i {{{c}_{q}}h\left( {t - t_{q}^{ + }} \right)} .$$
(A.39)

All partial sums of the positive-term series in (A.39) are bounded, so \(\sum\nolimits_{q = 1}^i {{{c}_{q}}} \)h(t – \(t_{q}^{ + }\)) < ∞, and even if the number of switches is infinite, the following holds:

$${\text{||}}w(t){\text{||}}\;\leqslant \;{{w}_{{\max }}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\;\leqslant \;{{w}_{{\max }}},$$
(A.40)

which completes the proof of Proposition 2.

Remark 2. The disturbance d(t), which reflects the difference between the real perturbation w(t) and the estimate (A.40), occurs in the proposed parametrization when \(\tilde {t}_{i}^{ + }\) > 0 over the finite time intervals [\(t_{i}^{ + }\); \(\hat {t}_{i}^{ + }\)], and \(\forall \)t \( \geqslant \) \(\hat {t}_{i}^{ + }\) its contribution to w(t) is an exponentially vanishing function. Thus d(t) affects only the transient quality of \(\tilde {\theta }\)(t) and eref(t), but not the global properties of the tracking error ξ(t). The effect of d(t) can be reduced by an appropriate choice of the parameter σ (a detailed analysis of this matter is given in Proposition 4 of [26]).

Proof of Proposition 3. According to the results of [26], the algorithm (4.10) ensures that \(\tilde {t}_{i}^{ + }\) = Δpr \(\leqslant \) Ti holds if the function \(\epsilon \)(t) is an indicator of the system parameters switch:

$$\forall t \in \left[ {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right)\,\,f(t) \ne 0,\quad \forall t \in \left[ {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right)\,\,f(t) = 0,$$
(A.41)

i.e., it is non-zero only over the time range [\(t_{i}^{ + }\); \(\hat {t}_{i}^{ + }\)).

Equations (A.25) and (A.24) are substituted into (4.9) to obtain:

$$\begin{gathered} \epsilon (t) = \Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {z}_{n}^{{\text{T}}}(t) - {{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t)Y(t) = \Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t)\bar {\vartheta }(t) \\ + \,\,\Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {\varepsilon }_{0}^{{\text{T}}}(t) - \Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t)\bar {\vartheta }(t) - {{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t){{{\bar {\varepsilon }}}_{1}}(t) \\ = \Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {\varepsilon }_{0}^{{\text{T}}}(t) - {{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t){{{\bar {\varepsilon }}}_{1}}(t). \\ \end{gathered} $$
(A.42)

The error \(\epsilon \)(t) satisfies the definition (A.41) if both \(\bar {\varepsilon }_{0}^{{\text{T}}}\)(t) and \({{\bar {\varepsilon }}_{1}}\)(t) meet (A.41). According to the results of Proposition 2 (see (A.36)), the function \({{\bar {\varepsilon }}_{1}}\)(t) is an indicator of the system parameters switch. It now remains to prove the same for \(\bar {\varepsilon }_{0}^{{\text{T}}}\)(t). Let it be assumed that \(\forall \)i ∈ \(\mathbb{N}\) \(\hat {t}_{i}^{ + }\) \( \geqslant \) \(t_{i}^{ + }\); then:

$$\begin{gathered} \forall t \in \left[ {\left. {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right)} \right.\quad \vartheta (t) = {{\vartheta }_{i}} \\ \Updownarrow \\ \forall t \in \left[ {\left. {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right)} \right.\quad {{{\bar {\varepsilon }}}_{0}}(t) = {{n}_{s}}(t)\left( {\int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}} \dot {x}(\tau )d\tau - \vartheta _{i}^{{\text{T}}}\bar {\Phi }(t)} \right) \\ = {{n}_{s}}(t)\left( {\vartheta _{i}^{{\text{T}}}\int\limits_{\hat {t}_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}} \Phi (\tau )d\tau - \vartheta _{i}^{{\text{T}}}\bar {\Phi }(t)} \right) = {{n}_{s}}(t)\,\,\left( {\vartheta _{i}^{{\text{T}}}\bar {\Phi }(t) - \vartheta _{i}^{{\text{T}}}\bar {\Phi }(t)} \right) = 0. \\ \end{gathered} $$
(A.43)

At the same time:

$$\begin{gathered} \forall t \in \left[ {\left. {t_{{i - 1}}^{ + };\,\,t_{i}^{ + }} \right)} \right.\quad \vartheta (t) = {{\vartheta }_{{i - 1}}};\,\,\forall t \in \left[ {\left. {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right)} \right.\,\,\,\,\vartheta (t) = {{\vartheta }_{i}}, \\ \Updownarrow \\ \forall t \in \left[ {\left. {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right)} \right.,\quad {{{\bar {\varepsilon }}}_{0}}(t) = {{n}_{s}}(t)\left( {\int\limits_{\hat {t}_{{i - 1}}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\dot {x}(\tau )d\tau - \vartheta _{i}^{{\text{T}}}\bar {\Phi }(t)} } \right) \\ = {{n}_{s}}(t)\left( {{{e}^{{ - l(t - t_{i}^{ + })}}}\int\limits_{\hat {t}_{{i - 1}}^{ + }}^{t_{i}^{ + }} {{{e}^{{ - l(t_{i}^{ + } - \tau )}}}\vartheta _{{i - 1}}^{{\text{T}}}\Phi (\tau )d\tau + \int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\vartheta _{i}^{{\text{T}}}\Phi (\tau )d\tau } } } \right. \\ \left. { - \,\vartheta _{i}^{{\text{T}}}\left( {{{e}^{{ - l(t - t_{i}^{ + })}}}\int\limits_{\hat {t}_{{i - 1}}^{ + }}^{t_{i}^{ + }} {{{e}^{{ - l(t_{i}^{ + } - \tau )}}}\Phi (\tau )d\tau + \int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\Phi (\tau )d\tau } } } \right)} \right) \\ = {{n}_{s}}(t){{e}^{{ - l(t - t_{i}^{ + })}}}\left( {\vartheta _{{i - 1}}^{{\text{T}}} - \vartheta _{i}^{{\text{T}}}} \right)\int\limits_{\hat {t}_{{i - 1}}^{ + }}^{t_{i}^{ + }} {{{e}^{{ - l(t_{i}^{ + } - \tau )}}}} \Phi (\tau )d\tau . \\ \end{gathered} $$
(A.44)

Having combined (A.43) and (A.44), it is obtained:

$${{\bar {\varepsilon }}_{0}}(t)\,\,{\text{: = }}\,\,\left\{ \begin{gathered} {{n}_{s}}(t){{e}^{{ - l\left( {t - t_{i}^{ + }} \right)}}}\left( {\vartheta _{{i - 1}}^{{\text{T}}} - \vartheta _{i}^{{\text{T}}}} \right)\int\limits_{\hat {t}_{{i - 1}}^{ + }}^{t_{i}^{ + }} {{{e}^{{ - l(t_{i}^{ + } - \tau )}}}\Phi (\tau )d\tau ,\quad i > 0,\quad \forall t \in \left[ {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right)} \hfill \\ {{0}_{n}},\,\,\,\forall t \in \left[ {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right), \hfill \\ \end{gathered} \right.$$
(A.45)

which, considering (A.36), allows one to write:

$$\forall i \in \mathbb{N},\quad \epsilon (t)\,\,{\text{: = }}\left\{ \begin{gathered} \Delta (t){{{\bar {\varphi }}}_{n}}(t)\bar {\varepsilon }_{0}^{{\text{T}}}(t) - {{{\bar {\varphi }}}_{n}}(t)\bar {\varphi }_{n}^{{\text{T}}}(t){{{\bar {\varepsilon }}}_{1}}(t),\quad i > 0,\quad \forall t \in \left[ {t_{i}^{ + };\,\,\hat {t}_{i}^{ + }} \right) \hfill \\ {{0}_{{(n + m + 1) \times n}}},\quad \forall t \in \left[ {\hat {t}_{i}^{ + };\,\,t_{{i + 1}}^{ + }} \right), \hfill \\ \end{gathered} \right.$$
(A.46)

from which it follows that \(\epsilon \)(t) is an indicator of the system parameters switch and, following the results from [26], when Δ(t) ∈ FE and \({{\bar {\varphi }}_{n}}\)(t) ∈ FE over [\(\hat {t}_{i}^{ + }\); \(t_{i}^{ + }\) + Ti] (which holds as Assumptions 2 and 3 are met), it holds that \(\tilde {t}_{i}^{ + }\) = Δpr \(\leqslant \) Ti. This completes the proof of Proposition 3.

Proof of Theorem 1. The proof of the theorem is arranged in the same way as that of Proposition 1.

Two time ranges are considered: [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0) and [\(t_{0}^{ + }\) + T0; ∞). As for [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0), it holds that Ω(t) \(\leqslant \) ΩLB in the conservative case, so \(\dot {\tilde {\theta }}\)(t) = \({{0}_{{(n + m) \times m}}}\) \( \Rightarrow \) \(\tilde {\theta }\)(t) = \(\tilde {\theta }\)(\(t_{0}^{ + }\)) (as there are no switches over [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0) according to Assumption 2). Then, taking the proof of Proposition 1 into consideration (see (A.13)–(A.17)), the at most exponential growth rate of eref(t) follows from the boundedness of \(\tilde {\theta }\)(t), and, as a result, eref(t) is bounded by its finite value at the right-hand border of the time interval in question: \(\forall \)t ∈ [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0) ||eref(t)|| \(\leqslant \) ||eref(\(t_{0}^{ + }\) + T0)||. Therefore, ξ(t) is bounded over the time range [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0).

The next aim is to consider the interval [\(t_{0}^{ + }\) + T0; ∞).

Step 1. The exponential convergence of \(\tilde {\theta }\)(t) \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0 is to be proved.

Taking into consideration (A.38) or (A.40) and the bound Ω(t) \( \geqslant \) ΩLB, the solution of Eq. (4.11) \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0 meets the inequality:

$$\begin{gathered} \tilde {\theta }(t) = \phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{0}}} \right) + \int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {\phi (t,\tau )\frac{{{{\gamma }_{1}}w(\tau )}}{{\Omega (\tau )}}d\tau } \\ - \int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {\phi (t,\,\,\tau )\sum\limits_{q = 1}^i {\Delta _{q}^{\theta }\delta (\tau - t_{q}^{ + })d\tau \;\leqslant \;\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{0}}} \right)} } \\ + \,\,\frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {\phi (t,\tau )\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)d\tau - \sum\limits_{q = 1}^i {\phi \left( {t,\,\,t_{q}^{ + }} \right)\Delta _{q}^{\theta }h\left( {t - t_{q}^{ + }} \right).} } \\ \end{gathered} $$
(A.47)

As at least one of the following conditions is met:

(1) i \(\leqslant \) imax < ∞,

(2) \(\forall \)q ∈ \(\mathbb{N}\) ||\(\Delta _{q}^{\theta }\)|| \(\leqslant \) cq\({{\phi }^{{{{k}_{0}}}}}\)(\(t_{q}^{ + }\), \(t_{0}^{ + }\)) \(\leqslant \) cqϕ(\(t_{q}^{ + }\), \(t_{0}^{ + }\)), cq > cq+1,

then, by the analogy with (A.3)–(A.5), the following upper bound is obtained from (A.47):

$$\begin{gathered} {\text{||}}\tilde {\theta }(t){\text{||}}\;\leqslant \;{{\beta }_{{\max }}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right) + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right)\left( {t - t_{0}^{ + } - {{T}_{0}}} \right) \\ \;\leqslant \;{{\beta }_{{\max }}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{0}}} \right) + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}{{\chi }_{1}}(t){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}(t - t_{0}^{ + } - {{T}_{0}})}}},\,\,\,\,\,\,\,\,\, \\ \end{gathered} $$
(A.48)

where χ1(t) is a time-varying parameter:

$${{\chi }_{1}}(t) = {{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}(t - t_{0}^{ + } - {{T}_{0}})}}}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right),\quad {{\chi }_{1}}\left( {t_{0}^{ + } + {{T}_{0}}} \right) = 0,$$

and β(t) for both cases under consideration is defined as:

$$\beta (t)\;\leqslant \;\left\| {\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{0}}} \right)} \right\| + \sum\limits_{q = 1}^{{{i}_{{\max }}}} {\phi \left( {t_{0}^{ + } + {{T}_{0}},t_{q}^{ + }} \right)\left\| {\Delta _{q}^{\theta }} \right\|h\left( {t - t_{q}^{ + }} \right) = {{\beta }_{{\max }}},} $$
(A.49)
$$\begin{gathered} \beta (t)\;\leqslant \;\left\| {\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{0}}} \right)} \right\| + \sum\limits_{q = 1}^i {\phi \left( {t_{0}^{ + } + {{T}_{0}},\,\,t_{q}^{ + }} \right)\phi \left( {t_{q}^{ + },\,\,t_{0}^{ + }} \right){{c}_{q}}h\left( {t - t_{q}^{ + }} \right)} \hfill \\ \,\,\,\,\,\,\,\, = \left\| {\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{0}}} \right)} \right\| + \sum\limits_{q = 1}^i {\phi \left( {t_{0}^{ + } + {{T}_{0}},\,\,t_{0}^{ + }} \right){{c}_{q}}h\left( {t - t_{q}^{ + }} \right)} = {{\beta }_{{\max }}}. \hfill \\ \end{gathered} $$
(A.50)

If the parameter χ1(t) is bounded, then it holds for \(\tilde {\theta }\)(t) that:

$$\left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;\left( {{{\beta }_{{\max }}} + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\chi _{1}^{{{\text{UB}}}}} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}}.$$
(A.51)

Then |χ1(t)| \(\leqslant \) \(\chi _{1}^{{{\text{UB}}}}\) is to be proved. We differentiate χ1(t) with respect to time:

$${{\dot {\chi }}_{1}}(t) = - \frac{{{{\gamma }_{1}}}}{2}{{\chi }_{1}}(t) + {{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}}.$$
(A.52)

The upper bound of the solution of (A.52) is written as:

$${\text{|}}{{\chi }_{1}}(t){\text{|}}\;\leqslant \;\left| {\int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {{{e}^{{ - \int_\tau ^t {\frac{{{{\gamma }_{1}}}}{2}d\tau } }}}{{e}^{{\frac{{ - {{\gamma }_{1}}}}{2}\left( {\tau - t_{0}^{ + } - {{T}_{0}}} \right)}}}d\tau } } \right|\;\leqslant \;\left| {\int\limits_{t_{0}^{ + } + {{T}_{0}}}^t {{{e}^{{\frac{{ - {{\gamma }_{1}}}}{2}\left( {\tau - t_{0}^{ + } - {{T}_{0}}} \right)}}}d\tau } } \right|\;\leqslant \;\frac{2}{{{{\gamma }_{1}}}},$$
(A.53)

which proves the required boundedness |χ1(t)| \(\leqslant \) \(\chi _{1}^{{{\text{UB}}}}\).
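
The same bound can be cross-checked by direct maximization: writing χ1(t) as a function of s = t – \(t_{0}^{ + }\) – T0 and setting the derivative to zero,

$$\frac{d}{ds}\Big(s\,e^{-\frac{\gamma_1}{2}s}\Big) = \Big(1 - \frac{\gamma_1}{2}s\Big)e^{-\frac{\gamma_1}{2}s} = 0\;\Rightarrow\;s = \frac{2}{\gamma_1},\qquad \max\limits_{s \geqslant 0}\chi_1 = \frac{2}{\gamma_1}e^{-1} \leqslant \frac{2}{\gamma_1},$$

which agrees with (and is slightly tighter than) the integral estimate (A.53).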

The exponential convergence (A.51) immediately follows from boundedness (A.53), which was to be proved at Step 1.

Step 2. The exponential convergence of the error ξ(t) \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0 is to be proved.

To prove the convergence of ξ(t) \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0, owing to the estimate (A.51), it remains to prove the convergence of the tracking error eref(t) \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0.

The following quadratic form is introduced:

$$\begin{gathered} {{V}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} + \frac{{4a_{0}^{2}}}{{{{\gamma }_{1}}}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - T_{0}^{ + }} \right)}}},\quad H = {\text{blockdiag}}\left\{ {P,\,\,\frac{{4a_{0}^{2}}}{{{{\gamma }_{1}}}}} \right\}, \\ \underbrace {{{\lambda }_{{\min }}}(H)}_{{{\lambda }_{m}}}{\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}}\;\leqslant \;V({\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{||}})\;\leqslant \;\underbrace {{{\lambda }_{{\max }}}(H)}_{{{\lambda }_{M}}}{\text{||}}{{{\bar {e}}}_{{{\text{ref}}}}}{\text{|}}{{{\text{|}}}^{2}}, \\ \end{gathered} $$
(A.54)
$${{\bar {e}}_{{{\text{ref}}}}}(t) = {{\left[ {e_{{{\text{ref}}}}^{{\text{T}}}(t)\,\,{{e}^{{ - \frac{{{{\gamma }_{1}}}}{4}\left( {t - t_{0}^{ + } - T_{0}^{ + }} \right)}}}} \right]}^{{\text{T}}}}.$$

By analogy with the proof of Proposition 1, \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0 the derivative of (A.54) is written as:

$${{\dot {V}}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \; - \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - T_{0}^{ + }} \right)}}} + b_{{\max }}^{2}{{\lambda }_{{\max }}}\left( {\omega (t){{\omega }^{{\text{T}}}}} \right){{\left\| {\tilde {\theta }(t)} \right\|}^{2}}.$$
(A.55)

Using (A.51), the following upper bound is introduced for \(b_{{\max }}^{2}\)||\(\tilde {\theta }\)(t)||2:

$$b_{{\max }}^{2}{{\left\| {\tilde {\theta }(t)} \right\|}^{2}}\;\leqslant \;b_{{\max }}^{2}{{\left( {{{\beta }_{{\max }}} + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\chi _{1}^{{{\text{UB}}}}} \right)}^{2}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}}.$$
(A.56)

Equation (A.56) is substituted into (A.55):

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \; - \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - T_{0}^{ + }} \right)}}} \\ + \,\,b_{{\max }}^{2}{{\left( {{{\beta }_{{\max }}} + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\chi _{1}^{{{\text{UB}}}}} \right)}^{2}}{{\lambda }_{{\max }}}\left( {\omega (t){{\omega }^{{\text{T}}}}(t)} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - T_{0}^{{}}} \right)}}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}}. \\ \end{gathered} $$
(A.57)

The exponential stability of eref(t) requires the third term of (A.57) to be exponentially vanishing, which demands:

$$\chi (t) = {{\lambda }_{{\max }}}\left( {\omega (t){{\omega }^{{\text{T}}}}(t)} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + }} \right)}}}\;\leqslant \;{{\chi }_{{{\text{UB}}}}},$$
(A.58)

where χUB > 0.

The error \(\tilde {\theta }\)(t) is bounded according to (A.51). In this case, following the results of Proposition 1, the growth rate of λmax(ω(t)ωT(t)) does not exceed an exponential one (A.17). So, when γ1 > 0 is sufficiently large, the estimate (A.58) holds.

Equation (A.58) is substituted into (A.57) to obtain:

$${{\dot {V}}_{{{{{\text{e}}}_{{{\text{ref}}}}}}}}(t)\;\leqslant \; - {\kern 1pt} \mu {{\lambda }_{{\min }}}(P){\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{|}}{{{\text{|}}}^{2}} - 2a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}(t - t_{0}^{ + } - T_{0}^{ + })}}} + a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}(t - t_{0}^{ + } - T_{0}^{{}})}}}\;\leqslant \; - {\kern 1pt} {{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}(t),$$
(A.59)

where

$$a_{0}^{2} = b_{{\max }}^{2}{{\left( {{{\beta }_{{\max }}} + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\chi _{1}^{{{\text{UB}}}}} \right)}^{2}}{{\chi }_{{{\text{UB}}}}},\quad {{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}} = \min \left\{ {\frac{{\mu {{\lambda }_{{\min }}}(P)}}{{{{\lambda }_{{\max }}}(P)}},\,\,\frac{{{{\gamma }_{1}}}}{4}} \right\}.$$

The differential inequality (A.59) is solved to obtain:

$${{V}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \;{{e}^{{ - {{{\bar {\eta }}}_{{{{e}_{{{\text{ref}}}}}}}}(t - t_{0}^{ + } - {{T}_{0}})}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}(t_{0}^{ + } + {{T}_{0}}),$$
(A.60)

from which we have the exponential convergence of the tracking error eref(t) to zero:

$${\text{||}}{{e}_{{{\text{ref}}}}}(t){\text{||}}\;\leqslant \;\sqrt {\frac{{{{\lambda }_{M}}}}{{{{\lambda }_{m}}}}} {\text{||}}{{e}_{{{\text{ref}}}}}(t_{0}^{ + } + {{T}_{0}}){\text{||}}{{e}^{{ - {{\eta }_{{{{e}_{{{\text{ref}}}}}}}}(t - t_{0}^{ + } - {{T}_{0}})}}},$$
(A.61)

where

$${{\eta }_{{{{e}_{{{\text{ref}}}}}}}} = \frac{1}{2}{{\bar {\eta }}_{{{{e}_{{{\text{ref}}}}}}}}.$$

Having combined (A.61) and (A.51), it is obtained:

$${\text{||}}\xi (t){\text{||}}\;\leqslant \;\max \left\{ {\sqrt {\frac{{{{\lambda }_{M}}}}{{{{\lambda }_{m}}}}} {\text{||}}{{e}_{{{\text{ref}}}}}(t_{0}^{ + } + {{T}_{0}}){\text{||}},\,\,{{\beta }_{{\max }}} + \frac{{{{\gamma }_{1}}{{w}_{{\max }}}}}{{{{\Omega }_{{LB}}}}}\chi _{1}^{{{\text{UB}}}}} \right\}{{e}^{{ - {{\eta }_{{{{e}_{{{\text{ref}}}}}}}}(t - t_{0}^{ + } - {{T}_{0}})}}},$$
(A.62)

which, taking into consideration that ξ(t) is bounded over [\(t_{0}^{ + }\); \(t_{0}^{ + }\) + T0], allows one to conclude both the global boundedness ξ(t) ∈ \({{L}_{\infty }}\) and the exponential convergence of ξ(t) to zero \(\forall \)t \( \geqslant \) \(t_{0}^{ + }\) + T0. The proof of Theorem 1 is complete.

Cite this article

Glushchenko, A.I., Lastochkin, K.A. Exponentially Stable Adaptive Control. Part II. Switched Systems. Autom Remote Control 84, 253–280 (2023). https://doi.org/10.1134/S0005117923030050