
Approximation-Based Approach to Adaptive Control of Linear Time-Varying Systems

TOPICAL ISSUE

Automation and Remote Control

Abstract

An adaptive state-feedback control system is proposed for a class of linear time-varying systems represented in the controller canonical form. The adaptation problem is reduced to one of estimating Taylor-series-based first approximations of the ideal controller parameters. Exponential convergence of the identification and tracking errors of such an approximation to an arbitrarily small and adjustable neighbourhood of the equilibrium point is ensured provided that the regressor persistent excitation condition with a sufficiently small time period is satisfied. The obtained theoretical results are validated via numerical experiments.



Notes

  1. Otherwise there exists a time instant ta \( \geqslant \) \(t_{0}^{ + }\) at which b(ta) = 0, and the equations from Assumption 2 have no solution in the general case (bref ≠ 0, aref − a(ta) ≠ 0n).

REFERENCES

  1. Polyak, B.T. and Tsypkin, Ya.Z., Optimal Pseudogradient Adaptation Algorithms, Autom. Remote Control, 1981, vol. 41, pp. 1101–1110.

  2. Polyak, B.T. and Tsypkin, Ya.Z., Robust Pseudogradient Adaptation Algorithms, Autom. Remote Control, 1981, vol. 41, no. 10, pp. 1404–1409.

  3. Ioannou, P. and Sun, J., Robust Adaptive Control, New York: Dover, 2013.

  4. Fradkov, A.L., Lyapunov-Bregman Functions for Speed-Gradient Adaptive Control of Nonlinear Time-Varying Systems, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 544–548.

  5. Goel, R. and Roy, S.B., Composite Adaptive Control for Time-Varying Systems with Dual Adaptation, arXiv preprint arXiv:2206.01700, 2022, pp. 1–6.

  6. Na, J., Xing, Y., and Costa-Castello, R., Adaptive Estimation of Time-Varying Parameters with Application to Roto-Magnet Plant, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018, vol. 51, no. 2, pp. 731–741.

  7. Chen, K. and Astolfi, A., Adaptive Control for Systems with Time-Varying Parameters, IEEE Transactions on Automatic Control, 2020, vol. 66, no. 5, pp. 1986–2001.

  8. Patil, O.S., Sun, R., Bhasin, S., and Dixon, W.E., Adaptive Control of Time-Varying Parameter Systems with Asymptotic Tracking, IEEE Transactions on Automatic Control, 2022, vol. 67, no. 9, pp. 4809–4815.

  9. Putov, V.V., Methods of Adaptive Control Systems Design for Nonlinear Time-Varying Dynamic Plants with Functional-Parametric Uncertainty, Thesis … Dr. of Technical Sciences, SPbGETU “LETI,” St. Petersburg, 1993 [in Russian].

  10. Putov, V.V., Polushin, I.G., Lebedev, V.V., and Putov, A.V., Generalisation of the Majorizing Functions Method for the Problems of Adaptive Control of Nonlinear Dynamic Plants, Izvestiya SPbGETU LETI, 2013, no. 8, pp. 32–37.

  11. Glushchenko, A. and Lastochkin, K., Exponentially Stable Adaptive Control. Part III. Time-Varying Plants, Autom. Remote Control, 2023, vol. 84, no. 11, pp. 1232–1247.

  12. Pagilla, P.R. and Zhu, Y., Adaptive Control of Mechanical Systems with Time-Varying Parameters and Disturbances, J. Dyn. Syst., Meas., Control, 2004, vol. 126, no. 3, pp. 520–530.

  13. Quoc, D.V., Bobtsov, A.A., Nikolaev, N.A., and Pyrkin, A.A., Stabilization of a Linear Non-Stationary System under Conditions of Delay and Additive Sinusoidal Perturbation of the Output, Journal of Instrument Engineering, 2021, vol. 64, no. 2, pp. 97–103.

  14. Dat, V.Q. and Bobtsov, A.A., Output Control by Linear Time-Varying Systems Using Parametric Identification Methods, Mekhatronika, Avtomatizatsiya, Upravlenie, 2020, vol. 21, no. 7, pp. 387–393.

  15. Grigoryev, V.V., Design of Control Equations for Variable Parameter Systems, Autom. Remote Control, 1983, vol. 44, no. 2, pp. 189–194.

  16. Glushchenko, A. and Lastochkin, K., Robust Time-Varying Parameters Estimation Based on I-DREM Procedure, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 91–96.

  17. Dieudonne, J., Foundations of Modern Analysis, New York: Academic Press, 1960.

  18. Leiva, H. and Siegmund, S., A Necessary Algebraic Condition for Controllability and Observability of Linear Time-Varying Systems, IEEE Transactions on Automatic Control, 2003, vol. 48, no. 12, pp. 2229–2232.

  19. Glushchenko, A.I. and Lastochkin, K.A., Exponentially Stable Adaptive Control. Part II. Switched Systems, Autom. Remote Control, 2023, vol. 84, no. 3, pp. 260–291.

  20. Khalil, H., Nonlinear Systems, 3rd ed., Upper Saddle River, NJ: Prentice-Hall, 2002.


Funding

This research was supported in part by the Grants Council of the President of the Russian Federation (project MD-1787.2022.4).

Author information

Correspondence to A. Glushchenko or K. Lastochkin.

Additional information

This paper was recommended for publication by P.S. Shcherbakov, member of the Editorial Board


APPENDIX

Proof of Proposition 1. The proof of the proposition is divided into two steps: at the first step we analyse the properties of the parametric error \(\tilde {\theta }\)(t), and at the second step the properties of the tracking error eref(t).

Step 1. Owing to Proposition 1 from [19], if i \(\leqslant \) imax < ∞, then for the differential equation

$$\dot {\tilde {\theta }}(t) = - {{\gamma }_{1}}\tilde {\theta }(t) - \dot {\theta }(t),\;\;\tilde {\theta }\left( {t_{0}^{ + }} \right) = {{\hat {\theta }}_{0}} - \theta \left( {t_{0}^{ + }} \right),$$

the following upper bound holds

$$\left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;{{\beta }_{{\max }}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}},\;\;{{\beta }_{{\max }}} > 0,$$
(A.1)

where \(\dot {\theta }\)(t) = \(\sum\limits_{q = 1}^i {\Delta _{q}^{\theta }\delta \left( {t - t_{q}^{ + }} \right)} \), and δ : [\(t_{0}^{ + }\); ∞) → {0, ∞} is the Dirac function.
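For intuition, the behaviour described by (A.1) can be reproduced with a minimal numerical sketch (scalar case; the gain γ1, the jump instants \(t_{q}^{ + }\), and the jump sizes \(\Delta _{q}^{\theta }\) below are illustrative assumptions, not taken from the paper): between jumps the error decays exponentially, and each Dirac impulse in \(\dot {\theta }\)(t) produces a finite jump of \(\tilde {\theta }\)(t).

    import numpy as np

    gamma1 = 2.0                       # adaptation gain (assumed value)
    t0 = 0.0
    jump_times = [1.0, 2.5]            # t_q^+ : parameter jump instants (assumed)
    jump_sizes = [0.4, -0.3]           # Delta_q^theta : jump magnitudes (assumed)
    theta_tilde0 = 1.0                 # initial parametric error

    def theta_tilde(t):
        # Piecewise solution of d(theta_tilde)/dt = -gamma1*theta_tilde - theta_dot:
        # exponential decay between jumps, an additive step of -Delta_q^theta at each t_q^+.
        val, t_last = theta_tilde0, t0
        for tq, dq in zip(jump_times, jump_sizes):
            if t < tq:
                break
            val = val * np.exp(-gamma1 * (tq - t_last)) - dq
            t_last = tq
        return val * np.exp(-gamma1 * (t - t_last))

    ts = np.linspace(t0, 5.0, 501)
    vals = np.abs([theta_tilde(t) for t in ts])
    # With finitely many jumps (i <= i_max < inf) the trajectory stays under an
    # exponential envelope beta_max * exp(-gamma1 * (t - t0)), as stated in (A.1).
    beta_max = np.max(vals * np.exp(gamma1 * (ts - t0)))
    print(beta_max)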

Step 2. The following quadratic form is introduced:

$$\begin{gathered} {{V}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} + \frac{{a_{0}^{2}}}{{{{\gamma }_{1}}}}{{e}^{{ - 2{{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}},\;\;H = {\text{blockdiag}}\left\{ {P,\,\,\frac{{a_{0}^{2}}}{{{{\gamma }_{1}}}}} \right\}, \\ \underbrace {{{\lambda }_{{\min }}}(H)}_{{{\lambda }_{{\text{m}}}}}{{\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|}^{2}}\;\leqslant \;V\left( {\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|} \right)\;\leqslant \;\underbrace {{{\lambda }_{{\max }}}(H)}_{{{\lambda }_{M}}}{{\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|}^{2}}, \\ \end{gathered} $$
(A.2)

where \({{\bar {e}}_{{{\text{ref}}}}}\)(t) = \({{\left[ {\begin{array}{*{20}{c}} {e_{{{\text{ref}}}}^{{\text{T}}}(t)}&{{{e}^{{ - {{\gamma }_{1}}(t - t_{0}^{ + })}}}} \end{array}} \right]}^{{\text{T}}}}\), P = PT > 0 is the solution of the below-given Lyapunov equation in case λmin(Q) > 2:

$$A_{{{\text{ref}}}}^{{\text{T}}}P + P{{A}_{{{\text{ref}}}}} = - Q,\;\;Q = {{Q}^{{\text{T}}}} > 0.$$
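A minimal sketch of computing P for given Aref and Q (the matrices below are illustrative assumptions, not taken from the paper; SciPy's continuous Lyapunov solver is used):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Illustrative reference model in controller canonical form (n = 2, assumed)
    A_ref = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])
    Q = 3.0 * np.eye(2)                 # any Q = Q^T > 0; here lambda_min(Q) = 3 > 2

    # solve_continuous_lyapunov(A, B) solves A X + X A^H = B, so the equation
    # A_ref^T P + P A_ref = -Q corresponds to A = A_ref^T and B = -Q.
    P = solve_continuous_lyapunov(A_ref.T, -Q)
    print(np.linalg.eigvalsh(P))        # all eigenvalues positive: P = P^T > 0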

The derivative of the quadratic form (A.2) is written as:

$$\begin{matrix} {{{\dot{V}}}_{{{e}_{\text{ref}}}}}=e_{\text{ref}}^{\text{T}}\left( A_{\text{ref}}^{\text{T}}P+P{{A}_{\text{ref}}} \right){{e}_{\text{ref}}}-2a_{0}^{2}{{e}^{-2{{\gamma }_{1}}\left( t-t_{0}^{+} \right)}}+2e_{\text{ref}}^{\text{T}}P{{e}_{n}}b{{{\tilde{\theta }}}^{\text{T}}}\omega +2e_{\text{ref}}^{\text{T}}P{{e}_{n}}b\delta _{{{\theta }_{0}}}^{\text{T}}\omega \\ =-e_{\text{ref}}^{\text{T}}Q{{e}_{\text{ref}}}-2a_{0}^{2}{{e}^{-2{{\gamma }_{1}}\left( t-t_{0}^{+} \right)}}+2e_{\text{ref}}^{\text{T}}P{{e}_{n}}b{{{\tilde{\theta }}}^{\text{T}}}\left( {{\omega }_{{{e}_{\text{ref}}}}}+{{\omega }_{r}} \right)+2e_{\text{ref}}^{\text{T}}P{{e}_{n}}b\delta _{{{\theta }_{0}}}^{\text{T}}\left( {{\omega }_{{{e}_{\text{ref}}}}}+{{\omega }_{r}} \right) \\ \leqslant \ -{\kern 1pt} {{\lambda }_{\min }}(Q){{\left\| {{e}_{\text{ref}}} \right\|}^{2}}-2a_{0}^{2}{{e}^{-2{{\gamma }_{1}}\left( t-t_{0}^{+} \right)}} \\ +\ 2{{\lambda }_{\max }}(P){{b}_{\max }}{{\left\| {{e}_{\text{ref}}} \right\|}^{2}}\left\| {\tilde{\theta }} \right\|+2{{\lambda }_{\max }}(P){{{\bar{\omega }}}_{r}}{{b}_{\max }}\left\| {{e}_{\text{ref}}} \right\|\left\| {\tilde{\theta }} \right\| \\ +\ 2{{\lambda }_{\max }}(P){{b}_{\max }}{{{\dot{\mathcal{K}}}}_{\max }}T{{\left\| {{e}_{\text{ref}}} \right\|}^{2}}+2{{\lambda }_{\max }}(P){{b}_{\max }}{{{\bar{\omega }}}_{r}}{{{\dot{\mathcal{K}}}}_{\max }}T\left\| {{e}_{\text{ref}}} \right\|, \\ \end{matrix}$$
(A.3)

where

$$\left\| {\omega (t)} \right\|\;\leqslant \;\underbrace {\left\| {{\kern 1pt} \left[ {\begin{array}{*{20}{c}} {{{e}_{{{\text{ref}}}}}(t)}&0 \end{array}} \right]{\kern 1pt} } \right\|}_{\left\| {{{\omega }_{{{{e}_{{{\text{ref}}}}}}}}(t)} \right\| = \left\| {{{e}_{{{\text{ref}}}}}(t)} \right\|} + \underbrace {\left\| {{\kern 1pt} \left[ {\begin{array}{*{20}{c}} {{{x}_{{{\text{ref}}}}}(t)}&{r(t)} \end{array}} \right]{\kern 1pt} } \right\|}_{\left\| {{{\omega }_{r}}(t)} \right\|\,\leqslant \,{{{\bar {\omega }}}_{r}}}\;\leqslant \;\left\| {{{e}_{{{\text{ref}}}}}(t)} \right\| + {{\bar {\omega }}_{r}}.$$

Having applied Young’s inequality twice:

$$\begin{gathered} 2{{\lambda }_{{\max }}}(P){{{\bar {\omega }}}_{r}}{{b}_{{\max }}}\left\| {{{e}_{{{\text{ref}}}}}} \right\|\left\| {\tilde {\theta }} \right\|\;\leqslant \;{{\left\| {{{e}_{{ref}}}} \right\|}^{2}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left\| {\tilde {\theta }} \right\|}^{2}}, \\ 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}{{{\bar {\omega }}}_{r}}{{{\dot {\mathcal{K}}}}_{{\max }}}T\left\| {{{e}_{{{\text{ref}}}}}} \right\|\;\leqslant \;\lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}} + {{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}}, \\ \end{gathered} $$
(A.4)
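For clarity, both estimates in (A.4) are instances of Young's inequality \(2ab\;\leqslant \;{{a}^{2}} + {{b}^{2}}\) for nonnegative scalars a, b; for example, the first one follows by taking a = \(\left\| {{{e}_{{{\text{ref}}}}}} \right\|\) and b = \({{\lambda }_{{\max }}}(P){{\bar {\omega }}_{r}}{{b}_{{\max }}}\left\| {\tilde {\theta }} \right\|\), so that

$$2ab\;\leqslant \;{{a}^{2}} + {{b}^{2}} = {{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left\| {\tilde {\theta }} \right\|}^{2}}.$$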

equation (A.3) is rewritten as:

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{ref}}}}}}\;\leqslant \;\left[ { - {{\lambda }_{{\min }}}(Q) + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}\left( {\left\| {\tilde {\theta }} \right\| + {{{\dot {\mathcal{K}}}}_{{\max }}}T} \right) + 2} \right]{{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}} \\ - 2a_{0}^{2}{{e}^{{ - 2{{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left\| {\tilde {\theta }} \right\|}^{2}} + \lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}}. \\ \end{gathered} $$
(A.5)

Since the parametric error \(\tilde {\theta }\)(t) converges to zero exponentially according to (A.1), if λmin(Q) > 2, then there definitely exists a time instant \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\; \geqslant \;t_{0}^{ + }\) and constants Tmin > 0, a0 > λmax(P)\({{\bar {\omega }}_{r}}{{b}_{{\max }}}{{\beta }_{{\max }}}\) such that for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) and 0 < T < Tmin it holds that

$$\begin{gathered} - {{\lambda }_{{\min }}}(Q) + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}\left( {{{\beta }_{{\max }}}{{e}^{{ - {{\gamma }_{1}}\left( {{{t}_{{{{e}_{{{\text{ref}}}}}}}} - t_{0}^{ + }} \right)}}} + {{{\dot {\mathcal{K}}}}_{{\max }}}T} \right) + 2 = - {{c}_{1}} < 0, \\ \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}\beta _{{\max }}^{2} - 2a_{0}^{2} = - {{c}_{2}} < 0. \\ \end{gathered} $$
(A.6)

Then the upper bound of the derivative (A.5) for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) is written as

$${{\dot {V}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \; - {{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}} + \lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}},$$
(A.7)

where \({{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}} = \min \left\{ {\frac{{{{c}_{1}}}}{{{{\lambda }_{{\max }}}(P)}},\,\,\frac{{{{c}_{2}}{{\gamma }_{1}}}}{{a_{0}^{2}}}} \right\}\).
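For completeness, (A.7) follows from (A.5) and (A.6) together with the structure of \({{V}_{{{{e}_{{{\text{ref}}}}}}}}\) in (A.2): since \(e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}}\;\leqslant \;{{\lambda }_{{\max }}}(P){{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}}\), one has

$$ - {{c}_{1}}{{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}}\;\leqslant \; - \frac{{{{c}_{1}}}}{{{{\lambda }_{{\max }}}(P)}}e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}},\;\;\; - {{c}_{2}}{{e}^{{ - 2{{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}} = - \frac{{{{c}_{2}}{{\gamma }_{1}}}}{{a_{0}^{2}}} \cdot \frac{{a_{0}^{2}}}{{{{\gamma }_{1}}}}{{e}^{{ - 2{{\gamma }_{1}}\left( {t - t_{0}^{ + }} \right)}}},$$

and taking the minimum of the two factors yields the term \( - {{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}\) in (A.7).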

The solution of the differential inequality (A.7) for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) is obtained as

$${{V}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \;{{e}^{{ - {{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\left( {t - {{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right)}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}\left( {{{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right) + \frac{{\lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}}}}{{{{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}}}.$$
(A.8)
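The step from (A.7) to (A.8) is the standard comparison lemma argument: multiplying (A.7) by \({{e}^{{{{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\left( {t - {{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right)}}}\) gives

$$\frac{d}{{dt}}\left( {{{e}^{{{{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\left( {t - {{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right)}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}(t)} \right)\;\leqslant \;{{e}^{{{{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\left( {t - {{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right)}}}\lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}},$$

and integration over \(\left[ {{{t}_{{{{e}_{{{\text{ref}}}}}}}},\,\,t} \right]\) followed by division by the same exponential yields (A.8) (the constant term is integrated and bounded by its value divided by \({{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\)).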

Letting time tend to infinity in (A.8) and considering the expression for \({{V}_{{{{e}_{{{\text{ref}}}}}}}}\), it is concluded that (2.3) holds, which completes the proof.

Proof of Proposition 2. Owing to Assumption 2 and following (3.2), (3.3), we apply the Taylor formula (1.3) to the parameters Θ(t) to obtain:

$$\Theta (t) = \Theta \left( {t_{i}^{ + }} \right) + \overbrace {\dot {\Theta }\left( {t_{i}^{ + }} \right)\left( {t - t_{i}^{ + }} \right) + \underbrace {\int\limits_{{{t}_{i}}}^t {(t - \zeta )\ddot {\Theta }(\zeta )d\zeta } }_{{{\delta }_{1}}(t)}}^{{{\delta }_{0}}(t)},$$
(A.9)

where \(\Theta \left( {t_{i}^{ + }} \right)\) = Θi, \(\dot {\Theta }\left( {t_{i}^{ + }} \right) = {{\dot {\Theta }}_{i}}\) are the values of the system parameters Θ(t) and the rate of their change at the time instant \(t_{i}^{ + }\), ||δ1(t)|| \(\leqslant \) 0.5\({{\ddot {\Theta }}_{{\max }}}{{T}^{2}}\) denotes the bounded remainder of the first order (p = 1), ||δ0(t)|| \(\leqslant \) \({{\dot {\Theta }}_{{\max }}}T\) is the bounded remainder of the zeroth order (p = 0).
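A minimal numerical sketch of these remainder bounds (a scalar trajectory Θ(t) = sin t over one sampling interval of length T; both the trajectory and T are illustrative assumptions):

    import numpy as np

    T = 0.1                                   # sampling interval (assumed)
    ti = 0.7                                  # t_i^+ (assumed)
    Theta, dTheta = np.sin, np.cos            # illustrative parameter trajectory and its derivative

    t = np.linspace(ti, ti + T, 200)
    approx0 = Theta(ti) * np.ones_like(t)               # zeroth-order (p = 0) approximation
    approx1 = Theta(ti) + dTheta(ti) * (t - ti)         # first-order (p = 1) approximation

    delta0 = np.abs(Theta(t) - approx0)       # remainder of the zeroth order
    delta1 = np.abs(Theta(t) - approx1)       # remainder of the first order
    # Bounds from (A.9): |delta0| <= max|dTheta| * T and |delta1| <= 0.5 * max|ddTheta| * T^2;
    # for sin both derivative maxima equal 1.
    print(delta0.max() <= 1.0 * T)            # True
    print(delta1.max() <= 0.5 * 1.0 * T**2)   # True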

Equation (A.9) is rewritten in the matrix form

$$\Theta (t) = \Lambda \left( {t,\,\,t_{i}^{ + }} \right)\vartheta (t) + {{\delta }_{1}}(t),$$
(A.10)

where ϑ(t) = \({{\left[ {\begin{array}{*{20}{c}} {\Theta _{i}^{{\text{T}}}}&{\dot {\Theta }_{i}^{{\text{T}}}} \end{array}} \right]}^{{\text{T}}}} \in {{\mathbb{R}}^{{2(n + 1)}}}\).
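Comparing (A.9) with (A.10) shows that Λ(t, \(t_{i}^{ + }\)) acts on ϑ(t) as the block matrix

$$\Lambda \left( {t,\,\,t_{i}^{ + }} \right) = \left[ {\begin{array}{*{20}{c}} {{{I}_{{n + 1}}}}&{\left( {t - t_{i}^{ + }} \right){{I}_{{n + 1}}}} \end{array}} \right],$$

so that \(\Lambda \left( {t,\,\,t_{i}^{ + }} \right)\vartheta (t)\) = Θi + \({{\dot {\Theta }}_{i}}\)(t − \(t_{i}^{ + }\)) reproduces the first two terms of the Taylor expansion (Λ is defined in the body of the paper; the explicit form is written out here only for readability).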

The substitution of (A.10) into (2.1) yields

$$\dot {x}(t) = {{A}_{0}}x + {{e}_{n}}\left( {{{\Phi }^{{\text{T}}}}(t)\Lambda \left( {t,\,\,t_{i}^{ + }} \right)\vartheta (t) + {{\Phi }^{{\text{T}}}}(t){{\delta }_{1}}(t)} \right).$$
(A.11)

The expression x(t) – \(l\bar {x}(t)\) is differentiated to obtain

$$\dot {x}(t) - l\dot {\bar {x}}(t) = - l(x(t) - l\bar {x}(t)) + {{A}_{0}}x + {{e}_{n}}\left( {{{\Phi }^{{\text{T}}}}(t)\Lambda \left( {t,\,\,t_{i}^{ + }} \right)\vartheta (t) + {{\Phi }^{{\text{T}}}}(t){{\delta }_{1}}(t)} \right).$$
(A.12)

The solution of (A.12) is written as

$$\begin{gathered} x(t) - l\bar {x}(t) = {{e}^{{ - l\left( {t - t_{i}^{ + }} \right)}}}x({{t}_{i}}) + {{A}_{0}}\bar {x}(t) + \int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{e}_{n}}{{\Phi }^{{\text{T}}}}(\tau )\Lambda \left( {\tau ,\,\,t_{i}^{ + }} \right)\vartheta (\tau )d\tau } \\ + \;\int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{e}_{n}}{{\Phi }^{{\text{T}}}}(\tau ){{\delta }_{1}}(\tau )d\tau } = {{A}_{0}}\bar {x}(t) + {{e}_{n}}\bar {\varphi }(t)\bar {\vartheta }(t) + {{e}_{n}}\underbrace {\int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{\Phi }^{{\text{T}}}}(\tau ){{\delta }_{1}}(\tau )d\tau } }_{{{\varepsilon }_{0}}(t)}, \\ \end{gathered} $$
(A.13)

where \(\bar {\vartheta }\)(t) = \({{\left[ {\begin{array}{*{20}{c}} {{{\vartheta }^{{\text{T}}}}(t)}&{e_{n}^{{\text{T}}}x\left( {t_{i}^{ + }} \right)} \end{array}} \right]}^{{\text{T}}}} \in {{\mathbb{R}}^{{2n + 3}}}\), and the third equality holds since the reset of the filter states (4.1) and the parameter change occur synchronously at a known time instant \(t_{i}^{ + }\), i.e., \(\bar {\vartheta }\)(t) = const for all t ∈ \(\left[ {\left. {t_{i}^{ + },\,\,t_{i}^{ + } + T} \right)} \right.\).

Equation (A.13) is substituted into (4.2) to obtain

$${{\bar {z}}_{n}}(t) = {{n}_{s}}(t)e_{n}^{{\text{T}}}[x(t) - l\bar {x}(t) - {{A}_{0}}\bar {x}(t)] = \bar {\varphi }_{n}^{{\text{T}}}(t)\bar {\vartheta }(t) + {{\bar {\varepsilon }}_{0}}(t),$$
(A.14)

where \({{\bar {z}}_{n}}(t) \in \mathbb{R}\), \({{\bar {\varphi }}_{n}}(t) \in {{\mathbb{R}}^{{2n + 3}}}\) and the perturbation \({{\bar {\varepsilon }}_{0}}(t) \in \mathbb{R}\) is bounded as follows (see definitions of Φ(t) and \({{\bar {\varphi }}_{n}}(t)\)):

$$\left\| {{{{\bar {\varepsilon }}}_{0}}(t)} \right\| = \left\| {{{n}_{s}}(t)\int\limits_{t_{i}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}{{\Phi }^{{\text{T}}}}(\tau ){{\delta }_{1}}(\tau )d\tau } } \right\|\;\leqslant \;\left\| {\bar {\varphi }_{n}^{{\text{T}}}(t)} \right\|0.5{{\ddot {\Theta }}_{{\max }}}{{T}^{2}}.$$
(A.15)

Owing to the multiplication of the regression equation (A.14) by ns(t), the regressor \(\bar {\varphi }_{n}^{{\text{T}}}(t)\), the regressand \({{\bar {z}}_{n}}(t)\), and the perturbation \({{\bar {\varepsilon }}_{0}}(t)\) are bounded. In addition, according to the upper bound (A.15), the perturbation \({{\bar {\varepsilon }}_{0}}(t)\) can be reduced by decreasing the parameter T. Therefore, further on we use the notation \({{\bar {\varepsilon }}_{0}}(t)\) := \({{\bar {\varepsilon }}_{0}}\)(t, T) and bear in mind that any perturbation obtained by transformation of \({{\bar {\varepsilon }}_{0}}\)(t, T) can also be reduced by reducing T.

Having applied (4.3) and multiplied z(t) by adj {φ(t)}, we have (the commutativity of the filter (4.3a) is not violated, as its reinitialization and the parameter change happen synchronously at a known time instant \(t_{i}^{ + }\), i.e., \(\bar {\vartheta }\)(t) = const for all t ∈ [\(t_{i}^{ + }\), \(t_{i}^{ + }\) + T))

$$\begin{gathered} Y(t)\,:\; = \operatorname{adj} \{ \varphi (t)\} z(t) = \Delta (t)\bar {\vartheta }(t) + {{{\bar {\varepsilon }}}_{1}}(t,T), \\ \operatorname{adj} \{ \varphi (t)\} \varphi (t) = \det \{ \varphi (t)\} {{I}_{{2(n + 1) + 1}}} = \Delta (t){{I}_{{2(n + 1) + 1}}}, \\ {{{\bar {\varepsilon }}}_{1}}(t,T) = \operatorname{adj} \{ \varphi (t)\} \int\limits_{t_{i}^{ + }}^t {{{e}^{{ - \sigma \left( {\tau - t_{i}^{ + }} \right)}}}{{{\bar {\varphi }}}_{n}}(\tau ){{{\bar {\varepsilon }}}_{0}}(\tau ,T)d\tau ,} \\ \end{gathered} $$
(A.16)

where Y(t) ∈ \({{\mathbb{R}}^{{2n + 3}}}\), Δ(t) ∈ \(\mathbb{R}\), \({{\bar {\varepsilon }}_{1}}\)(t, T) ∈ \({{\mathbb{R}}^{{2n + 3}}}\).
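A minimal numerical sketch of the adjugate-based mixing step in (A.16) (a random square regressor is used purely for illustration; for an invertible matrix the adjugate equals the determinant times the inverse):

    import numpy as np

    rng = np.random.default_rng(0)
    m = 3                                     # stands in for the dimension 2(n + 1) + 1
    phi = rng.standard_normal((m, m))         # square regressor matrix (illustrative)
    vartheta = rng.standard_normal(m)         # unknown vector (plays the role of bar-vartheta)
    z = phi @ vartheta                        # noiseless regression z = phi * vartheta

    Delta = np.linalg.det(phi)
    adj_phi = Delta * np.linalg.inv(phi)      # adjugate of phi (phi assumed invertible)
    Y = adj_phi @ z                           # mixing step Y = adj{phi} z

    # Each component of Y is now a scalar regression Y_j = Delta * vartheta_j,
    # i.e., the vector regression has been decoupled element-wise.
    print(np.allclose(Y, Delta * vartheta))   # True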

Owing to Δ(t) ∈ \(\mathbb{R}\), the elimination (4.5) allows one to obtain the following from (A.16)

$$\begin{gathered} {{z}_{a}}(t) = {{Y}^{{\text{T}}}}(t){{\mathfrak{L}}_{a}} = \Delta (t)\vartheta _{a}^{{\text{T}}}(t) + \bar {\varepsilon }_{1}^{{\text{T}}}(t,\,\,T){{\mathfrak{L}}_{a}}, \\ {{z}_{b}}(t) = {{Y}^{{\text{T}}}}(t){{\mathfrak{L}}_{b}} = \Delta (t)\vartheta _{b}^{{\text{T}}}(t) + \bar {\varepsilon }_{1}^{{\text{T}}}(t,\,\,T){{\mathfrak{L}}_{b}}, \\ \end{gathered} $$
(A.17)

where za(t) ∈ \({{\mathbb{R}}^{{1 \times n}}}\), zb(t) ∈ \(\mathbb{R}\), and ϑa(t), ϑb(t) are the first order approximations of the parameters a(t) and b(t), respectively (components of the vector Θi).

In case Assumption 2 is met, following the definition of the signal \(\mathcal{K}\)(t), the first order approximations θx(t) and θr(t) of the parameters kx(t) and kr(t), respectively, satisfy the equations

$$a_{{{\text{ref}}}}^{{\text{T}}} - \vartheta _{a}^{{\text{T}}}(t) = {{\vartheta }_{b}}(t){{\theta }_{x}}(t),\;\;{{b}_{{{\text{ref}}}}} = {{\vartheta }_{b}}(t){{\theta }_{r}}(t),$$
(A.18)

where θ(t) = \({{\left[ {\begin{array}{*{20}{c}} {{{\theta }_{x}}(t)}&{{{\theta }_{r}}(t)} \end{array}} \right]}^{{\text{T}}}}\).

Each equation from (A.18) is multiplied by Δ(t), and equations (A.17) are substituted into the obtained result to obtain equation (4.6):

$$\begin{gathered} \mathcal{Y}(t) = \mathcal{M}(t)\theta (t) + d(t,\,\,T), \\ \mathcal{Y}(t)\,:\; = {{\left[ {\begin{array}{*{20}{c}} {\Delta (t)a_{{{\text{ref}}}}^{{\text{T}}} - {{z}_{a}}(t)}&{\Delta (t){{b}_{{{\text{ref}}}}}} \end{array}} \right]}^{{\text{T}}}}, \\ \mathcal{M}(t)\,:\; = {{z}_{b}}(t), \\ d(t,T)\,:\; = - {{\left[ {\begin{array}{*{20}{c}} {\bar {\varepsilon }_{1}^{{\text{T}}}(t,\,\,T){{\mathfrak{L}}_{a}} + \bar {\varepsilon }_{1}^{{\text{T}}}(t,T){{\mathfrak{L}}_{b}}{{\theta }_{r}}(t)}&{\bar {\varepsilon }_{1}^{{\text{T}}}(t,\,\,T){{\mathfrak{L}}_{b}}{{\theta }_{r}}(t)} \end{array}} \right]}^{{\text{T}}}}, \\ \end{gathered} $$
(A.19)

where \(\mathcal{Y}\)(t) ∈ \({{\mathbb{R}}^{{n + 1}}}\), \(\mathcal{M}\)(t) ∈ \(\mathbb{R}\), d(t, T) ∈ \({{\mathbb{R}}^{{n + 1}}}\).
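When the perturbation \({{\bar {\varepsilon }}_{1}}\)(t, T) is neglected, (A.18) and (A.19) correspond to the explicit matching solution

$${{\theta }_{x}}(t) = \frac{{{{a}_{{{\text{ref}}}}} - {{\vartheta }_{a}}(t)}}{{{{\vartheta }_{b}}(t)}},\;\;\;{{\theta }_{r}}(t) = \frac{{{{b}_{{{\text{ref}}}}}}}{{{{\vartheta }_{b}}(t)}},$$

i.e., the regression \(\mathcal{Y}\)(t) = \(\mathcal{M}\)(t)θ(t) is simply the Δ(t)-scaled form of (A.18), which avoids the explicit division by ϑb(t).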

Owing to (A.19), the solution of (4.7a) is written as

$$\Upsilon (t) = \int\limits_{t_{0}^{ + }}^t {{{e}^{{\int\limits_t^\tau {kd\tau } }}}\mathcal{M}(\tau )\theta (\tau )d\tau } + \int\limits_{t_{0}^{ + }}^t {{{e}^{{\int\limits_t^\tau {kd\tau } }}}d(\tau ,\,\,T)d\tau } \pm \Omega (t)\theta (t) = \Omega (t)\theta (t) + w(t),$$
(A.20)

where

$$w(t) = \Upsilon (t) - \Omega (t)\theta (t).$$

Equation (A.20) completes the proof of the fact that equation (4.8) can be obtained via procedure (4.1)–(4.7).

In order to prove statement (a), the regressor Ω(t) is represented as:

$$\begin{gathered} \Omega (t) = {{\Omega }_{1}}(t) + {{\Omega }_{2}}(t), \hfill \\ {{{\dot {\Omega }}}_{1}}(t) = - k({{\Omega }_{1}}(t) - \Delta (t){{\vartheta }_{b}}(t)),\;\;{{\Omega }_{1}}\left( {t_{0}^{ + }} \right) = 0, \hfill \\ {{{\dot {\Omega }}}_{2}}(t) = - k\left( {{{\Omega }_{2}}(t) - \bar {\varepsilon }_{1}^{{\text{T}}}(t,\,\,T){{\mathfrak{L}}_{b}}} \right),\;\;{{\Omega }_{2}}\left( {t_{0}^{ + }} \right) = 0. \hfill \\ \end{gathered} $$
(A.21)

Since k > 0 and the perturbation \({{\bar {\varepsilon }}_{1}}\)(t, T) is bounded, Ω2(t) is bounded; moreover, for all t \( \geqslant \) \(t_{0}^{ + }\) the following holds

$$\left| {{{\Omega }_{2}}(t)} \right|\;\leqslant \;{{\Omega }_{{2\max }}}(T),$$
(A.22)

and there exists a limit \({{\lim }_{{T \to 0}}}{{\Omega }_{{2\max }}}(T)\) = 0 for the upper bound as, following (A.15)–(A.19), the value of \({{\bar {\varepsilon }}_{1}}\)(t, T) can be arbitrarily reduced by reduction of T.

The next aim is to analyze Ω1(t). The solution of the first differential equation from (A.21) is written for all t\(\left[ {\left. {t_{i}^{ + } + {{T}_{s}},\,\,t_{{i + 1}}^{ + }} \right)} \right.\) as

$${{\Omega }_{1}}(t) = \phi \left( {t,t_{i}^{ + } + {{T}_{s}}} \right){{\Omega }_{1}}\left( {t_{i}^{ + } + {{T}_{s}}} \right) + \int\limits_{t_{i}^{ + } + {{T}_{s}}}^t {\phi (t,\tau )\Delta (\tau ){{\vartheta }_{b}}(\tau )d\tau ,} $$
(A.23)

where ϕ(t, τ) = \({{e}^{{ - \int\limits_\tau ^t {kd\tau } }}}\).

The upper bound is required for the signal Ω1(t) over the time range under consideration. To this end, we need bounds for Δ(t) and, in turn, for φ(t).

Since, according to the premises of the proposition, \({{\bar {\varphi }}_{n}}\) ∈ PE with Ts < T, it follows that \({{\bar {\varphi }}_{n}}\) ∈ FE over \(\left[ {t_{i}^{ + },\,\,t_{i}^{ + } + {{T}_{s}}} \right]\) (this fact can be verified by substituting t = \(t_{i}^{ + }\) into (1.2)). Then for all t ∈ \(\left[ {\left. {t_{i}^{ + } + {{T}_{s}},\,\,t_{{i + 1}}^{ + }} \right)} \right.\) the following lower bound holds for the regressor φ(t)

$$\begin{gathered} \varphi (t) = \int\limits_{t_{i}^{ + }}^t {{{e}^{{ - \sigma \left( {\tau - t_{i}^{ + }} \right)}}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varphi }_{n}^{{\text{T}}}(\tau )d\tau } \\ \geqslant \;\int\limits_{t_{i}^{ + }}^{t_{i}^{ + } + {{T}_{s}}} {{{e}^{{ - \sigma \left( {\tau - t_{i}^{ + }} \right)}}}{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varphi }_{n}^{{\text{T}}}(\tau )d\tau } \\ \geqslant \;{{e}^{{ - \sigma \left( {t_{{i + 1}}^{ + } - t_{i}^{ + }} \right)}}}\int\limits_{t_{i}^{ + }}^{t_{i}^{ + } + {{T}_{s}}} {{{{\bar {\varphi }}}_{n}}(\tau )\bar {\varphi }_{n}^{{\text{T}}}(\tau )d\tau } \; \geqslant \;\alpha {{e}^{{ - \sigma \left( {t_{{i + 1}}^{ + } - t_{i}^{ + }} \right)}}}{{I}_{{n + 1}}}. \\ \end{gathered} $$
(A.24)

On the other hand, since \({{\left\| {{{{\bar {\varphi }}}_{n}}(t)} \right\|}^{2}}\;\leqslant \;\bar {\varphi }_{n}^{{\max }}\), there exists an upper bound

$$\varphi (t)\;\leqslant \;\bar {\varphi }_{n}^{{\max }}\int\limits_{t_{i}^{ + }}^t {{{e}^{{ - \sigma \left( {\tau - t_{i}^{ + }} \right)}}}d\tau } \;\leqslant \;\bar {\varphi }_{n}^{{\max }}\frac{{1 - {{e}^{{ - \sigma \left( {t - t_{i}^{ + }} \right)}}}}}{\sigma }\;\leqslant \;{{\sigma }^{{ - 1}}}\bar {\varphi }_{n}^{{\max }},$$
(A.25)

and, therefore, for all t\(\left[ {\left. {t_{i}^{ + } + {{T}_{s}},\,\,t_{{i + 1}}^{ + }} \right)} \right.\) it holds that ΔUB \( \geqslant \) Δ(t) \( \geqslant \) ΔLB > 0.

Taking into consideration that, following Assumptions 1 and 2, bmax \( \geqslant \) |b(t)| \( \geqslant \) bmin > 0 and ϑb(t) is the first-order approximation of b(t), the following holds for the product Δ(t)ϑb(t)

$$\forall t \in \left[ {\left. {t_{i}^{ + } + {{T}_{s}},\,\,t_{{i + 1}}^{ + }} \right)\;\;{{\Delta }_{{{\text{UB}}}}}{{b}_{{\max }}}\; \geqslant \;\left| {\Delta (t){{\vartheta }_{b}}(t)} \right|\; \geqslant \;{{\Delta }_{{{\text{LB}}}}}{{b}_{{\min }}} > 0.} \right.$$
(A.26)

Applying (A.21) and (A.26) and considering that 0 \(\leqslant \) ϕ(t, τ) \(\leqslant \) 1, we obtain the following estimates for Ω1(t)

$$\begin{gathered} \forall t \in \left[ {t_{0}^{ + },\,\,t_{0}^{ + } + {{T}_{s}}} \right]\;\;{{\Omega }_{1}}(t) \equiv 0, \\ \forall i\; \geqslant \;1\;\;\forall t \in \left[ {t_{i}^{ + } + {{T}_{s}},\,\,t_{{i + 1}}^{ + }} \right]\;\;{{\Omega }_{1}}\left( {t_{i}^{ + } + {{T}_{s}}} \right) + \left( {t_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{s}}} \right){{\Delta }_{{{\text{UB}}}}}{{b}_{{\max }}}\; \geqslant \;{{\Omega }_{1}}(t) \\ \geqslant \;\phi \left( {t_{{i + 1}}^{ + },\,\,t_{i}^{ + } + {{T}_{s}}} \right)\left( {{{\Omega }_{1}}\left( {t_{i}^{ + } + {{T}_{s}}} \right) + \left( {t_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{s}}} \right){{\Delta }_{{{\text{LB}}}}}{{b}_{{\min }}}} \right) > 0, \\ \end{gathered} $$
(A.27)

from which we have

$$\begin{gathered} \forall t\; \geqslant \;{{t}_{0}} + {{T}_{s}}\;\;{{\Omega }_{{1\max }}}\; \geqslant \;{{\Omega }_{1}}(t)\; \geqslant \;{{\Omega }_{{1\min }}} > 0, \\ {{\Omega }_{{1\max }}} = \mathop {\max }\limits_{\forall i\, \geqslant \,1} \left\{ {{{\Omega }_{1}}\left( {t_{i}^{ + } + {{T}_{s}}} \right) + \left( {t_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{s}}} \right){{\Delta }_{{{\text{UB}}}}}{{b}_{{\max }}}} \right\}, \\ {{\Omega }_{{1\min }}} = \mathop {\min }\limits_{\forall i\, \geqslant \,1} \left\{ {\phi \left( {t_{{i + 1}}^{ + },\,\,t_{i}^{ + } + {{T}_{s}}} \right)\left( {{{\Omega }_{1}}\left( {t_{i}^{ + } + {{T}_{s}}} \right) + \left( {t_{{i + 1}}^{ + } - t_{i}^{ + } - {{T}_{s}}} \right){{\Delta }_{{{\text{LB}}}}}{{b}_{{\min }}}} \right)} \right\}. \\ \end{gathered} $$
(A.28)

Then, using (A.28) and (A.22), the bounds for the regressor Ω(t) are written

$$\forall t\; \geqslant \;{{t}_{0}} + {{T}_{s}}\;\,{{\Omega }_{{1\max }}} + {{\Omega }_{{2\max }}}(T)\; \geqslant \;\left| {\Omega (t)} \right|\; \geqslant \;{{\Omega }_{{1\min }}} - {{\Omega }_{{2\max }}}(T),$$
(A.29)

and, therefore, considering \({{\lim }_{{T \to 0}}}{{\Omega }_{{2\max }}}(T)\) = 0, there exists Tmin > 0 such that for all 0 < T < Tmin and t \( \geqslant \) t0 + Ts the following inequality holds

$${{\Omega }_{{{\text{UB}}}}}\; \geqslant \;\Omega (t)\; \geqslant \;{{\Omega }_{{{\text{LB}}}}} > 0,$$
(A.30)

which was to be proved in statement (a).

In order to prove statement (b), the disturbance w(t) is differentiated using (A.20) and (4.7)

$$\begin{aligned} \dot {w}(t) = \dot {\Upsilon }(t) - \dot {\Omega }(t)\theta (t) - \Omega (t)\dot {\theta }(t) \\ = - k(\Upsilon (t) - \mathcal{Y}(t)) + k(\Omega (t) - \mathcal{M}(t))\theta (t) - \Omega (t)\dot {\theta }(t) \\ = - k(\Upsilon (t) - \mathcal{M}(t)\theta (t) - d(t,T)) + k(\Omega (t) - \mathcal{M}(t))\theta (t) - \Omega (t)\dot {\theta }(t) \\ = - k(\Upsilon (t) - \Omega (t)\theta (t)) - \Omega (t)\dot {\theta }(t) + kd(t,\,\,T) \\ = - kw(t) - \Omega (t)\dot {\theta }(t) + kd(t,\,\,T),\;\;w\left( {t_{0}^{ + }} \right) = {{0}_{{n + 1}}}. \\ \end{aligned} $$
(A.31)

The solution of (A.31) is represented as:

$$\begin{gathered} w(t) = {{w}_{1}}(t) + {{w}_{2}}(t), \hfill \\ {{{\dot {w}}}_{1}}(t) = - k{{w}_{1}}(t) - \Omega (t)\dot {\theta }(t),\;\;{{w}_{1}}\left( {t_{0}^{ + }} \right) = {{0}_{{n + 1}}}, \hfill \\ {{{\dot {w}}}_{2}}(t) = - k{{w}_{2}}(t) + kd(t,T),\;\;{{w}_{2}}\left( {t_{0}^{ + }} \right) = {{0}_{{n + 1}}}. \hfill \\ \end{gathered} $$
(A.32)

As for the first differential equation from (A.32), in Proposition 2 from [19] it is proved (up to notation) that the following inequality holds

$$\left\| {{{w}_{1}}(t)} \right\|\;\leqslant \;{{w}_{{1\max }}}\phi \left( {t,\,\,t_{0}^{ + } + {{T}_{s}}} \right),$$
(A.33)

when i \(\leqslant \) imax < ∞.

Since k > 0 and the disturbance d(t, T) is bounded, w2(t) is also bounded and, consequently, the following inequality holds

$$\left\| {{{w}_{2}}(t)} \right\|\;\leqslant \;{{w}_{{2\max }}}(T),$$
(A.34)

where the limit \({{\lim }_{{T \to 0}}}{{w}_{{2\max }}}(T)\) = 0 holds, as the input of the second differential equation from (A.32) depends only on the value of d(t, T), which, in turn, according to (A.15)–(A.19), can be reduced arbitrarily by reducing T. The combination of the inequalities (A.33) and (A.34) in accordance with (A.32) completes the proof of the proposition.

Proof of Theorem 1. The proof of the theorem is similar to the above-given proof of Proposition 1.

Step 1. For all t \( \geqslant \) \(t_{0}^{ + }\) + Ts the solution of the differential equation (4.9) is written as

$$\begin{gathered} \tilde {\theta }(t) = \phi \left( {t,\,\,t_{0}^{ + } + {{T}_{s}}} \right)\tilde {\theta }\left( {t_{0}^{ + } + {{T}_{s}}} \right) + \int\limits_{t_{0}^{ + } + {{T}_{s}}}^t {\phi (t,\tau )\frac{{{{\gamma }_{1}}w(\tau )}}{{\Omega (\tau )}}d\tau } \\ - \;\int\limits_{t_{0}^{ + } + {{T}_{s}}}^t {\phi (t,\tau )} \sum\limits_{q = 1}^i {\Delta _{q}^{\theta }\delta \left( {\tau - t_{q}^{ + }} \right)d\tau ,} \\ \end{gathered} $$
(A.35)

where ϕ(t, τ) = \({{e}^{{ - \int\limits_\tau ^t {{{\gamma }_{1}}d\tau } }}}\).

Then, following the proof of Theorem 1 from [19], if i \(\leqslant \) imax < ∞, the boundedness of the parametric error (A.35) can be shown:

$$\begin{gathered} \left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;{{\beta }_{{\max }}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}} + \frac{{{{\gamma }_{1}}{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}\int\limits_{t_{0}^{ + } + {{T}_{s}}}^t {\phi (t,\tau )\phi \left( {\tau ,t_{0}^{ + } + {{T}_{s}}} \right)d\tau } \\ + \;\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}\int\limits_{t_{0}^{ + } + {{T}_{s}}}^t {\phi (t,\tau )d\tau } \;\leqslant \;\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}} + \frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}. \\ \end{gathered} $$
(A.36)
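For intuition, the bound (A.36) can be illustrated with a minimal numerical sketch. The identification law used below, d\(\hat {\theta }\)/dt = −(γ1/Ω(t))(Ω(t)\(\hat {\theta }\)(t) − Υ(t)), is an assumption inferred from the error dynamics behind (A.35) and the regression (4.8); it is not reproduced from (4.9), and all numerical values are illustrative.

    import numpy as np

    gamma1, dt = 5.0, 1e-3
    t = np.arange(0.0, 4.0, dt)
    theta = 1.0 + 0.2 * (t > 2.0)            # scalar parameter with one jump at t = 2 (assumed)
    Omega = 0.8 + 0.1 * np.sin(3 * t)        # regressor bounded away from zero, cf. (A.30)
    w = 0.01 * np.cos(5 * t)                 # small residual disturbance w(t) (assumed)
    Upsilon = Omega * theta + w              # regressand as in (4.8)/(A.20)

    theta_hat = np.zeros_like(t)
    for k in range(len(t) - 1):              # explicit Euler integration of the assumed law
        dtheta_hat = -(gamma1 / Omega[k]) * (Omega[k] * theta_hat[k] - Upsilon[k])
        theta_hat[k + 1] = theta_hat[k] + dt * dtheta_hat

    # The error |theta_hat - theta| decays exponentially between jumps and settles in a
    # neighbourhood whose size scales with max|w| / Omega_LB, in line with (A.36).
    print(np.abs(theta_hat - theta).max(), np.abs(theta_hat - theta)[-1])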

Step 2. The following quadratic form is introduced for all t \( \geqslant \) \(t_{0}^{ + }\) + Ts:

$$\begin{gathered} {{V}_{{{{e}_{{{\text{ref}}}}}}}} = e_{{{\text{ref}}}}^{{\text{T}}}P{{e}_{{{\text{ref}}}}} + \frac{{4a_{0}^{2}}}{{{{\gamma }_{1}}}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}},\;\;\,H = {\text{blockdiag}}\left\{ {P,\,\,\frac{{4a_{0}^{2}}}{{{{\gamma }_{1}}}}} \right\}, \\ \underbrace {{{\lambda }_{{\min }}}(H)}_{{{\lambda }_{{\text{m}}}}}{{\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|}^{2}}\;\leqslant \;V\left( {\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|} \right)\;\leqslant \;\underbrace {{{\lambda }_{{\max }}}(H)}_{{{\lambda }_{M}}}{{\left\| {{{{\bar {e}}}_{{{\text{ref}}}}}} \right\|}^{2}}, \\ {{{\bar {e}}}_{{{\text{ref}}}}}(t) = {{\left[ {\begin{array}{*{20}{c}} {e_{{{\text{ref}}}}^{{\text{T}}}(t)}&{{{e}^{{ - \frac{{{{\gamma }_{1}}}}{4}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}}} \end{array}} \right]}^{{\text{T}}}}. \\ \end{gathered} $$
(A.37)

Similar to the proof of Proposition 1, the derivative of (A.37) is written as

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{ref}}}}}}\;\leqslant \;\left[ { - {{\lambda }_{{\min }}}(Q) + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}\left( {\left\| {\tilde {\theta }} \right\| + {{{\dot {\mathcal{K}}}}_{{\max }}}T} \right) + 2} \right]{{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}} \\ + \;\lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left\| {\tilde {\theta }} \right\|}^{2}} + \lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}} - 2a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}}. \\ \end{gathered} $$
(A.38)

As for all t \( \geqslant \) \(t_{0}^{ + }\) + Ts the parametric error \(\tilde {\theta }\)(t) meets the inequality (A.36), then, considering

$${{\left\| {\tilde {\theta }(t)} \right\|}^{2}}\;\leqslant \;{{\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}}{{e}^{{ - {{\gamma }_{1}}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}} + {{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}}$$
$$ + \;2\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}}$$
$$\leqslant \;\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)\left( {{{\beta }_{{\max }}} + \frac{{2({{w}_{{1\max }}} + {{\gamma }_{1}}{{w}_{{2\max }}}(T))}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}} + {{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}}$$
$$ = {{\bar {\beta }}_{{\max }}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{0}}} \right)}}} + {{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}},$$

the upper bound of (A.38) is written as follows:

$$\begin{gathered} {{{\dot {V}}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \;\left[ { - {{\lambda }_{{\min }}}(Q) + 2 + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}^{{^{{^{{^{{}}}}}}}}} \right. \\ \left. { \times \;\left( {\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}} + \frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}} + {{{\dot {\mathcal{K}}}}_{{\max }}}T} \right)} \right]{{\left\| {{{e}_{{{\text{ref}}}}}} \right\|}^{2}} \\ + \;\lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{{\bar {\beta }}}_{{\max }}}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}} \\ + \;\lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}} - 2a_{0}^{2}{{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {t - t_{0}^{ + } - {{T}_{s}}} \right)}}}. \\ \end{gathered} $$
(A.39)

There definitely exists a time instant \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\; \geqslant \;t_{0}^{ + } + {{T}_{s}}\), a sufficiently small T > 0, and a constant a0 > λmax(P)\({{\bar {\omega }}_{r}}{{b}_{{\max }}}\bar {\beta }_{{\max }}^{{\frac{1}{2}}}\) such that for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) it holds that

$$\begin{aligned} - {{\lambda }_{{\min }}}(Q) + 2 + 2{{\lambda }_{{\max }}}(P){{b}_{{\max }}}\left( {\left( {{{\beta }_{{\max }}} + \frac{{2{{w}_{{1\max }}}}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right){{e}^{{ - \frac{{{{\gamma }_{1}}}}{2}\left( {{{t}_{{{{e}_{{ref}}}}}} - t_{0}^{ + } - {{T}_{s}}} \right)}}}} \right. \\ \left. { + \;\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}} + {{{\dot {\mathcal{K}}}}_{{\max }}}T} \right) = - {{c}_{1}} < 0, \\ \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{{\bar {\beta }}}_{{\max }}} - 2a_{0}^{2} = - {{c}_{2}} < 0.\quad \; \\ \end{aligned} $$
(A.40)

Then the upper bound for the derivative (A.39) for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) is obtained as

$${{\dot {V}}_{{{{e}_{{{\text{ref}}}}}}}}\;\leqslant \; - {\kern 1pt} {{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}^{2}} + \lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}},$$
(A.41)

where \({{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}} = \min \left\{ {\frac{{{{c}_{1}}}}{{{{\lambda }_{{\max }}}(P)}},\,\,\frac{{{{c}_{2}}{{\gamma }_{1}}}}{{4a_{0}^{2}}}} \right\}\).

The solution of the differential inequality (A.41) for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\) is written as

$$\begin{gathered} {{V}_{{{{e}_{{{\text{ref}}}}}}}}(t)\;\leqslant \;{{e}^{{ - {{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}\left( {t - {{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right)}}}{{V}_{{{{e}_{{{\text{ref}}}}}}}}\left( {{{t}_{{{{e}_{{{\text{ref}}}}}}}}} \right) \\ + \;\frac{1}{{{{\eta }_{{{{{\bar {e}}}_{{{\text{ref}}}}}}}}}}\left( {\lambda _{{\max }}^{2}(P)\bar {\omega }_{r}^{2}b_{{\max }}^{2}{{{\left( {\frac{{{{\gamma }_{1}}{{w}_{{2\max }}}(T)}}{{{{\Omega }_{{{\text{LB}}}}}}}} \right)}}^{2}} + \lambda _{{\max }}^{2}(P)b_{{\max }}^{2}\bar {\omega }_{r}^{2}\dot {\mathcal{K}}_{{\max }}^{2}{{T}^{2}}} \right), \\ \end{gathered} $$
(A.42)

which completes the proof of statement (ii) of the theorem.

Step 3. Owing to (A.36) and (A.42), the error \(\tilde {\theta }\)(t) is bounded for all t \( \geqslant \) \(t_{0}^{ + }\) + Ts, and the error eref(t) is bounded for all t \( \geqslant \) \({{t}_{{{{e}_{{{\text{ref}}}}}}}}\). Then, to prove statement (i), we need to show that \(\tilde {\theta }\)(t) is bounded over \(\left[ {\left. {t_{0}^{ + },\,\,t_{0}^{ + } + {{T}_{s}}} \right)} \right.\) and eref(t) is bounded over \(\left[ {\left. {t_{0}^{ + },\,\,t_{{{{e}_{{{\text{ref}}}}}}}^{{}}} \right)} \right.\).

In the conservative case, the inequality Ω(t) \(\leqslant \) ΩLB is satisfied over \(\left[ {\left. {t_{0}^{ + },\,\,t_{0}^{ + } + {{T}_{s}}} \right)} \right.\), whence, owing to \(\dot {\tilde {\theta }}\)(t) = 0n + 1, if Assumption 1 is met, it follows that the parametric error \(\tilde {\theta }\)(t) = \(\hat {\theta }\left( {t_{0}^{ + }} \right)\) – θ(t) is bounded over \(\left[ {\left. {t_{0}^{ + },\,\,t_{0}^{ + } + {{T}_{s}}} \right)} \right.\) and, as a consequence, for all t \( \geqslant \) \(t_{0}^{ + }\).

Considering the time range \(\left[ {\left. {t_{0}^{ + },\,\,t_{{{{e}_{{{\text{ref}}}}}}}^{{}}} \right)} \right.\) and taking into account the notation from (A.3), (A.18), the error equation (3.1) is written in the following form:

$${{\dot {e}}_{{{\text{ref}}}}}(t) = \left( {{{A}_{{{\text{ref}}}}} + {{e}_{n}}b(t)\left( {{{{\hat {\theta }}}_{x}}(t) - {{k}_{x}}(t)} \right)} \right){{e}_{{{\text{ref}}}}}(t) + {{e}_{n}}b(t)\left( {{{{\hat {\theta }}}^{{\text{T}}}}(t) - {{\mathcal{K}}^{{\text{T}}}}(t)} \right){{\omega }_{r}}(t),$$

which, since it has been proved that \(\tilde {\theta }\)(t) is bounded for all t \( \geqslant \) \(t_{0}^{ + }\) and Assumptions 1 and 2 are met, allows one, using Theorem 3.2 from [20], to conclude that (1) eref(t) is bounded over \(\left[ {\left. {t_{0}^{ + },\,\,t_{{{{e}_{{{\text{ref}}}}}}}^{{}}} \right)} \right.\), and (2) ξ(t) ∈ L∞ for all t \( \geqslant \) \(t_{0}^{ + }\).


Cite this article

Glushchenko, A., Lastochkin, K. Approximation-Based Approach to Adaptive Control of Linear Time-Varying Systems. Autom Remote Control 85, 443–460 (2024). https://doi.org/10.1134/S0005117924050047
