
Adaptive Observer of State and Disturbances for Linear Overparameterized Systems

  • ROBUST, ADAPTIVE, AND NETWORK CONTROL

Abstract

The problem of state reconstruction is considered for a class of linear systems with time-invariant unknown parameters and overparameterization that are affected by external perturbations generated by a known exosystem with unknown initial conditions. An extended adaptive observer is proposed, which, in contrast to existing approaches, solves state and perturbation adaptive estimation problems for systems that are not represented in the observer canonical form. The obtained theoretical results are validated via mathematical modeling.


Notes

  1. For the sake of brevity, hereafter the arguments θ and t are omitted except where they are necessary for understanding.

  2. Without loss of generality, the exponentially decaying term \(\epsilon(t)\) is omitted hereafter.

  3. A measurable regression equation is one whose regressor and regressand are measurable or can be computed, while its parameters are unknown.

REFERENCES

  1. Khlebnikov, M.V., Polyak, B.T., and Kuntsevich, V.M., Optimization of linear systems subject to bounded exogenous disturbances: The invariant ellipsoid technique, Autom. Remote Control, 2011, vol. 72, no. 11, pp. 2227–2275.

  2. Khalil, H.K. and Praly, L., High-gain observers in nonlinear feedback control, Int. J. Robust Nonlinear Control, 2014, vol. 24, no. 6, pp. 993–1015.

  3. Krasnova, S.A. and Utkin, V.A., Cascade Design of State Observers for Dynamical Systems, Moscow: Nauka, 2006 [in Russian].

  4. Shtessel, Y., Edwards, C., Fridman, L., and Levant, A., Sliding Mode Control and Observation, New York: Springer, 2014.

  5. Ioannou, P. and Sun, J., Robust Adaptive Control, New York: Dover, 2013.

  6. Narendra, K.S. and Annaswamy, A.M., Stable Adaptive Systems, Courier Corporation, 2012.

  7. Carroll, R. and Lindorff, D., An adaptive observer for single-input single-output linear systems, IEEE Trans. Automat. Control, 1973, vol. 18, no. 5, pp. 428–435.

  8. Kudva, P. and Narendra, K.S., Synthesis of an adaptive observer using Lyapunov’s direct method, Int. J. Control, 1973, vol. 18, no. 6, pp. 1201–1210.

  9. Luders, G. and Narendra, K.S., Stable adaptive schemes for state estimation and identification of linear systems, IEEE Trans. Automat. Control, 1974, vol. 19, no. 6, pp. 841–847.

  10. Kreisselmeier, G., Adaptive observers with exponential rate of convergence, IEEE Trans. Automat. Control, 1977, vol. 22, no. 1, pp. 2–8.

  11. Katiyar, A., Roy, S.B., and Bhasin, S., Initial Excitation Based Robust Adaptive Observer for MIMO LTI Systems, IEEE Trans. Automat. Control, 2022.

  12. Bobtsov, A., Pyrkin, A., Vedyakov, A., Vediakova, A., and Aranovskiy, S., A Modification of Generalized Parameter-Based Adaptive Observer for Linear Systems with Relaxed Excitation Conditions, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 324–329.

  13. Glushchenko, A. and Lastochkin, K., Exponentially Stable Adaptive Observation for Systems Parameterized by Unknown Physical Parameters, arXiv preprint: arXiv:2212.08405, 2022, pp. 1–6.

  14. Aranovskiy, S., Ushirobira, R., Korotina, M., and Vedyakov, A., On preserving-excitation properties of Kreisselmeier's regressor extension scheme, IEEE Trans. Automat. Control, 2022, pp. 1–6.

  15. Bhattacharyya, S.P. and De Souza, E., Pole assignment via Sylvester’s equation, Syst. Control Lett., 1982, vol. 1, no. 4, pp. 261–263.

  16. Dudarenko, N.A., Slita, O.V., and Ushakov, A.V., Algebraic conditions of generalized modal control, IFAC Proc. Volumes, 2012, vol. 45, no. 13, pp. 150–155.

  17. Poznyak, A.S., Advanced Mathematical Tools for Automatic Control Engineers, Elsevier Science, 2009.

  18. Glushchenko, A.I., Lastochkin, K.A., and Petrov, V.A., Exponentially stable adaptive control. Part I. Time-invariant plants, Autom. Remote Control, 2022, vol. 83, no. 4, pp. 548–578.

  19. Ortega, R., Nikiforov, V., and Gerasimov, D., On modified parameter estimators for identification and adaptive control. A unified framework and some new schemes, Ann. Rev. Control, 2020, vol. 50, pp. 278–293.

  20. Katiyar, A., Basu Roy, S., and Bhasin, S., Finite excitation based robust adaptive observer for MIMO LTI systems, Int. J. Adaptive Control Signal Proc., 2022, vol. 36, no. 2, pp. 180–197.

  21. Glushchenko, A. and Lastochkin, K., Extended Adaptive Observer for Linear Systems with Overparametrization, Proceedings of 2023 31st Mediterranean Conference on Control and Automation (MED), Limassol: IEEE, 2023, pp. 789–794.

  22. Polyak, B.T. and Smirnov, G., Large deviations for non-zero initial conditions in linear systems, Automatica, 2016, vol. 74, pp. 297–307.

  23. Glushchenko, A. and Lastochkin, K., Parameter Estimation-Based Observer for Linear Systems with Polynomial Overparametrization, Proceedings of 2023 31st Mediterranean Conference on Control and Automation (MED), Limassol: IEEE, 2023, pp. 795–799.

  24. Nikiforov, V.O., Observers of external deterministic disturbances. II. Objects with unknown parameters, Autom. Remote Control, 2004, vol. 65, no. 11, pp. 1724–1732.

Funding

This research was financially supported in part by the Grants Council of the President of the Russian Federation (project MD-1787.2022.4).

Author information

Corresponding authors

Correspondence to A. I. Glushchenko or K. A. Lastochkin.

Additional information

This paper was recommended for publication by A.A. Bobtsov, a member of the Editorial Board


APPENDIX

Proof of Lemma 1. The parameterization (3.3) is obtained by combining the results from [12, 24] with the dynamic regressor extension and mixing procedure from [14, 19]. The proof of Lemma 1 is derived on the basis of Lemma 1 and Theorem 2 from [24]. To make the adopted notation easier to follow and keep the results of the paper self-contained, we next present the proof of this lemma in accordance with the one in [24]. In contrast to the results of [24], in this paper β is known owing to Assumption 2, which makes it unnecessary to avoid the overparameterization in (3.3) (see (A.23)).

Step 1. The following error is considered:

$$\tilde {\xi }(t) = \xi (t) - z(t) - \Omega (t){{\psi }_{a}}(\theta ) - P(t){{\psi }_{b}}(\theta ).$$
(A.1)

The time derivative of (A.1) is written:

$$\begin{gathered} \dot {\tilde {\xi }}(t) = {{A}_{0}}\xi (t) + {{\psi }_{a}}(\theta )y(t) + {{\psi }_{b}}(\theta )u(t) + {{\psi }_{d}}(\theta )\delta (t) - {{A}_{K}}z(t) \\ - \;Ky(t) - ({{A}_{K}}\Omega (t) + {{I}_{n}}y(t)){{\psi }_{a}}(\theta ) - ({{A}_{K}}P(t) + {{I}_{n}}u(t)){{\psi }_{b}}(\theta ) \\ = {{A}_{0}}\xi (t) - {{A}_{K}}z(t) - Ky(t) - {{A}_{K}}\Omega (t){{\psi }_{a}}(\theta ) - {{A}_{K}}P(t){{\psi }_{b}}(\theta ) + {{\psi }_{d}}(\theta )\delta (t) \\ = {{A}_{K}}\tilde {\xi }(t) + {{\psi }_{d}}(\theta )\delta (t). \\ \end{gathered} $$
(A.2)

The solution of Eq. (A.2) is obtained as

$$\tilde {\xi }(t) = {{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}) + \bar {\delta }(t),$$
(A.3)

where the external perturbation \(\bar {\delta }(t)\) is described by the set of equations

$$\left\{ \begin{gathered} \dot {\bar {\delta }}(t) = {{A}_{K}}\bar {\delta }(t) + {{\psi }_{d}}(\theta )\delta (t) \hfill \\ {{{v}}_{f}}(t) = C_{0}^{{\text{T}}}\bar {\delta }(t). \hfill \\ \end{gathered} \right.$$
(A.4)

Having substituted (A.3) into (A.1), it is written:

$$\begin{gathered} {{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}) + \bar {\delta }(t) = \xi (t) - z(t) - \Omega (t){{\psi }_{a}}(\theta ) - P(t){{\psi }_{b}}(\theta ), \\ \Updownarrow \\ \xi (t) = {{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}) + \bar {\delta }(t) + z(t) + \Omega (t){{\psi }_{a}}(\theta ) + P(t){{\psi }_{b}}(\theta ). \\ \end{gathered} $$
(A.5)

Equation (A.5) is multiplied by \(C_{0}^{{\text{T}}}\) to obtain:

$$y(t) = C_{0}^{{\text{T}}}\xi (t) = C_{0}^{{\text{T}}}z(t) + C_{0}^{{\text{T}}}\Omega (t){{\psi }_{a}}(\theta ) + C_{0}^{{\text{T}}}P(t){{\psi }_{b}}(\theta ) + {{{v}}_{f}}(t) + C_{0}^{{\text{T}}}{{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}).$$
(A.6)

Considering (A.6), the function \(\bar{q}(t) = y(t) - C_0^{\text{T}}z(t)\) is differentiated:

$$\dot {\bar {q}}(t) = C_{0}^{{\text{T}}}\dot {\Omega }(t){{\psi }_{a}}(\theta ) + C_{0}^{{\text{T}}}\dot {P}(t){{\psi }_{b}}(\theta ) + {{{\dot {v}}}_{f}}(t) + C_{0}^{{\text{T}}}{{A}_{K}}{{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}).$$
(A.7)

Step 2. The next aim is to parametrize the term \({{{\dot {v}}}_{f}}(t)\) of Eq. (A.7) as a linear regression equation with a measurable regressor. For this purpose, the system (A.4) is rewritten as a transfer function:

$${{{v}}_{f}}(t) = C_{0}^{{\text{T}}}{{(s{{I}_{n}} - {{A}_{K}})}^{{ - 1}}}{{\psi }_{d}}(\theta )\delta (t) = {{W}_{f}}[\delta (t)].$$
(A.8)
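
As a quick numerical illustration of (A.8), the filter \(W_f\) can be realized as a state-space system driven by δ(t) and simulated directly. The matrices \(A_K\), \(\psi_d\), \(C_0\) and the sinusoidal δ(t) below are illustrative assumptions rather than values from the paper; a minimal sketch in Python:

```python
# Minimal sketch of (A.8): v_f(t) = C0^T (sI - A_K)^{-1} psi_d [delta(t)].
# A_K, psi_d, C0 and delta(t) are illustrative assumptions, not paper values.
import numpy as np
from scipy import signal

A_K = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz: eigenvalues {-1, -2}
psi_d = np.array([[0.0], [1.0]])            # disturbance input direction
C0 = np.array([[1.0], [0.0]])               # output direction

t = np.linspace(0.0, 10.0, 1000)
delta = np.sin(2.0 * t)                     # exosystem output of known form

# v_f is the zero-initial-state response of the LTI filter W_f to delta,
# i.e., the state-space realization (A.4) driven by delta
_, v_f, _ = signal.lsim((A_K, psi_d, C0.T, np.zeros((1, 1))), U=delta, T=t)
```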

The derivative of the perturbation δ(t) is represented as:

$$\dot {\delta }(t) = h_{\delta }^{{\text{T}}}{{\mathcal{A}}_{\delta }}{{x}_{\delta }}(t) + \delta ({{t}_{0}}){{D}_{\delta }}(t),$$
(A.9)

where Dδ(t) is a Dirac delta function.

A virtual signal \(\delta_d(t) = h_\delta^{\text{T}}\mathcal{A}_\delta x_\delta(t)\) is introduced. Then the following equalities hold

$$\begin{gathered} {{{\dot {x}}}_{\delta }}(t) = {{\mathcal{A}}_{\delta }}{{x}_{\delta }}(t), \hfill \\ {{\delta }_{d}}(t) = \bar {h}_{\delta }^{{\text{T}}}{{x}_{\delta }}(t),\quad \bar {h}_{\delta }^{{\text{T}}} = h_{\delta }^{{\text{T}}}{{\mathcal{A}}_{\delta }}. \hfill \\ \end{gathered} $$
(A.10)
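
For a concrete instance of (A.9), (A.10), consider δ(t) = sin(ωt) generated by a harmonic exosystem; the realization below is an assumption chosen for illustration. The sketch verifies that the virtual signal \(\delta_d(t) = \bar{h}_\delta^{\text{T}}x_\delta(t)\) reproduces \(\dot{\delta}(t)\) for t > t0, where the Dirac term no longer acts:

```python
# Sketch of the exosystem (A.10) for delta(t) = sin(w t); the harmonic
# realization below is illustrative, not taken from the paper.
import numpy as np
from scipy.linalg import expm

w = 2.0
A_delta = np.array([[0.0, w], [-w, 0.0]])  # exosystem matrix, sigma = {+-jw}
h_delta = np.array([[1.0], [0.0]])         # delta = h^T x_delta
x0 = np.array([[0.0], [1.0]])              # "unknown" initial condition

h_bar = A_delta.T @ h_delta                # h_bar^T = h^T A_delta

t = 1.3
x_t = expm(A_delta * t) @ x0               # x_delta(t), so delta(t) = sin(w t)
# For t > t0 the virtual signal delta_d = h_bar^T x_delta equals d(delta)/dt:
assert np.isclose(float(h_bar.T @ x_t), w * np.cos(w * t))
```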

Equation (A.8) is differentiated, and then (A.9), (A.10) are substituted into the obtained result to write:

$$\dot{v}_f(t) = sW_f[\delta(t)] = W_f[\dot{\delta}(t)] = W_f\left[h_\delta^{\text{T}}\mathcal{A}_\delta x_\delta(t) + \delta(t_0)D_\delta(t)\right] = \underbrace{W_f[\delta_d(t)]}_{v_f(t)} + W_f[\delta(t_0)D_\delta(t)].$$
(A.11)

Thus, owing to the fact that the matrix \(A_K\) is Hurwitz, it is sufficient to parameterize \(v_f(t)\) in order to parameterize \(\dot{v}_f(t)\). For this purpose, an auxiliary signal \(\zeta(t) = M_\delta x_\delta(t)\) is considered, where the transformation matrix \(M_\delta\) is a solution of the Sylvester equation

$${{M}_{\delta }}{{\mathcal{A}}_{\delta }} - G{{M}_{\delta }} = l\bar {h}_{\delta }^{{\text{T}}},$$
(A.12)

which has a unique solution [15, 16, 24] since, owing to Assumption 2, the pair \(\left(h_\delta^{\text{T}}, \mathcal{A}_\delta\right)\) is observable and, following the premises of this lemma, the pair (G, l) is controllable and \(\sigma\{\mathcal{A}_\delta\} \cap \sigma\{G\} = \varnothing\).

We differentiate ζ(t) to obtain:

$$\dot{\zeta}(t) = M_\delta\mathcal{A}_\delta x_\delta(t) = GM_\delta x_\delta(t) + l\bar{h}_\delta^{\text{T}}x_\delta(t) = G\zeta(t) + l\delta_d(t),$$
(A.13)

from which, considering \(x_\delta(t) = M_\delta^{-1}\zeta(t)\), it follows that

$$\delta_d(t) = \bar{h}_\delta^{\text{T}}M_\delta^{-1}\zeta(t) = \beta^{\text{T}}\zeta(t),\quad \beta^{\text{T}} = \bar{h}_\delta^{\text{T}}M_\delta^{-1}.$$
(A.14)
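
A minimal numerical sketch of the chain (A.12)–(A.14), under illustrative choices of G and l (Hurwitz, (G, l) controllable, spectra of \(\mathcal{A}_\delta\) and G disjoint) and the harmonic exosystem from the sketch above: (A.12) is rewritten as \((-G)M_\delta + M_\delta\mathcal{A}_\delta = l\bar{h}_\delta^{\text{T}}\) so that scipy's Sylvester solver applies, and the identity \(\delta_d = \beta^{\text{T}}\zeta\) is then verified:

```python
# Sketch of (A.12)-(A.14): solve the Sylvester equation for M_delta and
# recover beta; G, l are illustrative assumptions, not paper values.
import numpy as np
from scipy.linalg import solve_sylvester, expm

w = 2.0
A_delta = np.array([[0.0, w], [-w, 0.0]])  # sigma{A_delta} = {+-jw}
h_bar = np.array([[0.0], [w]])             # h_bar^T = h^T A_delta

G = np.diag([-1.0, -2.0])                  # Hurwitz, sigma{G} disjoint from {+-jw}
l = np.array([[1.0], [1.0]])               # (G, l) controllable

# M A_delta - G M = l h_bar^T  <=>  (-G) M + M A_delta = l h_bar^T
M = solve_sylvester(-G, A_delta, l @ h_bar.T)
beta = np.linalg.solve(M.T, h_bar)         # beta^T = h_bar^T M^{-1}

# Check (A.14): delta_d = beta^T zeta with zeta = M x_delta
x = expm(A_delta * 0.7) @ np.array([[0.0], [1.0]])
assert np.isclose(float(beta.T @ (M @ x)), float(h_bar.T @ x))
```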

Taking into account (A.14), Eq. (A.11) is rewritten as:

$${{{\dot {v}}}_{f}}(t) = {{W}_{f}}\left[ {{{\beta }^{{\text{T}}}}\zeta (t)} \right] + {{W}_{f}}[\delta ({{t}_{0}}){{D}_{\delta }}(t)] = {{\beta }^{{\text{T}}}}{{W}_{f}}[\zeta (t)] + {{W}_{f}}[\delta ({{t}_{0}}){{D}_{\delta }}(t)] = {{\beta }^{{\text{T}}}}{{\zeta }_{w}}(t) + {{W}_{f}}[\delta ({{t}_{0}}){{D}_{\delta }}(t)].$$
(A.15)

The signal \({{{v}}_{f}}(t)\) is filtered via (A.13) instead of δd(t):

$${{\zeta }_{f}}(t) = {{(sI - G)}^{{ - 1}}}l[{{{v}}_{f}}(t)] + {{e}^{{G(t - {{t}_{0}})}}}{{\zeta }_{f}}({{t}_{0}}),$$
(A.16)

then, owing to \(\zeta(t) = (sI - G)^{-1}l[\delta_d(t)] + e^{G(t - t_0)}\zeta(t_0)\), the following equality holds:

$$\begin{gathered} {{\zeta }_{w}}(t) = {{W}_{f}}[\zeta (t)] = {{W}_{f}}\left[ {{{{(sI - G)}}^{{ - 1}}}l[{{\delta }_{d}}(t)] + {{e}^{{G(t - {{t}_{0}})}}}\zeta ({{t}_{0}})} \right] \\ = {{(sI - G)}^{{ - 1}}}l{{W}_{f}}[{{\delta }_{d}}(t)] + {{W}_{f}}\left[ {{{e}^{{G(t - {{t}_{0}})}}}\zeta ({{t}_{0}})} \right] \\ = {{(sI - G)}^{{ - 1}}}l{{{v}}_{f}} + {{W}_{f}}\left[ {{{e}^{{G(t - {{t}_{0}})}}}\zeta ({{t}_{0}})} \right] \\ = {{\zeta }_{f}}(t) - {{e}^{{G(t - {{t}_{0}})}}}{{\zeta }_{f}}({{t}_{0}}) + {{W}_{f}}\left[ {{{e}^{{G(t - {{t}_{0}})}}}\zeta ({{t}_{0}})} \right]. \\ \end{gathered} $$
(A.17)

Having substituted (A.17) into (A.15), it is written:

$$\dot{v}_f(t) = \beta^{\text{T}}\zeta_f(t) - \beta^{\text{T}}e^{G(t - t_0)}\zeta_f(t_0) + \beta^{\text{T}}W_f\left[e^{G(t - t_0)}\zeta(t_0)\right] + W_f[\delta(t_0)D_\delta(t)].$$
(A.18)

The following observer of state ζf(t) is introduced:

$${{\hat {\zeta }}_{f}}(t) = F(t) + H(t){{\psi }_{b}}(\theta ) + N(t){{\psi }_{a}}(\theta ) + ly(t).$$
(A.19)

Considering equations (A.7), (A.11), (A.16), (A.19), the error \(\tilde{\zeta}_f(t) = \zeta_f(t) - \hat{\zeta}_f(t)\) is differentiated to obtain:

$$\begin{gathered} \dot{\tilde{\zeta}}_f = G\zeta_f(t) + lv_f(t) - GF(t) - Gly(t) + lC_0^{\text{T}}\dot{z}(t) \\ -\; \left(GH(t) - lC_0^{\text{T}}\dot{P}(t)\right)\psi_b(\theta) - \left(GN(t) - lC_0^{\text{T}}\dot{\Omega}(t)\right)\psi_a(\theta) \\ -\; lC_0^{\text{T}}\dot{z}(t) - lC_0^{\text{T}}\dot{\Omega}(t)\psi_a(\theta) - lC_0^{\text{T}}\dot{P}(t)\psi_b(\theta) \\ -\; l\left(v_f(t) + W_f[\delta(t_0)D_\delta(t)]\right) - lC_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) \\ = G\zeta_f(t) - \underbrace{G\left(F(t) + ly(t) + H(t)\psi_b(\theta) + N(t)\psi_a(\theta)\right)}_{G\hat{\zeta}_f(t)} \\ -\; lW_f[\delta(t_0)D_\delta(t)] - lC_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) \\ = G\tilde{\zeta}_f - lC_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) - lW_f[\delta(t_0)D_\delta(t)]. \end{gathered}$$
(A.20)

Equation (A.20) is solved:

$${{\tilde {\zeta }}_{f}}(t) = {{\zeta }_{f}}(t) - {{\hat {\zeta }}_{f}}(t) = {{e}^{{G(t - {{t}_{0}})}}}{{\tilde {\zeta }}_{f}}({{t}_{0}}) - \mathfrak{H}\left[ {C_{0}^{{\text{T}}}{{A}_{K}}{{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}\tilde {\xi }({{t}_{0}}) + {{W}_{f}}[\delta ({{t}_{0}}){{D}_{\delta }}(t)]} \right],$$
(A.21)

which allows one to rewrite (A.18) as follows:

$$\begin{gathered} \dot{v}_f(t) = \beta^{\text{T}}\hat{\zeta}_f(t) + \beta^{\text{T}}e^{G(t - t_0)}\tilde{\zeta}_f(t_0) \\ -\; \beta^{\text{T}}\mathfrak{H}\left[C_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) + W_f[\delta(t_0)D_\delta(t)]\right] \\ -\; \beta^{\text{T}}e^{G(t - t_0)}\zeta_f(t_0) + \beta^{\text{T}}W_f\left[e^{G(t - t_0)}\zeta(t_0)\right] + W_f[\delta(t_0)D_\delta(t)] \\ = \beta^{\text{T}}(F(t) + ly(t)) + \beta^{\text{T}}H(t)\psi_b(\theta) + \beta^{\text{T}}N(t)\psi_a(\theta) \\ +\; \beta^{\text{T}}e^{G(t - t_0)}\tilde{\zeta}_f(t_0) - \beta^{\text{T}}\mathfrak{H}\left[C_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) + W_f[\delta(t_0)D_\delta(t)]\right] \\ -\; \beta^{\text{T}}e^{G(t - t_0)}\zeta_f(t_0) + \beta^{\text{T}}W_f\left[e^{G(t - t_0)}\zeta(t_0)\right] + W_f[\delta(t_0)D_\delta(t)], \end{gathered}$$
(A.22)

where \(\mathfrak{H}[\,\cdot\,] = (sI_{n_\delta} - G)^{-1}l\,[\,\cdot\,]\).

Equation (A.22) is substituted into (A.7) to obtain:

$$\begin{gathered} \dot{\bar{q}}(t) = C_0^{\text{T}}\dot{\Omega}(t)\psi_a(\theta) + C_0^{\text{T}}\dot{P}(t)\psi_b(\theta) \\ +\; \beta^{\text{T}}(F(t) + ly(t)) + \beta^{\text{T}}H(t)\psi_b(\theta) + \beta^{\text{T}}N(t)\psi_a(\theta) \\ +\; \beta^{\text{T}}e^{G(t - t_0)}\tilde{\zeta}_f(t_0) - \beta^{\text{T}}\mathfrak{H}\left[C_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) + W_f[\delta(t_0)D_\delta(t)]\right] \\ -\; \beta^{\text{T}}e^{G(t - t_0)}\zeta_f(t_0) + \beta^{\text{T}}W_f\left[e^{G(t - t_0)}\zeta(t_0)\right] \\ +\; W_f[\delta(t_0)D_\delta(t)] + C_0^{\text{T}}A_K e^{A_K(t - t_0)}\tilde{\xi}(t_0) \\ = \bar{\varphi}^{\text{T}}(t)\eta(\theta) + \beta^{\text{T}}(F(t) + ly(t)) + \bar{\varepsilon}(t), \end{gathered}$$
(A.23)

where \(\bar{\varepsilon}(t)\) denotes the aggregated exponentially vanishing terms.

Step 3. The next aim is to transform the regression equation (A.23) into the form of (3.3) via application of the dynamic regressor extension and mixing procedure. For this purpose, considering (A.23) and (3.5), the signal \(\chi(t) = \bar{q}(t) - k_1\bar{q}_f(t)\) is differentiated to obtain:

$$\begin{gathered} \dot {\chi }(t) = {{{\bar {\varphi }}}^{{\text{T}}}}(t)\eta (\theta ) + {{\beta }^{{\text{T}}}}(F(t) + ly(t)) + \bar {\varepsilon }(t) - {{k}_{1}}\left( { - {{k}_{1}}{{{\bar {q}}}_{f}}(t) + \bar {q}(t)} \right) \\ = - {{k}_{1}}\chi (t) + {{{\bar {\varphi }}}^{{\text{T}}}}(t)\eta (\theta ) + {{\beta }^{{\text{T}}}}(F(t) + ly(t)) + \bar {\varepsilon }(t). \\ \end{gathered} $$
(A.24)

The solution of the differential equation (A.24) allows one to write:

$$\bar {q}(t) - {{k}_{1}}{{\bar {q}}_{f}}(t) - {{\beta }^{{\text{T}}}}({{F}_{f}}(t) + l{{y}_{f}}(t)) = {{e}^{{ - {{k}_{1}}(t - {{t}_{0}})}}}\bar {q}({{t}_{0}}) + \bar {\varphi }_{f}^{{\text{T}}}(t)\eta (\theta ) + {{\bar {\varepsilon }}_{f}}(t),$$
(A.25)

where \(\dot{\bar{\varepsilon}}_f(t) = -k_1\bar{\varepsilon}_f(t) + k_1\bar{\varepsilon}(t)\), \(\bar{\varepsilon}_f(t_0) = 0\).

Owing to (A.25), the solution of the first differential equation from (3.4) satisfies the following equation

$$q(t) = \varphi (t)\eta (\theta ) + \varepsilon (t),$$
(A.26)

where \(\dot{\varepsilon}(t) = -k_2\varepsilon(t) + \bar{\varphi}_f(t)\left(\bar{\varepsilon}_f(t) + e^{-k_1(t - t_0)}\bar{q}(t_0)\right)\), \(\varepsilon(t_0) = 0_{2n}\).

Having multiplied Eq. (A.26) by k(t)adj{φ(t)} and applied the property

$$\operatorname{adj} \{ \varphi (t)\} \varphi (t) = \det \{ \varphi (t)\} {{I}_{{2n}}},$$

Eq. (3.3) is obtained with \(\epsilon (t)\) = k(t)adj{φ(t)}ε(t).
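
The mixing step just performed rests on the adjugate identity above: multiplying the vector regression q = φη by adj{φ} yields 2n decoupled scalar equations with the common scalar regressor det{φ}. A numerical check with a random illustrative regressor matrix:

```python
# Check of the mixing property adj{phi} phi = det{phi} I used to obtain (3.3);
# phi and eta are random illustrative arrays, not signals from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 3
phi = rng.standard_normal((n, n))          # invertible with probability one
eta = rng.standard_normal((n, 1))

adj_phi = np.linalg.det(phi) * np.linalg.inv(phi)  # adjugate of phi
q = phi @ eta

# Each row of adj{phi} q is a scalar regression det{phi} * eta_i
assert np.allclose(adj_phi @ q, np.linalg.det(phi) * eta)
```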

In accordance with Lemma 6.8 from [6], when \(\bar{\varphi}(t)\) ∈ PE, it also holds that \(\bar{\varphi}_f(t)\) ∈ PE. Following Proposition 1, when \(\bar{\varphi}_f(t)\) ∈ PE, it holds that \(\Delta(t) \geqslant \Delta_{\min} > 0\). Since the signals y(t), u(t) are bounded by Assumption 1, owing to the stability of the filters (3.4)–(3.6), the inequality \(\Delta_{\max} \geqslant \Delta(t)\) holds for all \(t \geqslant t_0\). Then for all \(t \geqslant t_0 + T\) it holds that \(\Delta_{\max} \geqslant \Delta(t) \geqslant \Delta_{\min} > 0\), which completes the proof of Lemma 1.

Proof of Lemma 2. According to Definition 1 and Hypothesis 1 and owing to

$${{\Xi }_{\mathcal{S}}}(\Delta ) = {{\bar {\Xi }}_{\mathcal{S}}}(\Delta )\Delta (t),\quad {{\Xi }_{\mathcal{G}}}(\Delta ) = {{\bar {\Xi }}_{\mathcal{G}}}(\Delta )\Delta (t),$$
$${{\mathcal{Y}}_{{ab}}}(t) = {{\mathcal{L}}_{{ab}}}\mathcal{Y}(t) = \Delta (t){{\mathcal{L}}_{{ab}}}\eta (\theta ) = \Delta (t){{\psi }_{{ab}}}(\theta ),$$
$${{\bar {\Xi }}_{\mathcal{S}}}(\Delta )\Delta (t){{\psi }_{{ab}}}(\theta ) = {{\bar {\Xi }}_{\mathcal{S}}}(\Delta ){{\mathcal{Y}}_{{ab}}}(t),$$
$${{\bar {\Xi }}_{\mathcal{G}}}(\Delta )\Delta (t){{\psi }_{{ab}}}(\theta ) = {{\bar {\Xi }}_{\mathcal{G}}}(\Delta ){{\mathcal{Y}}_{{ab}}}(t),$$

it follows from (3.9) that

$${{\mathcal{T}}_{\mathcal{S}}}\left( {{{{\bar {\Xi }}}_{\mathcal{S}}}(\Delta ){{\mathcal{Y}}_{{ab}}}} \right) = {{\mathcal{T}}_{\mathcal{G}}}\left( {{{{\bar {\Xi }}}_{\mathcal{G}}}(\Delta ){{\mathcal{Y}}_{{ab}}}} \right)\theta .$$
(A.27)

Then, having multiplied (A.27) by adj\(\left\{ {{{\mathcal{T}}_{\mathcal{G}}}\left( {{{{\bar {\Xi }}}_{\mathcal{G}}}(\Delta ){{\mathcal{Y}}_{{ab}}}} \right)} \right\}\), the following regression equation is obtained

$${{\mathcal{Y}}_{\theta }}(t) = {{\mathcal{M}}_{\theta }}(t)\theta ,$$
(A.28)

which is used together with (3.8) to write:

$${{\mathcal{T}}_{\mathcal{Z}}}\left( {{{{\bar {\Xi }}}_{\mathcal{Z}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right) = {{\mathcal{T}}_{\mathcal{X}}}\left( {{{{\bar {\Xi }}}_{\mathcal{X}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right){{\Theta }_{{AB}}}(\theta ),$$
(A.29a)
$${{\mathcal{T}}_{\mathcal{W}}}\left( {{{{\bar {\Xi }}}_{\mathcal{W}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right) = {{\mathcal{T}}_{\mathcal{R}}}\left( {{{{\bar {\Xi }}}_{\mathcal{R}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right){{\psi }_{d}}(\theta ).$$
(A.29b)

Having multiplied (A.29a) by adj\(\left\{ {{{\mathcal{T}}_{\mathcal{X}}}\left( {{{{\bar {\Xi }}}_{\mathcal{X}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}\), the regression equation \({{\mathcal{Y}}_{{AB}}}(t)\) = \({{\mathcal{M}}_{{AB}}}(t){{\Theta }_{{AB}}}(\theta )\) is obtained.

The next aim is to parameterize an equation with respect to L(θ). If Assumption 2 is met, then, following the generalized pole placement theory [15, 16], the vector L(θ) can be obtained as a solution of the following set of equations

$$\left\{ \begin{gathered} {{A}^{{\text{T}}}}(\theta )M - M\Gamma = C{{B}^{{\text{T}}}}(\theta ) \hfill \\ {{B}^{{\text{T}}}}(\theta ) = {{L}^{{\text{T}}}}(\theta )M, \hfill \\ \end{gathered} \right.$$
(A.30)

which has a unique solution [15, 16] since, following Assumption 3, the pair \(\left(A^{\text{T}}(\theta), C\right)\) is controllable, the pair \(\left(B^{\text{T}}(\theta), \Gamma\right)\) is observable, and \(\sigma\left\{A^{\text{T}}(\theta)\right\} \cap \sigma\{\Gamma\} = \varnothing\).

Having vectorized the first equation from (A.30) and applied the property \(vec(AB) = (I \otimes A)vec(B) = \left(B^{\text{T}} \otimes I\right)vec(A)\), it is obtained that:

$$\left( {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right)vec(M) = vec\left( {C{{B}^{{\text{T}}}}(\theta )} \right).$$
(A.31)
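
The passage from the Sylvester form (A.30) to the Kronecker form (A.31) can be verified numerically; the matrices below are illustrative and chosen so that the relevant spectra are disjoint:

```python
# Check that the Kronecker system (A.31) reproduces the Sylvester solution of
# the first equation in (A.30); A, Gamma, C, B are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_sylvester

n = 3
A = np.diag([-1.0, -2.0, -3.0])
Gamma = np.diag([1.0, 2.0, 3.0])           # sigma{A^T}, sigma{Gamma} disjoint
C = np.ones((n, 1))
B = np.array([[1.0], [2.0], [3.0]])

# (I (x) A^T - Gamma^T (x) I) vec(M) = vec(C B^T), with column-major vec
K = np.kron(np.eye(n), A.T) - np.kron(Gamma.T, np.eye(n))
vecM = np.linalg.solve(K, (C @ B.T).flatten(order="F"))
M_kron = vecM.reshape((n, n), order="F")

M_syl = solve_sylvester(A.T, -Gamma, C @ B.T)  # A^T M - M Gamma = C B^T
assert np.allclose(M_kron, M_syl)
```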

Since Eqs. (A.30) and (A.31) have unique solutions,

$$\det \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\} \ne 0,$$

and therefore, having multiplied (A.31) by the adjugate matrix adj\(\left\{I_n \otimes A^{\text{T}}(\theta) - \Gamma^{\text{T}} \otimes I_n\right\}\), it is written:

$$\begin{gathered} \det \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}vec(M) \\ = \operatorname{adj} \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}vec\left( {C{{B}^{{\text{T}}}}(\theta )} \right). \\ \end{gathered} $$
(A.32)

The obtained result is devectorized (\(vec^{-1}\{\cdot\}\)) and substituted into the second equation of (A.30):

$$\begin{gathered} \underbrace {\det \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}B(\theta )}_{\mathcal{Q}({{\Theta }_{{AB}}})} \\ = \underbrace {ve{{c}^{{ - 1}}}{{{\left\{ {\operatorname{adj} \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}vec\left( {C{{B}^{{\text{T}}}}(\theta )} \right)} \right\}}}^{{\text{T}}}}}_{\mathcal{P}({{\Theta }_{{AB}}})}L(\theta ), \\ \end{gathered} $$
(A.33)

where \(\det\{\mathcal{P}(\Theta_{AB})\} \ne 0\).

The following equalities are introduced:

$$\begin{gathered} {{\mathcal{M}}_{{AB}}}(t){{A}^{{\text{T}}}}(\theta ) = ve{{c}^{{ - 1}}}\left( {{{\mathcal{L}}_{{{{A}^{{\text{T}}}}}}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}(t)} \right), \\ {{\mathcal{M}}_{{AB}}}(t){{B}^{{\text{T}}}}(\theta ) = {{\left[ {{{\mathcal{L}}_{B}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}(t)} \right]}^{{\text{T}}}}, \\ {{\mathcal{M}}_{{AB}}}(t)B(\theta ) = {{\mathcal{L}}_{B}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}(t). \\ \end{gathered} $$
(A.34)

Having multiplied (A.33) by \(\Pi_L(\mathcal{M}_{AB}) = \mathcal{M}_{AB}^{n^2 + 1}I_n\), applied the properties \(c^n\det\{A\} = \det\{cA\}\) and \(c^{n - 1}\operatorname{adj}\{A\} = \operatorname{adj}\{cA\}\) for \(A \in \mathbb{R}^{n \times n}\), and substituted (A.34), it is obtained that:

$${{\mathcal{T}}_{\mathcal{P}}}({{\Xi }_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}){{\Theta }_{{AB}}}) = {{\Pi }_{L}}({{\mathcal{M}}_{{AB}}})\mathcal{P}({{\Theta }_{{AB}}}) = \mathcal{M}_{{AB}}^{{{{n}^{2}} + 1}}\mathcal{P}({{\Theta }_{{AB}}})$$
$$ = \mathcal{M}_{{AB}}^{{{{n}^{2}} + 1}}ve{{c}^{{ - 1}}}{{\left\{ {\operatorname{adj} \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}vec\left( {C{{B}^{{\text{T}}}}(\theta )} \right)} \right\}}^{{\text{T}}}}$$
$$\begin{gathered} = ve{{c}^{{ - 1}}}{{\left\{ {{{\mathcal{M}}_{{AB}}}\operatorname{adj} \left\{ {{{I}_{n}} \otimes ve{{c}^{{ - 1}}}\left( {{{\mathcal{L}}_{{{{A}^{{\text{T}}}}}}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}} \right) - {{\mathcal{M}}_{{AB}}}{{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}vec\left( {C{{{({{\mathcal{L}}_{B}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}})}}^{{\text{T}}}}} \right)} \right\}}^{{\text{T}}}}, \\ {{\mathcal{T}}_{\mathcal{Q}}}({{\Xi }_{\mathcal{Q}}}({{\mathcal{M}}_{{AB}}}){{\Theta }_{{AB}}}) = {{\Pi }_{L}}({{\mathcal{M}}_{{AB}}})\mathcal{Q}({{\Theta }_{{AB}}}) = \mathcal{M}_{{AB}}^{{{{n}^{2}} + 1}}\mathcal{Q}({{\Theta }_{{AB}}}) \\ \end{gathered} $$
(A.35)
$$ = \mathcal{M}_{{AB}}^{{{{n}^{2}} + 1}}\det \left\{ {{{I}_{n}} \otimes {{A}^{{\text{T}}}}(\theta ) - {{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}B(\theta )$$
$$ = \det \left\{ {{{I}_{n}} \otimes ve{{c}^{{ - 1}}}\left( {{{\mathcal{L}}_{{{{A}^{{\text{T}}}}}}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}(t)} \right) - {{\mathcal{M}}_{{AB}}}(t){{\Gamma }^{{\text{T}}}} \otimes {{I}_{n}}} \right\}({{\mathcal{L}}_{B}}{{\mathcal{D}}_{\Phi }}{{\mathcal{Y}}_{{AB}}}(t)),$$

where \({{\Xi }_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}) = {{\Xi }_{\mathcal{Q}}}({{\mathcal{M}}_{{AB}}}) = {{\mathcal{M}}_{{AB}}}(t)\).
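
The scaling properties invoked above, \(c^n\det\{A\} = \det\{cA\}\) and \(c^{n-1}\operatorname{adj}\{A\} = \operatorname{adj}\{cA\}\), admit a direct numerical check; A and c below are random illustrative values:

```python
# Check of the determinant and adjugate scaling properties used in (A.35);
# A and c are illustrative, not quantities from the paper.
import numpy as np

rng = np.random.default_rng(1)
n, c = 3, 2.5
A = rng.standard_normal((n, n))

adj = lambda X: np.linalg.det(X) * np.linalg.inv(X)  # adjugate, X invertible

assert np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A))
assert np.allclose(adj(c * A), c**(n - 1) * adj(A))
```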

The following regression equation is written on the basis of Eqs. (A.33) and (A.35):

$${{\mathcal{T}}_{\mathcal{Q}}}\left( {{{{\bar {\Xi }}}_{\mathcal{Q}}}({{\mathcal{M}}_{{AB}}}){{\mathcal{Y}}_{{AB}}}} \right) = {{\mathcal{T}}_{\mathcal{P}}}\left( {{{{\bar {\Xi }}}_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}){{\mathcal{Y}}_{{AB}}}} \right)L(\theta ),$$
(A.36)

where \({{\bar {\Xi }}_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}) = {{\bar {\Xi }}_{\mathcal{Q}}}({{\mathcal{M}}_{{AB}}})\) = 1.

Having multiplied (A.36) by adj\(\left\{ {{{\mathcal{T}}_{\mathcal{P}}}\left( {{{{\bar {\Xi }}}_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}){{\mathcal{Y}}_{{AB}}}} \right)} \right\}\), the regression equation \({{\mathcal{Y}}_{L}}(t)\) = \({{\mathcal{M}}_{L}}(t)L(\theta )\) is obtained.

The next aim is to derive the regression equation with respect to xδ0. Using the properties of the vectorization operation

$$vec\left( {{{\psi }_{d}}(\theta )h_{\delta }^{{\text{T}}}{{\Phi }_{\delta }}(t){{x}_{{\delta 0}}}} \right) = \underbrace {\left( {x_{{\delta 0}}^{{\text{T}}} \otimes {{\psi }_{d}}(\theta )} \right)}_{n \times {{n}_{\delta }}}\underbrace {vec\left( {h_{\delta }^{{\text{T}}}{{\Phi }_{\delta }}} \right)}_{{{n}_{\delta }}},$$
$$vec\left(\left(x_{\delta 0}^{\text{T}} \otimes \psi_d(\theta)\right)vec\left(h_\delta^{\text{T}}\Phi_\delta(t)\right)\right) = \underbrace{\left(h_\delta^{\text{T}}\Phi_\delta(t) \otimes I_n\right)}_{n \times nn_\delta}\underbrace{vec\left(x_{\delta 0}^{\text{T}} \otimes \psi_d(\theta)\right)}_{nn_\delta},$$

Eq. (3.1) is rewritten as follows:

$$\begin{gathered} \dot {\xi }(t) = {{A}_{0}}\xi (t) + {{\psi }_{a}}(\theta )y(t) + {{\psi }_{b}}(\theta )u(t) \\ + \;\left( {h_{\delta }^{{\text{T}}}{{\Phi }_{\delta }}(t) \otimes {{I}_{n}}} \right)vec\left( {x_{{\delta 0}}^{{\text{T}}} \otimes {{\psi }_{d}}(\theta )} \right). \\ \end{gathered} $$
(A.37)
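
The vectorization identities used to pass from (3.1) to (A.37) can be checked directly; the arrays below are random stand-ins for \(\psi_d(\theta)\), \(h_\delta^{\text{T}}\Phi_\delta(t)\), and \(x_{\delta 0}\):

```python
# Check of the two vectorization identities preceding (A.37); all arrays are
# illustrative stand-ins of compatible sizes.
import numpy as np

rng = np.random.default_rng(2)
n, nd = 3, 2
psi = rng.standard_normal((n, 1))    # stands in for psi_d(theta)
hPhi = rng.standard_normal((1, nd))  # stands in for h_delta^T Phi_delta(t)
x = rng.standard_normal((nd, 1))     # stands in for x_{delta 0}

vec = lambda M: M.flatten(order="F").reshape(-1, 1)  # column-major vec

lhs = psi @ hPhi @ x                                       # psi_d delta term
step1 = np.kron(x.T, psi) @ vec(hPhi)                      # first identity
step2 = np.kron(hPhi, np.eye(n)) @ vec(np.kron(x.T, psi))  # second identity

assert np.allclose(lhs, step1) and np.allclose(lhs, step2)
```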

The following error is introduced:

$$e(t) = \xi (t) - z(t) - \Omega (t){{\psi }_{a}}(\theta ) - P(t){{\psi }_{b}}(\theta ) - V(t)vec\left( {x_{{\delta 0}}^{{\text{T}}} \otimes {{\psi }_{d}}(\theta )} \right).$$
(A.38)

Having differentiated (A.38), the equation \(\dot{e}(t) = A_K e(t)\) is obtained in a way similar to (A.2). Then, having multiplied (A.38) by \(C_0^{\text{T}}\), it is written:

$$\begin{gathered} \bar {q}(t) = C_{0}^{{\text{T}}}{{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}e({{t}_{0}}) + C_{0}^{{\text{T}}}\Omega (t){{\psi }_{a}}(\theta ) \\ + \;C_{0}^{{\text{T}}}P(t){{\psi }_{b}}(\theta ) + C_{0}^{{\text{T}}}V(t)vec\left( {x_{{\delta 0}}^{{\text{T}}} \otimes {{\psi }_{d}}(\theta )} \right). \\ \end{gathered} $$
(A.39)

Using the properties

$$\begin{gathered} x_{\delta 0}^{\text{T}} \otimes \psi_d(\theta) = \psi_d(\theta)x_{\delta 0}^{\text{T}}, \\ vec\left(\psi_d(\theta)x_{\delta 0}^{\text{T}}\right) = \underbrace{(I_{n_\delta} \otimes \psi_d(\theta))}_{nn_\delta \times n_\delta}x_{\delta 0}, \end{gathered}$$

Eq. (A.39) is transformed into

$$\begin{gathered} \bar{q}(t) = C_0^{\text{T}}e^{A_K(t - t_0)}e(t_0) + C_0^{\text{T}}\Omega(t)\psi_a(\theta) \\ +\; C_0^{\text{T}}P(t)\psi_b(\theta) + C_0^{\text{T}}V(t)\left(I_{n_\delta} \otimes \psi_d(\theta)\right)x_{\delta 0}. \end{gathered}$$
(A.40)

To compensate for the unknown terms \(C_{0}^{{\text{T}}}\Omega (t){{\psi }_{a}}(\theta )\) + \(C_{0}^{{\text{T}}}P(t){{\psi }_{b}}(\theta )\), the following auxiliary signal is introduced

$$\begin{gathered} {{{\bar {p}}}_{e}}(t) = \Delta (t)C_{0}^{{\text{T}}}\Omega (t){{\psi }_{a}}(\theta ) + \Delta (t)C_{0}^{{\text{T}}}P(t){{\psi }_{b}}(\theta ) \\ = C_{0}^{{\text{T}}}\Omega (t){{\mathcal{L}}_{a}}\mathcal{Y}(t) + C_{0}^{{\text{T}}}P(t){{\mathcal{L}}_{b}}\mathcal{Y}(t), \\ \end{gathered} $$
(A.41)

where

$$\begin{gathered} {{\mathcal{L}}_{a}}\mathcal{Y}(t) = {{\mathcal{L}}_{a}}\Delta (t)\eta (\theta ) = \Delta (t){{\mathcal{L}}_{a}}\eta (\theta ) = \Delta (t){{\psi }_{a}}(\theta ), \\ {{\mathcal{L}}_{b}}\mathcal{Y}(t) = \Delta (t){{\mathcal{L}}_{b}}\eta (\theta ) = \Delta (t){{\psi }_{b}}(\theta ). \\ \end{gathered} $$

Having multiplied (A.40) by Δ(t) and subtracted (A.41) from the obtained result, it is written:

$$\begin{gathered} p(t) = \Delta (t)\bar {q}(t) - {{{\bar {p}}}_{e}}(t) \\ = \Delta (t)C_{0}^{{\text{T}}}V(t)\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right){{x}_{{\delta 0}}} + \Delta (t)C_{0}^{{\text{T}}}{{e}^{{{{A}_{K}}(t - {{t}_{0}})}}}e({{t}_{0}}). \\ \end{gathered} $$
(A.42)

To implement the multiplier ψd(θ) indirectly, Eq. (A.29b) is multiplied by adj \(\left\{ {{{\mathcal{T}}_{\mathcal{R}}}\left( {{{{\bar {\Xi }}}_{\mathcal{R}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}\):

$$\begin{gathered} {{\mathcal{Y}}_{{{{\psi }_{d}}}}}(t) = {{\mathcal{M}}_{{{{\psi }_{d}}}}}(t){{\psi }_{d}}(\theta ), \\ {{\mathcal{Y}}_{{{{\psi }_{d}}}}}(t) = \operatorname{adj} \left\{ {{{\mathcal{T}}_{\mathcal{R}}}\left( {{{{\bar {\Xi }}}_{\mathcal{R}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}{{\mathcal{T}}_{\mathcal{W}}}\left( {{{{\bar {\Xi }}}_{\mathcal{W}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right), \\ {{\mathcal{M}}_{{{{\psi }_{d}}}}}(t) = \det \left\{ {{{\mathcal{T}}_{\mathcal{R}}}\left( {{{{\bar {\Xi }}}_{\mathcal{R}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}. \\ \end{gathered} $$
(A.43)

The multiplication of (A.42) by \({{\mathcal{M}}_{{{{\psi }_{d}}}}}(t)\) and substitution of (A.43) into the obtained result allow one to write:

$$\begin{gathered} {{\mathcal{M}}_{{{{\psi }_{d}}}}}(t)p(t) = {{\mathcal{M}}_{{{{\psi }_{d}}}}}(t)\Delta (t)C_{0}^{{\text{T}}}V(t)\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right){{x}_{{\delta 0}}} \\ = \Delta (t)C_{0}^{{\text{T}}}V(t)\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\mathcal{Y}}_{{{{\psi }_{d}}}}}(t)} \right){{x}_{{\delta 0}}}. \\ \end{gathered} $$
(A.44)

Having filtered (A.44) via (4.3) and multiplied the obtained result by adj\(\{V_f(t)\}\), the regression equation \(\mathcal{Y}_{x_{\delta 0}}(t) = \mathcal{M}_{x_{\delta 0}}(t)x_{\delta 0}\) is obtained, which completes the proof of the statement that Eqs. (4.2) can be formed on the basis of measurable signals.

Following Lemma 1, if \(\bar{\varphi}(t)\) ∈ PE, then for all \(t \geqslant t_0 + T\) it holds that \(\Delta(t) \geqslant \Delta_{\min} > 0\), and, owing to Hypotheses 1–3 and the proved inequalities:

$${{\det }^{2}}\{ \mathcal{X}(\theta )\} > 0,\quad {{\det }^{2}}\{ \mathcal{R}(\theta )\} > 0,$$
$${{\det }^{2}}\{ \mathcal{G}({{\psi }_{{ab}}})\} > 0,\quad {{\det }^{2}}\{ \mathcal{P}({{\Theta }_{{AB}}})\} > 0,$$
$$\det \{ {{\Pi }_{\theta }}(\Delta )\} \; \geqslant \;{{\Delta }^{{{{\ell }_{\theta }}}}}(t),\quad \det \{ {{\Pi }_{\Theta }}({{\mathcal{M}}_{\theta }})\} \; \geqslant \;\mathcal{M}_{\theta }^{{{{\ell }_{\Theta }}}}(t),$$
$$\det \{ {{\Pi }_{{{{\psi }_{d}}}}}({{\mathcal{M}}_{\theta }})\} \; \geqslant \;\mathcal{M}_{\theta }^{{{{\ell }_{{{{\psi }_{d}}}}}}}(t),\quad \det \{ {{\Pi }_{L}}({{\mathcal{M}}_{{AB}}})\} \; \geqslant \;\mathcal{M}_{{AB}}^{{{{n}^{3}} + n}}(t),$$

we have that, if \(\bar {\varphi }(t)\) ∈ PE, then for all t \( \geqslant \) t0 + T the following holds:

$$\left| {{{\mathcal{M}}_{\theta }}(t)} \right| = \left| {\det \left\{ {{{\mathcal{T}}_{\mathcal{G}}}\left( {{{{\bar {\Xi }}}_{\mathcal{G}}}(\Delta ){{\mathcal{Y}}_{{ab}}}} \right)} \right\}} \right| = \left| {\det \{ {{\Pi }_{\theta }}(\Delta )\} \det \{ \mathcal{G}({{\psi }_{{ab}}})\} } \right|$$
$$ \geqslant \;\left| {\det \{ \mathcal{G}({{\psi }_{{ab}}})\} } \right|\Delta _{{\min }}^{{{{\ell }_{\theta }}}} = \underline {{{\mathcal{M}}_{\theta }}} > 0,$$
$$\left| {{{\mathcal{M}}_{{AB}}}(t)} \right| = \left| {\det \left\{ {{{\mathcal{T}}_{\mathcal{X}}}\left( {{{{\bar {\Xi }}}_{\mathcal{X}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}} \right| = \left| {\det \{ {{\Pi }_{\Theta }}({{\mathcal{M}}_{\theta }})\} \det \{ \mathcal{X}(\theta )\} } \right|$$
$$ \geqslant \;\left| {{{{\det }}^{{{{\ell }_{\Theta }}}}}\{ \mathcal{G}({{\psi }_{{ab}}})\} } \right|\left| {\det \{ \mathcal{X}(\theta )\} } \right|\Delta _{{\min }}^{{{{\ell }_{\theta }}{{\ell }_{\Theta }}}} = \underline {{{\mathcal{M}}_{{AB}}}} > 0,$$
$$\left| {{{\mathcal{M}}_{{{{\psi }_{d}}}}}(t)} \right| = \left| {\det \left\{ {{{\mathcal{T}}_{\mathcal{R}}}\left( {{{{\bar {\Xi }}}_{\mathcal{R}}}({{\mathcal{M}}_{\theta }}){{\mathcal{Y}}_{\theta }}} \right)} \right\}} \right| = \left| {\det \{ {{\Pi }_{{{{\psi }_{d}}}}}({{\mathcal{M}}_{\theta }})\} \det \{ \mathcal{R}(\theta )\} } \right|$$
$$ \geqslant \;\left| {{{{\det }}^{{{{\ell }_{{{{\psi }_{d}}}}}}}}\{ \mathcal{G}({{\psi }_{{ab}}})\} } \right|\left| {\det \{ \mathcal{R}(\theta )\} } \right|\Delta _{{\min }}^{{{{\ell }_{\theta }}{{\ell }_{{{{\psi }_{d}}}}}}} = \underline {{{\mathcal{M}}_{{{{\psi }_{d}}}}}} > 0,$$
$$\left| {{{\mathcal{M}}_{L}}(t)} \right| = \left| {\det \left\{ {{{\mathcal{T}}_{\mathcal{P}}}\left( {{{{\bar {\Xi }}}_{\mathcal{P}}}({{\mathcal{M}}_{{AB}}}){{\mathcal{Y}}_{{AB}}}} \right)} \right\}} \right| = \left| {\det \{ {{\Pi }_{L}}({{\mathcal{M}}_{{AB}}})\} \det \{ \mathcal{P}({{\Theta }_{{AB}}})\} } \right|$$
$$ \geqslant \;\left| {\det \{ \mathcal{P}({{\Theta }_{{AB}}})\} } \right|\mathcal{M}_{{AB}}^{{{{n}^{3}} + n}}\; \geqslant \;\left| {\det \{ \mathcal{P}({{\Theta }_{{AB}}})\} } \right|\underline {\mathcal{M}_{{AB}}^{{{{n}^{3}} + n}}} = \underline {{{\mathcal{M}}_{L}}} > 0.$$

To obtain the lower bound for the regressor \(\mathcal{M}_{x_{\delta 0}}(t)\), such a bound first needs to be derived for the solution of the differential equation for \(V_f(t)\) in the case when \(\bar{\varphi}(t)\) ∈ PE and \(\left(h_\delta^{\text{T}}\Phi_\delta(t) \otimes I_n\right)\) ∈ PE:

$${{V}_{f}}(t) = \int\limits_{{{t}_{0}}}^t {{{e}^{{ - {{k}_{2}}(t - \tau )}}}{{\Delta }^{2}}(\tau ){{{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\mathcal{Y}}_{{{{\psi }_{d}}}}}(\tau )} \right)}}^{{\text{T}}}}{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\mathcal{Y}}_{{{{\psi }_{d}}}}}(\tau )} \right)d\tau } $$
$$ = {{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\int\limits_{{{t}_{0}}}^t {{{e}^{{ - {{k}_{2}}(t - \tau )}}}\mathcal{M}_{{{{\psi }_{d}}}}^{2}(\tau ){{\Delta }^{2}}(\tau ){{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau \left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)} $$
$$ \geqslant \;\underline {\mathcal{M}_{{{{\psi }_{d}}}}^{2}} \Delta _{{\min }}^{2}{{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\int\limits_{{{t}_{0}}}^t {{{e}^{{ - {{k}_{2}}(t - \tau )}}}{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau \left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)} $$
$$ \geqslant \;\underline {\mathcal{M}_{{{{\psi }_{d}}}}^{2}} \Delta _{{\min }}^{2}{{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\left[ {\int\limits_{{{t}_{0}}}^{t - \bar {k}T} {{{e}^{{ - {{k}_{2}}(t - \tau )}}}{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau } } \right.$$
$$\left. { + \;\sum\limits_{k = 1}^{\bar {k}} {\int\limits_{t - kT}^{t - kT + T} {{{e}^{{ - {{k}_{2}}(t - \tau )}}}{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau } } } \right]\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)$$
$$ \geqslant \;\underline {\mathcal{M}_{{{{\psi }_{d}}}}^{2}} \Delta _{{\min }}^{2}{{e}^{{ - {{k}_{2}}t}}}{{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\sum\limits_{k = 1}^{\bar {k}} {\int\limits_{t - kT}^{t - kT + T} {{{e}^{{{{k}_{2}}\tau }}}{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau \left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)} } $$
$$ \geqslant \;\underline {\mathcal{M}_{{{{\psi }_{d}}}}^{2}} \Delta _{{\min }}^{2}{{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\sum\limits_{k = 1}^{\bar {k}} {{{e}^{{ - {{k}_{2}}kT}}}\int\limits_{t - kT}^{t - kT + T} {{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau \left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)} } ,$$

where \(\bar {k}\; \geqslant \;k\; \geqslant \;1\) are integers.

In accordance with Lemma 6.8 from [6], if \(\left( {h_{\delta }^{{\text{T}}}{{\Phi }_{\delta }}(t) \otimes {{I}_{n}}} \right)\) ∈ PE, then the following inequality holds

$$\int\limits_t^{t + T} {{{V}^{{\text{T}}}}(\tau ){{C}_{0}}C_{0}^{{\text{T}}}V(\tau )d\tau } \; \geqslant \;\alpha {{I}_{{n{{n}_{\delta }}}}}$$
(A.45)

and, using the properties of the Kronecker product, it is obtained that:

$$\begin{gathered} {{\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}^{{\text{T}}}}\underbrace {\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right)}_{n{{n}_{\delta }} \times {{n}_{\delta }}} = \left( {I_{{{{n}_{\delta }}}}^{{\text{T}}} \otimes \psi _{d}^{{\text{T}}}(\theta )} \right)\left( {{{I}_{{{{n}_{\delta }}}}} \otimes {{\psi }_{d}}(\theta )} \right) \\ = {{I}_{{{{n}_{\delta }}}}} \otimes \psi _{d}^{{\text{T}}}(\theta ){{\psi }_{d}}(\theta ) = \underbrace {\psi _{d}^{{\text{T}}}(\theta ){{\psi }_{d}}(\theta )}_{ > 0}{{I}_{{{{n}_{\delta }}}}}. \\ \end{gathered} $$
(A.46)
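
The Kronecker identity (A.46) admits a one-line numerical check; ψ below is an illustrative nonzero vector:

```python
# Check of (A.46): (I (x) psi)^T (I (x) psi) = (psi^T psi) I.
import numpy as np

psi = np.array([[1.0], [-2.0], [0.5]])  # illustrative stand-in for psi_d(theta)
nd = 4
Ipsi = np.kron(np.eye(nd), psi)
assert np.allclose(Ipsi.T @ Ipsi, float(psi.T @ psi) * np.eye(nd))
```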

Then for all t \( \geqslant \) t0 + T it holds that:

$${{V}_{f}}(t)\; \geqslant \;\underbrace {\underline {\mathcal{M}_{{{{\psi }_{d}}}}^{2}} \Delta _{{\min }}^{2}\alpha \sum\limits_{k = 1}^{\bar {k}} {{{e}^{{ - {{k}_{2}}kT}}}\psi _{d}^{{\text{T}}}(\theta ){{\psi }_{d}}(\theta )} }_{ > 0}{{I}_{{{{n}_{\delta }}}}}\; \geqslant \;\sqrt[{{{n}_{\delta }}}]{{{{\mathcal{M}}_{{{{x}_{{\delta 0}}}}}}}}{{I}_{{{{n}_{\delta }}}}},$$
(A.47)

from which for all t \( \geqslant \) t0 + T we have \({{\mathcal{M}}_{{{{x}_{{\delta 0}}}}}}\) \( \geqslant \) \(\underline {{{\mathcal{M}}_{{{{x}_{{\delta 0}}}}}}} \) > 0, which allows one to obtain:

$$\forall t\; \geqslant \;{{t}_{0}} + T\left| {{{\mathcal{M}}_{\kappa }}(t)} \right| = \left| {\mathcal{M}_{{AB}}^{{{{n}_{\Theta }}}}(t)\mathcal{M}_{L}^{n}(t)\mathcal{M}_{{{{x}_{{\delta 0}}}}}^{{{{n}_{\delta }}}}(t)} \right|\; \geqslant \;\underline {\mathcal{M}_{{AB}}^{{{{n}_{\Theta }}}}} {\kern 1pt} \underline {\mathcal{M}_{L}^{n}} {\kern 1pt} \underline {\mathcal{M}_{{{{x}_{{\delta 0}}}}}^{{{{n}_{\delta }}}}} = \underline {{{\mathcal{M}}_{\kappa }}} > 0.$$
(A.48)

This completes the proof of Lemma 2.


Cite this article

Glushchenko, A.I., Lastochkin, K.A. Adaptive Observer of State and Disturbances for Linear Overparameterized Systems. Autom Remote Control 84, 1208–1231 (2023). https://doi.org/10.1134/S0005117923110036
