
Relaxation of Conditions for Convergence of Dynamic Regressor Extension and Mixing Procedure

  • NONLINEAR SYSTEMS
  • Published in: Automation and Remote Control

Abstract

A generalization of the dynamic regressor extension and mixing procedure is proposed. Unlike the original procedure, it guarantees, first, a reduction of the unknown-parameter identification error when the requirement of regressor semi-finite excitation is met and, second, exponential convergence of the regression function (regressand) tracking error to zero when the regressor is semi-persistently exciting with rank one or higher.


Notes

  1. In statement (2) of Proposition 2, it is assumed without loss of generality that the first n − p columns of the regressor φ(t) = [φ1(t)…φi(t)…φn(t)] are linearly dependent (if \(\bar {r}\)(t) > 0, such a form can always be obtained by a permutation of columns).

REFERENCES

  1. Ortega, R., Nikiforov, V., and Gerasimov, D., On modified parameter estimators for identification and adaptive control: a unified framework and some new schemes, Annual Reviews in Control, 2020, vol. 50, pp. 278–293.

  2. Aranovskiy, S., Bobtsov, A., Ortega, R., and Pyrkin, A., Performance enhancement of parameter estimators via dynamic regressor extension and mixing, IEEE Trans. Automat. Control, 2016, vol. 62, no. 7, pp. 3546–3550.

  3. Glushchenko, A.I., Petrov, V.A., and Lastochkin, K.A., I-DREM: Relaxing the square integrability condition, Autom. Remote Control, 2021, vol. 82, no. 7, pp. 1233–1247.

  4. Korotina, M., Romero, J.G., Aranovskiy, S., Bobtsov, A., and Ortega, R., A new on-line exponential parameter estimator without persistent excitation, Syst. Control Lett., 2022, vol. 159, pp. 1–10.

  5. Wang, L., Ortega, R., Bobtsov, A., Romero, J.G., and Yi, B., Identifiability implies robust, globally exponentially convergent on-line parameter estimation: application to model reference adaptive control, arXiv:2108.08436, 2021, pp. 1–16.

  6. Wang, J., Efimov, D., Aranovskiy, S., and Bobtsov, A., Fixed-time estimation of parameters for non-persistent excitation, European J. Control, 2020, vol. 55, pp. 24–32.

  7. Yi, B. and Ortega, R., Conditions for convergence of dynamic regressor extension and mixing parameter estimators using LTI filters, IEEE Trans. Automat. Control, 2022, pp. 1–6.

  8. Aranovskiy, S., Ushirobira, R., Korotina, M., and Vedyakov, A., On preserving-excitation properties of Kreisselmeier's regressor extension scheme, IEEE Trans. Automat. Control, 2022, pp. 1–6.

  9. Sastry, S. and Bodson, M., Adaptive Control: Stability, Convergence, and Robustness, Englewood Cliffs, N.J.: Prentice Hall, 1989.

  10. Kreisselmeier, G. and Rietze-Augst, G., Richness and excitation on an interval – with application to continuous-time adaptive control, IEEE Trans. Automat. Control, 1990, vol. 35, no. 2, pp. 165–171.

  11. Roy, S.B. and Bhasin, S., Novel model reference adaptive control architecture using semi-initial excitation-based switched parameter estimator, Int. J. Adaptive Control Signal Proc., 2019, vol. 33, no. 12, pp. 1759–1774.

  12. Glushchenko, A. and Lastochkin, K., Robust time-varying parameters estimation based on I-DREM procedure, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 91–96.

  13. Ovcharov, A., Vedyakov, A., Kazak, S., Bespalov, V., Pyrkin, A., and Bobtsov, A., Flux observer for the levitated ball with relaxed excitation conditions, Proc. European Control Conf., 2021, pp. 2334–2339.

  14. Ovcharov, A., Vedyakov, A., Kazak, S., and Pyrkin, A., Overparameterized model parameter recovering with finite-time convergence, Int. J. Adapt. Control Signal Process., 2022, pp. 1305–1325.

  15. Tihonov, A.N., Solution of incorrectly formulated problems and the regularization method, Soviet Math., 1963, vol. 4, pp. 1035–1038.

  16. Hansen, P.C., The truncated SVD as a method for regularization, BIT Num. Math., 1987, vol. 27, no. 4, pp. 534–553.

  17. Meyer, C.D., Matrix Analysis and Applied Linear Algebra, Philadelphia: SIAM, 2000.

  18. Glushchenko, A.I., Lastochkin, K.A., and Petrov, V.A., Normalization of regressor excitation in the dynamic extension and mixing procedure, Autom. Remote Control, 2022, vol. 83, no. 1, pp. 17–31.

Funding

This research was supported in part by the Grants Council of the President of the Russian Federation (project MD-1787.2022.4).

Author information

Correspondence to A. I. Glushchenko or K. A. Lastochkin.

Additional information

This paper was recommended for publication by A.A. Bobtsov, a member of the Editorial Board.

APPENDIX

Proof of Proposition 1. The lower bounds of the regressor ω(t) are written on the basis of Corollaries 1–4:

$$\bar {\varphi }(t) \in {\text{PE}} \Leftrightarrow \forall t\; \geqslant \;kT\;\;\omega (t) = \det \{ \Phi (t)\} = \prod\limits_{i = 1}^n {{{\lambda }_{i}}(t)} \; \geqslant \;\lambda _{{\min }}^{n}(t) > {{\mu }^{n}} > 0,$$
$$\bar {\varphi }(t) \in {\text{FE}} \Leftrightarrow \forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ] \subset [t_{r}^{ + };\,\,{{t}_{e}}]\;\;\omega (t) = \prod\limits_{i = 1}^n {{{\lambda }_{i}}(t)} \; \geqslant \;\lambda _{{\min }}^{n}(t) > {{\mu }^{n}} > 0,$$
$$\bar {\varphi }(t) \in {\text{s-PE}} \Leftrightarrow \forall t\; \geqslant \;kT\;\;\omega (t) = {{\varepsilon }^{{\bar {r}}}}\prod\limits_{i = 1}^r {{{\lambda }_{i}}(t)} \; \geqslant \;\min \{ \lambda _{{\min }}^{n}(t),\,\,{{\varepsilon }^{n}}\} > 0,$$
$$\bar {\varphi }(t) \in {\text{s-FE}} \Leftrightarrow \forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ] \subset [t_{r}^{ + };\,\,{{t}_{e}}]\;\;\omega (t) = {{\varepsilon }^{{\bar {r}}}}\prod\limits_{i = 1}^r {{{\lambda }_{i}}(t)} \; \geqslant \;\min \{ \lambda _{{\min }}^{n}(t),\,\,{{\varepsilon }^{n}}\} > 0,$$

as was to be proved in Proposition 1.
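To make the eigenvalue-substitution identity used above concrete, here is a minimal numerical sketch (NumPy; all variable names are illustrative and not part of the paper's notation): it builds a rank-deficient symmetric matrix, substitutes its zero eigenvalues with ε, and checks that the determinant equals \({{\varepsilon }^{{\bar {r}}}}\prod\nolimits_{i = 1}^r {{{\lambda }_{i}}} \).

```python
import numpy as np

# Sketch: rank-deficient symmetric Phi; substitute its zero eigenvalues
# with eps and verify det = eps^rbar * prod(nonzero eigenvalues).
rng = np.random.default_rng(0)
n, r, eps = 4, 2, 1e-2
B = rng.standard_normal((n, r))
Phi = B @ B.T                                   # symmetric PSD, rank r

lam, V = np.linalg.eigh(Phi)                    # Phi = V diag(lam) V^T
mask = lam < 1e-10                              # rbar = n - r zero eigenvalues
lam_bar = np.where(mask, eps, lam)              # substitution performed by Xi
omega = np.linalg.det(V @ np.diag(lam_bar) @ V.T)

omega_formula = eps ** mask.sum() * np.prod(lam[~mask])
print(np.isclose(omega, omega_formula))         # True
```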

Proof of Theorem 1. (1) Since, following Corollaries 1 and 2, the following implications hold when \(\bar {\varphi }\)(t) \( \in \) FE/\(\bar {\varphi }\)(t) \( \in \) PE:

$$\begin{gathered} \bar {\varphi }(t) \in {\text{PE}} \Leftrightarrow \forall t\; \geqslant \;kT\;\;\lambda _{{\min }}^{{}}(t) > \mu > 0, \\ \bar {\varphi }(t) \in {\text{FE}} \Leftrightarrow \forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ] \subset [t_{r}^{ + };\,\,{{t}_{e}}]\;\;\lambda _{{\min }}^{{}}(t) > \mu > 0, \\ \end{gathered} $$
(A.1)

then, when \(\bar {\varphi }\)(t) \( \in \) FE/\(\bar {\varphi }\)(t) \( \in \) PE, no zero eigenvalues in Λ(t) are substituted in accordance with (3.2), i.e. Ξ(t) = \({{0}_{{n \times n}}}\). Hence the equality Φ(t) = φ(t) holds for the regressor matrix Φ(t), and, owing to \({{\bar {\Lambda }}^{{ - 1}}}(t)\,\Xi \,(t) = {{0}_{{n \times n}}}\), it holds for the unknown parameters Θ that Θ = θ. The identification law (3.5) then coincides with (2.7) up to the definition of the adaptive gain γ, from which it follows that (3.5) ensures b1–b5 when \(\bar {\varphi }\)(t) \( \in \) FE/\(\bar {\varphi }\)(t) \( \in \) PE.

(2) The following function, in which time arguments are omitted for the sake of brevity, is introduced:

$$\forall t \in [t_{r}^{ + };\,\,{{t}_{e}}]\;\;L = {{\tilde {\theta }}^{{\text{T}}}}\tilde {\theta }.$$
(A.2)

Equation (A.2) is differentiated along the solutions of (3.5) to obtain:

$$\dot {L} = - 2{{\tilde {\theta }}^{{\text{T}}}}(\gamma \omega (\omega \hat {\theta } - \omega \theta + \omega V{{\bar {\Lambda }}^{{ - 1}}}\,\Xi \,{{V}^{{\text{T}}}}\theta )) = - 2{{\tilde {\theta }}^{{\text{T}}}}\gamma {{\omega }^{2}}\tilde {\theta } - 2{{\tilde {\theta }}^{{\text{T}}}}\gamma {{\omega }^{2}}V{{\bar {\Lambda }}^{{ - 1}}}\,\Xi \,{{V}^{{\text{T}}}}\theta .$$
(A.3)

Considering Assumption 1 and the definition of γ, the upper bound of (A.3) for all \(t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ] \subset [t_{r}^{ + };\,\,{{t}_{e}}]\) is written as:

$$\dot {L}\;\leqslant \; - {\kern 1pt} 2{{\tilde {\theta }}^{{\text{T}}}}\frac{{{{\gamma }_{0}}}}{{{{\omega }^{2}}}}{{\omega }^{2}}\tilde {\theta } - 2{{\tilde {\theta }}^{{\text{T}}}}\frac{{{{\gamma }_{0}}}}{{{{\omega }^{2}}}}{{\omega }^{2}}V{{\bar {\Lambda }}^{{ - 1}}}\,\Xi \,{{V}^{{\text{T}}}}\theta \;\leqslant \; - {\kern 1pt} 2{{\tilde {\theta }}^{{\text{T}}}}{{\gamma }_{0}}\tilde {\theta } - 2{{\tilde {\theta }}^{{\text{T}}}}{{\gamma }_{0}}V{{\bar {\Lambda }}^{{ - 1}}}\,\Xi \,{{V}^{{\text{T}}}}\theta \;\leqslant \; - {\kern 1pt} 2{{\gamma }_{0}}{{\left\| {\tilde {\theta }} \right\|}^{2}} + 2{{\gamma }_{0}}\left\| {\tilde {\theta }} \right\|{{\theta }_{{\max }}}.$$
(A.4)

Here, to obtain (A.4), the spectral norm of the multiplier \(V{{\bar {\Lambda }}^{{ - 1}}}\Xi {{V}^{{\text{T}}}}\) is used; its value is one because the matrices V and VT are orthogonal.
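A quick numerical check of this spectral-norm claim (illustrative values; the names below are not from the paper): the product \({{\bar {\Lambda }}^{{ - 1}}}\Xi \) equals diag(\({{0}_{r}},\,{{I}_{{\bar {r}}}}\)), so \(V{{\bar {\Lambda }}^{{ - 1}}}\Xi {{V}^{{\text{T}}}}\) is an orthogonal projector with spectral norm exactly one whenever \(\bar {r}\) > 0.

```python
import numpy as np

# Verify ||V Lbar^{-1} Xi V^T||_2 = 1 for an orthogonal V and the block
# matrices of (A.45): Lbar^{-1} Xi = diag(0_r, I_rbar).
rng = np.random.default_rng(1)
n, r, eps = 5, 3, 1e-2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))          # orthogonal matrix
Lbar_inv = np.diag(np.r_[1.0 / rng.uniform(1.0, 2.0, r),
                         np.full(n - r, 1.0 / eps)])
Xi = np.diag(np.r_[np.zeros(r), np.full(n - r, eps)])
M = V @ Lbar_inv @ Xi @ V.T
print(np.linalg.norm(M, 2))                               # 1.0 up to rounding
```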

Setting a = \(\sqrt {2{{\gamma }_{0}}} \left\| {\tilde {\theta }} \right\|\), b = \(\sqrt {2{{\gamma }_{0}}} {{\theta }_{{\max }}}\) and using the inequality –a2 + ab \(\leqslant \) \( - \frac{1}{2}{{a}^{2}} + \frac{1}{2}{{b}^{2}}\), we obtain from (A.4):

$$\dot {L}\;\leqslant \; - {\kern 1pt} {{\gamma }_{0}}{{\left\| {\tilde {\theta }} \right\|}^{2}} + {{\gamma }_{0}}\theta _{{\max }}^{2}.$$
(A.5)
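For readability, the substitution can be written out explicitly (routine algebra, added here for completeness):

$$ - {{a}^{2}} + ab = - 2{{\gamma }_{0}}{{\left\| {\tilde {\theta }} \right\|}^{2}} + 2{{\gamma }_{0}}\left\| {\tilde {\theta }} \right\|{{\theta }_{{\max }}}\;\leqslant \; - \frac{1}{2}{{a}^{2}} + \frac{1}{2}{{b}^{2}} = - {{\gamma }_{0}}{{\left\| {\tilde {\theta }} \right\|}^{2}} + {{\gamma }_{0}}\theta _{{\max }}^{2},$$

which is exactly the right-hand side of (A.5).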

The solution of the differential inequality (A.5) for all \(t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ]\) is written as:

$$\forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ]\;\;L\;\leqslant \;{{e}^{{ - {{\gamma }_{0}}(t - {{t}_{\delta }})}}}{{\left\| {\tilde {\theta }({{t}_{\delta }})} \right\|}^{2}} + \theta _{{\max }}^{2}.$$
(A.6)

Considering (A.6), L = \({{\left\| {\tilde {\theta }} \right\|}^{2}}\) and the fact that for all c, d the inequality \(\sqrt {{{c}^{2}} + {{d}^{2}}} \;\leqslant \;\sqrt {{{c}^{2}}} + \sqrt {{{d}^{2}}} \) holds, we obtain:

$$\left\| {\tilde {\theta }({{t}_{\delta }} + \delta )} \right\|\;\leqslant \;{{e}^{{ - 0.5{{\gamma }_{0}}\delta }}}\left\| {\tilde {\theta }({{t}_{\delta }})} \right\| + \theta _{{\max }}^{{}}.$$
(A.7)

In the most conservative case, it holds that ω(t) \( \equiv \) 0 for all \(t \in \{ [t_{r}^{ + };\,\,{{t}_{\delta }}],\;[{{t}_{\delta }} + \delta ;\,\,{{t}_{e}}]\} \); therefore, the inequalities \(\left\| {\tilde {\theta }(t_{r}^{ + })} \right\|\; \geqslant \;\left\| {\tilde {\theta }({{t}_{\delta }})} \right\|\) and \(\left\| {\tilde {\theta }({{t}_{e}})} \right\|\;\leqslant \;\left\| {\tilde {\theta }({{t}_{\delta }} + \delta )} \right\|\) also hold, using which (A.7) is rewritten as:

$$\left\| {\tilde {\theta }({{t}_{e}})} \right\|\;\leqslant \;{{e}^{{ - 0.5{{\gamma }_{0}}\delta }}}\left\| {\tilde {\theta }(t_{r}^{ + })} \right\| + \theta _{{\max }}^{{}}.$$
(A.8)

The premise (2.1) is substituted into (A.8) to obtain:

$$\left\| {\tilde {\theta }({{t}_{e}})} \right\|\;\leqslant \;\left( {{{e}^{{ - 0.5{{\gamma }_{0}}\delta }}} + \frac{1}{{{{\beta }_{1}}}}} \right)\left\| {\tilde {\theta }(t_{r}^{ + })} \right\|.$$
(A.9)

Hence, the choice of γ0 on the basis of the condition

$$0 < {{e}^{{ - 0.5{{\gamma }_{0}}\delta }}} + \frac{1}{{{{\beta }_{1}}}} < 1 \Leftrightarrow {{\gamma }_{0}} > \frac{{ - 2\ln \left( {1 - \frac{1}{{{{\beta }_{1}}}}} \right)}}{\delta }$$
(A.10)

allows one to ensure that the premise (2.2) also holds and, as a consequence, obtain the following:

$$\left\| {\tilde {\theta }({{t}_{e}})} \right\|\;\leqslant \;\underbrace {\left( {{{e}^{{ - 0.5{{\gamma }_{0}}\delta }}} + \frac{1}{{{{\beta }_{1}}}}} \right)}_{0 < \beta < 1}\left\| {\tilde {\theta }(t_{r}^{ + })} \right\|,$$
(A.11)

which means that the error \(\tilde {\theta }\)(t) decreases over the time range [\(t_{r}^{ + };\,\,{{t}_{e}}\)].
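As a numerical illustration of the gain condition (A.10) (the values are chosen for this example only): for β1 = 4 and δ = 0.5 one obtains

$${{\gamma }_{0}} > \frac{{ - 2\ln \left( {1 - \frac{1}{4}} \right)}}{{0.5}} = - 4\ln 0.75 \approx 1.151,$$

so, e.g., γ0 = 2 yields the contraction factor β = \({{e}^{{ - 0.5}}} + 0.25 \approx 0.857\) < 1.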

The substitution of (A.11) into the upper bound of \(\tilde {z}\)(te) yields:

$$\left| {\tilde {z}({{t}_{e}})} \right|\;\leqslant \;{{\bar {\varphi }}_{{\max }}}\left\| {\tilde {\theta }({{t}_{e}})} \right\|\;\leqslant \;{{\bar {\varphi }}_{{\max }}}\beta \left\| {\tilde {\theta }(t_{r}^{ + })} \right\| = \beta \left| {\tilde {z}(t_{r}^{ + })} \right|,$$
(A.12)

which completes the proof of the second statement and verifies the convergence of (3.5) when \(\bar {\varphi }\)(t) \( \in \) s-FE and the premises (2.1) and (2.2) hold.

(3) The derivative of \(\tilde {\Theta }\)(t) is calculated to prove the third statement:

$$\dot {\tilde {\Theta }}(t) = - \gamma (t){{\omega }^{2}}(t)\tilde {\Theta }(t) - \dot {\Theta }(t).$$
(A.13)

The general solution of the differential equation (A.13) is:

$$\tilde {\Theta }(t) = \phi (t,{{t}_{0}})\tilde {\Theta }({{t}_{0}}) - \int\limits_{{{t}_{0}}}^t {\phi (t,\tau )\dot {\Theta }(\tau )d\tau } ,$$
(A.14)

where \(\phi (t,\,\,s) = {{e}^{{ - \int_s^t {\gamma (\tau ){{\omega }^{2}}(\tau )d\tau } }}}\).

Since, owing to \(\sqrt {{{\gamma }_{1}}} \notin {{L}_{2}}\), \(\frac{{\sqrt {{{\gamma }_{0}}} }}{{\omega (t)}} \notin {{L}_{2}}\), and ω(t) \( \notin \) L2, it is true for all possible switches of the nonlinear operator in (3.5) that \(\sqrt \gamma \omega (t) \notin {{L}_{2}}\), the function \(\phi (t,\,\,s)\) has the following properties:

$$\sqrt \gamma \omega (t) \notin {{L}_{2}} \Leftrightarrow \left\{ \begin{gathered} 0 < \phi (t,s)\;\leqslant \;1, \hfill \\ \mathop {\lim }\limits_{t \to \infty } \phi (t,s) = 0. \hfill \\ \end{gathered} \right.$$
(A.15)
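Both properties follow directly from the exponential form of \(\phi (t,\,\,s)\): the exponent is the integral of a non-negative function, and it diverges exactly when \(\sqrt \gamma \omega \notin {{L}_{2}}\), that is,

$$\int\limits_s^t {\gamma (\tau ){{\omega }^{2}}(\tau )d\tau } \; \geqslant \;0 \Rightarrow 0 < \phi (t,s)\;\leqslant \;1,\quad \sqrt \gamma \omega \notin {{L}_{2}} \Rightarrow \int\limits_s^\infty {\gamma (\tau ){{\omega }^{2}}(\tau )d\tau } = \infty \Rightarrow \mathop {\lim }\limits_{t \to \infty } \phi (t,s) = 0.$$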

Using the first property, the upper bound of (A.14) is obtained:

$$\tilde {\Theta }(t)\;\leqslant \;\phi (t,\,\,{{t}_{0}})\tilde {\Theta }({{t}_{0}}) - \Theta (t).$$
(A.16)

On the basis of (A.16) and definitions \(\tilde {\Theta }\)(t) = \(\tilde {\theta }\)(t) + d(t), \(\Theta \)(t) = θ – d(t) we have:

$$\tilde {\theta }(t)\;\leqslant \;\phi (t,\,\,{{t}_{0}})\tilde {\Theta }({{t}_{0}}) - \theta .$$
(A.17)

From this, based on the second property of (A.15), it follows that \({{\lim }_{{t \to \infty }}}\left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;{{\theta }_{{{\text{max}}}}}\), which completes the proof of the third statement of the theorem.

(4) When the condition \(\bar {\varphi }\)(t) \( \in \) s-PE is met, in accordance with the third statement of Proposition 1 for all \(t\; \geqslant \;kT\) it holds that \(\omega (t)\; \geqslant \;\min \{ \lambda _{{\min }}^{n}(t),{{\varepsilon }^{n}}\} \) > 0 and, consequently, the function \(\phi (t,\,\,kT)\) is written as:

$$\phi (t,\,\,kT) = {{e}^{{ - {{\gamma }_{0}}(t - kT)}}}.$$
(A.18)

Then, having solved (A.13) for all \(t\; \geqslant \;kT\), the following is obtained in a similar manner to (A.14)–(A.17):

$$\left\| {\tilde {\theta }(t)} \right\|\;\leqslant \;{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left\| {\tilde {\Theta }(kT)} \right\| + \theta _{{\max }}^{{}},$$
(A.19)

from which it follows that, when \(\bar {\varphi }\)(t) \( \in \) s-PE, the error \(\tilde {\theta }\)(t) exponentially converges to the set bounded by θmax, which completes the proof of the theorem.

Proof of Theorem 2. (I) To prove the first statement of Theorem 2, Eq. (3.4) is written in the element-wise form:

$${{\Upsilon }_{i}}(t) = \omega (t){{\Theta }_{i}},\quad \forall i \in \{ 1, \ldots ,n\} .$$
(A.20)

Given (A.20), the law (3.5) for all \(i \in \{ 1, \ldots ,n\} \) is written as follows:

$${{\dot {\hat {\theta }}}_{i}}(t) = {{\dot {\tilde {\Theta }}}_{i}}(t) = - \gamma (t)\omega (t)(\omega (t){{\hat {\theta }}_{i}}(t) - \omega (t){{\Theta }_{i}}) = - \gamma (t){{\omega }^{2}}(t){{\tilde {\Theta }}_{i}}(t).$$
(A.21)

As γ(t)ω2(t) \( \geqslant \) 0, the sign of \({{\tilde {\Theta }}_{i}}(t)\) does not change, i.e. sgn {\({{\tilde {\Theta }}_{i}}(t)\)} = const, and it holds for \({{\tilde {\Theta }}_{i}}\)(t) that \(\left| {{{{\tilde {\Theta }}}_{i}}({{t}_{a}})} \right|\;\leqslant \;\left| {{{{\tilde {\Theta }}}_{i}}({{t}_{b}})} \right|\) \(\forall {{t}_{a}}\; \geqslant \;{{t}_{b}}\), which was to be proved in part I of the theorem.

(II) When \(\bar {\varphi }\)(t) \( \in \) s-FE and Assumption 2 is met, in accordance with Corollary 4 the solution of Eq. (A.13) over [\({{t}_{\delta }};\,\,{{t}_{\delta }} + \delta \)] is written as:

$$\tilde {\Theta }(t) = \phi (t,\,\,{{t}_{\delta }})\tilde {\Theta }({{t}_{\delta }}) = {{e}^{{ - {{\gamma }_{0}}(t - {{t}_{\delta }})}}}\tilde {\Theta }({{t}_{\delta }}).$$
(A.22)

Considering the most conservative case, for all \(t \in \{ [t_{r}^{ + };\,\,{{t}_{\delta }}],\;[{{t}_{\delta }} + \delta ;\,\,{{t}_{e}}]\} \) it holds that ω(t) ≡ 0; therefore, we have the inequalities \(\left\| {\tilde {\Theta }(t_{r}^{ + })} \right\|\; \geqslant \;\left\| {\tilde {\Theta }({{t}_{\delta }})} \right\|\), \(\left\| {\tilde {\Theta }({{t}_{e}})} \right\|\;\leqslant \;\left\| {\tilde {\Theta }({{t}_{\delta }} + \delta )} \right\|\), on the basis of which the upper bound of \(\tilde {\Theta }\)(t) at the time instant te is obtained:

$$\left\| {\tilde {\Theta }({{t}_{e}})} \right\|\;\leqslant \;{{e}^{{ - {{\gamma }_{0}}\delta }}}\left\| {\tilde {\Theta }(t_{r}^{ + })} \right\|.$$
(A.23)

The definition β = \({{e}^{{ - {{\gamma }_{0}}\delta }}} \in (0;\,\,1)\) is introduced into (A.23) to complete the proof that the error \(\tilde {\Theta }\)(t) decreases over [\(t_{r}^{ + };\,\,{{t}_{e}}\)].

To prove the reduction of the error \(\tilde {z}\)(t), the following implication, which holds owing to \(V_{1}^{{\text{T}}}(t){{V}_{2}}\) = \({{0}_{{r \times \bar {r}}}}\), is taken into consideration:

$$\begin{gathered} y(t) = \varphi (t)\theta = {{V}_{1}}(t){{\Lambda }_{1}}(t)V_{1}^{{\text{T}}}(t)(\theta - {{V}_{2}}V_{2}^{{\text{T}}}\theta ) = \varphi (t)(\theta - {{V}_{2}}V_{2}^{{\text{T}}}\theta ) = \varphi (t)\Theta \\ = \int\limits_{t_{0}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\bar {\varphi }(\tau ){{{\bar {\varphi }}}^{{\text{T}}}}(\tau )d\tau \Theta } = \int\limits_{t_{0}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\bar {\varphi }(\tau )z(\tau )d\tau } \\ = \,\,\,\int\limits_{t_{0}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\bar {\varphi }(\tau )\underbrace {{{{\bar {\varphi }}}^{{\text{T}}}}(\tau )\theta }_{z(\tau )}d\tau } = \int\limits_{t_{0}^{ + }}^t {{{e}^{{ - l(t - \tau )}}}\bar {\varphi }(\tau )\underbrace {{{{\bar {\varphi }}}^{{\text{T}}}}(\tau )\Theta }_{z(\tau )}d\tau } , \\ \Updownarrow \\ z(t) = {{{\bar {\varphi }}}^{{\text{T}}}}(t)\theta = {{{\bar {\varphi }}}^{{\text{T}}}}(t)(\theta - {{V}_{2}}V_{2}^{{\text{T}}}\theta ) = {{{\bar {\varphi }}}^{{\text{T}}}}(t)\Theta . \\ \end{gathered} $$
(A.24)

Then, considering (A.22), the upper bound of the tracking error is written as:

$$\forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ]\;\;\left| {\tilde {z}(t)} \right|\;\leqslant \;{{\bar {\varphi }}_{{\max }}}{{e}^{{ - {{\gamma }_{0}}(t - {{t}_{\delta }})}}}\left\| {\tilde {\Theta }({{t}_{\delta }})} \right\|,$$
(A.25)

from which, owing to (A.23), we immediately have:

$$\left| {\tilde {z}({{t}_{e}})} \right|\;\leqslant \;{{\bar {\varphi }}_{{\max }}}\beta \left\| {\tilde {\Theta }(t_{r}^{ + })} \right\| = \beta \left| {\tilde {z}(t_{r}^{ + })} \right|,$$
(A.26)

which was to be proved in part II.

(III) When Assumption 2 is met, for all \(t \in [{{t}_{0}};\,\,\infty )\) the solution of the error equation (A.13) is written as:

$$\tilde {\Theta }(t) = \phi (t,\,\,{{t}_{0}})\tilde {\Theta }({{t}_{0}}),$$
(A.27)

from which, according to the second property of (A.15), it follows that:

$$\sqrt {\gamma (t)} \omega (t) \notin {{L}_{2}} \Leftrightarrow {{\lim }_{{t \to \infty }}}\left\| {\tilde {\Theta }(t)} \right\| = 0,$$
(A.28)

which holds for all possible variants of switches of the nonlinear operator (3.5) owing to \(\sqrt {{{\gamma }_{1}}} \notin {{L}_{2}}\), \(\frac{{\sqrt {{{\gamma }_{0}}} }}{{\omega (t)}} \notin {{L}_{2}}\), and \(\omega (t) \notin {{L}_{2}}\).

Applying the implication (A.28) to the upper bound obtained from (A.24), we have:

$$\sqrt {\gamma (t)} \omega (t) \notin {{L}_{2}} \Leftrightarrow {{\lim }_{{t \to \infty }}}\left| {\tilde {z}(t)} \right|\;\leqslant \;{{\lim }_{{t \to \infty }}}\left( {{{{\bar {\varphi }}}_{{\max }}}\left\| {\tilde {\Theta }(t)} \right\|} \right) = 0.$$
(A.29)

Thus, all statements of the third part of Theorem 2 are proved.

(IV) When \(\bar {\varphi }\)(t) \( \in \) s-PE, then (A.18) holds \(\forall t\; \geqslant \;kT\), and therefore the following bound is obtained on the basis of (A.22):

$$\forall t\; \geqslant \;kT\;\;\left\| {\tilde {\Theta }(t)} \right\|\;\leqslant \;{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left\| {\tilde {\Theta }(kT)} \right\|,$$
(A.30)

which proves the exponential convergence of the error \(\tilde {\Theta }\)(t) to zero for all \(t\; \geqslant \;kT\).

Having (A.30) at hand, considering the boundedness \(\left\| {\bar {\varphi }(t)} \right\|\;\leqslant \;{{\bar {\varphi }}_{{\max }}}\) and using (A.24), the exponential convergence of the error \(\tilde {z}\)(t) for all \(t\; \geqslant \;kT\) can be proved in a similar way to (A.25), which completes the proof of Theorem 2.

Proof of Theorem 3. When \(\bar {\varphi }\)(t) \( \in \) s-PE, on the basis of the third statement of Proposition 1, it holds for all \(t\; \geqslant \;kT\) that \(\omega (t)\; \geqslant \;\min \{ \lambda _{{\min }}^{n}(t),\,\,{{\varepsilon }^{n}}\} \) > 0, and therefore Eq. (A.13) is written as:

$$\forall t\; \geqslant \;kT\;\;\dot {\tilde {\Theta }}(t) = - {{\gamma }_{0}}\tilde {\Theta }(t) - \dot {\Theta }(t).$$
(A.31)

Owing to Assumption 3, the derivative \(\dot {\Theta }\)(t) is written as follows according to (3.7):

$$\dot {\Theta }(t) = \sum\limits_{j = 1}^\infty {{{\Delta }_{j}}\delta (t - {{t}_{j}})} .$$
(A.32)

Considering (A.32), the solution of the differential Eq. (A.31) is obtained:

$$\forall t\; \geqslant \;kT\;\;\tilde {\Theta }(t) = {{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\tilde {\Theta }(kT) - \int\limits_{kT}^t {{{e}^{{ - {{\gamma }_{0}}(t - \tau )}}}\sum\limits_{j = 1}^\infty {{{\Delta }_{j}}\delta (\tau - {{t}_{j}})d\tau } } .$$
(A.33)

Following the sifting property of the Dirac delta function, for any continuous function f(t) we have:

$$\int\limits_{{{t}_{0}}}^t {f(\tau )\delta (\tau - {{t}_{j}})d\tau } = \left. {f({{t}_{j}})h(\tau - {{t}_{j}})} \right|_{{{{t}_{0}}}}^{t} = f({{t}_{j}})h(t - {{t}_{j}}) - f({{t}_{j}})\underbrace {h({{t}_{0}} - {{t}_{j}})}_{ = 0} \equiv f({{t}_{j}})h(t - {{t}_{j}}).$$
(A.34)

On the basis of (A.34) Eq. (A.33) is rewritten as:

$$\forall t\; \geqslant \;kT\;\;\tilde {\Theta }(t) = {{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\tilde {\Theta }(kT) - \sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(t - {{t}_{j}})}}}{{\Delta }_{j}}h(t - {{t}_{j}})} .$$
(A.35)

Multiplying (A.35) by \({{\tilde {\Theta }}^{{\text{T}}}}(kT)\), we obtain:

$$\forall t\; \geqslant \;kT\;\;{{\tilde {\Theta }}^{{\text{T}}}}(kT)\tilde {\Theta }(t) = {{e}^{{ - {{\gamma }_{0}}(t - kT)}}}{{\left\| {\tilde {\Theta }(kT)} \right\|}^{2}} - \sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(t - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} .$$
(A.36)
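The transition from (A.36) to (A.37) relies on the elementary factoring

$${{e}^{{ - {{\gamma }_{0}}(t - {{t}_{j}})}}} = {{e}^{{ - {{\gamma }_{0}}(t - kT)}}}{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}},$$

which makes the common multiplier \({{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\) explicit in each term of the sum.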

The term \({{e}^{{ - {{\gamma }_{0}}(t - kT)}}}{{\left\| {\tilde {\Theta }(kT)} \right\|}^{2}}\) is factored out on the right-hand side of Eq. (A.36) to obtain, for all \(t\; \geqslant \;kT\):

$$\begin{gathered} {{{\tilde {\Theta }}}^{{\text{T}}}}(kT)\tilde {\Theta }(t) = \underbrace {\left( {1 - \frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right)}_{ \in R}{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT)\tilde {\Theta }(kT), \\ \tilde {\Theta }(t) = \left( {1 - \frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right){{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\tilde {\Theta }(kT), \\ \end{gathered} $$
(A.37)

where \(\left\| {\tilde {\Theta }(kT)} \right\| \ne 0\) since for all \(t \in [{{t}_{0}};\,\,kT)\) \(\omega (t) \equiv 0 \Rightarrow \dot {\hat {\theta }}(t)\) = \(0 \Rightarrow \left\| {\tilde {\Theta }(kT)} \right\|\; \geqslant \;\left\| {\tilde {\Theta }({{t}_{0}})} \right\|\).

Equation (A.37) allows one to obtain the first expression from (3.8) up to the following notation:

$$a({{t}_{j}}) = \left| {1 - \frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right|.$$
(A.38)

Thus, the exponential recovery of the parameter error \(\tilde {\Theta }\)(t) to its equilibrium point is proved.

Having (A.24) at hand, the upper bound of the tracking error \(\left| {\tilde {z}(t)} \right|\) is written as:

$$\forall t\; \geqslant \;kT\;\;\left| {\tilde {z}(t)} \right|\;\leqslant \;a({{t}_{j}}){{\bar {\varphi }}_{{\max }}}{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left\| {\tilde {\Theta }(kT)} \right\| = a({{t}_{j}}){{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left| {\tilde {z}(kT)} \right|.$$
(A.39)

Therefore, the exponential recovery of the error \(\tilde {z}\)(t) to its equilibrium point is also proved.

If, additionally, for a(tj) there exists an upper bound amax, then it is immediately obtained from (3.8) that:

$$\left\{ \begin{gathered} \mathop {\lim }\limits_{t \to \infty } \left\| {\tilde {\Theta }(t)} \right\|\;\leqslant \;\mathop {\lim }\limits_{t \to \infty } \left( {{{a}_{{\max }}}{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left\| {\tilde {\Theta }(kT)} \right\|} \right) = 0 \hfill \\ \mathop {\lim }\limits_{t \to \infty } \left| {\tilde {z}(t)} \right|\;\leqslant \;\mathop {\lim }\limits_{t \to \infty } \left( {{{a}_{{\max }}}{{{\bar {\varphi }}}_{{\max }}}{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left\| {\tilde {\Theta }(kT)} \right\|} \right) = \mathop {\lim }\limits_{t \to \infty } \left( {{{a}_{{\max }}}{{e}^{{ - {{\gamma }_{0}}(t - kT)}}}\left| {\tilde {z}(kT)} \right|} \right) = 0. \hfill \\ \end{gathered} \right.$$
(A.40)

Hence, the tracking error \(\tilde {z}\)(t) and the parameter error \(\tilde {\Theta }\)(t) are exponentially stable, which completes the proof of Theorem 3.

Proof of Corollary 5. According to the first statement of Corollary 5, it is assumed that the number of Θ(t) changes is finite: j \(\leqslant \) jmax < ∞.

Then the following upper bound of the function a(tj) is obtained:

$$\begin{gathered} a({{t}_{j}}) = \left| {1 - \frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^{{{j}_{{\max }}}} {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right|\;\leqslant \;1 + \left| {\frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^{{{j}_{{\max }}}} {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right| \\ \leqslant \;1 + \,\,\frac{1}{{\left\| {\tilde {\Theta }(kT)} \right\|}}\sum\limits_{j = 1}^{{{j}_{{\max }}}} {\left\| {{{\Delta }_{j}}} \right\|{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}h(t - {{t}_{j}})} . \\ \end{gathered} $$
(A.41)

Since, when j is finite, the number of time instants tj is also finite, the exponential multiplier in the sum (A.41) is bounded, and the following definition holds:

$$a({{t}_{j}})\;\leqslant \;1 + \,\,\frac{1}{{\left\| {\tilde {\Theta }(kT)} \right\|}}\sum\limits_{j = 1}^{{{j}_{{\max }}}} {\left\| {{{\Delta }_{j}}} \right\|{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}h(t - {{t}_{j}}) = {{a}_{{\max }}}} ,$$
(A.42)

which was to be proved in the first part of the corollary.

To prove the second statement of the corollary, the upper bound of \(\left\| {{{\Delta }_{j}}} \right\|\) is taken into consideration, and the upper bound of a(tj) is obtained similarly to (A.42), but under the condition of an infinite number of switches:

$$a({{t}_{j}})\;\leqslant \;1 + \left| {\frac{1}{{{{{\left\| {\tilde {\Theta }(kT)} \right\|}}^{2}}}}\sum\limits_{j = 1}^\infty {{{e}^{{ - {{\gamma }_{0}}(kT - {{t}_{j}})}}}{{{\tilde {\Theta }}}^{{\text{T}}}}(kT){{\Delta }_{j}}h(t - {{t}_{j}})} } \right|\;\leqslant \;1 + \sum\limits_{j = 1}^\infty {c({{t}_{j}})h(t - {{t}_{j}})} .$$
(A.43)

The series in (A.43) has positive terms, and all its partial sums are bounded owing to the monotonicity 0 < c(\({{t}_{{j + 1}}}\)) \(\leqslant \) c(tj); therefore, 1 + \(\sum\nolimits_{j = 1}^\infty {c({{t}_{j}})h(t - {{t}_{j}})} \) \(\leqslant \) amax, which completes the proof of Corollary 5.

Proof of Proposition 2. Since, when \(\bar {\varphi }\)(t) \( \in \) FE/\(\bar {\varphi }\)(t) \( \in \) PE, the following implications hold according to Corollaries 1 and 2:

$$\begin{gathered} \bar {\varphi }(t) \in {\text{PE}} \Leftrightarrow \forall t\; \geqslant \;kT\;\;\lambda _{{\min }}^{{}}(t) > \mu > 0, \\ \bar {\varphi }(t) \in {\text{FE}} \Leftrightarrow \forall t \in [{{t}_{\delta }};\,\,{{t}_{\delta }} + \delta ] \subset [t_{r}^{ + };\,\,{{t}_{e}}]\;\;\lambda _{{\min }}^{{}}(t) > \mu > 0, \\ \end{gathered} $$

then, when \(\bar {r}\) = 0, according to (3.3) we have Ξ(t) = \({{0}_{{n \times n}}}\), as a result \({{\bar {\Lambda }}^{{ - 1}}}(t)\,\Xi \,(t)\) = \({{0}_{{n \times n}}}\) and, consequently, \(\bar {\varphi }\)(t) \( \in \) FE/\(\bar {\varphi }\)(t) \( \in \) PE \( \Rightarrow \) d(t) = 0n \( \Rightarrow \) Θ(t) = θ, which completes the proof of statement (a) of Proposition 2.

The necessity of the conditions \(\bar {\varphi }\)(t) \( \in \) s-FE/\(\bar {\varphi }\)(t) \( \in \) s-PE follows from the fact that the premises of statement (b) are consistent (\(\exists p > 0\) \(\sum\nolimits_{i = 1}^{n - p} {{{w}_{i}}{{\varphi }_{i}}(t)} \) = 0n, \({{w}_{i}} \ne 0\)) only if 0 < r < n. The necessity of the condition n > 2 follows from the contradiction that occurs when n = 2 in the general case (φ1(t) \( \ne \) 0n):

$${{w}_{1}}{{\varphi }_{1}}(t) + {{w}_{2}}{{\varphi }_{2}}(t) = {{0}_{n}}\quad {{w}_{1}} \ne 0,\quad {{w}_{2}} = 0.$$

The next step is to prove the necessity and sufficiency of the following condition to ensure that \(\exists M \subset \{ 1, \ldots ,n\} \), |M| = p, \(\forall i \in M\), Θi = θi:

$$\sum\limits_{i = 1}^{n - p} {{{w}_{i}}{{\varphi }_{i}}(t)} + \sum\limits_{j = n - p + 1}^n {{{w}_{j}}{{\varphi }_{j}}(t)} = {{0}_{n}},\quad {{w}_{i}} \ne 0,\quad {{w}_{j}} = 0.$$
(A.44)

Necessity. To begin with, it should be noted that, according to (3.5), the elements of the vector of new unknown parameters Θ coincide with the elements of the vector of original parameters θ if the corresponding elements of the vector d are equal to zero. Therefore, d is considered in more detail. If \(\bar {r}\) > 0, the product \({{\bar {\Lambda }}^{{ - 1}}}(t)\,\Xi \,(t)\) has the following structure:

$${{\bar {\Lambda }}^{{ - 1}}}(t)\,\Xi \,(t) = \left[ {\begin{array}{*{20}{c}} {\Lambda _{1}^{{ - 1}}(t)}&{{{0}_{{r \times \bar {r}}}}} \\ {{{0}_{{\bar {r} \times r}}}}&{{{\varepsilon }^{{ - 1}}}{{I}_{{\bar {r}}}}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{{0}_{r}}}&{{{0}_{{r \times \bar {r}}}}} \\ {{{0}_{{\bar {r} \times r}}}}&{\varepsilon {{I}_{{\bar {r}}}}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{{0}_{r}}}&{{{0}_{{r \times \bar {r}}}}} \\ {{{0}_{{\bar {r} \times r}}}}&{{{I}_{{\bar {r}}}}} \end{array}} \right].$$
(A.45)

Then, owing to the notation (3.4), the definition of d is rewritten as:

$$d = V(t){{\bar {\Lambda }}^{{ - 1}}}(t)\,\Xi \,(t){{V}^{{\text{T}}}}(t)\theta = {{V}_{2}}V_{2}^{{\text{T}}}\theta = {{[{{d}_{1}} \ldots {{d}_{i}} \ldots {{d}_{n}}]}^{{\text{T}}}},$$
(A.46)

from which it follows that d has p zero elements if, in particular, the number of zero rows and columns of the matrix \({{V}_{2}}V_{2}^{{\text{T}}}\) is p, which, in turn, is satisfied when the matrix V2 has p zero rows.

Following the definition of the singular value decomposition of a positive semi-definite symmetric matrix [15, 16], the matrix V2 can be obtained as a solution of a homogeneous system of linear algebraic equations:

$$\varphi (t)V_{2}^{k} = \sum\limits_{i = 1}^n {{v}_{i}^{k}{{\varphi }_{i}}(t)} = {{0}_{n}},\quad \forall k \in \{ 1, \ldots ,\bar {r}\} ,$$
(A.47)

where \(V_{2}^{k}\) is the kth column of the matrix V2.
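A minimal numerical sketch of solving (A.47) (it uses a generic rank-deficient matrix rather than the symmetric extended regressor of the paper; all names and tolerances are illustrative): the right singular vectors of φ associated with (near-)zero singular values form V2, and every zero row of V2 marks an index i with di = 0, i.e. an identifiable parameter Θi = θi.

```python
import numpy as np

# phi with two linearly dependent columns (w1, w2 != 0, w3 = w4 = 0):
# the null-space basis V2 then has zero rows 3 and 4.
rng = np.random.default_rng(2)
n = 4
phi1 = rng.standard_normal(n)
phi2 = 2.0 * phi1                         # dependent pair
phi3, phi4 = rng.standard_normal((2, n))
phi = np.column_stack([phi1, phi2, phi3, phi4])

_, s, Vt = np.linalg.svd(phi)
V2 = Vt.T[:, s < 1e-10]                   # null-space columns, rbar of them
zero_rows = np.where(np.all(np.abs(V2) < 1e-10, axis=1))[0]
print(zero_rows)                          # [2 3] -> Theta_3 = theta_3, Theta_4 = theta_4
```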

To prove the necessity of the condition (A.44), it is to be shown that if \({{w}_{j}} \ne 0\), then the vector \(V_{2}^{k}\), \(\forall k \in \{ 1, \ldots ,\bar {r}\} \), does not contain zero elements.

The expression (A.47) can be rewritten in the following equivalent form (taking into account the orthonormality of \(V_{2}^{k}\), \(\forall k \in \{ 1, \ldots ,\bar {r}\} \)):

$$\begin{gathered} \varphi (t)V_{2}^{k} = \sum\limits_{i = 1}^n {{v}_{i}^{k}{{\varphi }_{i}}(t)} = \frac{1}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }}\sum\limits_{i = 1}^n {{{w}_{i}}{{\varphi }_{i}}(t)} = \frac{1}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }}\left( {\sum\limits_{i = 1}^{n - p} {{{w}_{i}}{{\varphi }_{i}}(t)} + \sum\limits_{j = n - p + 1}^n {{{w}_{j}}{{\varphi }_{j}}(t)} } \right) \\ = \sum\limits_{i = 1}^{n - p} {{v}_{i}^{k}{{\varphi }_{i}}(t)} + \sum\limits_{j = n - p + 1}^n {{v}_{j}^{k}{{\varphi }_{j}}(t)} = {{0}_{n}}{\kern 1pt} . \\ \end{gathered} $$
(A.48)

Since we consider only nontrivial solutions to find \(V_{2}^{k}\), if the condition (A.44) is not satisfied, the set of solutions is given as follows:

$${v}_{i}^{k} = \frac{{{{w}_{i}}}}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }} \ne 0;\quad {v}_{j}^{k} = \frac{{{{w}_{j}}}}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }} \ne 0,$$

and then \(V_{2}^{k}\), \(\forall k \in \{ 1, \ldots ,\bar {r}\} \), does not include zero elements and, consequently, \(\not {\exists }{{d}_{i}} = 0 \Rightarrow \not {\exists }M \subset \{ 1, \ldots ,n\} \), |M| = p, \(\forall i \in M\), Θi = θi, which completes the proof of the necessity of the condition (A.44).

Sufficiency. Following the statement of the proposition, when the condition (A.44) is met, the solution set of the equation of the form (A.47) is defined as follows:

$${v}_{i}^{k} = \frac{{{{w}_{i}}}}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }} \ne 0;\quad {v}_{j}^{k} = \frac{{{{w}_{j}}}}{{\sqrt {\sum\limits_{i = 1}^n {w_{i}^{2}} } }} = 0,$$

and then the vector \(V_{2}^{k}\), \(\forall k \in \{ 1, \ldots ,\bar {r}\} \), includes p zero elements and, consequently, \(\exists M \subset \{ 1, \ldots ,n\} \), |M| = p, \(\forall i \in M\), Θi = θi, which completes the proof of the sufficiency of the condition (A.44).

Thus, the condition (A.44) is necessary and sufficient for the identifiability of p elements of the unknown parameters vector θ, which completes the proof of the second statement of Proposition 2.

Cite this article

Glushchenko, A.I. and Lastochkin, K.A., Relaxation of Conditions for Convergence of Dynamic Regressor Extension and Mixing Procedure, Autom. Remote Control, 2023, vol. 84, pp. 14–41. https://doi.org/10.1134/S0005117923010046