Abstract
An adaptive state-feedback control system is proposed for a class of linear time-varying systems represented in the controller canonical form. The adaptation problem is reduced to that of estimating Taylor-series-based first approximations of the ideal controller parameters. Exponential convergence of the identification and tracking errors of such an approximation to an arbitrarily small and adjustable neighbourhood of the equilibrium point is ensured, provided that the regressor satisfies a persistent-excitation condition with a sufficiently small time period. The obtained theoretical results are validated via numerical experiments.
Notes
Otherwise there exists a time instant \(t_a \geqslant t_0^+\) at which \(b(t_a) = 0\), and the equations from Assumption 2 have no solution in the general case (\(b_{\mathrm{ref}} \neq 0\), \(a_{\mathrm{ref}} - a(t_a) \neq 0_n\)).
REFERENCES
Polyak, B.T. and Tsypkin, Ya.Z., Optimal Pseudogradient Adaptation Algorithms, Autom. Remote Control, 1981, vol. 41, pp. 1101–1110.
Polyak, B.T. and Tsypkin, Ya.Z., Robust Pseudogradient Adaptation Algorithms, Autom. Remote Control, 1981, vol. 41, no. 10, pp. 1404–1409.
Ioannou, P. and Sun, J., Robust Adaptive Control, New York: Dover, 2013.
Fradkov, A.L., Lyapunov-Bregman functions for speed-gradient adaptive control of nonlinear time-varying systems, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 544–548.
Goel, R. and Roy, S.B., Composite adaptive control for time-varying systems with dual adaptation, arXiv preprint arXiv:2206.01700, 2022, pp. 1–6.
Na, J., Xing, Y., and Costa-Castello, R., Adaptive estimation of time-varying parameters with application to roto-magnet plant, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018, vol. 51, no. 2, pp. 731–741.
Chen, K. and Astolfi, A., Adaptive control for systems with time-varying parameters, IEEE Transactions on Automatic Control, 2020, vol. 66, no. 5, pp. 1986–2001.
Patil, O.S., Sun, R., Bhasin, S., and Dixon, W.E., Adaptive control of time-varying parameter systems with asymptotic tracking, IEEE Transactions on Automatic Control, 2022, vol. 67, no. 9, pp. 4809–4815.
Putov, V.V., Methods of Adaptive Control Systems Design for Nonlinear Time-Varying Dynamic Plants with Functional-Parametric Uncertainty, Doctoral (Eng.) Thesis, St. Petersburg: SPbGETU “LETI,” 1993 [in Russian].
Putov, V.V., Polushin, I.G., Lebedev, V.V., and Putov, A.V., Generalisation of the majorizing functions method for the problems of adaptive control of nonlinear dynamic plants, Izvestiya SPbGETU LETI, 2013, no. 8, pp. 32–37.
Glushchenko, A. and Lastochkin, K., Exponentially Stable Adaptive Control, Part III: Time-Varying Plants, Autom. Remote Control, 2023, vol. 84, no. 11, pp. 1232–1247.
Pagilla, P.R. and Zhu, Y., Adaptive control of mechanical systems with time-varying parameters and disturbances, J. Dyn. Sys., Meas., Control, 2004, vol. 126, no. 3, pp. 520–530.
Quoc, D.V., Bobtsov, A.A., Nikolaev, N.A., and Pyrkin, A.A., Stabilization of a linear non-stationary system under conditions of delay and additive sinusoidal perturbation of the output, Journal of Instrument Engineering, 2021, vol. 64, no. 2, pp. 97–103.
Dat, V.Q. and Bobtsov, A.A., Output Control by Linear Time-Varying Systems using Parametric Identification Methods, Mekhatronika, Avtomatizatsiya, Upravlenie, 2020, vol. 21, no. 7, pp. 387–393.
Grigoryev, V.V., Design of control equations for variable parameter systems, Autom. Remote Control, 1983, vol. 44, no. 2, pp. 189–194.
Glushchenko, A. and Lastochkin, K., Robust Time-Varying Parameters Estimation Based on I-DREM Procedure, IFAC-PapersOnLine, 2022, vol. 55, no. 12, pp. 91–96.
Dieudonné, J., Foundations of Modern Analysis, New York: Academic Press, 1960.
Leiva, H. and Siegmund, S., A necessary algebraic condition for controllability and observability of linear time-varying systems, IEEE Transactions on Automatic Control, 2003, vol. 48, no. 12, pp. 2229–2232.
Glushchenko, A.I. and Lastochkin, K.A., Exponentially Stable Adaptive Control. Part II. Switched Systems, Autom. Remote Control, 2023, vol. 84, no. 3, pp. 260–291.
Khalil, H., Nonlinear Systems, 3rd ed., Upper Saddle River, NJ, USA: Prentice-Hall, 2002.
Funding
This research was supported in part by the Grants Council of the President of the Russian Federation (project MD-1787.2022.4).
Additional information
This paper was recommended for publication by P.S. Shcherbakov, member of the Editorial Board
APPENDIX
Proof of Proposition 1. The proof of the proposition is divided into two steps. At the first step we analyse the properties of the parametric error \(\tilde{\theta}(t)\); at the second step, the properties of the tracking error \(e_{\mathrm{ref}}(t)\).
Step 1. Owing to Proposition 1 from [19], if i \(\leqslant \) imax < ∞, then for the differential equation
the following upper bound holds
where \(\dot{\theta}(t) = \sum\limits_{q=1}^{i} \Delta_q^{\theta}\,\delta(t - t_q^+)\) and \(\delta(\cdot)\) is the Dirac delta function, i.e., \(\theta(t)\) is piecewise constant with jumps \(\Delta_q^{\theta}\) at the time instants \(t_q^+\).
Step 2. The following quadratic form is introduced:
where \(\bar{e}_{\mathrm{ref}}(t) = \left[\begin{matrix} e_{\mathrm{ref}}^{\mathrm{T}}(t) & e^{-\gamma_1 (t - t_0^+)} \end{matrix}\right]^{\mathrm{T}}\), and \(P = P^{\mathrm{T}} > 0\) is the solution of the Lyapunov equation given below in the case \(\lambda_{\min}(Q) > 2\):
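For a Hurwitz reference matrix \(A_{\mathrm{ref}}\), the Lyapunov equation in question presumably takes the standard form of state-feedback MRAC (given here as a sketch under this assumption, not as a quotation of the original):
\[
A_{\mathrm{ref}}^{\mathrm{T}} P + P A_{\mathrm{ref}} = -Q, \qquad Q = Q^{\mathrm{T}} > 0,\ \lambda_{\min}(Q) > 2,
\]
which admits a unique solution \(P = P^{\mathrm{T}} > 0\) whenever \(A_{\mathrm{ref}}\) is Hurwitz.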
The derivative of the quadratic form (A.2) is written as:
where
Having applied Young’s inequality twice:
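Recall that Young's inequality in its standard scalar form states that, for any \(a, b \in \mathbb{R}\) and any \(\varepsilon > 0\),
\[
2ab \leqslant \varepsilon a^2 + \varepsilon^{-1} b^2,
\]
which allows each cross term in (A.3) to be split into a negative-definite part and a bounded residual.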
equation (A.3) is rewritten as:
As shown at Step 1, the parametric error \(\tilde{\theta}(t)\) converges to zero exponentially; hence, if \(\lambda_{\min}(Q) > 2\), there definitely exists a time instant \(t_{e_{\mathrm{ref}}} \geqslant t_0^+\) and constants \(T_{\min} > 0\), \(a_0 > \lambda_{\max}(P)\,\bar{\omega}_r b_{\max}\beta_{\max}\) such that for all \(t \geqslant t_{e_{\mathrm{ref}}}\) and \(0 < T < T_{\min}\) it holds that
Then the upper bound of the derivative (A.5) for all \(t \geqslant t_{e_{\mathrm{ref}}}\) is written as
where \(\eta_{\bar{e}_{\mathrm{ref}}} = \min\left\{ \dfrac{c_1}{\lambda_{\max}(P)},\ \dfrac{c_2\gamma_1}{a_0^2} \right\}\).
The solution of the differential inequality (A.7) for all \(t \geqslant t_{e_{\mathrm{ref}}}\) is obtained as
Letting time tend to infinity in (A.8) and considering the expression for \(V_{e_{\mathrm{ref}}}\), it is concluded that (2.3) holds, which completes the proof.
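For the reader's convenience, the mechanism behind (A.7), (A.8) is the standard comparison lemma; a sketch, assuming the derivative bound takes the form \(\dot{V}_{e_{\mathrm{ref}}} \leqslant -\eta_{\bar{e}_{\mathrm{ref}}} V_{e_{\mathrm{ref}}} + c\) with a constant \(c > 0\), gives
\[
V_{e_{\mathrm{ref}}}(t) \leqslant e^{-\eta_{\bar{e}_{\mathrm{ref}}} (t - t_{e_{\mathrm{ref}}})} V_{e_{\mathrm{ref}}}(t_{e_{\mathrm{ref}}}) + \frac{c}{\eta_{\bar{e}_{\mathrm{ref}}}}\left(1 - e^{-\eta_{\bar{e}_{\mathrm{ref}}} (t - t_{e_{\mathrm{ref}}})}\right),
\]
so \(V_{e_{\mathrm{ref}}}(t)\) converges exponentially to a ball of radius \(c/\eta_{\bar{e}_{\mathrm{ref}}}\), which can be made arbitrarily small by the choice of the design parameters.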
Proof of Proposition 2. Owing to Assumption 2 and following (3.2), (3.3), we apply the Taylor formula (1.3) to the parameters Θ(t) to obtain:
where \(\Theta(t_i^+) = \Theta_i\) and \(\dot{\Theta}(t_i^+) = \dot{\Theta}_i\) are the values of the system parameters \(\Theta(t)\) and of their rate of change at the time instant \(t_i^+\), \(\|\delta_1(t)\| \leqslant 0.5\ddot{\Theta}_{\max}T^2\) denotes the bounded remainder of the first order (p = 1), and \(\|\delta_0(t)\| \leqslant \dot{\Theta}_{\max}T\) is the bounded remainder of the zeroth order (p = 0).
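Written out under the stated remainder bounds, the first-order expansion (A.9) presumably reads (a reconstruction from the definitions above, not a verbatim quotation):
\[
\Theta(t) = \Theta_i + \dot{\Theta}_i\,(t - t_i^+) + \delta_1(t), \qquad \|\delta_1(t)\| \leqslant 0.5\,\ddot{\Theta}_{\max} T^2,
\]
for all \(t \in [t_i^+,\ t_i^+ + T)\); the zeroth-order counterpart is \(\Theta(t) = \Theta_i + \delta_0(t)\) with \(\|\delta_0(t)\| \leqslant \dot{\Theta}_{\max} T\).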
Equation (A.9) is rewritten in the matrix form
where \(\vartheta(t) = \left[\begin{matrix} \Theta_i^{\mathrm{T}} & \dot{\Theta}_i^{\mathrm{T}} \end{matrix}\right]^{\mathrm{T}} \in \mathbb{R}^{2(n+1)}\).
The substitution of (A.10) into (2.1) yields
The expression x(t) – \(l\bar {x}(t)\) is differentiated to obtain
The solution of (A.12) is written as
where \(\bar{\vartheta}(t) = \left[\begin{matrix} \vartheta^{\mathrm{T}}(t) & e_n^{\mathrm{T}} x(t_i^+) \end{matrix}\right]^{\mathrm{T}} \in \mathbb{R}^{2n+3}\), and the third equality is not violated, since the reset of the filter states (4.1) and the change of the parameters occur synchronously at the known time instant \(t_i^+\), i.e., \(\bar{\vartheta}(t) = \mathrm{const}\) for all \(t \in [t_i^+,\ t_i^+ + T)\).
Equation (A.13) is substituted into (4.2) to obtain
where \(\bar{z}_n(t) \in \mathbb{R}\), \(\bar{\varphi}_n(t) \in \mathbb{R}^{2n+3}\), and the perturbation \(\bar{\varepsilon}_0(t) \in \mathbb{R}\) is bounded as follows (see the definitions of \(\Phi(t)\) and \(\bar{\varphi}_n(t)\)):
Owing to the multiplication of the regression equation (A.14) by \(n_s(t)\), the regressor \(\bar{\varphi}_n^{\mathrm{T}}(t)\), the regressand \(\bar{z}_n(t)\), and the perturbation \(\bar{\varepsilon}_0(t)\) are bounded. In addition, according to the upper bound (A.15), the perturbation \(\bar{\varepsilon}_0(t)\) can be reduced by decreasing the parameter T. Therefore, in what follows we use the notation \(\bar{\varepsilon}_0(t) := \bar{\varepsilon}_0(t, T)\) and imply that any perturbation obtained by a transformation of \(\bar{\varepsilon}_0(t, T)\) can also be reduced by reducing T.
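As a minimal numerical illustration of this \(T\)-dependence (assuming, purely for the example, a scalar parameter \(\Theta(t) = \sin t\), which is not from the original experiments), the worst-case first-order remainder over a window of length \(T\) indeed shrinks as \(O(T^2)\):

```python
import numpy as np

def max_first_order_remainder(theta, dtheta, t_i, T, n_grid=1000):
    """Worst-case |Theta(t) - Theta_i - dTheta_i*(t - t_i)| over [t_i, t_i + T)."""
    t = np.linspace(t_i, t_i + T, n_grid)
    approx = theta(t_i) + dtheta(t_i) * (t - t_i)
    return np.max(np.abs(theta(t) - approx))

# Hypothetical scalar parameter Theta(t) = sin(t), so ddTheta_max = 1.
theta, dtheta = np.sin, np.cos
for T in (0.5, 0.25, 0.125):
    rem = max_first_order_remainder(theta, dtheta, t_i=1.0, T=T)
    # Theoretical bound 0.5 * ddTheta_max * T^2.
    print(f"T={T:5.3f}: remainder={rem:.2e}, bound={0.5 * T**2:.2e}")
```

Halving T quarters both the empirical remainder and the bound, in line with \(\|\delta_1(t)\| \leqslant 0.5\ddot{\Theta}_{\max}T^2\).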
Having applied (4.3) and multiplied z(t) by \(\mathrm{adj}\{\varphi(t)\}\), we have (the commutativity of the filter (4.3a) is not violated, as its reinitialization and the change of the parameters happen synchronously at the known time instant \(t_i^+\), i.e., \(\bar{\vartheta}(t) = \mathrm{const}\) for all \(t \in [t_i^+,\ t_i^+ + T)\))
where \(Y(t) \in \mathbb{R}^{2n+3}\), \(\Delta(t) \in \mathbb{R}\), \(\bar{\varepsilon}_1(t, T) \in \mathbb{R}^{2n+3}\).
Owing to Δ(t) ∈ \(\mathbb{R}\), the elimination (4.5) allows one to obtain the following from (A.16)
where \(z_a(t) \in \mathbb{R}^{1\times n}\), \(z_b(t) \in \mathbb{R}\), and \(\vartheta_a(t)\), \(\vartheta_b(t)\) are the first-order approximations of the parameters a(t) and b(t), respectively (components of the vector \(\Theta_i\)).
In case Assumption 2 is met, following the definition of the signal \(\mathcal{K}(t)\), the first-order approximations \(\theta_x(t)\) and \(\theta_r(t)\) of the parameters \(k_x(t)\) and \(k_r(t)\), respectively, satisfy the equations
where \(\theta(t) = \left[\begin{matrix} \theta_x(t) & \theta_r(t) \end{matrix}\right]^{\mathrm{T}}\).
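The structure of (A.18) follows the classical model-matching conditions of state-feedback MRAC; under Assumption 2 (see also the Notes above, where \(b_{\mathrm{ref}}\) and \(a_{\mathrm{ref}} - a(t)\) appear), a plausible sketch of these conditions for the controller canonical form is (an assumption, not a quotation of the original)
\[
\vartheta_b(t)\,\theta_x(t) = a_{\mathrm{ref}} - \vartheta_a^{\mathrm{T}}(t), \qquad \vartheta_b(t)\,\theta_r(t) = b_{\mathrm{ref}},
\]
i.e., the first-order approximations \(\theta_x(t)\), \(\theta_r(t)\) inherit the matching equations satisfied by the ideal gains \(k_x(t)\), \(k_r(t)\).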
Each equation from (A.18) is multiplied by Δ(t). Equations (A.17) are substituted into the obtained result to have equation (4.6):
where \(\mathcal{Y}(t) \in \mathbb{R}^{n+1}\), \(\mathcal{M}(t) \in \mathbb{R}\), \(d(t, T) \in \mathbb{R}^{n+1}\).
Owing to (A.19), the solution of (4.7a) is written as
where
Equation (A.20) completes the proof of the fact that equation (4.8) can be obtained via procedure (4.1)–(4.7).
In order to prove statement (a), the regressor Ω(t) is represented as:
Since k > 0 and the perturbation \(\bar{\varepsilon}_1(t, T)\) is bounded, \(\Omega_2(t)\) is bounded; moreover, for all \(t \geqslant t_0^+\) the following holds
and the upper bound satisfies \(\lim_{T \to 0} \Omega_{2\max}(T) = 0\), since, following (A.15)–(A.19), the value of \(\bar{\varepsilon}_1(t, T)\) can be made arbitrarily small by reducing T.
The next aim is to analyze \(\Omega_1(t)\). The solution of the first differential equation from (A.21) is written for all \(t \in [t_i^+ + T_s,\ t_{i+1}^+)\) as
where \(\phi(t, \tau) = e^{-\int_\tau^t k\,ds} = e^{-k(t - \tau)}\).
An upper bound is required for the signal \(\Omega_1(t)\) over the time range under consideration. To this end, we need bounds for \(\Delta(t)\) and, in turn, for \(\varphi(t)\).
Since, according to the premises of the proposition, \(\bar{\varphi}_n \in \mathrm{PE}\) with \(T_s < T\), it follows that \(\bar{\varphi}_n \in \mathrm{FE}\) over \([t_i^+,\ t_i^+ + T_s]\) (this fact can be verified by substituting \(t = t_i^+\) into (1.2)). Then for all \(t \in [t_i^+ + T_s,\ t_{i+1}^+)\) the following lower bound holds for the regressor \(\varphi(t)\)
On the other hand, as \(\left\|\bar{\varphi}_n(t)\right\|^2 \leqslant \bar{\varphi}_n^{\max}\), there also exists an upper bound
and, therefore, for all \(t \in [t_i^+ + T_s,\ t_{i+1}^+)\) it holds that \(\Delta_{\mathrm{UB}} \geqslant \Delta(t) \geqslant \Delta_{\mathrm{LB}} > 0\).
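For completeness, recall the standard definitions behind these abbreviations: the regressor \(\bar{\varphi}_n\) is persistently exciting (PE) if there exist \(T_s > 0\) and \(\alpha > 0\) such that for all \(t \geqslant t_0^+\)
\[
\int_t^{t+T_s} \bar{\varphi}_n(\tau)\,\bar{\varphi}_n^{\mathrm{T}}(\tau)\,d\tau \geqslant \alpha I,
\]
and it is finitely exciting (FE) over \([t_i^+,\ t_i^+ + T_s]\) if the same integral inequality holds on that single interval. Substituting \(t = t_i^+\) into the PE condition immediately yields the FE property used above.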
Taking into consideration that, following Assumptions 1 and 2, \(b_{\max} \geqslant |b(t)| \geqslant b_{\min} > 0\) and \(\vartheta_b(t)\) is the first-order approximation of b(t), the following holds for the product \(\Delta(t)\vartheta_b(t)\)
Applying (A.21) and (A.26) and taking into account that \(0 \leqslant \phi(t, \tau) \leqslant 1\), the following estimates hold for \(\Omega_1(t)\)
from which we have
Then, using (A.28) and (A.23), the bounds for the regressor Ω(t) are written
and, therefore, considering \(\lim_{T \to 0} \Omega_{2\max}(T) = 0\), there exists \(T_{\min} > 0\) such that for all \(0 < T < T_{\min}\) and \(t \geqslant t_0^+ + T_s\) the following inequality holds
which was to be proved in statement (a).
In order to prove statement (b), the disturbance w(t) is differentiated using (A.20) and (4.7):
The solution of (A.31) is represented as:
As for the first differential equation from (A.32), in Proposition 2 from [19] it is proved (up to notation) that the following inequality holds
when i \(\leqslant \) imax < ∞.
Since k > 0 and the disturbance d(t, T) is bounded, \(w_2(t)\) is also bounded and, consequently, the following inequality holds
where the limit \(\lim_{T \to 0} w_{2\max}(T) = 0\) holds, as the input of the second differential equation from (A.32) depends only on the value of d(t, T), which, in its turn, according to (A.15)–(A.19), can be made arbitrarily small by reducing T. The combination of the inequalities (A.33) and (A.34) in accordance with (A.32) completes the proof of the proposition.
Proof of Theorem 1. The proof of the theorem is similar to the above proof of Proposition 1.
Step 1. For all t \( \geqslant \) \(t_{0}^{ + }\) + Ts the solution of the differential equation (4.9) is written as
where \(\phi(t, \tau) = e^{-\int_\tau^t \gamma_1\,ds} = e^{-\gamma_1 (t - \tau)}\).
Then, following the proof of Theorem 1 from [19], if i \(\leqslant \) imax < ∞, then the boundedness of the parametric error (A.35) can be shown:
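To illustrate the mechanism behind such a bound, a minimal simulation of a scalar gradient-type law is given below; the law \(\dot{\hat{\theta}} = \gamma_1 \Omega\,(\mathcal{Y} - \Omega \hat{\theta})\) and all numerical values are assumptions made for the sketch, not the paper's exact equation (4.9). When \(\Omega(t) \geqslant \Omega_{\mathrm{LB}} > 0\), the parametric error decays exponentially with rate at least \(\gamma_1 \Omega_{\mathrm{LB}}^2\):

```python
import numpy as np

# Hypothetical scalar setup: gradient law dtheta_hat = gamma1*Om*(Y - Om*theta_hat),
# with the regressor Omega(t) bounded away from zero: Omega(t) >= Omega_LB > 0.
gamma1, Omega_LB, theta_true = 10.0, 0.5, 2.0
Omega = lambda t: Omega_LB + 0.3 * abs(np.sin(t))

dt, t_end = 1e-3, 5.0
theta_hat, err0 = 0.0, 2.0                     # initial estimate and |error(0)|
for t in np.arange(0.0, t_end, dt):
    Om = Omega(t)
    Y = Om * theta_true                        # measurable regressand Y = Omega*theta
    theta_hat += dt * gamma1 * Om * (Y - Om * theta_hat)   # explicit Euler step

# The error obeys |err(t)| <= |err(0)| * exp(-gamma1 * Omega_LB**2 * t).
print(f"final error: {abs(theta_true - theta_hat):.2e}, "
      f"envelope: {err0 * np.exp(-gamma1 * Omega_LB**2 * t_end):.2e}")
```

The printed final error lies below the exponential envelope, in line with the decay rate implied by the lower regressor bound established in the proof of Proposition 2.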
Step 2. The following quadratic form is introduced for all t \( \geqslant \) \(t_{0}^{ + }\) + Ts:
Similar to the proof of Proposition 1, the derivative of (A.37) is written as
Since for all \(t \geqslant t_0^+ + T_s\) the parametric error \(\tilde{\theta}(t)\) meets the inequality (A.36), then, considering
the upper bound of (A.38) is written as follows:
There definitely exists a time instant \(t_{e_{\mathrm{ref}}} \geqslant t_0^+ + T_s\) and constants \(T_{\min} > 0\), \(a_0 > \lambda_{\max}(P)\,\bar{\omega}_r b_{\max}\bar{\beta}_{\max}^{1/2}\) such that for all \(t \geqslant t_{e_{\mathrm{ref}}}\) and \(0 < T < T_{\min}\) it holds that
Then the upper bound for the derivative (A.39) for all \(t \geqslant t_{e_{\mathrm{ref}}}\) is obtained as
where \(\eta_{\bar{e}_{\mathrm{ref}}} = \min\left\{ \dfrac{c_1}{\lambda_{\max}(P)},\ \dfrac{c_2\gamma_1}{4a_0^2} \right\}\).
The solution of the differential inequality (A.41) for all \(t \geqslant t_{e_{\mathrm{ref}}}\) is written as
which completes the proof of statement (ii) of the theorem.
Step 3. Owing to (A.36) and (A.42), the error \(\tilde{\theta}(t)\) is bounded for all \(t \geqslant t_0^+ + T_s\), and the error \(e_{\mathrm{ref}}(t)\) is bounded for all \(t \geqslant t_{e_{\mathrm{ref}}}\). Then, to prove statement (i), we need to show that \(\tilde{\theta}(t)\) is bounded over \([t_0^+,\ t_0^+ + T_s)\) and that \(e_{\mathrm{ref}}(t)\) is bounded over \([t_0^+,\ t_{e_{\mathrm{ref}}})\).
In the conservative case, the inequality \(\Omega(t) \leqslant \Omega_{\mathrm{LB}}\) is satisfied over \([t_0^+,\ t_0^+ + T_s)\), whence, owing to \(\dot{\tilde{\theta}}(t) = 0_{n+1}\) and Assumption 1, it follows that the parametric error \(\tilde{\theta}(t) = \hat{\theta}(t_0^+) - \theta(t)\) is bounded over \([t_0^+,\ t_0^+ + T_s)\) and, as a consequence, for all \(t \geqslant t_0^+\).
Considering the time range \([t_0^+,\ t_{e_{\mathrm{ref}}})\) and taking into account the notation from (A.3), (A.18), the error equation (3.1) is written in the following form:
which, as it has been proved that \(\tilde{\theta}(t)\) is bounded for all \(t \geqslant t_0^+\) and Assumptions 1 and 2 are met, allows one, using Theorem 3.2 from [20], to conclude that (1) \(e_{\mathrm{ref}}(t)\) is bounded over \([t_0^+,\ t_{e_{\mathrm{ref}}})\) and (2) \(\xi(t) \in L_\infty\) for all \(t \geqslant t_0^+\).