
About the Method for Constructing External Estimates of the Limit Controllability Set for the Linear Discrete-Time System with Bounded Control

  • LINEAR SYSTEMS
  • Published in: Automation and Remote Control

Abstract

This article considers the problem of constructing an external estimate of the limit controllability set for a linear discrete-time system with convex control constraints. We propose a decomposition method that reduces the problem for the initial system to subsystems of smaller dimension by passing to the Jordan normal basis of the system matrix. A statement about the structure of the supporting hyperplane to the limit controllability set is formulated and proved. Based on the contraction mapping principle, a method is proposed for constructing an external estimate of the limit controllability set with an arbitrary order of accuracy in the sense of the Hausdorff distance. Examples are provided.


REFERENCES

  1. Sirotin, A.N. and Formal’skii, A.M., Reachability and Controllability of Discrete-Time Systems under Control Actions Bounded in Magnitude and Norm, Autom. Remote Control, 2003, vol. 64, no. 12, pp. 1844–1857.

  2. Fisher, M.E. and Gayek, J.E., Estimating Reachable Sets for Two-Dimensional Linear Discrete Systems, J. Optim. Theory Appl., 1988, vol. 56, no. 1, pp. 67–88.

  3. Desoer, C.A. and Wing, J., The Minimal Time Regulator Problem for Linear Sampled-Data Systems: General Theory, J. Franklin Inst., 1961, vol. 272, no. 3, pp. 208–228.

  4. Hamza, M.H. and Rasmy, M.E., A Simple Method for Determining the Reachable Set for Linear Discrete Systems, IEEE Trans. on Automat. Control, 1971, vol. 16, pp. 281–282.

  5. Corradini, M.L., Cristofaro, A., Giannoni, F., and Orlando, G., Estimation of the Null Controllable Region: Discrete-Time Plants, Control Systems with Saturating Inputs, Lecture Notes in Control and Information Sciences, Springer, 2012, vol. 424, pp. 33–52.

  6. Hu, T., Miller, D.E., and Qiu, L., Null Controllable Region of LTI Discrete-Time Systems with Input Saturation, Automatica, 2002, vol. 38, no. 11, pp. 2009–2013.

  7. Kalman, R., On the General Theory of Control Systems, Proceedings of the I Intern. Congr. IFAC, 1961, vol. 2, pp. 521–547.

  8. Boltyansky, V.G., Matematicheskie metody optimal’nogo upravleniya (Mathematical Methods of Optimal Control), Moscow: Nauka, 1969.

  9. Pontryagin, L.S., Boltyansky, V.G., Gamkrelidze, R.V., and Mishchenko, E.F., Matematicheskaya teoriya optimal’nykh protsessov (Mathematical Theory of Optimal Processes), Moscow: Nauka, 1969.

  10. Moiseev, N.N., Elementy teorii optimal’nykh sistem (Elements of the Theory of Optimal Systems), Moscow: Nauka, 1975.

  11. Agrachev, A.A. and Sachkov, Yu.L., Geometricheskaya teoriya upravleniya (Geometric Control Theory), Moscow: Nauka, 2005.

  12. Yevtushenko, Yu.G., Metody resheniya ekstremal’nykh zadach i ikh prilozheniya v sistemakh optimizatsii (Methods for Solving Extremal Problems and Their Applications in Optimization Systems), Moscow: Nauka, 1982.

  13. Holtzman, J.M. and Halkin, H., Directional Convexity and the Maximum Principle for Discrete Systems, J. SIAM Control, 1966, vol. 4, no. 2, pp. 263–275.

  14. Propoi, A.I., Elementy teorii optimal’nykh diskretnykh protsessov (Elements of the Theory of Optimal Discrete Processes), Moscow: Nauka, 1973.

  15. Bellman, R., Dinamicheskoe programmirovanie (Dynamic Programming), Moscow: Inostrannaya Literatura, 1960.

  16. Kurzhanskiy, A.F. and Varaiya, P., Theory and Computational Techniques for Analysis of Discrete-Time Control Systems with Disturbances, Optim. Methods Softw., 2011, vol. 26, nos. 4–5, pp. 719–746.

  17. Boltyansky, V.G., Optimal’noe upravlenie diskretnymi sistemami (Optimal Control of Discrete Systems), Moscow: Nauka, 1973.

  18. Lin, W.-S., Time-Optimal Control Strategy for Saturating Linear Discrete Systems, Int. J. Control, 1986, vol. 43, no. 5, pp. 1343–1351.

  19. Moroz, A.I., Synthesis of Time-Optimal Control for Linear Discrete Objects of the Third Order, Autom. Remote Control, 1965, vol. 25, no. 9, pp. 193–206.

  20. Ibragimov, D.N. and Sirotin, A.N., On the Problem of Optimal Speed for the Discrete Linear System with Bounded Scalar Control on the Basis of 0-Controllability Sets, Autom. Remote Control, 2015, vol. 76, no. 9, pp. 1517–1540.

  21. Ibragimov, D.N., Speed-Optimal Control of the Motion of a Balloon, Internet Magazine, Tr. MAI, 2015, no. 83. http://trudymai.ru/published.php

  22. Ibragimov, D.N., Approximation of the Set of Admissible Controls in the Time-Optimal Problem by a Linear Discrete System, Internet Magazine, Tr. MAI, 2016, no. 87. http://trudymai.ru/published.php.

  23. Ibragimov, D.N. and Portseva, E.Yu., Outer Approximation Algorithm for a Convex Set of Admissible Controls for a Discrete System with Bounded Control, Modeling and Data Analysis, 2019, no. 2, pp. 83–98.

  24. Rockafellar, R., Vypuklyi analiz (Convex Analysis), Moscow: Mir, 1973.

  25. Horn, R. and Johnson, C., Matrichnyi analiz (Matrix Analysis), Moscow: Mir, 1989.

  26. Ibragimov, D.N. and Sirotin, A.N., On the Problem of Operation Speed for the Class of Linear Infinite-Dimensional Discrete-Time Systems with Bounded Control, Autom. Remote Control, 2017, vol. 78, no. 10, pp. 1731–1756.

  27. Kronover, R.M., Fraktaly i khaos v dinamicheskikh sistemakh (Fractals and Chaos in Dynamical Systems), Moscow: Postmarket, 2000.

  28. Kolmogorov, A.N. and Fomin, S.V., Elementy teorii funktsii i funktsional’nogo analiza (Elements of the Theory of Functions and Functional Analysis), Moscow: Fizmatlit, 2012.

Download references

Author information

Authors and Affiliations

Authors

Corresponding authors

Correspondence to A. V. Berendakova or D. N. Ibragimov.

Additional information

This paper was recommended for publication by M.V. Khlebnikov, a member of the Editorial Board.

APPENDIX

Proof of Lemma 1. Denote the initial states of the systems (A1, \({{\mathcal{U}}_{1}}\)) and (A2, \({{\mathcal{U}}_{2}}\)) by x0, 1 ∈ \({{\mathbb{R}}^{{{{n}_{1}}}}}\) and x0, 2 ∈ \({{\mathbb{R}}^{{{{n}_{2}}}}}\), respectively. Then x0 = \(\left( \begin{gathered} {{x}_{{0,1}}} \\ {{x}_{{0,2}}} \\ \end{gathered} \right)\) is the initial state of the system (A, \(\mathcal{U}\)).

By (1) it is true that for all N ∈ \(\mathbb{N}\)

$$x(N) = {{A}^{N}}{{x}_{0}} + {{A}^{{N - 1}}}u(0) + {{A}^{{N - 2}}}u(1) + \ldots + u(N - 1)$$
$$ = \left( {\begin{array}{*{20}{c}} {A_{1}^{N}}&O \\ O&{A_{2}^{N}} \end{array}} \right)\left( \begin{gathered} {{x}_{{0,1}}} \\ {{x}_{{0,2}}} \\ \end{gathered} \right) + \left( {\begin{array}{*{20}{c}} {A_{1}^{{N - 1}}}&O \\ O&{A_{2}^{{N - 1}}} \end{array}} \right)\left( \begin{gathered} {{u}_{1}}(0) \\ {{u}_{2}}(0) \\ \end{gathered} \right) + \ldots + \left( \begin{gathered} {{u}_{1}}(N - 1) \\ {{u}_{2}}(N - 1) \\ \end{gathered} \right)$$
$$ = \left( \begin{gathered} A_{1}^{N}{{x}_{{0,1}}} + A_{1}^{{N - 1}}{{u}_{1}}(0) + \ldots + {{u}_{1}}(N - 1) \\ A_{2}^{N}{{x}_{{0,2}}} + A_{2}^{{N - 1}}{{u}_{2}}(0) + \ldots + {{u}_{2}}(N - 1) \\ \end{gathered} \right) = \left( \begin{gathered} {{x}_{1}}(N) \\ {{x}_{2}}(N) \\ \end{gathered} \right).$$

Then x(N) = 0 if and only if there exist u1(0), …, u1(N – 1) ∈ \({{\mathcal{U}}_{1}}\) and u2(0), …, u2(N – 1) ∈ \({{\mathcal{U}}_{2}}\) such that x1(N) = 0 and x2(N) = 0. By virtue of (2), these equalities are equivalent to the inclusions x0, 1 ∈ \({{\mathcal{X}}_{1}}\)(N) and x0, 2 ∈ \({{\mathcal{X}}_{2}}\)(N). Hence,

$$\mathcal{X}(N) = {{\mathcal{X}}_{1}}(N)\,\, \times \,\,{{\mathcal{X}}_{2}}(N).$$

Let x0 ∈ \({{\mathcal{X}}_{\infty }}\). Then according to (3) there exists \(\tilde {N} \in \mathbb{N}\) such that

$${{x}_{0}} \in \mathcal{X}(\tilde {N}) = {{\mathcal{X}}_{1}}(\tilde {N})\,\, \times \,\,{{\mathcal{X}}_{2}}(\tilde {N}) \subset \bigcup\limits_{N = 0}^\infty {{{\mathcal{X}}_{1}}(N)} \,\, \times \,\,\bigcup\limits_{N = 0}^\infty {{{\mathcal{X}}_{2}}(N)} = {{\mathcal{X}}_{{1,\infty }}}\,\, \times \,\,{{\mathcal{X}}_{{2,\infty }}}.$$

Then \({{\mathcal{X}}_{\infty }}\) ⊂ \({{\mathcal{X}}_{{1,\infty }}}\) × \({{\mathcal{X}}_{{2,\infty }}}\).

Let x0 ∈ \({{\mathcal{X}}_{{1,\infty }}}\) × \({{\mathcal{X}}_{{2,\infty }}}\). Then there are \({{\tilde {N}}_{1}}\), \({{\tilde {N}}_{2}} \in \mathbb{N}\) such that x0 ∈ \({{\mathcal{X}}_{1}}({{\tilde {N}}_{1}})\) × \({{\mathcal{X}}_{2}}({{\tilde {N}}_{2}})\) ⊂ \({{\mathcal{X}}_{1}}(\tilde {N})\) × \({{\mathcal{X}}_{2}}(\tilde {N})\), where \(\tilde {N}\) = max{\({{\tilde {N}}_{1}}\), \({{\tilde {N}}_{2}}\)}. Then, by point 1 of Lemma 1,

$${{x}_{0}} \in \mathcal{X}(\tilde {N}) \subset \bigcup\limits_{N = 0}^\infty {\mathcal{X}(N)} = {{\mathcal{X}}_{\infty }}.$$

It follows that \({{\mathcal{X}}_{{1,\infty }}}\) × \({{\mathcal{X}}_{{2,\infty }}} \subset {{\mathcal{X}}_{\infty }}\).

Finally, we get that \({{\mathcal{X}}_{\infty }}\) = \({{\mathcal{X}}_{{1,\infty }}}\) × \({{\mathcal{X}}_{{2,\infty }}}\). Lemma 1 is proved.
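
As a sanity check of this block decomposition (not part of the original paper), the following Python sketch propagates a block-diagonal system and verifies that the stacked trajectory coincides with the pair of subsystem trajectories; the matrices, horizon, and box-type controls are illustrative assumptions.

```python
import numpy as np

# Illustrative check of the block decomposition in the proof of Lemma 1:
# for A = diag(A1, A2), the trajectory of (A, U) with stacked controls equals
# the stacked trajectories of (A1, U1) and (A2, U2).
rng = np.random.default_rng(0)

A1 = np.array([[0.5, 1.0], [0.0, 0.5]])       # hypothetical 2x2 block
A2 = np.array([[-0.8]])                       # hypothetical 1x1 block
A = np.block([[A1, np.zeros((2, 1))], [np.zeros((1, 2)), A2]])

N = 6
x01, x02 = rng.normal(size=2), rng.normal(size=1)
u1 = rng.uniform(-1.0, 1.0, size=(N, 2))      # admissible controls of subsystem 1
u2 = rng.uniform(-1.0, 1.0, size=(N, 1))      # admissible controls of subsystem 2

def propagate(M, x0, u):
    """x(N) from formula (1): x(k+1) = M x(k) + u(k)."""
    x = x0.copy()
    for k in range(len(u)):
        x = M @ x + u[k]
    return x

x_full = propagate(A, np.concatenate([x01, x02]), np.hstack([u1, u2]))
x_split = np.concatenate([propagate(A1, x01, u1), propagate(A2, x02, u2)])
assert np.allclose(x_full, x_split)           # x(N) = (x1(N), x2(N))
```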

Proof of Lemma 2. Let \(\{ y(k)\} _{{k = 0}}^{N}\) be the trajectory of the system (\({{S}^{{ - 1}}}AS\), \({{S}^{{ - 1}}}\mathcal{U}\)), i.e., according to (1), for the initial state y0 ∈ \({{\mathbb{R}}^{n}}\) the state y(N) admits the following representation:

$$y(N) = {{S}^{{ - 1}}}ASy(N - 1) + {v}(N - 1) = \ldots = {{S}^{{ - 1}}}{{A}^{N}}S{{y}_{0}} + {{S}^{{ - 1}}}{{A}^{{N - 1}}}S{v}(0) + \ldots + {v}(N - 1),$$

where \({v}\)(0), …, \({v}\)(N – 1) ∈ \({{S}^{{ - 1}}}\mathcal{U}\).

By virtue of (2), y0 ∈ \(\mathcal{Y}(N)\) if and only if y(N) = 0, i.e.,

$${{S}^{{ - 1}}}{{A}^{N}}S{{y}_{0}} + {{S}^{{ - 1}}}{{A}^{{N - 1}}}S{v}(0) + \ldots + {v}(N - 1) = 0,$$
$${{A}^{N}}S{{y}_{0}} + {{A}^{{N - 1}}}S{v}(0) + \ldots + S{v}(N - 1) = 0,$$

which, by virtue of (2), is equivalent to the inclusion Sy0 ∈ \(\mathcal{X}(N)\), since by construction \(S{v}\)(0), …, \(S{v}\)(N – 1) ∈ \(\mathcal{U}\). Hence the equality \(\mathcal{X}(N)\) = \(S\mathcal{Y}(N)\) follows.

Let x0 ∈ \({{\mathcal{X}}_{\infty }}\). By (3), there exists \(\tilde {N} \in \mathbb{N} \cup \{ 0\} \) such that x0 ∈ \(\mathcal{X}(\tilde {N})\), which is equivalent to the inclusion x0 ∈ \(S\mathcal{Y}(\tilde {N})\) according to point 1 of Lemma 2. Hence S–1x0 ∈ \(\mathcal{Y}(\tilde {N})\). Then S–1x0 ∈ \(\bigcup\nolimits_{N = 0}^\infty {\mathcal{Y}(N)} \) = \({{\mathcal{Y}}_{\infty }}\), i.e., x0 ∈ \(S{{\mathcal{Y}}_{\infty }}\). Then \({{\mathcal{X}}_{\infty }}\) ⊂ \(S{{\mathcal{Y}}_{\infty }}\).

Let x0 ∈ \(S{{\mathcal{Y}}_{\infty }}\); then S–1x0 ∈ \({{\mathcal{Y}}_{\infty }}\). By (3) there exists \(\tilde {N} \in \mathbb{N} \cup \{ 0\} \) such that S–1x0 ∈ \(\mathcal{Y}(\tilde {N})\). Then x0 ∈ \(S\mathcal{Y}(\tilde {N})\), which is equivalent to the inclusion x0 ∈ \(\mathcal{X}(\tilde {N})\) by point 1 of Lemma 2. According to relations (3), the inclusion x0 ∈ \({{\mathcal{X}}_{\infty }}\) is also true. Then \(S{{\mathcal{Y}}_{\infty }}\) ⊂ \({{\mathcal{X}}_{\infty }}\).

Finally, we get that \(S{{\mathcal{Y}}_{\infty }}\) = \({{\mathcal{X}}_{\infty }}\). Lemma 2 is proved.

Proof of Lemma 3. Let x0 ∈ \({{\mathbb{R}}^{{{{n}_{1}}}}} \times {{\mathcal{X}}_{{2,\infty }}}\). Then x0 = \(\left( \begin{gathered} {{x}_{{0,1}}} \\ {{x}_{{0,2}}} \\ \end{gathered} \right)\), where x0, 1 ∈ \({{\mathbb{R}}^{{{{n}_{1}}}}}\) and x0, 2 ∈ \({{\mathcal{X}}_{{2,\infty }}}\). According to (3) there exists \(\tilde {N} \in \mathbb{N} \cup \{ 0\} \) such that x0, 2 ∈ \({{\mathcal{X}}_{2}}(\tilde {N})\), which according to (2) is equivalent to the existence of \(u_{2}^{*}\)(0), …, \(u_{2}^{*}\)(\(\tilde {N}\) – 1) ∈ \({{\mathcal{U}}_{2}}\) such that x2(\(\tilde {N}\)) = 0. Then for the system (A, \(\mathcal{U}\)) there are u(0), …, u(\(\tilde {N}\) – 1) ∈ \(\mathcal{U}\) such that u(k) = \(\left( \begin{gathered} {{u}_{1}}(k) \\ u_{2}^{*}(k) \\ \end{gathered} \right)\), k = \(\overline {0,\tilde {N} - 1} \). By (1), x(\(\tilde {N}\)) has the representation

$$x(\tilde {N}) = \left( \begin{gathered} A_{1}^{{\tilde {N}}}{{x}_{{0,1}}} + A_{1}^{{\tilde {N} - 1}}{{u}_{1}}(0) + \ldots + {{u}_{1}}(\tilde {N} - 1) \\ 0 \\ \end{gathered} \right) = \left( \begin{gathered} {{{\tilde {x}}}_{1}} \\ 0 \\ \end{gathered} \right).$$

According to Lemma 1, it suffices to show that there exists \({{\mathcal{U}}_{1}} \subset {{\mathbb{R}}^{{{{n}_{1}}}}}\) such that \({{\mathcal{U}}_{1}}\) × {0} ⊂ \(\mathcal{U}\) and \({{\mathcal{X}}_{{1,\infty }}}\) = \({{\mathbb{R}}^{{{{n}_{1}}}}}\), where \({{\mathcal{X}}_{{1,\infty }}}\) is the limit set of 0-controllability of the system (A1, \({{\mathcal{U}}_{1}}\)).

Denote by S ∈ \({{\mathbb{R}}^{{{{n}_{1}} \times {{n}_{1}}}}}\) the transition matrix to the normal Jordan basis of the matrix A1. Since 0 ∈ int\(\mathcal{U}\), there exists umax > 0 such that S[–umax; \({{u}_{{\max }}}{{]}^{{{{n}_{1}}}}}\) × {0} ⊂ \(\mathcal{U}\). Moreover, due to the non-degeneracy of the matrix S and Lemma 2, the equality \({{\mathcal{X}}_{{1,\infty }}}\) = \({{\mathbb{R}}^{{{{n}_{1}}}}}\) is true for the case \({{\mathcal{U}}_{1}}\) = S[–umax; \({{u}_{{\max }}}{{]}^{{{{n}_{1}}}}}\) if and only if \({{S}^{{ - 1}}}{{\mathcal{X}}_{{1,\infty }}}\) = \({{\mathbb{R}}^{{{{n}_{1}}}}}\), where \({{S}^{{ - 1}}}{{\mathcal{X}}_{{1,\infty }}}\) is the limit set of 0-controllability of the system (S–1A1S, [–umax; \({{u}_{{\max }}}{{]}^{{{{n}_{1}}}}}\)). Moreover, according to the normal Jordan form theorem [25], the following equality holds:

$${{S}^{{ - 1}}}{{A}_{1}}S = \left( {\begin{array}{*{20}{c}} {{{J}_{1}}}&0&0& \ldots \\ 0&{{{J}_{2}}}&0& \ldots \\ \vdots & \vdots & \ddots & \vdots \\ 0&0& \ldots &{{{J}_{{{{{\tilde {n}}}_{1}}}}}} \end{array}} \right),$$

where the Jordan cells Ji corresponding to the real eigenvalues λi ∈ \(\mathbb{R}\) of the matrix A1 have the form

$${{J}_{i}} = \left( {\begin{array}{*{20}{c}} {{{\lambda }_{i}}}&1&0& \ldots &0 \\ 0&{{{\lambda }_{i}}}&1& \ldots &0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \ldots &{{{\lambda }_{i}}} \end{array}} \right) \in {{\mathbb{R}}^{{{{m}_{i}} \times {{m}_{i}}}}},$$
(A.1)

and the Jordan cells Ji corresponding to the complex eigenvalues λi ∈ \(\mathbb{C}\) of the matrix A1 have the form

$${{J}_{i}} = \left( {\begin{array}{*{20}{c}} {{{r}_{i}}{{A}_{{{{\varphi }_{i}}}}}}&I&0& \ldots &0 \\ 0&{{{r}_{i}}{{A}_{{{{\varphi }_{i}}}}}}&I& \ldots &0 \\ \vdots & \vdots & \ddots &{}& \vdots \\ 0&0& \ldots &{{{r}_{i}}{{A}_{{{{\varphi }_{i}}}}}}&I \\ 0&0&0& \ldots &{{{r}_{i}}{{A}_{{{{\varphi }_{i}}}}}} \end{array}} \right) \in {{\mathbb{R}}^{{2{{m}_{i}} \times 2{{m}_{i}}}}},\quad {{A}_{{{{\varphi }_{i}}}}} = \left( {\begin{array}{*{20}{c}} {\cos {{\varphi }_{i}}}&{\sin {{\varphi }_{i}}} \\ { - \sin {{\varphi }_{i}}}&{\cos {{\varphi }_{i}}} \end{array}} \right),$$
(A.2)

where ri = |λi|, φi = arg(λi).

Hence, by virtue of Lemma 1, it suffices to show that for |λi| \(\leqslant \) 1 the limit sets of 0-controllability of the systems (Ji, [–umax; \({{u}_{{\max }}}{{]}^{{{{m}_{i}}}}}\)) in the case (A.1) and (Ji, [–umax; \({{u}_{{\max }}}{{]}^{{2{{m}_{i}}}}}\)) in the case (A.2) coincide with \({{\mathbb{R}}^{{{{m}_{i}}}}}\) and \({{\mathbb{R}}^{{2{{m}_{i}}}}}\), respectively, for all i = \(\overline {1,{{{\tilde {n}}}_{1}}} \).

Let J ∈ \({{\mathbb{R}}^{{m \times m}}}\) satisfy (A.1). Then for all N \( \geqslant \) m the following relations hold

$${{J}^{N}} = \left( {\begin{array}{*{20}{c}} {{{\lambda }^{N}}}&{N{{\lambda }^{{N - 1}}}}&{C_{N}^{2}{{\lambda }^{{N - 2}}}}& \ldots &{C_{N}^{{m - 1}}{{\lambda }^{{N - m + 1}}}} \\ 0&{{{\lambda }^{N}}}&{N{{\lambda }^{{N - 1}}}}& \ldots &{C_{N}^{{m - 2}}{{\lambda }^{{N - m + 2}}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \ldots &{{{\lambda }^{N}}} \end{array}} \right),$$
(A.3)

where \(C_{N}^{m}\) denotes the binomial coefficient (the number of combinations of N elements taken m at a time):

$$C_{N}^{m} = \frac{{N!}}{{(N - m)!m!}}.$$
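
Formula (A.3) is easy to check numerically. The sketch below (a hedged illustration with arbitrary λ, m, and N, not taken from the paper) builds a real Jordan cell and compares its Nth power entrywise with the closed-form entries \(C_{N}^{j}\)λN–j.

```python
import numpy as np
from math import comb

# Illustrative check of (A.3): for a Jordan cell J = lambda*I + (ones on the
# superdiagonal), the (i, i+j) entry of J^N equals C(N, j) * lambda^(N - j).
lam, m, N = 0.7, 4, 10                         # arbitrary illustrative values
J = lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

JN_closed = np.zeros((m, m))
for i in range(m):
    for j in range(m - i):
        JN_closed[i, i + j] = comb(N, j) * lam ** (N - j)

assert np.allclose(np.linalg.matrix_power(J, N), JN_closed)
```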

Denote by \(\{ y(k),{v}(k - 1),{{y}_{0}}\} _{{k = 1}}^{N}\) the control process of the system (J, [–umax; umax]m). Then

$$y(N) = {{J}^{N}}{{y}_{0}} + \sum\limits_{k = 0}^{N - 1} {{{J}^{k}}{v}(N - k - 1).} $$

If we denote z0 = JNy0, then by (A.3) the ith coordinate of z0 satisfies

$${{z}_{{0,i}}} = \sum\limits_{j = 0}^{m - i} {{{\lambda }^{{N - j}}}C_{N}^{j}{{y}_{{0,j + i}}}} ,\quad i = \overline {1,m} .$$

Let us assume that |λ| < 1. Then for all N \( \geqslant \) 2m the following relations hold

$$\left| {{{z}_{{0,i}}}} \right|\;\leqslant \;\sum\limits_{j = 0}^{m - i} {\left| {{{\lambda }^{{N - j}}}{{y}_{{0,j + i}}}C_{N}^{j}} \right|} \;\leqslant \;\sum\limits_{j = 0}^{m - 1} {\left| {{{\lambda }^{{N - j}}}} \right|\mathop {\max }\limits_{i = \overline {1,m} } \left| {{{y}_{{0,i}}}} \right|\left| {C_{N}^{j}} \right|} $$
$$\leqslant \;m\left| {{{\lambda }^{{N - m + 1}}}} \right|\mathop {\max }\limits_{i = \overline {1,m} } \left| {{{y}_{{0,i}}}} \right|C_{N}^{{m - 1}}\;\leqslant \;m\mathop {\max }\limits_{i = \overline {1,m} } \left| {{{y}_{{0,i}}}} \right|{{\left| \lambda \right|}^{{N - m + 1}}}\frac{{N \cdot (N - 1) \cdot \ldots \cdot (N - m + 2)}}{{(m - 1)!}}$$
$$\leqslant \;m\mathop {\max }\limits_{i = \overline {1,m} } \left| {{{y}_{{0,i}}}} \right|{{\left| \lambda \right|}^{{N - m + 1}}}\frac{{{{N}^{{m - 1}}}}}{{(m - 1)!}}\;\xrightarrow{{N \to \infty }}\;0.$$

Then there exists \(\tilde {N} \in \mathbb{N}\) such that for all i = \(\overline {1,m} \)

$$ - {{u}_{{\max }}} < {{z}_{{0,i}}} < {{u}_{{\max }}}.$$

Let us take \({v}\)(0) = … = \({v}(\tilde {N} - 2)\) = 0 and \({v}(\tilde {N} - 1)\) = –z0 ∈ [–umax; umax]m. Then we obtain y(\(\tilde {N}\)) = 0, i.e., y0 ∈ \(\mathcal{Y}(\tilde {N})\). Since y0 ∈ \({{\mathbb{R}}^{m}}\) is arbitrary, by (3) we obtain \({{\mathcal{Y}}_{\infty }}\) = \({{\mathbb{R}}^{m}}\).
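
The one-shot nulling strategy used above for |λ| < 1 can be simulated directly; in the hedged sketch below (illustrative λ, dimension, bound, and initial state) zero control is applied until JNy0 enters the control box, after which a single control step cancels it.

```python
import numpy as np

# Illustrative simulation of the |lambda| < 1 argument: v(0) = ... = v(N-2) = 0
# and v(N-1) = -J^N y0, once J^N y0 lies in the box [-u_max, u_max]^m.
lam, m, u_max = 0.9, 3, 0.1
J = lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)
y0 = np.array([5.0, -3.0, 2.0])

N = 1
while np.max(np.abs(np.linalg.matrix_power(J, N) @ y0)) >= u_max:
    N += 1                                     # J^N y0 -> 0 since |lambda| < 1

y = y0.copy()
for _ in range(N - 1):
    y = J @ y                                  # zero control on the first N-1 steps
y = J @ y - np.linalg.matrix_power(J, N) @ y0  # last step cancels z0 = J^N y0
print(N, np.max(np.abs(y)))                    # y(N) is zero up to rounding
```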

Let us assume that |λ| = 1. Then, by (A.3), for any Nm ∈ \(\mathbb{N}\) the mth coordinate of y(Nm) satisfies

$${{y}_{m}}({{N}_{m}}) = {{\lambda }^{{{{N}_{m}}}}}{{y}_{m}}(0) + \sum\limits_{k = 0}^{{{N}_{m}} - 1} {{{\lambda }^{k}}{{{v}}_{m}}({{N}_{m}} - k - 1).} $$

Here we choose Nm ∈ \(\mathbb{N}\) requiring |ym(0)| \(\leqslant \) Nmumax. Then we take \({v}\)(0), …, \({v}\)(Nm – 1) ∈ [–umax; umax]m in accordance with the following condition:

$${{{v}}_{m}}({{N}_{m}} - k - 1) = - \frac{{{{\lambda }^{{{{N}_{m}}}}}{{y}_{m}}(0)}}{{{{\lambda }^{k}}{{N}_{m}}}} \in [ - {{u}_{{\max }}};{{u}_{{\max }}}],\quad k = \overline {0,{{N}_{m}} - 1} .$$

We obtain

$${{y}_{m}}({{N}_{m}}) = {{\lambda }^{{{{N}_{m}}}}}{{y}_{m}}(0) + \sum\limits_{k = 0}^{{{N}_{m}} - 1} {{{\lambda }^{k}}\frac{{ - {{\lambda }^{{{{N}_{m}}}}}{{y}_{m}}(0)}}{{{{\lambda }^{k}}{{N}_{m}}}}} = 0.$$

Let us assume that for some N ∈ \(\mathbb{N}\) and i = \(\overline {1,m - 1} \) it is true that ym(N) = … = \({{y}_{{m - i + 1}}}(N)\) = 0. Then if \({{{v}}_{m}}\)(N + \({{N}_{{m - i}}}\) – k – 1) = … = \({{{v}}_{{m - i + 1}}}\)(N + \({{N}_{{m - i}}}\) – k – 1) = 0 for k = \(\overline {0,{{N}_{{m - i}}} - 1} \), then by (A.3) the following holds

$${{y}_{{m - i}}}(N + {{N}_{{m - i}}}) = {{\lambda }^{{{{N}_{{m - i}}}}}}{{y}_{{m - i}}}(N) + \sum\limits_{k = 0}^{{{N}_{{m - i}}} - 1} {{{\lambda }^{k}}{{{v}}_{{m - i}}}(N + {{N}_{{m - i}}} - k - 1),} $$

where \({{N}_{{m - i}}} \in \mathbb{N}\) is chosen from the condition |\({{y}_{{m - i}}}\)(N)| \(\leqslant \) \({{N}_{{m - i}}}{{u}_{{\max }}}\). Then we determine \({v}\)(N), …, \({v}\)(N + \({{N}_{{m - i}}}\) – 1) ∈ [–umax; umax]m so that

$${{{v}}_{{m - i}}}(N + {{N}_{{m - i}}} - k - 1) = \frac{{ - {{\lambda }^{{{{N}_{{m - i}}}}}}{{y}_{{m - i}}}(N)}}{{{{N}_{{m - i}}}{{\lambda }^{k}}}},\quad k = \overline {0,{{N}_{{m - i}}} - 1} .$$

We obtain

$${{y}_{{m - i}}}(N + {{N}_{{m - i}}}) = {{\lambda }^{{{{N}_{{m - i}}}}}}{{y}_{{m - i}}}(N) + \sum\limits_{k = 0}^{{{N}_{{m - i}}} - 1} {{{\lambda }^{k}}\frac{{ - {{\lambda }^{{{{N}_{{m - i}}}}}}{{y}_{{m - i}}}(N)}}{{{{N}_{{m - i}}}{{\lambda }^{k}}}}} = 0,$$
$${{y}_{m}}(N + {{N}_{{m - i}}}) = \ldots = {{y}_{{m - i + 1}}}(N + {{N}_{{m - i}}}) = 0.$$

Then, by the method of mathematical induction, there exists N ∈ \(\mathbb{N}\) such that y(N) = 0, i.e., y0 ∈ \(\mathcal{Y}(N)\). Since y0 ∈ \({{\mathbb{R}}^{m}}\) is arbitrary, (3) yields \({{\mathcal{Y}}_{\infty }}\) = \({{\mathbb{R}}^{m}}\).
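
The induction above can also be simulated. The following sketch (an illustration under assumed values, with λ = 1) nulls the coordinates of a Jordan cell one at a time, from the last to the first, keeping every control component within the bound umax.

```python
import numpy as np

# Illustrative simulation of the |lambda| = 1 strategy: coordinate c is nulled
# over N_c steps with v_c = -lam^{N_c} y_c / (N_c * lam^{k'}); the already
# nulled coordinates above it are kept at zero by zero controls.
lam, m, u_max = 1.0, 3, 0.2
J = lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)
y = np.array([4.0, -2.5, 1.5])

for c in range(m - 1, -1, -1):                 # last coordinate first
    Nc = max(1, int(np.ceil(abs(y[c]) / u_max)))
    yc_start = y[c]
    for k in range(Nc):
        v = np.zeros(m)
        v[c] = -lam ** Nc * yc_start / (Nc * lam ** (Nc - 1 - k))
        assert abs(v[c]) <= u_max + 1e-12      # control stays admissible
        y = J @ y + v

print(np.max(np.abs(y)))                       # the whole state is (numerically) zero
```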

Let J ∈ \({{\mathbb{R}}^{{2m \times 2m}}}\) satisfy (A.2). Then for all N \( \geqslant \) m the following relations hold

$${{J}^{N}} = \left( {\begin{array}{*{20}{c}} {{{r}^{N}}{{A}_{{N\varphi }}}}&{N{{r}^{{N - 1}}}{{A}_{{(N - 1)\varphi }}}}& \ldots &{C_{N}^{{m - 1}}{{r}^{{N - m + 1}}}{{A}_{{(N - m + 1)\varphi }}}} \\ 0&{{{r}^{N}}{{A}_{{N\varphi }}}}& \ldots &{C_{N}^{{m - 2}}{{r}^{{N - m + 2}}}{{A}_{{(N - m + 2)\varphi }}}} \\ \vdots & \vdots & \ddots & \vdots \\ 0&0& \ldots &{{{r}^{N}}{{A}_{{N\varphi }}}} \end{array}} \right).$$
(A.4)

Denote by \(\{ y(k),{v}(k - 1),{{y}_{0}}\} _{{k = 1}}^{N}\) the control process of the system (J, [–umax; umax]2m). Then

$$y(N) = {{J}^{N}}{{y}_{0}} + \sum\limits_{k = 0}^{N - 1} {{{J}^{k}}{v}(N - k - 1).} $$

If we denote z0 = JNy0, then by (A.4) the ith two-dimensional subvector of z0 satisfies

$${{z}_{{0,i}}} = \sum\limits_{j = 0}^{m - i} {C_{N}^{j}{{r}^{{N - j}}}{{A}_{{(N - j)\varphi }}}{{y}_{{0,j + i}}}} \in {{\mathbb{R}}^{2}},\quad i = \overline {1,m} ,$$

where z0 = \((z_{{0,1}}^{{\text{T}}}\), …, \(z_{{0,m}}^{{\text{T}}}{{)}^{{\text{T}}}}\), y0 = \((y_{{0,1}}^{{\text{T}}}\), …, \(y_{{0,m}}^{{\text{T}}}{{)}^{{\text{T}}}}\).

Let us assume that r < 1. Then for all N \( \geqslant \) 2m the following relations hold

$$\left\| {{{z}_{{0,i}}}} \right\|\;\leqslant \;\sum\limits_{j = 0}^{m - i} {\left\| {{{r}^{{N - j}}}{{A}_{{(N - j)\varphi }}}{{y}_{{0,j + i}}}C_{N}^{j}} \right\|} \;\leqslant \;\sum\limits_{j = 0}^{m - 1} {{{r}^{{N - j}}}\left\| {{{A}_{{(N - j)\varphi }}}{{y}_{{0,i}}}} \right\|C_{N}^{j}} $$
$$\leqslant \;\sum\limits_{j = 0}^{m - 1} {{{r}^{{N - j}}}\mathop {\max }\limits_{i = \overline {1,m} } \left\| {{{y}_{{0,i}}}} \right\|C_{N}^{j}} \;\leqslant \;m{{r}^{{N - m + 1}}}\mathop {\max }\limits_{i = \overline {1,m} } \left\| {{{y}_{{0,i}}}} \right\|\frac{{N(N - 1) \cdot \ldots \cdot (N - m + 2)}}{{(m - 1)!}}$$
$$\leqslant \;m{{r}^{{N - m + 1}}}\mathop {\max }\limits_{i = \overline {1,m} } \left\| {{{y}_{{0,i}}}} \right\|\frac{{{{N}^{{m - 1}}}}}{{(m - 1)!}}\;\xrightarrow{{N \to \infty }}\;0.$$

Then there exists \(\tilde {N} \in \mathbb{N}\) such that for all i = \(\overline {1,m} \)

$$\left\| {{{z}_{{0,i}}}} \right\| < {{u}_{{\max }}}.$$

Let us take \({v}\)(0) = … = \({v}\)(\(\tilde {N}\) – 2) = 0 and \({v}\)(\(\tilde {N}\) – 1) = –z0 ∈ [–umax; umax]2m. Then y(\(\tilde {N}\)) = 0, i.e., y0 ∈ \(\mathcal{Y}(\tilde {N})\). Since y0 ∈ \({{\mathbb{R}}^{{2m}}}\) is arbitrary, (3) yields \({{\mathcal{Y}}_{\infty }}\) = \({{\mathbb{R}}^{{2m}}}\).

Let us assume that r = 1. Then, by (A.4), for any Nm ∈ \(\mathbb{N}\) the mth two-dimensional subvector of y(Nm) satisfies

$${{y}_{m}}({{N}_{m}}) = {{A}_{{{{N}_{m}}\varphi }}}{{y}_{m}}(0) + \sum\limits_{k = 0}^{{{N}_{m}} - 1} {{{A}_{{k\varphi }}}{{{v}}_{m}}({{N}_{m}} - k - 1).} $$

Let us take Nm ∈ \(\mathbb{N}\) satisfying the inequality ||ym(0)|| \(\leqslant \) Nmumax. Then we choose \({v}\)(0), …, \({v}\)(Nm – 1) ∈ [–umax; umax]2m in accordance with the equality

$${{{v}}_{m}}({{N}_{m}} - k - 1) = - \frac{{{{A}_{{({{N}_{m}} - k)\varphi }}}{{y}_{m}}(0)}}{{{{N}_{m}}}} \in {{[ - {{u}_{{\max }}};{{u}_{{\max }}}]}^{{2m}}},\quad k = \overline {0,{{N}_{m}} - 1} .$$

We obtain

$${{y}_{m}}({{N}_{m}}) = {{A}_{{{{N}_{m}}\varphi }}}{{y}_{m}}(0) + \sum\limits_{k = 0}^{{{N}_{m}} - 1} {\frac{{ - {{A}_{{{{N}_{m}}\varphi }}}{{y}_{m}}(0)}}{{{{N}_{m}}}}} = 0.$$

Let us assume that for some N ∈ \(\mathbb{N}\) and i = \(\overline {1,m - 1} \) the relation ym(N) = … = \({{y}_{{m - i + 1}}}(N)\) = 0 holds. Then if \({{{v}}_{m}}\)(N + \({{N}_{{m - i}}}\) – k – 1) = … = \({{{v}}_{{m - i + 1}}}\)(N + \({{N}_{{m - i}}}\) – k – 1) = 0 for k = \(\overline {0,{{N}_{{m - i}}} - 1} \), then according to (A.4)

$${{y}_{{m - i}}}(N + {{N}_{{m - i}}}) = {{A}_{{{{N}_{{m - i}}}\varphi }}}{{y}_{{m - i}}}(N) + \sum\limits_{k = 0}^{{{N}_{{m - i}}} - 1} {{{A}_{{k\varphi }}}{{{v}}_{{m - i}}}(N + {{N}_{{m - i}}} - k - 1),} $$

where \({{N}_{{m - i}}}\) ∈ \(\mathbb{N}\) is selected from the condition ||\({{y}_{{m - i}}}\)(N)|| \(\leqslant \) \({{N}_{{m - i}}}{{u}_{{\max }}}\). Then we define \({v}\)(N), …, \({v}\)(N + \({{N}_{{m - i}}}\) – 1) ∈ [–umax; umax]2m so that

$${{{v}}_{{m - i}}}(N + {{N}_{{m - i}}} - k - 1) = \frac{{ - {{A}_{{({{N}_{{m - i}}} - k)\varphi }}}{{y}_{{m - i}}}(N)}}{{{{N}_{{m - i}}}}},\quad k = \overline {0,{{N}_{{m - i}}} - 1} .$$

We obtain

$${{y}_{{m - i}}}(N + {{N}_{{m - i}}}) = {{A}_{{{{N}_{{m - i}}}\varphi }}}{{y}_{{m - i}}}(N) + \sum\limits_{k = 0}^{{{N}_{{m - i}}} - 1} {\frac{{ - {{A}_{{{{N}_{{m - i}}}\varphi }}}{{y}_{{m - i}}}(N)}}{{{{N}_{{m - i}}}}}} = 0,$$
$${{y}_{m}}(N + {{N}_{{m - i}}}) = \ldots = {{y}_{{m - i + 1}}}(N + {{N}_{{m - i}}}) = 0.$$

Then, by the method of mathematical induction, there exists N ∈ \(\mathbb{N}\) such that y(N) = 0, i.e., y0 ∈ \(\mathcal{Y}(N)\). Since y0 ∈ \({{\mathbb{R}}^{{2m}}}\) is arbitrary, (3) yields \({{\mathcal{Y}}_{\infty }}\) = \({{\mathbb{R}}^{{2m}}}\).

Hence Lemma 3 is proved.

Proof of Theorem 1. Let x0 ∈ \({{\mathcal{X}}_{\infty }}\), which by (3) is equivalent to the existence of N ∈ \(\mathbb{N}\) ∪ {0} such that x0 ∈ \(\mathcal{X}\)(N). By (2) there exist u(0), …, u(N – 1) ∈ \(\mathcal{U}\) such that x(N) = 0. Based on (1), for all h ∈ ∂B1(0) and ε > 0 the following relations hold

$$\tilde {x}(N) = {{A}^{N}}({{x}_{0}} + \varepsilon h) + {{A}^{{N - 1}}}u(0) + \ldots + u(N - 1)$$
$$ = {{A}^{N}}{{x}_{0}} + {{A}^{{N - 1}}}u(0) + \ldots + u(N - 1) + {{A}^{N}}h\varepsilon = x(N) + {{A}^{N}}h\varepsilon = {{A}^{N}}h\varepsilon ,$$
$$\tilde {x}(N + 1) = A\tilde {x}(N) + u(N) = {{A}^{{N + 1}}}h\varepsilon + u(N),$$

where u(N) ∈ \(\mathcal{U}\). Since 0 ∈ int \(\mathcal{U}\), there exist δ > 0 such that Oδ(0) ⊂ \(\mathcal{U}\) and ε > 0 such that \(\varepsilon {{A}^{{N + 1}}}{{B}_{1}}(0)\) ⊂ Oδ(0). Let us take

$$u(N) = - {{A}^{{N + 1}}}h\varepsilon \in \varepsilon {{A}^{{N + 1}}}{{B}_{1}}(0) \subset {{O}_{\delta }}(0) \subset \mathcal{U}.$$

Then \(\tilde {x}\)(N + 1) = 0, i.e., for all h ∈ B1(0) it is true that x0 + εh ∈ \(\mathcal{X}\)(N + 1). As a result, Bε(x0) ⊂ \(\mathcal{X}\)(N + 1) ⊂ \({{\mathcal{X}}_{\infty }}\), i.e., x0 ∈ int \({{\mathcal{X}}_{\infty }}\). Hence, \({{\mathcal{X}}_{\infty }}\) is open.

Let x0, 1, x0, 2 ∈ \({{\mathcal{X}}_{\infty }}\) and α ∈ [0; 1]. Then there exists N ∈ \(\mathbb{N}\) ∪ {0} such that x0, 1, x0, 2 ∈ \(\mathcal{X}\)(N), i.e., there exist u1(0), u1(1), …, u1(N – 1), u2(0), u2(1), …, u2(N – 1) ∈ \(\mathcal{U}\) such that x1(N) = 0, x2(N) = 0. According to (1) it is true that

$$0 = {{x}^{1}}(N) = {{A}^{N}}{{x}_{{0,1}}} + {{A}^{{N - 1}}}{{u}^{1}}(0) + {{A}^{{N - 2}}}{{u}^{1}}(1) + \ldots + {{u}^{1}}(N - 1),$$
$$0 = {{x}^{2}}(N) = {{A}^{N}}{{x}_{{0,2}}} + {{A}^{{N - 1}}}{{u}^{2}}(0) + {{A}^{{N - 2}}}{{u}^{2}}(1) + \ldots + {{u}^{2}}(N - 1),$$
$$0 = \alpha {{x}^{1}}(N) = \alpha {{A}^{N}}{{x}_{{0,1}}} + \sum\limits_{k = 0}^{N - 1} {\alpha {{A}^{k}}{{u}^{1}}(N - k - 1),} $$
$$0 = (1 - \alpha ){{x}^{2}}(N) = (1 - \alpha ){{A}^{N}}{{x}_{{0,2}}} + \sum\limits_{k = 0}^{N - 1} {(1 - \alpha ){{A}^{k}}{{u}^{2}}(N - k - 1),} $$
$$0 = {{A}^{N}}(\alpha {{x}_{{0,1}}} + (1 - \alpha ){{x}_{{0,2}}}) + \sum\limits_{k = 0}^{N - 1} {{{A}^{k}}(\alpha {{u}^{1}}(N - k - 1) + (1 - \alpha ){{u}^{2}}(N - k - 1)).} $$

By the convexity of \(\mathcal{U}\), the relations \({v}\)(N – k – 1) = αu1(N – k – 1) + (1 – α)u2(N – k – 1) ∈ \(\mathcal{U}\), k = \(\overline {0,N - 1} \), hold. Then αx0, 1 + (1 – α)x0, 2 ∈ \(\mathcal{X}\)(N) ⊂ \({{\mathcal{X}}_{\infty }}\), from which it follows that \({{\mathcal{X}}_{\infty }}\) is convex.

Theorem 1 is proved.
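
The convexity argument admits a direct numerical illustration. In the hedged sketch below (hypothetical matrix, horizon, and box control set, none taken from the paper), two admissible control sequences define two initial states steerable to zero in N steps, and the mixed control steers the mixed initial state to zero.

```python
import numpy as np

# Illustration of the convexity argument of Theorem 1: mixing two admissible
# processes coordinatewise gives an admissible process for the mixed initial state.
rng = np.random.default_rng(1)
A = np.array([[1.2, 1.0], [0.0, 1.2]])        # hypothetical invertible matrix
N, alpha = 5, 0.3
Ainv = np.linalg.inv(A)

u1 = rng.uniform(-1.0, 1.0, size=(N, 2))      # admissible controls, U = [-1, 1]^2
u2 = rng.uniform(-1.0, 1.0, size=(N, 2))

def initial_state(u):
    """x0 = -sum_{k=1}^N A^{-k} u(k-1), so that x(N) = 0 by formula (1)."""
    return -sum(np.linalg.matrix_power(Ainv, k) @ u[k - 1] for k in range(1, N + 1))

def final_state(x0, u):
    x = x0.copy()
    for k in range(N):
        x = A @ x + u[k]
    return x

x0_mix = alpha * initial_state(u1) + (1 - alpha) * initial_state(u2)
u_mix = alpha * u1 + (1 - alpha) * u2         # remains in the box [-1, 1]^2
print(final_state(x0_mix, u_mix))             # (numerically) the zero vector
```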

Proof of Lemma 4. Let x0 ∈ \({{\mathcal{X}}_{\infty }}\). Then by (3) there exists N ∈ \(\mathbb{N}\) ∪ {0} such that x0 ∈ \(\mathcal{X}\)(N). According to (2) there exist u(0), u(1), …, u(N – 1) ∈ \(\mathcal{U}\) such that x(N) = 0. Based on (1), it is true that

$$0 = x(N) = {{A}^{N}}{{x}_{0}} + \sum\limits_{k = 0}^{N - 1} {{{A}^{k}}u(N - k - 1),} $$
$$0 = {{x}_{0}} + {{A}^{{ - N}}}\sum\limits_{k = 0}^{N - 1} {{{A}^{k}}u(N - k - 1),} $$
$${{x}_{0}} = - \sum\limits_{k = 0}^{N - 1} {{{A}^{{ - N + k}}}u(N - k - 1)} = - \sum\limits_{k = 1}^N {{{A}^{{ - k}}}u(k - 1).} $$

Let p belong to \({{\mathbb{R}}^{n}}\)\{0}. Then

$$(p,{{x}_{0}}) = \left( {p,\,\, - \sum\limits_{k = 1}^N {{{A}^{{ - k}}}u(k - 1)} } \right) = \sum\limits_{k = 1}^N {\left( {p,\,\, - {{A}^{{ - k}}}u(k - 1)} \right)} $$
$$ = \sum\limits_{k = 1}^N {\left( {{{{\left( { - {{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u(k - 1)} \right)} \;\leqslant \;\sum\limits_{k = 1}^N {\mathop {\max }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right).} $$

Since 0 ∈ \(\mathcal{U}\), for all k ∈ \(\mathbb{N}\)

$$\mathop {\max }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right)\; \geqslant \;0.$$

Then

$$(p,{{x}_{0}})\;\leqslant \;\sum\limits_{k = 1}^\infty {\mathop {\max }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right),} $$

i.e., x0 ∈ \({{\mathcal{H}}_{p}}\). It follows that \({{\mathcal{X}}_{\infty }} \subset {{\mathcal{H}}_{p}}\).

Let us consider the following quantity for some p ∈ \({{\mathbb{R}}^{n}}\)\{0}:

$$(p,x\text{*}) = \sum\limits_{k = 1}^\infty {\left( {p, - {{A}^{{ - k}}}u_{k}^{*}} \right)} = \sum\limits_{k = 1}^\infty {\left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u_{k}^{*}} \right)} = \sum\limits_{k = 1}^\infty {\mathop {\max }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right).} $$

Then x* ∈ \(\partial {{\mathcal{H}}_{p}}\).

Since all eigenvalues of the matrix A are strictly greater than 1 in absolute value, the series \(\sum\nolimits_{k = 1}^\infty {{{A}^{{ - k}}}u_{k}^{*}} \) converges. Then xN = \( - \sum\limits_{k = 1}^N {{{A}^{{ - k}}}u_{k}^{*}} \xrightarrow{{N \to \infty }}x\text{*}\). Let us show that xN ∈ \(\mathcal{X}\)(N) ⊂ \({{\mathcal{X}}_{\infty }}\). Let us take u(k) = \(u_{{k + 1}}^{*}\) ∈ \(\mathcal{U}\). Then

$$x(N) = {{A}^{N}}{{x}_{N}} + \sum\limits_{k = 0}^{N - 1} {{{A}^{k}}u(N - k - 1)} = - \sum\limits_{k = 1}^N {{{A}^{{N - k}}}u_{k}^{*}} + \sum\limits_{k = 0}^{N - 1} {{{A}^{k}}u_{{N - k - 1}}^{*}} = 0.$$

Then xN ∈ \(\mathcal{X}\)(N) ⊂ \({{\mathcal{X}}_{\infty }}\) for all N ∈ \(\mathbb{N}\). It follows that x* = \(\mathop {\lim }\limits_{N \to \infty } {{x}_{N}} \in \overline {{{\mathcal{X}}_{\infty }}} \).

Lemma 4 is proved.
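
For a box control set the maximum over \(\mathcal{U}\) in Lemma 4 is explicit, so the half-space \({{\mathcal{H}}_{p}}\) and the boundary point x* can be approximated by truncating the series. The sketch below is an illustration under assumed data (matrix, direction p, control box U = [–1; 1]2, truncation length), not the paper's algorithm.

```python
import numpy as np

# Illustrative computation for Lemma 4 with U = [-1, 1]^n: the support value
# sum_k max_{u in U} (-(A^{-k})^T p, u) = sum_k ||(A^{-k})^T p||_1 and the
# boundary point x* = -sum_k A^{-k} u_k*, with u_k* = sign(-(A^{-k})^T p).
A = np.array([[1.5, 1.0], [0.0, 1.5]])        # all eigenvalues exceed 1 in modulus
p = np.array([1.0, 0.5])
Ainv = np.linalg.inv(A)

K = 200                                       # truncation of the series
support, x_star = 0.0, np.zeros(2)
Apow = np.eye(2)
for k in range(1, K + 1):
    Apow = Apow @ Ainv                        # A^{-k}
    q = -Apow.T @ p
    u_star = np.sign(q)                       # maximizer of (q, u) over [-1, 1]^2
    support += q @ u_star                     # equals the l1-norm of q
    x_star -= Apow @ u_star

print(support, p @ x_star)                    # (p, x*) matches the support value
```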

Corollary 2. Let all eigenvalues of the matrix A ∈ \({{\mathbb{R}}^{{n \times n}}}\) be strictly greater than 1 in absolute value, and let \({{\mathcal{X}}_{\infty }}\) be defined by (3).

Then for all p ∈ \({{\mathbb{R}}^{n}}\)\{0} it is true that:

(1) \({{\mathcal{X}}_{\infty }} \subset {{\mathcal{H}}_{{ - p}}} = \left\{ {x \in {{\mathbb{R}}^{n}}{\kern 1pt} {\kern 1pt} :\;(p,x)\; \geqslant \;\sum\limits_{k = 1}^\infty {\mathop {\min }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right)} } \right\}\);

(2) \(x\text{*} = - \sum\limits_{k = 1}^\infty {{{A}^{{ - k}}}u_{k}^{*}} \in \overline {{{\mathcal{X}}_{\infty }}} \cap \partial {{\mathcal{H}}_{{ - p}}}\), where

$$u_{k}^{*} = \arg \mathop {\min }\limits_{{{u}_{k}} \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,{{u}_{k}}} \right).$$

Proof of Corollary 2. It suffices to apply Lemma 4 to the vector –p.

Proof of Lemma 5. Let p = (0 … 0 1 0 … 0)T ∈ \({{\mathbb{R}}^{n}}\), where 1 corresponds to the ith coordinate of the vector p. Then for arbitrary k ∈ \(\mathbb{N}\)

$${{A}^{{ - k}}} = \left( {\begin{array}{*{20}{c}} {{{\lambda }^{{ - k}}}}&{( - 1)k{{\lambda }^{{ - k - 1}}}}&{{{{( - 1)}}^{2}}k(k + 1){{\lambda }^{{ - k - 2}}}\frac{1}{2}}& \ldots &{{{{( - 1)}}^{{n - 1}}}\frac{{(k + n - 2)!}}{{(n - 1)!(k - 1)!}}{{\lambda }^{{ - k - n + 1}}}} \\ 0&{{{\lambda }^{{ - k}}}}&{( - 1)k{{\lambda }^{{ - k - 1}}}}& \ldots &{{{{( - 1)}}^{{n - 2}}}\frac{{(k + n - 3)!}}{{(n - 2)!(k - 1)!}}{{\lambda }^{{ - k - n + 2}}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \ldots &{{{\lambda }^{{ - k}}}} \end{array}} \right),$$
$$ - {{({{A}^{{ - k}}})}^{{\text{T}}}}p = - \left( \begin{gathered} 0 \\ \vdots \\ 0 \\ {{\lambda }^{{ - k}}} \\ ( - 1)k{{\lambda }^{{ - k - 1}}} \\ \vdots \\ {{( - 1)}^{{n - i}}}\frac{{(k + n - i - 1)!}}{{(k - 1)!(n - i)!}}{{\lambda }^{{ - k - n + i}}} \\ \end{gathered} \right),$$
$$\left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = - \left( {{{\lambda }^{{ - k}}}{{u}_{i}} - k{{\lambda }^{{ - k - 1}}}{{u}_{{i + 1}}} + \ldots + {{{( - 1)}}^{{n - i}}}{{\lambda }^{{ - k - n + i}}}C_{{k + n - i - 1}}^{{n - i}}{{u}_{n}}} \right)$$
$$ = - \sum\limits_{j = 0}^{n - i} {{{\lambda }^{{ - k - j}}}{{{( - 1)}}^{j}}{{u}_{{j + i}}}C_{{k + j - 1}}^{j}} = \sum\limits_{j = 0}^{n - i} {{{\lambda }^{{ - k - j}}}{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i}}}C_{{k + j - 1}}^{j}.} $$

Let us consider the case λ > 1. Since ui ∈ [ui, min; ui, max], for all k ∈ \(\mathbb{N}\) the following inequalities hold

$$\left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right)\;\leqslant \;\sum\limits_{j = 0}^{n - i} {{{\lambda }^{{ - k - j}}}C_{{k + j - 1}}^{j}\max \left\{ {{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\min }}};\,\,{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\max }}}} \right\},} $$
$$ - \left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right)\;\leqslant \; - \sum\limits_{j = 0}^{n - i} {{{\lambda }^{{ - k - j}}}C_{{k + j - 1}}^{j}\min \left\{ {{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\min }}};\,\,{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\max }}}} \right\}.} $$

Additionally, the following equalities hold

$$\sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{\lambda }^{{ - k - j}}}C_{{k + j - 1}}^{j}} } = \sum\limits_{j = 0}^{n - i} {\sum\limits_{k = 1}^\infty {{{\lambda }^{{ - k - j}}}C_{{k + j - 1}}^{j}} } = \sum\limits_{j = 0}^{n - i} {\frac{1}{{{{{(\lambda - 1)}}^{{j + 1}}}}}.} $$

By Lemma 4 and Corollary 2, these equalities imply that for all x ∈ \({{\mathcal{X}}_{\infty }}\) the following inequalities hold

$$\sum\limits_{j = 0}^{n - i} {\frac{{\min \left\{ {{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\min }}};\,\,{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\max }}}} \right\}}}{{{{{(\lambda - 1)}}^{{j + 1}}}}}} \;\leqslant \;(p,\,\,x)$$
$$\leqslant \;\sum\limits_{j = 0}^{n - i} {\frac{{\max \left\{ {{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\min }}};\,\,{{{( - 1)}}^{{j + 1}}}{{u}_{{j + i,\max }}}} \right\}}}{{{{{(\lambda - 1)}}^{{j + 1}}}}}.} $$

Since \({{\mathcal{X}}_{\infty }}\) is open according to Theorem 1, these inequalities strictly hold, i.e.,

$${{\mathcal{X}}_{\infty }} \subset \bigcap\limits_{i = 1}^n {\left\{ {x \in {{\mathbb{R}}^{n}}{\kern 1pt} :\;{{x}_{i}} \in ({{x}_{{i,\min }}};{{x}_{{i,\max }}})} \right\}.} $$

Let us consider the case λ < –1. For all k ∈ \(\mathbb{N}\) it is true that

$$\left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - k - j}}}{{{( - 1)}}^{{ - k - j + j + 1}}}{{u}_{{j + i}}}C_{{k + j - 1}}^{j}} = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - k - j}}}{{{( - 1)}}^{{ - k + 1}}}{{u}_{{j + i}}}C_{{k + j - 1}}^{j}.} $$

Then

$$\mathop {\max }\limits_{u \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - (2k - 1)}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = \mathop {\max }\limits_{u \in \mathcal{U}} \left( {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{{( - 1)}}^{{ - (2k - 1) + 1}}}{{u}_{{j + i}}}C_{{(2k - 1) + j - 1}}^{j}} } \right)$$
$$ = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{u}_{{i + j,\max }}}C_{{(2k - 1) + j - 1}}^{j},} $$
$$\mathop {\max }\limits_{u \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - 2k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = \mathop {\max }\limits_{u \in \mathcal{U}} \left( {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}{{{( - 1)}}^{{ - 2k + 1}}}{{u}_{{j + i}}}C_{{2k + j - 1}}^{j}} } \right)$$
$$ = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}( - {{u}_{{i + j,\min }}})C_{{2k + j - 1}}^{j}.} $$

Then by Lemma 4, for all x ∈ \({{\mathcal{X}}_{\infty }}\) it is true that

$$(p,x)\;\leqslant \;\sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{u}_{{i + j,\max }}}C_{{(2k - 1) + j - 1}}^{j}} } - \sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}{{u}_{{i + j,\min }}}C_{{2k + j - 1}}^{j}} } $$
$$ = \sum\limits_{j = 0}^{n - i} {{{u}_{{i + j,\max }}}\left( {\frac{1}{{2{{{\left( {\left| \lambda \right| + 1} \right)}}^{{j + 1}}}}} + \frac{1}{{2{{{\left( {\left| \lambda \right| - 1} \right)}}^{{j + 1}}}}}} \right)} $$
$$ - \;\sum\limits_{j = 0}^{n - i} {{{u}_{{i + j,\min }}}\left( {\frac{1}{{2{{{\left( {\left| \lambda \right| - 1} \right)}}^{{j + 1}}}}} + \frac{1}{{2{{{\left( {\left| \lambda \right| + 1} \right)}}^{{j + 1}}}}}} \right)} = {{x}_{{i,\max }}}.$$

Similarly

$$\mathop {\min }\limits_{u \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - (2k - 1)}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = \mathop {\min }\limits_{u \in \mathcal{U}} \left( {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{{( - 1)}}^{{ - (2k - 1) + 1}}}{{u}_{{j + i}}}C_{{(2k - 1) + j - 1}}^{j}} } \right)$$
$$ = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{u}_{{j + i,\min }}}C_{{(2k - 1) + j - 1}}^{j},} $$
$$\mathop {\min }\limits_{u \in \mathcal{U}} \left( { - {{{\left( {{{A}^{{ - 2k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = \mathop {\min }\limits_{u \in \mathcal{U}} \left( {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}{{{( - 1)}}^{{ - 2k + 1}}}{{u}_{{j + i}}}C_{{2k + j - 1}}^{j}} } \right)$$
$$ = \sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}( - {{u}_{{j + i,\max }}})C_{{2k + j - 1}}^{j}.} $$

Then by Corollary 2, for all x ∈ \({{\mathcal{X}}_{\infty }}\) it is true that

$$(p,x)\; \geqslant \;\sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - (2k - 1) - j}}}{{u}_{{i + j,\min }}}C_{{(2k - 1) + j - 1}}^{j}} } - \sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{{\left| \lambda \right|}}^{{ - 2k - j}}}{{u}_{{i + j,\max }}}C_{{2k + j - 1}}^{j}} } $$
$$ = \sum\limits_{j = 0}^{n - i} {{{u}_{{i + j,\min }}}\left( {\frac{1}{{2{{{\left( {\left| \lambda \right| + 1} \right)}}^{{j + 1}}}}} + \frac{1}{{2{{{\left( {\left| \lambda \right| - 1} \right)}}^{{j + 1}}}}}} \right)} $$
$$ - \;\sum\limits_{j = 0}^{n - i} {{{u}_{{i + j,\max }}}\left( {\frac{1}{{2{{{\left( {\left| \lambda \right| - 1} \right)}}^{{j + 1}}}}} + \frac{1}{{2{{{\left( {\left| \lambda \right| + 1} \right)}}^{{j + 1}}}}}} \right)} = {{x}_{{i,\min }}}.$$

Since \({{\mathcal{X}}_{\infty }}\) is open according to Theorem 1

$${{x}_{{i,\min }}} < (p,x) < {{x}_{{i,\max }}},$$
$${{\mathcal{X}}_{\infty }} \subset \bigcap\limits_{i = 1}^n {\left\{ {x \in {{\mathbb{R}}^{n}}{\kern 1pt} :\;{{x}_{i}} \in ({{x}_{{i,\min }}};{{x}_{{i,\max }}})} \right\}.} $$

Lemma 5 is proved.
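
The series identity used in the case λ > 1 (and again, with r in place of λ, in Lemma 6) can be verified numerically; the sketch below compares truncated partial sums with the closed form 1/(λ – 1)j+1 for an illustrative λ.

```python
from math import comb

# Illustrative check of the identity
#   sum_{k >= 1} lambda^(-k-j) * C(k+j-1, j) = 1 / (lambda - 1)^(j+1),  lambda > 1.
lam = 1.7                                     # arbitrary value greater than 1
for j in range(4):
    series = sum(lam ** (-k - j) * comb(k + j - 1, j) for k in range(1, 500))
    closed = 1.0 / (lam - 1.0) ** (j + 1)
    print(j, series, closed)                  # the two columns agree
```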

Proof of Lemma 6. Let p = (0 0 … \({{\tilde {p}}^{{\text{T}}}}\) … 0)T ∈ \({{\mathbb{R}}^{{2n}}}\), \(\tilde {p}\) = (p1 p2)T ∈ \({{\mathbb{R}}^{2}}\), \(p_{1}^{2}\) + \(p_{2}^{2}\) = 1, where \(\tilde {p}\) corresponds to the (2i – 1)th and 2ith coordinates of the vector p. Then for arbitrary k ∈ \(\mathbb{N}\) it is true that

$${{A}^{{ - k}}} = \left( {\begin{array}{*{20}{c}} {{{r}^{{ - k}}}{{A}_{{ - k\varphi }}}}&{ - k{{r}^{{ - k - 1}}}{{A}_{{( - k - 1)\varphi }}}}& \ldots &{{{{( - 1)}}^{{n - 1}}}C_{{n + k - 2}}^{{n - 1}}{{r}^{{ - k - n + 1}}}{{A}_{{( - k - n + 1)\varphi }}}} \\ 0&{{{r}^{{ - k}}}{{A}_{{ - k\varphi }}}}& \ldots &{{{{( - 1)}}^{{n - 2}}}C_{{n + k - 3}}^{{n - 2}}{{r}^{{ - k - n + 2}}}{{A}_{{( - k - n + 2)\varphi }}}} \\ \vdots & \vdots & \ddots & \vdots \\ 0&0& \ldots &{{{r}^{{ - k}}}{{A}_{{ - k\varphi }}}} \end{array}} \right),$$
$$ - {{\left( {{{A}^{{ - k}}}} \right)}^{{\text{T}}}}p = \left( \begin{gathered} 0 \\ \vdots \\ 0 \\ {{r}^{{ - k}}}{{A}_{{ - k\varphi }}}\tilde {p} \\ - k{{r}^{{ - k - 1}}}{{A}_{{( - k - 1)\varphi }}}\tilde {p} \\ \vdots \\ {{( - 1)}^{{n - i}}}C_{{k + n - i - 1}}^{{n - i}}{{r}^{{ - k - n + i}}}{{A}_{{( - k - n + i)\varphi }}}\tilde {p} \\ \end{gathered} \right).$$

Let ui ∈ \({{\mathbb{R}}^{2}}\), i = \(\overline {1,n} \), and u = (u1T, ..., unT)T ∈ \({{\mathbb{R}}^{{2n}}}\). Then

$$\left( { - {{{\left( {{{A}^{{ - k}}}} \right)}}^{{\text{T}}}}p,\,\,u} \right) = - \left( {{{r}^{{ - k}}}\left( {{{A}_{{ - k\varphi }}}\tilde {p},{{u}^{i}}} \right) + ( - 1)k{{r}^{{ - k - 1}}}\left( {{{A}_{{( - k - 1)\varphi }}}\tilde {p},{{u}^{{i + 1}}}} \right)} \right. + \ldots $$
$$ + \;{{( - 1)}^{{n - i}}}C_{{k + n - i - 1}}^{{n - i}}{{r}^{{ - k - n + i}}}\left. {\left( {{{A}_{{( - k - n + i)\varphi }}}\tilde {p},{{u}^{n}}} \right)} \right)$$
$$ = - \sum\limits_{j = 0}^{n - i} {{{r}^{{ - k - j}}}\left( {{{A}_{{( - k - j)\varphi }}}\tilde {p},{{u}^{{i + j}}}} \right)C_{{k + j - 1}}^{j}} \;\leqslant \;\sum\limits_{j = 0}^{n - i} {{{r}^{{ - k - j}}}\left\| {{{A}_{{( - k - j)\varphi }}}\tilde {p}} \right\|\left\| {{{u}^{{i + j}}}} \right\|C_{{k + j - 1}}^{j}} $$
$$ = \sum\limits_{j = 0}^{n - i} {{{r}^{{ - k - j}}}\left\| {{{u}^{{i + j}}}} \right\|C_{{k + j - 1}}^{j}} \;\leqslant \;\sum\limits_{j = 0}^{n - i} {{{r}^{{ - k - j}}}{{r}_{{i + j,\max }}}C_{{k + j - 1}}^{j}.} $$

Then by Lemma 4, for arbitrary x ∈ \({{\mathcal{X}}_{\infty }}\) it is true that

$$(p,x)\;\leqslant \;\sum\limits_{k = 1}^\infty {\sum\limits_{j = 0}^{n - i} {{{r}^{{ - k - j}}}{{r}_{{i + j,\max }}}C_{{k + j - 1}}^{j}} } = \sum\limits_{j = 0}^{n - i} {\sum\limits_{k = 1}^\infty {{{r}^{{ - k - j}}}{{r}_{{i + j,\max }}}C_{{k + j - 1}}^{j}} } = \sum\limits_{j = 0}^{n - i} {\frac{{{{r}_{{i + j,\max }}}}}{{{{{(r - 1)}}^{{j + 1}}}}}.} $$

Since \({{\mathcal{X}}_{\infty }}\) is open according to Theorem 1,

$${{\mathcal{X}}_{\infty }} \subset \bigcap\limits_{i = 1}^n {\left\{ {x \in {{\mathbb{R}}^{{2n}}}{\kern 1pt} :\;{{{\left\| {{{{({{x}_{{2i - 1}}}\;{{x}_{{2i}}})}}^{{\text{T}}}}} \right\|}}_{{{{\mathbb{R}}^{2}}}}} < {{R}_{{i,\max }}}} \right\}.} $$

Lemma 6 is proved.

Proof of Theorem 2. Let us consider, for some B ∈ \({{\mathbb{R}}^{{n \times n}}}\) and \(\mathcal{C} \in {{\mathbb{K}}_{n}}\), the mapping

$$\tilde {T}(\mathcal{X}) = B\mathcal{X} + \mathcal{C}.$$

Let us demonstrate that if B : \({{\mathbb{R}}^{n}} \to {{\mathbb{R}}^{n}}\) is a contraction mapping with the compression ratio β ∈ [0; 1), then \(\tilde {T}\) : \({{\mathbb{K}}_{n}} \to {{\mathbb{K}}_{n}}\) is also a contraction mapping. Indeed,

$${{\rho }_{H}}(\tilde {T}(\mathcal{X}),\tilde {T}(\mathcal{Y})) = \max \left\{ {\mathop {\sup }\limits_{x \in \tilde {T}(\mathcal{X})} \mathop {\inf }\limits_{y \in \tilde {T}(\mathcal{Y})} \rho (x,y);\,\,\,\mathop {\sup }\limits_{y \in \tilde {T}(\mathcal{Y})} \mathop {\inf }\limits_{x \in \tilde {T}(\mathcal{X})} \rho (x,y)} \right\}$$
$$ = \max \left\{ {\mathop {\sup }\limits_{\substack{ x \in \mathcal{X} \\ {{c}_{1}} \in \mathcal{C} }} \mathop {\inf }\limits_{\substack{ y \in \mathcal{Y} \\ {{c}_{2}} \in \mathcal{C} }} \left\| {Bx + {{c}_{1}} - By - {{c}_{2}}} \right\|;\,\,\,\mathop {\sup }\limits_{\substack{ y \in \mathcal{Y} \\ {{c}_{2}} \in \mathcal{C} }} \mathop {\inf }\limits_{\substack{ x \in \mathcal{X} \\ {{c}_{1}} \in \mathcal{C} }} \left\| {Bx + {{c}_{1}} - By - {{c}_{2}}} \right\|} \right\}$$
$$\leqslant \;\max \left\{ {\mathop {\sup }\limits_{\substack{ x \in \mathcal{X} \\ {{c}_{1}} \in \mathcal{C} }} \mathop {\inf }\limits_{\substack{ y \in \mathcal{Y} \\ {{c}_{2}} \in \mathcal{C} }} \left( {\left\| {B(x - y)} \right\| + \left\| {{{c}_{1}} - {{c}_{2}}} \right\|} \right);\,\,\,\mathop {\sup }\limits_{\substack{ y \in \mathcal{Y} \\ {{c}_{2}} \in \mathcal{C} }} \mathop {\inf }\limits_{\substack{ x \in \mathcal{X} \\ {{c}_{1}} \in \mathcal{C} }} \left( {\left\| {B(x - y)} \right\| + \left\| {{{c}_{1}} - {{c}_{2}}} \right\|} \right)} \right\}$$
$$ = \max \left\{ {\mathop {\sup }\limits_{x \in \mathcal{X}} \mathop {\inf }\limits_{y \in \mathcal{Y}} \left\| {B(x - y)} \right\|;\,\,\,\mathop {\sup }\limits_{y \in \mathcal{Y}} \mathop {\inf }\limits_{x \in \mathcal{X}} \left\| {B(x - y)} \right\|} \right\}$$
$$\leqslant \;\max \left\{ {\mathop {\sup }\limits_{x \in \mathcal{X}} \mathop {\inf }\limits_{y \in \mathcal{Y}} \beta \left\| {x - y} \right\|;\,\,\,\mathop {\sup }\limits_{y \in \mathcal{Y}} \mathop {\inf }\limits_{x \in \mathcal{X}} \beta \left\| {x - y} \right\|} \right\} = \beta {{\rho }_{H}}(\mathcal{X},\mathcal{Y}).$$

Then \(\tilde {T}\) is a contraction mapping with the compression ratio β.

According to (4) the following equality holds

$${{T}^{M}}(\mathcal{X}) = {{A}^{{ - M}}}\mathcal{X} + \sum\limits_{k = 1}^M {\left( { - {{A}^{{ - k}}}\mathcal{U}} \right)} = \tilde {T}(\mathcal{X}),$$

where B = A–M, \(\mathcal{C}\) = \(\sum\nolimits_{k = 1}^M {( - {{A}^{{ - k}}}\mathcal{U})} \).

Since all eigenvalues of the matrix A are strictly greater than 1 in absolute value, all eigenvalues of the matrix A–1 are strictly less than 1 in absolute value. Then, according to [25, Theorem 5.6.12], ||A–k|| \(\xrightarrow{{k \to \infty }}\) 0. Hence, by the definition of the limit, for any α ∈ (0; 1) there exists M ∈ \(\mathbb{N}\) such that ||A–M|| < α. Since the inequality

$$\left\| {{{A}^{{ - M}}}(x - y)} \right\|\;\leqslant \;\left\| {{{A}^{{ - M}}}} \right\| \cdot \left\| {x - y} \right\| < \alpha \left\| {x - y} \right\|,$$

holds, A–M is a contraction mapping with the compression ratio α ∈ (0; 1). Then TM: \({{\mathbb{K}}_{n}} \to {{\mathbb{K}}_{n}}\) is also a contraction mapping with the compression ratio α.

By virtue of Lemma 7, for all N ∈ \(\mathbb{N}\) it is true that \(\mathcal{X}\)(N) ⊂ \(\mathcal{X}\)(N + 1); in addition, \(\mathcal{X}\)(N) is a compact set. Then, according to [27, Corollary A.3.4],

$${{\rho }_{H}}({{\mathcal{X}}_{\infty }},\mathcal{X}(N))\;\xrightarrow{{N \to \infty }}\;0.$$
(A.5)

On the other hand, by virtue of [27, Theorem A.3.9], the metric space (\({{\mathbb{K}}_{n}}\), ρH) is complete. Then the contraction mapping \(\tilde {T}\) has a unique fixed point \(\mathcal{X}\text{*} \in {{\mathbb{K}}_{n}}\), which can be computed by the fixed-point iteration method:

$$\mathcal{X}\text{*} = \mathop {\lim }\limits_{N \to \infty } \underbrace {(\tilde {T} \circ \ldots \circ \tilde {T})}_N(\mathcal{X}),$$
(A.6)

where \(\mathcal{X} \in {{\mathbb{K}}_{n}}\) is arbitrary. Let us take \(\mathcal{X}\) = {0}. Then by virtue of Lemma 7

$$\underbrace {(\tilde {T} \circ \ldots \circ \tilde {T})}_N(\{ 0\} ) = - \sum\limits_{k = 1}^{MN} {{{A}^{{ - k}}}\mathcal{U}} = \mathcal{X}(NM).$$

According to the uniqueness of the limit point and formulae (A.5) and (A.6)

$$\overline {{{\mathcal{X}}_{\infty }}} = \overline {\bigcup\limits_{N = 0}^\infty {\mathcal{X}(N)} } = \mathcal{X}\text{*}{\kern 1pt} .$$

The error of the fixed-point iteration method can be estimated by the following formula [28]:

$${{\rho }_{H}}\left( {\overline {{{\mathcal{X}}_{\infty }}} ,\,\,\mathcal{X}(NM)} \right)\;\leqslant \;\frac{{{{\alpha }^{N}}}}{{1 - \alpha }}{{\rho }_{H}}(\mathcal{X}(M),\,\,\{ 0\} ).$$

Theorem 2 is proved.
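
The sketch below illustrates the construction behind Theorem 2 for a hypothetical box control set U = [–1; 1]n: the support function of \(\mathcal{X}\)(NM) is accumulated term by term, α is taken as ||A–M||, and the a priori error αN/(1 – α)ρH(\(\mathcal{X}\)(M), {0}) is evaluated with a crude upper bound for ρH(\(\mathcal{X}\)(M), {0}); none of the numerical choices come from the paper.

```python
import numpy as np

# Illustration of Theorem 2 for U = [-1, 1]^n: X(steps) = -sum_{k<=steps} A^{-k} U
# is a zonotope whose support function in direction p is sum_k ||(A^{-k})^T p||_1;
# the truncation error is bounded by alpha^N / (1 - alpha) * rho_H(X(M), {0}).
A = np.array([[1.5, 1.0], [0.0, 1.5]])        # all eigenvalues exceed 1 in modulus
Ainv = np.linalg.inv(A)
n = A.shape[0]

M = 6
alpha = np.linalg.norm(np.linalg.matrix_power(Ainv, M), 2)
assert alpha < 1                              # A^{-M} is a contraction

def support_X(steps, p):
    Apow, h = np.eye(n), 0.0
    for _ in range(steps):
        Apow = Apow @ Ainv
        h += np.linalg.norm(Apow.T @ p, 1)    # support of -A^{-k} U in direction p
    return h

# Crude bound: rho_H(X(M), {0}) = max_{x in X(M)} ||x|| <= sqrt(n) * sum_k ||A^{-k}||
rho_XM = np.sqrt(n) * sum(np.linalg.norm(np.linalg.matrix_power(Ainv, k), 2)
                          for k in range(1, M + 1))

p = np.array([1.0, 0.0])
for N in (1, 2, 4, 8):
    err = alpha ** N / (1 - alpha) * rho_XM
    print(N, support_X(N * M, p), err)        # support values stabilize; err -> 0
```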

Proof of Theorem 3. By virtue of point 3 of Theorem 2,

$${{\rho }_{H}}\left( {\overline {{{\mathcal{X}}_{\infty }}} ,\,\,\mathcal{X}(NM)} \right)\;\leqslant \;\frac{{\alpha _{p}^{N}}}{{1 - \alpha _{p}^{N}}}{{\rho }_{H}}(\mathcal{X}(M),\,\,\{ 0\} ) = {{R}_{p}},\quad p \in \{ 1,2,\infty \} .$$

Then according to the definition of the Hausdorff distance

$${{\mathcal{X}}_{\infty }} \subset \overline {{{\mathcal{X}}_{\infty }}} \subset \mathcal{X}(NM) + {{B}_{{{{R}_{p}}}}}(0),$$

where

$${{B}_{{{{R}_{1}}}}}(0) = {\text{conv}}\left\{ {{{{(\underbrace {0, \ldots ,0}_i,r,0, \ldots ,0)}}^{{\text{T}}}}{\kern 1pt} :\;r \in \{ - {{R}_{1}},{{R}_{1}}\} ,i = \overline {0,n - 1} } \right\},$$
$${{B}_{{{{R}_{2}}}}}(0) = \left\{ {x \in {{\mathbb{R}}^{n}}{\kern 1pt} :\;\sqrt {\sum\limits_{i = 1}^n {{{{\left| {{{x}_{i}}} \right|}}^{2}}} } \;\leqslant \;{{R}_{2}}} \right\},$$
$${{B}_{{{{R}_{\infty }}}}}(0) = \left\{ {x \in {{\mathbb{R}}^{n}}{\kern 1pt} :\;\mathop {\max }\limits_{i = \overline {1,n} } \left| {{{x}_{i}}} \right|\;\leqslant \;{{R}_{\infty }}} \right\}.$$

Theorem 3 is proved.


About this article

Cite this article

Berendakova, A.V., Ibragimov, D.N. About the Method for Constructing External Estimates of the Limit Controllability Set for the Linear Discrete-Time System with Bounded Control. Autom Remote Control 84, 83–104 (2023). https://doi.org/10.1134/S0005117923020030
