
D-Decomposition in the Case of Polynomial Dependence of the Coefficients of a Polynomial on Two Parameters

  • LINEAR SYSTEMS
  • Published in Automation and Remote Control

Abstract

We propose a method for constructing the stability domains of a polynomial whose coefficients depend polynomially on two real parameters. The method is based on an approximation of the D-decomposition domains by a set of rectangles on each of which the number of zeros of the polynomial in the left half-plane is constant.


REFERENCES

  1. Chernetskii, V.I., Diduk, G.A., and Potapenko, A.A., Matematicheskie metody i algoritmy issledovaniya avtomaticheskikh sistem (Mathematical Methods and Algorithms for Studying Automated Systems), Leningrad: Energiya, 1970.

  2. Savin, M.M., Elsukov, V.S., and Pyatina, O.N., Teoriya avtomaticheskogo upravleniya (Automated Control Theory), Rostov-on-Don: Feniks, 2007.

  3. Egorov, A.I., Obyknovennye differentsial’nye uravneniya s prilozheniyami (Ordinary Differential Equations with Applications), Moscow: Fizmatlit, 2005.

  4. Dorf, R. and Bishop, R., Modern Control Systems, New Jersey: Prentice Hall, 2011.

  5. Gryazina, E.N., Polyak, B.T., and Tremba, A.A., D-decomposition technique state-of-the-art, Autom. Remote Control, 2008, vol. 69, no. 12, pp. 1991–2026.

  6. Vasil’ev, O.O., Study of D-decompositions by the methods of computational real-valued algebraic geometry, Autom. Remote Control, 2012, vol. 73, no. 12, pp. 1978–1993.

  7. Zorich, V.A., Matematicheskii analiz. Ch. 1 (Mathematical Analysis. Part 1), Moscow: MTsNMO, 2012.

  8. Kostrikin, A.I., Vvedenie v algebru. Ch. 1. Osnovy algebry (Introduction to Algebra. Part 1: Basics of Algebra), Moscow: Fizmatlit, 2000.

  9. Gantmakher, F.R., Teoriya matrits (Theory of Matrices), Moscow: Fizmatlit, 2010.

Corresponding author: P. F. Pryashnikova.

Translated by V. Potapchouck

APPENDIX

Proof of Theorem 1. If \(a_n\left (\alpha ,\beta \right )\neq 0\) for all \(\left (\alpha ,\beta \right )\in p\), then the zeros of the polynomial (1) are continuous functions of its coefficients \(a_k\ \left (k=0,\dots ,n\right )\) on the rectangle \(p \) [8, pp. 252–253]. In turn, for the polynomial dependence (2), the coefficients \(a_k \) (\(k=0,\dots ,n \)) are continuous functions of the variables \(\alpha \) and \(\beta \) in \({\mathbb {R}}^2 \). Then, by the theorem on the continuity of a composition of functions [7, p. 492], the zeros of the polynomial (1) are continuous functions of the variables \( \alpha \) and \(\beta \) on the rectangle \(p \). The continuity of the zeros implies the continuity of their real parts.

Let us show that the real parts of the zeros of the polynomial (1) may have an infinite limit at the points \(\left (\alpha ,\beta \right ) \) where \(a_n\left (\alpha ,\beta \right )=0\). Let us make the change of variables \(s=\frac {1}{\xi }\), which, for \(\xi \neq 0 \), takes the polynomial (1) to the function \(f\left (\xi ,\alpha ,\beta \right )=\varphi \left (\xi ,\alpha ,\beta \right )/{\xi }^n \), where \(\varphi \left (\xi ,\alpha ,\beta \right )={\sum }^n_{k=0}a_{n-k}\left (\alpha ,\beta \right ){\xi }^k \). At the points \(\left (\alpha ,\beta \right ) \) where \(a_0\left (\alpha ,\beta \right )\neq 0\), the zeros of the polynomial \(\varphi \left (\xi ,\alpha ,\beta \right )\) are continuous functions of the variables \(\alpha \) and \(\beta \). By virtue of continuity, if \(\lim \limits _{\left (\alpha ,\beta \right )\to ({\alpha }_0,{\beta }_0)} a_n\left (\alpha ,\beta \right )=0 \), then, as \(\left (\alpha ,\beta \right )\to ({\alpha }_0,{\beta }_0)\), the polynomial \(\varphi \left (\xi ,\alpha ,\beta \right )\) has an infinitesimal zero \( \xi \). In this case, the polynomial (1) has an infinitely large zero \(s=\frac {1}{\xi } \), whose real part may tend to infinity; i.e., it may not be continuous at the point \(({\alpha }_0,{\beta }_0) \). The proof of Theorem 1 is complete.\(\quad \blacksquare \)
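The escape of a zero to infinity as the leading coefficient vanishes is easy to observe numerically. A minimal sketch; the quadratic \(\alpha s^2+s+1\) is an assumed toy example, not taken from the paper:

```python
import numpy as np

# Toy example (an assumption, not from the paper): a(s) = alpha*s^2 + s + 1.
# The leading coefficient a_2 = alpha vanishes as alpha -> 0: one zero tends
# to the zero -1 of the limit polynomial s + 1, the other behaves like -1/alpha.
for alpha in (1e-1, 1e-2, 1e-3):
    roots = np.roots([alpha, 1.0, 1.0])
    print(alpha, sorted(r.real for r in roots))
```

The printed real parts show one zero staying near \(-1\) while the other leaves every bounded set, which is exactly the discontinuity described above.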

Proof of Theorem 2. Consider the following assertions:

\(\mathrm {A}. \) :

The point \(\left (\alpha ,\beta \right ) \) is not a solution of any of the equations in (5).

\(\mathrm {B}. \) :

The real parts of the zeros of the polynomial (1) are not zero at the point \(\left (\alpha ,\beta \right ) \).

One has the equivalence \(\mathrm {(A}\Rightarrow \mathrm {B)}\Leftrightarrow (\neg \mathrm {B}\Rightarrow \neg \mathrm {A})\), by virtue of which the proof of the desired assertion \(\mathrm {A}\Rightarrow \mathrm {B}\) can be replaced by the proof of the equivalent assertion \(\neg \mathrm {B}\Rightarrow \neg \mathrm {A}\). If the assertion \( \neg \mathrm {B}\) is true, according to which at the point \(\left (\alpha ,\beta \right )\) there exists a zero \(s \) of the polynomial (1) with zero real part, then \(s=i\omega \) (\(\omega \in \mathbb {R} \)). For \(\omega \neq 0 \), the polynomial (1) with real coefficients has the complex conjugate zero \(\overline {s}=-i\omega \) distinct from \(s \). By the condition of the theorem, \(a_n\left (\alpha ,\beta \right )\neq 0\), and then one has the Orlando formula [9, p. 465]

$$ {\Delta }_{n-1}\left (\alpha ,\beta \right )={(-1)}^{n(n-1)/2}a^{n-1}_n\left (\alpha ,\beta \right )\cdot \prod _{1\le k<q\le n}{(s_k\left (\alpha ,\beta \right )+s_q\left (\alpha ,\beta \right ))}.$$
Since \(s+\overline {s}=0\), it follows from the Orlando formula that \(\Delta _{n-1}\left (\alpha ,\beta \right )=0\). If \(\omega =0 \), then the polynomial (1) has the zero \(s=0 \), which implies that \(a_0\left (\alpha ,\beta \right )=0\). The proof of Theorem 2 is complete.\(\quad \blacksquare \)
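Orlando's formula, with the sign \((-1)^{n(n-1)/2}\) as in Gantmakher [9], can be checked numerically; the cubic below is an assumed test polynomial, not one from the paper:

```python
from itertools import combinations

import numpy as np

# Assumed test polynomial: a(s) = s^3 + 2 s^2 + 3 s + 4, so a_3 = 1 is the
# leading coefficient, n = 3, and Delta_{n-1} = Delta_2 = a_2*a_1 - a_3*a_0.
a3, a2, a1, a0 = 1.0, 2.0, 3.0, 4.0
n = 3
s = np.roots([a3, a2, a1, a0])

delta2 = a2 * a1 - a3 * a0

# Orlando: Delta_{n-1} = (-1)^{n(n-1)/2} * a_n^{n-1} * prod_{k<q} (s_k + s_q)
pairs = np.prod([sk + sq for sk, sq in combinations(s, 2)])
orlando = (-1) ** (n * (n - 1) // 2) * a3 ** (n - 1) * pairs
print(delta2, orlando)  # both equal 2 (up to rounding)
```

If some pair of zeros satisfies \(s_k+s_q=0\) (in particular, a purely imaginary conjugate pair), one factor of the product vanishes and \(\Delta_{n-1}=0\), which is the step used in the proof.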

Proof of Theorem 3. By the condition of the theorem, each term \(d_{\mu \nu }(\alpha ,\beta ) \) \((\mu =0,\dots ,m_{\alpha } \); \(\nu =0,\dots ,m_{\beta }) \) is a monotone function in each of the arguments on the rectangle \(p \) and hence assumes the greatest and least values at the vertices of the rectangle, so that \(d^{\prime }_{\mu \nu }=\min _{\left (\alpha ,\beta \right )\in p} d_{\mu \nu }(\alpha ,\beta )\) and \(d^{{\prime \prime }}_{\mu \nu }=\max _{\left (\alpha ,\beta \right )\in p} d_{\mu \nu }(\alpha ,\beta ) \). Then for each point \(\left (\alpha ,\beta \right )\in p\) one has the inequalities

$$ \left \{ \begin {array}{@{}l} d\left (\alpha ,\beta \right )=\displaystyle \sum \limits ^{m_{\alpha }}_{\mu =0}{\sum \limits ^{m_{\beta }}_{\nu =0}{d_{\mu \nu }}}(\alpha ,\beta ) \ge \sum \limits ^{m_{\alpha }}_{\mu =0}{\sum \limits ^{m_{\beta }}_{\nu =0}{d^{\prime }_{\mu \nu }}} \\ d\left (\alpha ,\beta \right )=\displaystyle \sum \limits ^{m_{\alpha }}_{\mu =0}{\sum \limits ^{m_{\beta }}_{\nu =0}{d_{\mu \nu }}}(\alpha ,\beta )\le \sum \limits ^{m_{\alpha }}_{\mu =0} {\sum \limits ^{m_{\beta }}_{\nu =0}{d^{{\prime \prime }}_{\mu \nu }}}. \end {array} \right .$$
(A.1)
If the first inequality in (7) holds, then it follows from the first inequality in system (A.1) that \(d\left (\alpha ,\beta \right )>0\) for each point \(\left (\alpha ,\beta \right )\in p\), and consequently, the polynomial \( d\left (\alpha ,\beta \right )\) does not have zeros on the rectangle \(p\). If the second inequality in (7) holds, then it follows from the second inequality in system (A.1) that \(d\left (\alpha ,\beta \right )<0\) for each point \(\left (\alpha ,\beta \right )\in p\), and consequently, the polynomial \( d\left (\alpha ,\beta \right )\) does not have zeros on the rectangle \(p\). The proof of Theorem 3 is complete.\(\quad \blacksquare \)
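The vertex bounds (A.1) translate directly into an interval-style sign test. A sketch with an assumed toy polynomial \(d(\alpha,\beta)=1+2\alpha\beta-3\alpha^2\) on the rectangle \([1,2]\times[1,2]\), where every term is monotone in each argument:

```python
import itertools

import numpy as np

# Toy polynomial (an assumption, not from the paper):
#   d(alpha, beta) = 1 + 2*alpha*beta - 3*alpha**2
# on the rectangle p = [1, 2] x [1, 2].  Each term is monotone in each
# argument there, so its least/greatest value on p is attained at a vertex.
terms = [lambda a, b: 1.0,
         lambda a, b: 2.0 * a * b,
         lambda a, b: -3.0 * a ** 2]
vertices = list(itertools.product((1.0, 2.0), (1.0, 2.0)))

lower = sum(min(t(a, b) for a, b in vertices) for t in terms)  # sum of d'_{mu nu}
upper = sum(max(t(a, b) for a, b in vertices) for t in terms)  # sum of d''_{mu nu}

# The two sums enclose d on the whole rectangle, as in (A.1):
grid = [(a, b) for a in np.linspace(1, 2, 21) for b in np.linspace(1, 2, 21)]
vals = [sum(t(a, b) for t in terms) for (a, b) in grid]
print(lower, min(vals), max(vals), upper)
```

Here the bounds are \(-9\le d\le 6\), so neither inequality in (7) holds and the test is inconclusive on this rectangle; had the lower sum been positive (or the upper sum negative), the rectangle would be certified zero-free.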

To prove Theorems 4 and 5, we use Theorem A.1.

Theorem A.1 [7, p. 495].

If a continuous function \(f\colon E\to \mathbb {R} \) on a connected set \(E \) assumes the values \( f\left (a\right )=A\) and \( f\left (b\right )=B\) at points \(a,b\in E \), then for any number \(C \) between \(A \) and \(B \) there exists a point \( c\in E\) at which \( f\left (c\right )=C\).

Proof of Theorem 4. By analogy with the proof of Theorem 2, consider the following assertions:

\(A. \) :

The continuous real parts \(\mathrm {Re}S\left (\alpha ,\beta \right ) \) of all zeros of the polynomial (1) do not vanish on the rectangle \(p \).

\(B. \) :

All real parts \(\mathrm {Re}S\left (\alpha ,\beta \right ) \) preserve their sign on the rectangle \(p \).

We replace the proof of the desired assertion \(\mathrm {A}\Rightarrow \mathrm {B}\) by the proof of the equivalent assertion \(\neg \mathrm {B}\Rightarrow \neg \mathrm {A}\). Assume that assertion \(\neg \mathrm {B} \) holds, according to which there exist points \(\left ({\alpha }_1,{\beta }_1\right )\) and \(\left ({\alpha }_2,{\beta }_2\right )\) on the rectangle \(p \) at one of which the function \(\mathrm {Re}S\left (\alpha ,\beta \right )\) is positive and at the other it is negative. If we set \(E=p\), \(f=\mathrm {Re}S\left (\alpha ,\beta \right )\), \(a=\left ({\alpha }_1,{\beta }_1\right )\), \(b=\left ({\alpha }_2,{\beta }_2\right )\), \(A=\mathrm {Re}S\left ({\alpha }_1,{\beta }_1\right )\), \(B=\mathrm {Re}S\left ({\alpha }_2,{\beta }_2\right )\), and \(C=0 \) in Theorem A.1, then all the conditions in this theorem are satisfied. Then, by Theorem A.1, there exists a point \(c=\left ({\alpha }_0,{\beta }_0\right )\) on the rectangle \(p \) at which \(\mathrm {Re}S\left ({\alpha }_0,{\beta }_0\right )=0\). The proof of Theorem 4 is complete.\(\quad \blacksquare \)

Proof of Theorem 5. By the condition of the theorem, there exists a pair of points, say, \(v_1\) and \(v_2 \), at which one of the polynomials \(d\left (\alpha ,\beta \right ) \) assumes values of opposite signs. Now Theorem 5 is a straightforward consequence of Theorem A.1 if we set \(E=p \), \(f=d\left (\alpha ,\beta \right )\), \(a=v_1 \), \(b=v_2 \), \(A=d\left (v_1\right ) \), \(B=d\left (v_2\right ) \), and \(C=0 \). The proof of Theorem 5 is complete.\(\quad \blacksquare \)
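The situation in Theorem 5 can be illustrated numerically: when \(d\) takes values of opposite signs at two vertices, the intermediate value theorem guarantees a zero on the segment joining them, and bisection locates one. The polynomial and vertices below are assumed toy data:

```python
import numpy as np

# Toy data (an assumption): d(alpha, beta) = alpha^2 + beta - 2 takes values
# of opposite signs at two vertices v1, v2, so by Theorem A.1 it vanishes on
# the segment joining them; bisection locates such a point.
d = lambda a, b: a ** 2 + b - 2.0
v1, v2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
assert d(*v1) * d(*v2) < 0                 # opposite signs at the vertices

f = lambda t: d(*(v1 + t * (v2 - v1)))     # d along the segment, t in [0, 1]
lo, hi = 0.0, 1.0
for _ in range(60):                        # bisection keeps the sign change
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = v1 + 0.5 * (lo + hi) * (v2 - v1)
print(root, d(*root))
```

Along this segment \(d\) reduces to \(t^2+2t-2\), so the located point corresponds to \(t=\sqrt{3}-1\).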

Proof of Theorem 6. Consider a polynomial (6) in which each term

$$ d_{\mu \nu }\left (\alpha ,\beta \right )\qquad {\left (\mu =0,\dots ,m_{\alpha };\ \nu =0,\dots ,m_{\beta }\right )}$$
is a continuous function of the arguments \(\alpha \) and \(\beta \). By virtue of continuity, for the point \(\left ({\alpha }_0,{\beta }_0\right )\in {\mathbb {R}}^2\) and for any \( \varepsilon >0\) there exists a disk
$$ {V_{r_{\mu \nu }}\left ({\alpha }_0,{\beta }_0\right )} {=\left \{\left (\alpha ,\beta \right )\in {\mathbb {R}}^2\ |\ \sqrt {{\left (\alpha -{\alpha }_0\right )}^2+{\left (\beta -{\beta }_0\right )}^2}\le r_{\mu \nu }\right \}}$$
of radius \(r_{\mu \nu }> 0\) centered at the point \(\left ({\alpha }_0,{\beta }_0\right )\) such that for all \(\left (\alpha ,\beta \right )\in V_{r_{\mu \nu }}\left ({\alpha }_0,{\beta }_0\right ) \) one has the inequality \(\left |d_{\mu \nu }\left (\alpha ,\beta \right )-d_{\mu \nu }\left ({\alpha }_0,{\beta }_0\right )\right |<\varepsilon \). If we set \(r=\min {\left \{r_{\mu \nu }\right \}}^{\mu =m_{\alpha };\ \nu =m_{\beta }}_{\mu ,\nu =0} \), then on the disk
$$ {V_r\left ({\alpha }_0,{\beta }_0\right )=\left \{\left (\alpha ,\beta \right )\in {\mathbb {R}}^2\ |\ \sqrt {{\left (\alpha -{\alpha }_0\right )}^2+{\left (\beta -{\beta }_0\right )}^2}\le r\right \}}$$
one has the system of inequalities
$$ \left |d_{\mu \nu }\left (\alpha ,\beta \right )-d_{\mu \nu }\left ({\alpha }_0,{\beta }_0\right )\right |<\varepsilon \left (\mu =0,\dots ,m_{\alpha };\quad \nu =0,\dots ,m_{\beta }\right ). $$
(A.2)

The disk \(V_r\left ({\alpha }_0,{\beta }_0\right ) \) is a compact set, and hence the continuous function \(d_{\mu \nu }\left (\alpha ,\beta \right )\) attains its least and greatest values on it,

$$ \begin {aligned} d_{\mu \nu ,\min } &=\min \limits _{(\alpha ,\beta )\in V_r(\alpha _0,\beta _0)} d_{\mu \nu }(\alpha ,\beta ) =d_{\mu \nu }(\alpha _{\mu \nu ,\min },\beta _{\mu \nu ,\min });\\ d_{\mu \nu ,\max }&=\max \limits _{(\alpha ,\beta )\in V_r(\alpha _0,\beta _0)} d_{\mu \nu }(\alpha ,\beta ) =d_{\mu \nu }(\alpha _{\mu \nu ,\max },\beta _{\mu \nu ,\max }), \end {aligned} $$
where \(\left ({\alpha }_{\mu \nu ,\min },{\beta }_{\mu \nu , \min }\right )\in V_r\left ({\alpha }_0,{\beta }_0\right ) \) and \({\left ({\alpha }_{\mu \nu , \max },{\beta }_{\mu \nu , \max }\right )} {\in V_r\left ({\alpha }_0,{\beta }_0\right )}\). Then inequalities (A.2) acquire the following form for \(\left (\alpha ,\beta \right )=\left ({\alpha }_{\mu \nu , \min },{\beta }_{\mu \nu , \min }\right ) \):
$$ d_{\mu \nu }\left ({\alpha }_0,{\beta }_0\right )-d_{\mu \nu , \min }<\varepsilon \left (\mu =0,\dots ,m_{\alpha };\quad \nu =0,\dots ,m_{\beta }\right ),$$
(A.3)
and for \(\left (\alpha ,\beta \right )=\left ({\alpha }_{\mu \nu , \max },{\beta }_{\mu \nu , \max }\right ) \) inequalities (A.2) take the form
$$ d_{\mu \nu ,\max }-d_{\mu \nu }\left ({\alpha }_0,{\beta }_0\right )<\varepsilon \left (\mu =0,\dots ,m_{\alpha };\quad \nu =0,\dots ,m_{\beta }\right ).$$
(A.4)
Summing inequalities (A.3) over \(\mu =0,\dots ,m_{\alpha } \) and \(\nu =0,\dots ,m_{\beta } \), we obtain the inequality
$$ \sum ^{m_{\alpha }}_{\mu =0}\sum ^{m_{\beta }}_{\nu =0}d_{\mu \nu }({\alpha }_0,{\beta }_0)-\sum ^{m_{\alpha }}_{\mu =0}{\sum }^{m_{\beta }}_{\nu =0}d_{\mu \nu ,\min }<\left (m_{\alpha }+1\right )\left (m_{\beta }+1\right )\varepsilon , $$
which is equivalent to the inequality
$$ \sum ^{m_{\alpha }}_{\mu =0}{\sum ^{m_{\beta }}_{\nu =0}{d_{\mu \nu ,\min }>d\left ({\alpha }_0,{\beta }_0\right )-\left (m_{\alpha }+1\right )\left (m_{\beta }+1\right )\varepsilon }}. $$
(A.5)
In a similar way, inequalities (A.4) imply the inequality
$$ \sum ^{m_{\alpha }}_{\mu =0}{\sum ^{m_{\beta }}_{\nu =0}{d_{\mu \nu ,\max }<d\left ({\alpha }_0,{\beta }_0\right )+\left (m_{\alpha }+1\right )\left (m_{\beta }+1\right )\varepsilon }}. $$
(A.6)

If \( d(\alpha _0,\beta _0){\thinspace >\thinspace }0 \), then we can set \(\varepsilon =d(\alpha _0,\beta _0)(2(m_{\alpha }+1)(m_{\beta }+1))^{-1} \), and then it follows from inequality (A.5) that there exists a disk \(V_{r^{\prime }}\left ({\alpha }_0,{\beta }_0\right ) \) on which the inequality

$$ \sum ^{m_{\alpha }}_{\mu =0}{\sum ^{m_{\beta }}_{\nu =0}{d_{\mu \nu ,\min }>\frac {d\left ({\alpha }_0,{\beta }_0\right )}{2}>0}} $$
(A.7)
holds. If \(d(\alpha _0,\beta _0){\thinspace <\thinspace }0\), then we can set \(\varepsilon =-d(\alpha _0,\beta _0)(2(m_{\alpha }+1)(m_{\beta }+1))^{-1} \), and then it follows from inequality (A.6) that there exists a disk \(V_{r^{{\prime \prime }}}\left ({\alpha }_0,{\beta }_0\right )\) on which one has the inequality
$$ \sum ^{m_{\alpha }}_{\mu =0}{\sum ^{m_{\beta }}_{\nu =0}{d_{\mu \nu ,\max }<\frac {d\left ({\alpha }_0,{\beta }_0\right )}{2}<0}}. $$
(A.8)

By the Hurwitz stability criterion, the stability of the polynomial \(a\left (s,\alpha ,\beta \right )\) at the point \(\left ({\alpha }_0,{\beta }_0\right )\) implies that the values \(a_0\left ({\alpha }_0,{\beta }_0\right )\), \(a_n\left ({\alpha }_0,{\beta }_0\right )\), and \({\Delta }_{n-1}\left ({\alpha }_0,{\beta }_0\right ) \) are either all positive or all negative. Each of the functions \( a_0\left (\alpha ,\beta \right )\), \(a_n\left (\alpha ,\beta \right )\), and \({\Delta }_{n-1}\left (\alpha ,\beta \right )\) is a polynomial of the form (6), and hence for these functions there exists a disk \(V_r\left ({\alpha }_0,{\beta }_0\right )\) on which one of inequalities (A.7) and (A.8) holds. Set \(d=r \), select \(d_{\max }\in (0,d) \), and construct the set of rectangles \(P=P_a\cup P_b \). Let \(p \) be a rectangle in \(P \) to which the point \(\left ({\alpha }_0,{\beta }_0\right )\) belongs. If we assume that \(p\in P_b \), then \(d_p\le d_{\max }<d \), and consequently, \(p\subseteq V_r\left ({\alpha }_0,{\beta }_0\right )\). Then one of inequalities (A.7) and (A.8) holds on the rectangle \(p \), and hence \(p\in P_a \). The resulting contradiction proves that \(p\in P_a \). On each rectangle in the set \(P_a \), the polynomial \(a\left (s,\alpha ,\beta \right )\) is either stable at every point or unstable at every point. Since the polynomial \(a\left (s,\alpha ,\beta \right )\) is stable at the point \(\left ({\alpha }_0,{\beta }_0\right )\in p\), we conclude that \(p \) is a stable rectangle covering the point \(\left ({\alpha }_0,{\beta }_0\right )\). The proof of Theorem 6 is complete. \(\quad \blacksquare \)
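The covering \(P=P_a\cup P_b\) used in the proof can be sketched as an adaptive subdivision: certify the sign of \(d\) on a rectangle by the vertex bounds of Theorem 3, split when the test is inconclusive, and stop splitting once the side length drops below \(d_{\max}\). The helper names and the toy polynomial below are assumptions for illustration, not the paper's implementation:

```python
import itertools

# Toy polynomial d(alpha, beta) = 1 + 2ab - 3a^2; every term is monotone in
# each argument on [0, 2] x [0, 2], as Theorem 3 requires.
terms = [lambda a, b: 1.0,
         lambda a, b: 2.0 * a * b,
         lambda a, b: -3.0 * a ** 2]

def sign_on(rect):
    """+1 or -1 if the vertex bounds of Theorem 3 fix the sign of d on rect,
    0 if the test is inconclusive."""
    (a1, a2), (b1, b2) = rect
    verts = list(itertools.product((a1, a2), (b1, b2)))
    lower = sum(min(t(a, b) for a, b in verts) for t in terms)
    upper = sum(max(t(a, b) for a, b in verts) for t in terms)
    return 1 if lower > 0 else (-1 if upper < 0 else 0)

def cover(rect, d_max, certified, undecided):
    """Split rect until the sign of d is certified (rect joins P_a) or its
    longer side is at most d_max (rect joins P_b)."""
    (a1, a2), (b1, b2) = rect
    s = sign_on(rect)
    if s != 0:
        certified.append((rect, s))
    elif max(a2 - a1, b2 - b1) <= d_max:
        undecided.append(rect)
    else:  # bisect the longer side
        am, bm = 0.5 * (a1 + a2), 0.5 * (b1 + b2)
        halves = ([((a1, am), (b1, b2)), ((am, a2), (b1, b2))]
                  if a2 - a1 >= b2 - b1
                  else [((a1, a2), (b1, bm)), ((a1, a2), (bm, b2))])
        for h in halves:
            cover(h, d_max, certified, undecided)

certified, undecided = [], []
cover(((0.0, 2.0), (0.0, 2.0)), 0.25, certified, undecided)
print(len(certified), len(undecided))
```

The undecided rectangles concentrate along the zero set of \(d\); refining \(d_{\max}\) shrinks their total area, which is the sense in which the rectangles approximate the D-decomposition boundary.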


Cite this article

Pryashnikova, P.F. D-Decomposition in the Case of Polynomial Dependence of the Coefficients of a Polynomial on Two Parameters. Autom Remote Control 82, 398–409 (2021). https://doi.org/10.1134/S0005117921030024
