1. INTRODUCTION AND FORMULATION OF THE PROBLEM

Spectral problems of linear algebra are important for theoretical and applied studies. Many problems of control theory and problems on the description of the \(\varepsilon\)-spectrum, or pseudospectrum, lead to questions on the location of the matrix spectrum with respect to various domains whose boundaries are contours in the complex plane and to dichotomy problems for the matrix spectrum with respect to certain curves (see, for example, [7, 8, 9, 12]). Therefore, establishing criteria and developing algorithms for describing the location of the matrix spectrum in the complex plane are of great importance.

In linear algebra, Lyapunov’s criteria are the best-known criteria for localization of the matrix spectrum. They provide necessary and sufficient conditions for the location of the matrix spectrum in the unit disc

$$ {\cal C}_i = \{\lambda \in {\mathbb C}: \;|\lambda | < 1\}$$

and in the left half-plane

$$ {\cal C}_- = \{\lambda \in {\mathbb C}: \;\mathrm {Re}\thinspace \lambda < 0\}. $$

We present the formulations for the case of an \((n\times n)\)-matrix \(A\).

Theorem 1.1 (A. M. Lyapunov). Each eigenvalue of a matrix \(A\) belongs to the unit disc \({\cal C}_i \) if and only if the matrix equation

$$ H - A^*HA = C, \quad C = C^* > 0, $$
(1.1)

has a Hermitian positive definite solution \(H\), i.e., \(H = H^* > 0 \).

Corollary 1.2. If each eigenvalue of a matrix \(A\) belongs to the unit disc \({\cal C}_i \) then there exists a solution \(H\) to matrix equation (1.1) that satisfies \(H = H^* > 0 \) and has the form

$$ H = \sum \limits _{j=0}^{\infty } (A^*)^j C A^j. $$
(1.2)

Theorem 1.3 (A. M. Lyapunov). Each eigenvalue of a matrix \(A\) belongs to the left half-plane \({\cal C}_- \) if and only if the matrix equation

$$ HA + A^*H = -C, \quad C = C^* > 0, $$
(1.3)

has a Hermitian positive definite solution \(H\), i.e., \(H = H^* > 0 \).

Corollary 1.4. If each eigenvalue of a matrix \(A\) belongs to the left half-plane \({\cal C}_- \) then there exists a solution \(H\) to matrix equation (1.3) that satisfies \(H = H^* > 0 \) and has the form

$$ H = \int \limits _{0}^{\infty } e^{tA^*} C e^{tA} dt. $$
(1.4)

Matrix equations (1.1) and (1.3) are usually called the Lyapunov equations. Notice that the above criteria are used in numerical research. In particular, on the basis of formulas (1.2) and (1.4), Godunov and Bulgakov developed algorithms for numerically solving, with guaranteed accuracy, the problem on the location of the matrix spectrum in the domains \({\cal C}_i\) and \({\cal C}_- \). A survey of results in this direction can be found in [3, 7].
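
For illustration, the series (1.2) can be summed directly, and positive definiteness of the resulting \(H\) can be tested numerically. The following is a minimal sketch in Python with NumPy; the helper function, the test matrix, and the truncation level are illustrative and do not reproduce the algorithms of [3, 7].

\begin{verbatim}
import numpy as np

def lyapunov_series(A, C, terms=200):
    # Partial sum of formula (1.2): H = sum_{j>=0} (A^*)^j C A^j.
    # The series converges when the spectrum of A lies in the unit disc C_i.
    n = A.shape[0]
    H = np.zeros((n, n), dtype=complex)
    P = np.eye(n, dtype=complex)          # current power A^j
    for _ in range(terms):
        H += P.conj().T @ C @ P
        P = P @ A
    return H

A = np.array([[0.5, 0.3], [0.0, -0.4]])   # eigenvalues 0.5, -0.4 lie in C_i
C = np.eye(2)
H = lyapunov_series(A, C)
print(np.linalg.norm(H - A.conj().T @ H @ A - C))   # residual of (1.1), ~0
print(np.linalg.eigvalsh(H))                        # all positive: H = H^* > 0
\end{verbatim}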

In [4, 6], the authors studied the problem on the location of the matrix spectrum in the domain bounded by the ellipse

$$ {\cal E}_i = \left \{\lambda \in {\mathbb C}: \;\frac {(\mathrm {Re}\thinspace \lambda )^2}{a^2} + \frac {(\mathrm {Im}\thinspace \lambda )^2}{b^2} < 1 \right \}, \quad a > b.$$

In particular, the following assertion was proven in [6].

Theorem 1.5. Each eigenvalue of a matrix \(A\) belongs to the domain \({\cal E}_i \) if and only if the matrix equation

$$ H - \left (\frac {1}{2a^2}+\frac {1}{2b^2}\right ) A^*HA - \left (\frac {1}{4a^2}-\frac {1}{4b^2}\right )(HA^2+(A^*)^2H) = C, \quad C = C^* > 0,$$
(1.5)

has a Hermitian positive definite solution \(H\), i.e., \(H = H^* > 0\).

In [4], a formula was obtained for solving equation (1.5) with \(C=I\). Namely, we have

$$ H = \sum \limits _{k=0}^{\infty } \left (\sum \limits _{l=0}^k C_k^l \alpha ^{k-l}(A^*)^{k-l} \left [\beta ^l \sum \limits ^{l}_{j=0} C^j_{l} (A^*)^{2j} A^{2(l-j)} \right ] A^{k-l}\right ), $$
(1.6)

where

$$ \alpha = \frac {1}{2a^2}+\frac {1}{2b^2}, \quad \beta = \frac {1}{4a^2}-\frac {1}{4b^2}.$$

We rewrite formula (1.6) as follows:

$$ H = (1 - \gamma ^2) \sum \limits _{k=0}^{\infty } f^*_k f_k, $$

where

$$ \begin {gathered} f_0 = I, \quad f_1 = \sigma A, \\ f_k = \sigma f_{k-1}A + \gamma f_{k-2}, \quad k = 2, 3,\ldots , \\ \gamma = -\frac {a-b}{a+b}, \quad \sigma = \frac {2}{a+b}. \end {gathered} $$

On the basis of this formula, an algorithm for numerically solving the problem on the location of the matrix spectrum in the domain \({\cal E}_i \) was developed in [4].
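
The recurrence for \(f_k\) is straightforward to implement directly. Below is a minimal sketch in Python with NumPy; the test matrix, the semiaxes, and the truncation level are illustrative and do not reproduce the algorithm of [4].

\begin{verbatim}
import numpy as np

def ellipse_lyapunov_series(A, a, b, terms=300):
    # H = (1 - gamma^2) * sum_k f_k^* f_k for C = I, where
    # f_0 = I, f_1 = sigma A, f_k = sigma f_{k-1} A + gamma f_{k-2}.
    # The series converges when sigma(A) lies in the ellipse E_i.
    n = A.shape[0]
    gamma = -(a - b) / (a + b)
    sigma = 2.0 / (a + b)
    f_prev = np.eye(n, dtype=complex)       # f_0
    f_curr = sigma * A.astype(complex)      # f_1
    H = f_prev.conj().T @ f_prev + f_curr.conj().T @ f_curr
    for _ in range(2, terms):
        f_prev, f_curr = f_curr, sigma * f_curr @ A + gamma * f_prev
        H += f_curr.conj().T @ f_curr
    return (1.0 - gamma**2) * H

a, b = 2.0, 1.0
A = np.array([[0.9, 1.0], [0.0, 0.2j]])     # eigenvalues 0.9, 0.2i lie in E_i
H = ellipse_lyapunov_series(A, a, b)
alpha = 1/(2*a**2) + 1/(2*b**2)
beta  = 1/(4*a**2) - 1/(4*b**2)
res = H - alpha * A.conj().T @ H @ A \
        - beta * (H @ A @ A + A.conj().T @ A.conj().T @ H)
print(np.linalg.norm(res - np.eye(2)))      # residual of (1.5) with C = I, ~0
\end{verbatim}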

Notice that Theorems 1.1, 1.3, and 1.5 can be generalized to the problem on dichotomy of the matrix spectrum with respect to a circle, a straight line, and an ellipse; see [3, 5, 6, 7].

The above theorems establish connections between the location of the matrix spectrum in the domains \( {\cal C}_i\), \({\cal C}_- \), and \({\cal E}_i \) and the solvability of matrix equations (1.1), (1.3), and (1.5), respectively. Notice that similar connections exist in more general cases. In [8, 9], they were established for problems on the location of the matrix spectrum in a variety of domains in the complex plane and for the class of generalized Lyapunov equations of the form

$$ \sum \limits ^N_{j,k=0} a_{jk} B^jHA^k = C, $$
(1.7)

where \(A\), \(B\), and \(C\) are known matrices of sizes \(n \times n\), \(m \times m\), and \(m \times n\) respectively, the number \(N\) determines the order of equation (1.7), the \(a_{jk}\) are constant coefficients, and \(H\) is an unknown \((m \times n)\)-matrix.

It is obvious that equations (1.1), (1.3), and (1.5) are particular cases of equation (1.7) with \(N \le 2 \), \(m = n\), and \(B = A^* \).

Generalized Lyapunov equations (1.7) first appeared in papers by Krein; see [5, Ch. 1]. We present below one of his results. Let

$$ P(\lambda ,\mu ) = \sum \limits ^N_{j,k=0} a_{jk}\lambda ^j \mu ^k$$
(1.8)

denote the characteristic polynomial of equation (1.7). We introduce the following notation.

  1. Let \(\sigma (A) = \{\mu _1,\dots ,\mu _n\}\) be the spectrum of a matrix \(A\).

  2. Let \(\sigma (B) = \{\lambda _1,\dots ,\lambda _m\}\) be the spectrum of a matrix \(B\).

  3. Let \(\gamma _A\) be a closed contour surrounding \(\sigma (A) \).

  4. Let \(\gamma _B\) be a closed contour surrounding \(\sigma (B) \).

For the proof of the following assertion, the reader is referred to [5].

Theorem 1.6 (M. G. Krein). Assume that

$$ P(\lambda _s,\mu _r) \neq 0,$$

where

$$ \lambda _s \in \sigma (B), \quad s = 1, \dots ,m, \quad \mu _r \in \sigma (A), \quad r =1,\dots ,n. $$

Then, for every matrix \(C \), there exists a unique solution to equation (1.7). This solution can be represented as follows:

$$ H = \frac {1}{(2\pi i)^2} \int \limits _{\gamma _A} \int \limits _{\gamma _B} \frac {1}{P(\lambda , \mu )} (\lambda I - B)^{-1} C (\mu I - A)^{-1} d\lambda \thinspace d\mu .$$
(1.9)
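
For matrices of moderate size, equation (1.7) can also be solved without evaluating the double integral (1.9): with the column-stacking operator \(\mathrm{vec}\), we have \(\mathrm{vec}(B^jHA^k) = ((A^k)^T \otimes B^j)\thinspace\mathrm{vec}\thinspace H\), so (1.7) becomes a linear system of \(mn\) scalar equations. Below is a minimal sketch in Python with NumPy; the helper function, the coefficient array, and the test data are illustrative.

\begin{verbatim}
import numpy as np

def solve_generalized_lyapunov(a, A, B, C):
    # Solve sum_{j,k=0}^{N} a[j, k] B^j H A^k = C (equation (1.7)) via
    # vec(B^j H A^k) = ((A^k)^T kron B^j) vec(H), column-stacked vec.
    # Unique solvability corresponds to P(lambda_s, mu_r) != 0 (Theorem 1.6).
    m, n = B.shape[0], A.shape[0]
    N = a.shape[0] - 1
    Apow = [np.linalg.matrix_power(A, k) for k in range(N + 1)]
    Bpow = [np.linalg.matrix_power(B, j) for j in range(N + 1)]
    L = sum(a[j, k] * np.kron(Apow[k].T, Bpow[j])
            for j in range(N + 1) for k in range(N + 1))
    h = np.linalg.solve(L, C.reshape(-1, order='F'))
    return h.reshape((m, n), order='F')

# Example: the Lyapunov equation (1.1), H - A^* H A = C,
# i.e., B = A^*, a_00 = 1, a_11 = -1.
A = np.array([[0.5, 0.3], [0.0, -0.4]], dtype=complex)
C = np.eye(2, dtype=complex)
a = np.array([[1.0, 0.0], [0.0, -1.0]])
H = solve_generalized_lyapunov(a, A, A.conj().T, C)
print(np.linalg.norm(H - A.conj().T @ H @ A - C))   # ~0
\end{verbatim}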

In the present article, we consider the problem on the location of the matrix spectrum with respect to a parabola. In particular, we solve the problem on dichotomy of the spectrum with respect to a parabola. Notice that a series of algorithms is known for solving such problems (see, for example, [1, 2, 10, 11]). However, these algorithms are based on reducing the spectral problems to the well-known problems on the location of the matrix spectrum with respect to either a circle or a straight line. We suggest a new solution of the problems under consideration that avoids intermediate reduction to known ones. Our solution is based on studying a special Lyapunov-type matrix equation. In terms of solvability of this equation, we prove assertions on the location of the matrix spectrum in the domains \({\cal P}_i\) (bounded by a parabola) and \({\cal P}_e\) (lying outside the closure of \({\cal P}_i\)). Moreover, using the norm of the solution to the matrix equation, we find a region near the parabola

$$ h = \left \{\lambda \in {\mathbb C}: \; \frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 = \mathrm {Re}\thinspace \lambda \right \}, \quad p > 0, $$

that is free of eigenvalues of \(A\). Studying the problem on the location of the matrix spectrum in the domain \({\cal P}_i\), we also obtain an explicit formula for the solution to the matrix equation; the formula is similar to formulas (1.2), (1.4), and (1.6) and can be used for developing numerical algorithms. Notice that we use the same equation for proving an analog of the Lyapunov–Krein theorem on dichotomy of the matrix spectrum with respect to a parabola.

2. LOCATION OF THE MATRIX SPECTRUM IN A DOMAIN BOUNDED BY A PARABOLA

In the present section, we establish necessary and sufficient conditions for location of the matrix spectrum in a domain in the complex plane that is bounded by a parabola. The conditions are similar to the above criteria. They are formulated in terms of solvability of a certain matrix equation.

Let \(A\) be an \((n \times n)\)-matrix. We denote by \({\cal P}_i \) the domain bounded by the parabola \(h \), i.e.,

$$ {\cal P}_i = \left \{\lambda \in {\mathbb C}: \; \frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 < \mathrm {Re}\thinspace \lambda \right \}, \quad p > 0.$$

We prove the following assertion.

Theorem 2.1. If the spectrum of a matrix \(A\) is a subset of the domain \({\cal P}_i \) then, for each matrix \(C \), there exists a unique solution to the matrix equation

$$ HA + A^*H - \frac {1}{2p}\left [A^*HA - \frac {1}{2}(HA^2 + (A^*)^2H)\right ] = C.$$
(2.1)

Proof. It is obvious [8] that equation (2.1) is a generalized Lyapunov equation (see (1.7)) with

$$ N = 2, \; \; B = A^*, \; \; a_{00} = 0, \; \; a_{10}=a_{01}=1, \; \; a_{11} = -\frac {1}{2p}, \; \; a_{02} = a_{20} = \frac {1}{4p}.$$

By the conditions of the theorem, each eigenvalue \(\mu _j\) of \(A \) belongs to the domain \({\cal P}_i \). Since \(B = A^* \) and the domain \({\cal P}_i \) is symmetric with respect to the real axis, the spectrum of \(B \) is a subset of \({\cal P}_i \) as well. We show that the conditions of Krein’s theorem are satisfied.

For equation (2.1), we represent the characteristic polynomial (see (1.8)) as follows:

$$ P(\lambda ,\mu ) = \mu + \lambda - \frac {1}{2p}\lambda \mu + \frac {1}{4p}(\mu ^2 + \lambda ^2).$$

Let \(\mu _r = \alpha _r + i\beta _r \). We have \(\lambda _s = \overline \mu _s = \alpha _s - i\beta _s\) and

$$ P(\lambda _s,\mu _r) = \mu _r + \overline \mu _s - \frac {1}{2p}\overline \mu _s\mu _r + \frac {1}{4p}(\mu ^2_r + \overline \mu ^2_s).$$

It is obvious that

$$ \begin {gathered} \mathrm {Re}\thinspace P(\lambda _s,\mu _r) = \alpha _r+\alpha _s - \frac {1}{2p} (\alpha _s\alpha _r+\beta _s\beta _r) + \frac {1}{4p}(\alpha _s^2-\beta _s^2+\alpha _r^2-\beta _r^2) \\ = \left (\alpha _r - \frac {1}{2p}\beta ^2_r\right ) + \left (\alpha _s - \frac {1}{2p}\beta ^2_s\right ) + \frac {1}{4p}(\alpha _r - \alpha _s)^2 + \frac {1}{4p}(\beta _r - \beta _s)^2; \end {gathered}$$

hence, we have

$$ \mathrm {Re}\thinspace P(\lambda _s,\mu _r) \ge \left (\alpha _r - \frac {1}{2p}\beta ^2_r\right ) + \left (\alpha _s - \frac {1}{2p}\beta ^2_s\right ). $$

Since \(\sigma (A) \subset {\cal P}_i \), we obtain \(\alpha _j - \frac {1}{2p}\beta ^2_j > 0\), \(j=1,\dots ,n \). We conclude that

$$ \mathrm {Re}\thinspace P(\lambda _s,\mu _r) > 0, \quad \lambda _s \in \sigma (A^*), \quad \mu _r \in \sigma (A), \quad s,r = 1,\dots ,n.$$

Therefore, the conditions of Theorem 1.6 hold; hence, there exists a unique solution to equation (2.1) for each choice of \(C \). By formula (1.9), we have

$$ H = \frac {1}{(2\pi i)^2} \int \limits _{\gamma _A} \int \limits _{\gamma _{A^*}} \frac {1}{P(\lambda ,\mu )} (\lambda I - A^*)^{-1} C (\mu I - A)^{-1} d\lambda \thinspace d\mu ,$$
(2.2)

where \(\gamma _A \) and \(\gamma _{A^*} \) are contours that surround the spectra of the matrices \(A \) and \(A^* \) respectively. We may take the boundary of the set

$$ \left \{\lambda : |\lambda | \le \|A\| + \varepsilon , \quad \frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 \le \mathrm {Re}\thinspace \lambda \right \}, \quad \varepsilon > 0, $$

for both \(\gamma _A\) and \(\gamma _{A^*} \). This completes the proof of the theorem. \(\quad \square \)

Corollary 2.2. If \(C = C^*\) then \(H = H^*\). If \(C = C^* > 0\) then \(H = H^* > 0\).

Theorem 2.3. Assume that \(C = C^* > 0\) and there exists a Hermitian positive definite solution \(H \) to equation (2.1). Then each eigenvalue \(\mu _k\) of \(A \) belongs to the domain \( {\cal P}_i\). Moreover, the following estimate is valid:

$$ \frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2 \le \mathrm {Re}\thinspace \mu _k - \frac {1}{2\|C^{-1}\|\|H\|}, \quad k = 1,\ldots ,n.$$
(2.3)

Proof. Let \(\mu _k \) be an eigenvalue of \(A \) and let \(v_k \) be a corresponding unit eigenvector. By (2.1), we have

$$ \begin {gathered} \langle Cv_k, v_k \rangle = \left \langle \left ( HA+A^*H-\frac {1}{2p}\left [A^*HA - \frac {1}{2}(HA^2+(A^*)^2H)\right ] \right )\thinspace v_k, v_k \right \rangle \\ = (\mu _k+\overline \mu _k) \langle H v_k, v_k \rangle - \frac {1}{2p}\overline \mu _k\mu _k \langle H v_k, v_k \rangle + \frac {1}{4p}(\mu ^2_k + \overline \mu ^2_k) \langle H v_k, v_k \rangle . \end {gathered} $$

Since

$$ \begin {gathered} \mu _k + \overline \mu _k = 2\mathrm {Re}\thinspace \mu _k, \qquad \overline \mu _k \mu _k = (\mathrm {Re}\thinspace \mu _k)^2 + (\mathrm {Im}\thinspace \mu _k)^2, \\ \mu ^2_k + \overline \mu ^2_k = 2(\mathrm {Re}\thinspace \mu _k)^2-2(\mathrm {Im}\thinspace \mu _k)^2, \end {gathered}$$

we obtain

$$ \langle Cv_k, v_k \rangle = 2\left (\mathrm {Re}\thinspace \mu _k - \frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2\right ) \langle Hv_k, v_k \rangle .$$
(2.4)

Therefore, the conditions \(C = C^*> 0 \) and \(H = H^*> 0 \) imply that \(\mu _k \in {\cal P}_i \), \(k = 1,\ldots ,n \). We take into account (2.4) and obtain the estimate

$$ 2\left (\mathrm {Re}\thinspace \mu _k - \frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2\right ) \|H\| \ge \frac {1}{\|C^{-1}\|},$$

which is equivalent to (2.3). This completes the proof of the theorem. \(\quad \square \)

The following assertion is immediate from Theorem 2.1, Corollary 2.2, and Theorem 2.3.

Theorem 2.4. Let \(C = C^* > 0\). The spectrum of \(A \) is a subset of the domain \({\cal P}_i\) if and only if there exists a Hermitian positive definite solution \(H \) to equation (2.1), i.e., \(H = H^* > 0 \).
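
The criterion of Theorem 2.4 is easy to test numerically by vectorizing equation (2.1), as for equation (1.7) above. A minimal sketch in Python with NumPy follows; the helper function, the test matrix, and the choice \(C = I\) are illustrative.

\begin{verbatim}
import numpy as np

def parabola_criterion(A, p, C=None):
    # Solve equation (2.1) by vectorization and test H = H^* > 0,
    # which by Theorem 2.4 is equivalent to sigma(A) lying in P_i.
    n = A.shape[0]
    C = np.eye(n, dtype=complex) if C is None else C
    B, I = A.conj().T, np.eye(n)
    # vec of (2.1): HA + A^*H - (1/2p) A^*HA + (1/4p)(HA^2 + (A^*)^2 H) = C
    L = (np.kron(A.T, I) + np.kron(I, B)
         - np.kron(A.T, B) / (2*p)
         + (np.kron((A @ A).T, I) + np.kron(I, B @ B)) / (4*p))
    H = np.linalg.solve(L, C.reshape(-1, order='F')).reshape((n, n), order='F')
    H = (H + H.conj().T) / 2     # clean rounding; H = H^* by Corollary 2.2
    return H, bool(np.all(np.linalg.eigvalsh(H) > 0))

p = 1.0
A = np.array([[2.0 + 1.0j, 0.5], [0.0, 3.0 - 1.5j]])   # spectrum inside P_i
H, inside = parabola_criterion(A, p)
mu = np.linalg.eigvals(A)
print(inside, bool(np.all(mu.imag**2 / (2*p) < mu.real)))   # should agree
margin = 1.0 / (2 * np.linalg.norm(H, 2))   # estimate (2.3) with C = I
print(bool(np.all(mu.imag**2 / (2*p) <= mu.real - margin)))
\end{verbatim}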

3. FORMULAS FOR THE SOLUTION TO EQUATION (2.1)

In the present section, we obtain an analog of Lyapunov’s formulas for the solution to equation (2.1) provided that the spectrum of \(A\) is a subset of \({\cal P}_i \).

We consider the case in which the matrix on the right-hand side of equation (2.1) is the identity matrix, i.e., \(C = I\). As was proven in the previous section, this equation has a unique solution; moreover, the solution is Hermitian and positive definite. In view of formula (2.2), we may represent the solution as follows:

$$ H = \frac {1}{(2\pi i)^2} \int \limits _{\gamma _A} \int \limits _{\gamma _{A^*}} \frac {1}{\mu + \lambda - \frac {1}{2p}\lambda \mu + \frac {1}{4p}(\mu ^2 + \lambda ^2)} (\lambda I - A^*)^{-1} (\mu I - A)^{-1} d\lambda \thinspace d\mu .$$
(3.1)

This formula is not convenient for solving particular problems. We suggest a simpler formula that can be used in numerical computations.

Let \(\mu \) be an eigenvalue of \(A \) and let \(v \) be a corresponding eigenvector. We take into account (2.4) and obtain the equality

$$ 2\left (\mathrm {Re}\thinspace \mu - \frac {1}{2p}(\mathrm {Im}\thinspace \mu )^2\right ) \langle Hv, v \rangle = \langle v, v \rangle .$$

We pass to the equivalent relation

$$ \langle Hv, v \rangle = \frac {1}{2\left (\mathrm {Re}\thinspace \mu - \frac {1}{2p}(\mathrm {Im}\thinspace \mu )^2\right )}\langle v, v \rangle .$$
(3.2)

Since \(\mu \in {\cal P}_i \), we have

$$ 0 \le \frac {(\mathrm {Im}\thinspace \mu )^2}{2p \mathrm {Re}\thinspace \mu } < 1. $$

We rewrite (3.2) in the following form:

$$ \langle Hv, v \rangle = \frac {1}{2\mathrm {Re}\thinspace \mu } \sum \limits _{k=0}^{\infty } \left (\frac {(\mathrm {Im}\thinspace \mu )^2}{2p \mathrm {Re}\thinspace \mu }\right )^k \langle v, v \rangle .$$

We find that

$$ \langle Hv, v \rangle = \sum \limits _{k=0}^{\infty } \left (\frac {-1}{4p}\right )^k \frac {(\mu - \overline \mu )^{2k}}{(\mu + \overline \mu )^{k+1}} \langle v, v \rangle = \sum \limits _{k=0}^{\infty } J_k. $$
(3.3)

We transform each of the summands \(J_k \), \(k = 0,1,\ldots \). Since \(\mathrm {Re}\thinspace \mu > 0 \), we obtain

$$ J_0 = \frac {1}{(\mu + \overline \mu )} \langle v, v \rangle = \int \limits _0^\infty e^{-(\mu + \overline \mu )t}dt \langle v, v \rangle = \int \limits _0^\infty \langle e^{-tA} v, e^{-tA}v \rangle dt = \langle \left (\int \limits _0^\infty e^{-tA^*} e^{-tA} dt\right ) v, v \rangle$$

for \(k = 0 \). It is obvious that

$$ \begin {gathered} J_1 = -\frac {1}{4p}\frac {(\mu - \overline \mu )^2}{(\mu + \overline \mu )^2} \langle v, v \rangle = -\frac {1}{4p}\frac {(\mu - \overline \mu )}{(\mu + \overline \mu )^2} \left (\langle Av, v \rangle - \langle v, Av \rangle \right ) \\ = -\frac {1}{4p}\frac {1}{(\mu + \overline \mu )^2} \left (\langle A^2v, v \rangle - 2\langle Av, Av \rangle + \langle v, A^2v \rangle \right ) \end {gathered} $$

for \(k = 1\). The same transformation as for \( k = 0\) leads to the equalities

$$ \begin {gathered} J_1 = -\frac {1}{4p}\frac {1}{(\mu + \overline \mu )} \int \limits _0^\infty e^{-(\mu + \overline \mu )t_0}dt_0 \left (\langle A^2v, v \rangle - 2\langle Av, Av \rangle + \langle v, A^2v \rangle \right ) \\ = -\frac {1}{4p}\frac {1}{(\mu + \overline \mu )} \bigg (\int \limits _0^\infty \langle A^2e^{-t_0 A}v, e^{-t_0 A}v \rangle dt_0 - 2\int \limits _0^\infty \langle Ae^{-t_0 A}v, Ae^{-t_0 A}v \rangle dt_0 \\ + \int \limits _0^\infty \langle e^{-t_0 A}v, A^2e^{-t_0 A}v \rangle dt_0 \bigg ). \end {gathered}$$

We repeat similar transformations and obtain

$$ \begin {gathered} J_1 = -\frac {1}{4p} \bigg (\int \limits _0^\infty \int \limits _0^\infty \langle A^2e^{-(t_1 + t_0) A}v, e^{-(t_1 + t_0) A}v \rangle dt_0 dt_1 \\ - 2\int \limits _0^\infty \int \limits _0^\infty \langle Ae^{-(t_1 + t_0) A}v, Ae^{-(t_1 + t_0) A}v \rangle dt_0 dt_1 \\ + \int \limits _0^\infty \int \limits _0^\infty \langle e^{-(t_1 + t_0) A}v, A^2e^{-(t_1 + t_0) A}v \rangle dt_0 dt_1 \bigg ). \end {gathered}$$

We conclude that

$$ J_1 = \langle \bigg (-\frac {1}{4p} \int \limits _0^\infty \int \limits _0^\infty e^{-(t_1 + t_0) A^*} (A^2 + 2(-A^*)A + (-A^*)^2) e^{-(t_1 + t_0) A} dt_0 dt_1 \bigg ) v, v \rangle . $$

We transform each summand \(J_k \), \(k\ge 2 \), in the same way and rewrite (3.3) as follows:

$$ \langle Hv, v \rangle = \langle \bigg ( \sum \limits _{k=0}^{\infty } \left (\frac {-1}{4p}\right )^k \int \limits _{R^{k+1}_+} e^{-(t_k + \ldots + t_0) A^*} \left (\sum \limits ^{2k}_{j=0} C^j_{2k} (-A^*)^{2k-j} A^j \right ) e^{-(t_k + \ldots + t_0) A} dt_0 \ldots dt_k \bigg ) v, v \rangle .$$
(3.4)

For \(k \ge 0\), we introduce the notation

$$ H_k = \left (\frac {-1}{4p}\right )^k \int \limits _{R^{k+1}_+} e^{-(t_k + \ldots + t_0) A^*} \left (\sum \limits ^{2k}_{j=0} C^j_{2k} (-A^*)^{2k-j} A^j \right ) e^{-(t_k + \ldots + t_0) A} dt_0 \ldots dt_k$$

and rewrite (3.4) as follows:

$$ \langle \left (H - \sum \limits _{k=0}^{\infty } H_k\right ) v, v \rangle = 0. $$

If the eigenvalues of \(A\) are pairwise distinct and \(\mu _l \in {\cal P}_i\) for every \(l \) then the corresponding eigenvectors \(v_l \) satisfy the following system of \(n \) equalities:

$$ \langle \left (H - \sum \limits _{k=0}^{\infty } H_k\right ) v_l, v_l \rangle = 0, \quad l = 1,\ldots ,n. $$

Since \(H = H^*\) and \(H_k = H^*_k \), we conclude that the Hermitian matrix

$$ H - \sum \limits _{k=0}^{\infty } H_k$$

is the zero matrix. Thus, we may rewrite formula (3.1) as follows:

$$ H= \sum \limits _{k=0}^{\infty } \left (\frac {-1}{4p}\right )^k \int \limits _{R^{k+1}_+} e^{-(t_k + \ldots + t_0) A^*} \left (\sum \limits ^{2k}_{j=0} C^j_{2k} (-A^*)^{2k-j} A^j \right ) e^{-(t_k + \ldots + t_0) A} dt_0 \ldots dt_k.$$
(3.5)

Notice that each matrix can be approximated with guaranteed accuracy by matrices whose eigenvalues are pairwise distinct; hence, formula (3.5) is valid in the general case as well. Formula (3.5) is an analog of formula (1.6) and, as in [4], can be used for developing numerical algorithms.
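
For a numerical illustration of formula (3.5), note that the integrand depends on \(t_0,\ldots,t_k\) only through the sum \(s = t_0 + \cdots + t_k\); hence the integral over \(R^{k+1}_+\) reduces to the one-dimensional integral \(\int _0^\infty \frac {s^k}{k!}\thinspace e^{-sA^*}(\cdot )\thinspace e^{-sA}\thinspace ds\). Below is a minimal sketch in Python with NumPy and SciPy; the test matrix, the truncation level, and the quadrature parameters are illustrative.

\begin{verbatim}
import numpy as np
from math import comb, factorial
from scipy.linalg import expm

def parabola_series(A, p, K=6, t_max=20.0, steps=800):
    # Truncated series (3.5) with C = I.  The substitution
    # s = t_0 + ... + t_k collapses the integral over R_+^{k+1} to
    # int_0^inf (s^k / k!) e^{-sA^*} M_k e^{-sA} ds, approximated
    # here by the trapezoid rule on [0, t_max].
    n = A.shape[0]
    As = A.conj().T
    s = np.linspace(0.0, t_max, steps)
    w = np.full(steps, s[1] - s[0]); w[[0, -1]] *= 0.5    # trapezoid weights
    E  = np.array([expm(-si * A)  for si in s])           # e^{-sA} samples
    Es = np.array([expm(-si * As) for si in s])           # e^{-sA^*} samples
    H = np.zeros((n, n), dtype=complex)
    for k in range(K):
        # M_k = sum_j binom(2k, j) (-A^*)^{2k-j} A^j, as in (3.5)
        M = sum(comb(2*k, j)
                * np.linalg.matrix_power(-As, 2*k - j)
                @ np.linalg.matrix_power(A, j) for j in range(2*k + 1))
        coeff = w * s**k / factorial(k)
        H += (-1.0 / (4*p))**k * np.tensordot(coeff, Es @ M @ E, axes=(0, 0))
    return H

# Sanity check: H should solve (2.1) with C = I when sigma(A) lies in P_i.
p = 1.0
A = np.array([[2.0 + 0.5j, 0.3], [0.0, 3.0]])
H = parabola_series(A, p)
As = A.conj().T
res = H @ A + As @ H - (As @ H @ A - 0.5 * (H @ A @ A + As @ As @ H)) / (2*p)
print(np.linalg.norm(res - np.eye(2)))   # small for K, t_max large enough
\end{verbatim}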

4. DICHOTOMY OF THE MATRIX SPECTRUM WITH RESPECT TO A PARABOLA

In the present section, we consider the problem on dichotomy of the matrix spectrum with respect to the parabola \(h\). We begin with the problem on the location of the matrix spectrum \(\sigma (A) \) in the domain

$$ {\cal P}_e = \left \{\lambda \in {\mathbb C}: \;\frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 > \mathrm {Re}\thinspace \lambda \right \}, \quad p > 0. $$

We consider matrix equation (2.1) with a modified right-hand side:

$$ HA + A^*H - \frac {1}{2p}A^*HA + \frac {1}{4p}(HA^2+(A^*)^2H) = - C.$$
(4.1)

Theorem 4.1. Assume that the spectrum of \(A\) lies outside the closure of the domain \({\cal P}_i \). Then, for each matrix \(C\), there exists a unique solution \(H \) to equation (4.1). If \(C = C^* \) then \(H = H^*\). If \(C = C^*> 0\) then \(H = H^* > 0\).

Proof. Since each eigenvalue \(\mu _j \) of \(A \) belongs to \({\cal P}_e \), we have

$$ \mathrm {Re}\thinspace P(\lambda _s,\mu _r) \neq 0, \quad \lambda _s \in \sigma (A^*), \quad \mu _r \in \sigma (A), \quad s,r = 1,\dots ,n,$$

as in the proof of Theorem 2.1. We find that the conditions of Krein’s theorem hold. Therefore, for each \(C \), there exists a unique solution to equation (4.1); moreover, the solution can be represented in integral form, and the formula for this solution differs from (2.2) only by the sign and the choice of the contours of integration. We conclude that the solution is Hermitian whenever the matrix on the right-hand side is Hermitian. It is easy to see that the condition \(C = C^* > 0\) implies \(H = H^* > 0 \), which completes the proof of the theorem. \(\quad \square \)

Theorem 4.2. Let \(C = C^*> 0\). Assume that there exists a Hermitian positive definite solution \(H \) to equation (4.1), i.e., \(H = H^*> 0 \). Then the spectrum of \(A \) is a subset of the domain \({\cal P}_e\) and

$$ \frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2 \ge \mathrm {Re}\thinspace \mu _k + \frac {1}{2\|C^{-1}\|\|H\|}, \quad k = 1,\ldots ,n.$$
(4.2)

Proof. Let \(\mu _k \) be an eigenvalue of \(A \) and let \(v_k \) be a corresponding unit eigenvector. We repeat the arguments used in the derivation of (2.4) and obtain the following equality from (4.1):

$$ -\langle Cv_k, v_k \rangle = 2\left (\mathrm {Re}\thinspace \mu _k - \frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2\right ) \langle Hv_k, v_k \rangle .$$
(4.3)

Since \(C = C^* > 0 \) and \(H = H^* > 0 \), we conclude that \(\mu _k\in {\cal P}_e \). It follows from (4.3) that, for every eigenvalue, we have

$$ 2\left (\frac {1}{2p}(\mathrm {Im}\thinspace \mu _k)^2 - \mathrm {Re}\thinspace \mu _k\right ) \|H\| \ge \frac {1}{\|C^{-1}\|}.$$

This inequality is equivalent to estimate (4.2). \(\quad \square \)

The following assertion is immediate from Theorems 4.1 and 4.2.

Theorem 4.3. Each eigenvalue of \(A \) belongs to the domain

$$ {\cal P}_e = \left \{\lambda \in {\mathbb C}: \;\frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 > \mathrm {Re}\thinspace \lambda \right \}, \quad p > 0, $$

if and only if there exists a Hermitian positive definite solution \(H \) to matrix equation (4.1) with \(C = C^* > 0 \), i.e., we have \(H = H^* > 0\).
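
The criterion of Theorem 4.3 can be tested in the same way as that of Theorem 2.4: the left-hand sides of (2.1) and (4.1) coincide, so the same vectorized operator is used, now with the right-hand side \(-C\). A minimal sketch in Python with NumPy follows; the test matrix is illustrative, and the direct solve presumes that the vectorized operator is nonsingular.

\begin{verbatim}
import numpy as np

def parabola_exterior_criterion(A, p, C=None):
    # Solve (4.1) by vectorization and test H = H^* > 0, which by
    # Theorem 4.3 holds exactly when sigma(A) lies in P_e.
    n = A.shape[0]
    C = np.eye(n, dtype=complex) if C is None else C
    B, I = A.conj().T, np.eye(n)
    L = (np.kron(A.T, I) + np.kron(I, B)
         - np.kron(A.T, B) / (2*p)
         + (np.kron((A @ A).T, I) + np.kron(I, B @ B)) / (4*p))
    H = np.linalg.solve(L, -C.reshape(-1, order='F')).reshape((n, n), order='F')
    H = (H + H.conj().T) / 2
    return H, bool(np.all(np.linalg.eigvalsh(H) > 0))

p = 1.0
A = np.array([[-1.0, 0.2], [0.0, 0.5 + 2.0j]])   # both eigenvalues in P_e
H, outside = parabola_exterior_criterion(A, p)
mu = np.linalg.eigvals(A)
print(outside, bool(np.all(mu.imag**2 / (2*p) > mu.real)))   # should agree
\end{verbatim}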

We now consider matrix equation (2.1) with a special right-hand side:

$$ HA + A^*H - \frac {1}{2p}A^*HA + \frac {1}{4p}(HA^2 + (A^*)^2H) = P^*CP - (I - P)^*C(I - P).$$
(4.4)

Theorem 4.4. Let \(P \) be a projection onto an invariant subspace of \(A \) such that \(AP = PA \). Assume that \(C = C^* > 0\). If there exists a Hermitian positive definite solution \(H \) to equation (4.4) such that

$$ H = P^*HP + (I - P)^*H(I - P) $$
(4.5)

then the matrix spectrum \( \sigma (A)\) is disjoint from the parabola \(h\). Moreover, \(P \) is the projection onto the maximal invariant subspace of \(A \) corresponding to the eigenvalues lying in the domain \({\cal P}_i\).

Proof. Since \(AP = PA \), the matrix \(I - P \) is also a projection onto an invariant subspace of \(A \). Hence, there exists a nonsingular matrix \(T \) such that

$$ A = T\left (\begin {array}{cc} A_{11} & 0 \\ 0 & A_{22} \end {array} \right ) T^{-1} $$

and

$$ P = T \left (\begin {array}{cc} I_{11} & 0 \\ 0 & 0 \end {array} \right ) T^{-1}, \quad I = T \left (\begin {array}{cc} I_{11} & 0 \\ 0 & I_{22} \end {array} \right ) T^{-1}.$$

By (4.5), we have

$$ \begin {gathered} H = (T^*)^{-1} \left (\begin {array}{cc} I_{11} & 0 \\ 0 & 0 \end {array} \right ) T^* H T \left (\begin {array}{cc} I_{11} & 0 \\ 0 & 0 \end {array} \right ) T^{-1} + (T^*)^{-1} \left (\begin {array}{cc} 0 & 0 \\ 0 & I_{22} \end {array} \right ) T^* H T \left (\begin {array}{cc} 0 & 0 \\ 0 & I_{22} \end {array} \right ) T^{-1}. \end {gathered} $$

We consider the matrix \(\hat {H} \) with

$$ \hat {H} = T^*HT = \left (\begin {array}{cc} H_{11} & H_{12} \\ H_{21} & H_{22} \end {array} \right )$$

and find that

$$ \hat {H} = \left (\begin {array}{cc} H_{11} & 0 \\ 0 & H_{22} \end {array} \right ). $$

In a similar way, we rewrite (4.4) in the following form:

$$ \hat {H} \hat {A} + \hat {A}^*\hat {H} - \frac {1}{2p}\hat {A}^*\hat {H}\hat {A} + \frac {1}{4p}(\hat {H}\hat {A}^2 + (\hat {A}^*)^2\hat {H}) = \hat {P}^*\hat {C}\hat {P} - (I - \hat {P})^*\hat {C}(I - \hat {P}),$$

where

$$ \hat {A} = \left (\begin {array}{cc} A_{11} & 0 \\ 0 & A_{22} \end {array} \right ), \quad \hat {P} = \left (\begin {array}{cc} I_{11} & 0 \\ 0 & 0 \end {array} \right ), \quad \hat {C} = T^*CT = \left (\begin {array}{cc} C_{11} & C_{12} \\ C_{21} & C_{22} \end {array} \right ).$$

It is obvious that

$$ \hat {P}^*\hat {C}\hat {P} - (I - \hat {P})^*\hat {C}(I - \hat {P}) = \left (\begin {array}{cc} C_{11} & 0 \\ 0 & -C_{22} \end {array} \right ). $$

Therefore, equation (4.4) is equivalent to the following system of equations:

$$ \begin {gathered} H_{11}A_{11} + A_{11}^*H_{11} - \frac {1}{2p}A_{11}^*H_{11}A_{11} + \frac {1}{4p}(H_{11}A_{11}^2 + (A_{11}^*)^2H_{11}) = C_{11}, \\ H_{22}A_{22} + A_{22}^*H_{22} - \frac {1}{2p}A_{22}^*H_{22}A_{22} + \frac {1}{4p}(H_{22}A_{22}^2 + (A_{22}^*)^2H_{22}) = -C_{22}. \end {gathered} $$

Since \(H = H^* > 0\) and \(C = C^* > 0 \), we obtain

$$ H_{11} = H^*_{11} > 0, \quad H_{22} = H^*_{22} > 0, \quad C_{11} = C^*_{11} > 0, \quad C_{22} = C^*_{22} > 0.$$

By Theorems 2.3 and 4.2, each eigenvalue of \( A_{11}\) belongs to \({\cal P}_i \) and each eigenvalue of \(A_{22} \) belongs to \({\cal P}_e \). It is also clear that \(P \) is the projection onto the maximal invariant subspace of \(A \) corresponding to the eigenvalues lying in the domain \({\cal P}_i \). \(\quad \square \)

Corollary 4.5. Let the conditions of the above theorem hold. Then each eigenvalue of \(A \) belongs to the set

$$ \left \{\lambda \in {\cal P}_i: \frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 \le \mathrm {Re}\thinspace \lambda - \frac {1}{2\|C^{-1}\|\|H\|}\right \} \bigcup \left \{\lambda \in {\cal P}_e: \frac {1}{2p}(\mathrm {Im}\thinspace \lambda )^2 \ge \mathrm {Re}\thinspace \lambda + \frac {1}{2\|C^{-1}\|\|H\|}\right \}. $$
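
To illustrate Theorem 4.4 numerically, for a diagonalizable test matrix one can form the spectral projection \(P\) from the eigendecomposition, solve equation (4.4) by vectorization, and verify that \(H = H^* > 0\) and that condition (4.5) holds. A minimal sketch in Python with NumPy follows; the helper function and the test matrix are illustrative, and the direct solve presumes that the vectorized operator is nonsingular.

\begin{verbatim}
import numpy as np

def dichotomy_check(A, p, C=None):
    # Theorem 4.4 (sketch): build the spectral projection P onto the
    # eigenvalues in P_i, solve (4.4) by vectorization, and verify
    # H = H^* > 0 and H = P^* H P + (I-P)^* H (I-P).
    n = A.shape[0]
    C = np.eye(n, dtype=complex) if C is None else C
    mu, V = np.linalg.eig(A)                       # assumes A diagonalizable
    inside = mu.imag**2 / (2*p) < mu.real          # eigenvalues in P_i
    P = V @ np.diag(inside.astype(complex)) @ np.linalg.inv(V)
    Q = np.eye(n) - P
    rhs = P.conj().T @ C @ P - Q.conj().T @ C @ Q  # right-hand side of (4.4)
    B, I = A.conj().T, np.eye(n)
    L = (np.kron(A.T, I) + np.kron(I, B)
         - np.kron(A.T, B) / (2*p)
         + (np.kron((A @ A).T, I) + np.kron(I, B @ B)) / (4*p))
    H = np.linalg.solve(L, rhs.reshape(-1, order='F')).reshape((n, n), order='F')
    H = (H + H.conj().T) / 2
    pos = bool(np.all(np.linalg.eigvalsh(H) > 0))
    inv = np.linalg.norm(H - (P.conj().T @ H @ P + Q.conj().T @ H @ Q))
    return H, pos, inv

p = 1.0
A = np.array([[3.0 + 1.0j, 1.0], [0.0, -2.0]])     # 3 + i in P_i, -2 in P_e
H, pos, inv = dichotomy_check(A, p)
print(pos, inv)                                    # True, ~0
\end{verbatim}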