1 Introduction

The Schur complement has proved to be a useful tool in many fields, such as numerical algebra and control theory. The authors of [1, 2] proposed an iteration called the Schur-based iteration, by which large scale linear systems can be solved after the order is reduced by the Schur complement. In addition, when the conjugate gradient method is applied to a large scale linear system, the more concentrated the eigenvalues of the system matrix are, the faster the iteration converges (see, e.g., [3], pp. 312-317). From [1, 2] it can be seen that, for large scale linear systems, after the Schur-based iteration is applied to reduce the order, the system matrix of the reduced linear system is the Schur complement of the original system matrix, and its eigenvalues are more concentrated than those of the original matrix; consequently, the Schur-based conjugate gradient method computes faster than the ordinary conjugate gradient method.

Hence, it is always interesting to know whether important properties of matrices are inherited by their Schur complements. Clearly, the Schur complements of positive semidefinite matrices are positive semidefinite; the same is true for M-matrices, H-matrices and inverse M-matrices (see [4, 5]). Carlson and Markham showed that the Schur complements of strictly diagonally dominant matrices are diagonally dominant (see [6]). Li, Tsatsomeros and Ikramov independently proved that the Schur complement of a strictly doubly diagonally dominant matrix is strictly doubly diagonally dominant (see [7, 8]). These properties have been used repeatedly to establish the convergence of iterations in numerical analysis and to derive matrix inequalities in matrix analysis (see [3, 9, 10]). More importantly, the distribution of the eigenvalues of the Schur complement is of great significance, as shown in [1, 2, 8, 11-17]. The aim of this paper is to study the distribution of the eigenvalues of the Schur complement of some diagonally dominant matrices.

Denote by $\mathbb{C}^{n\times n}$ the set of all $n\times n$ complex matrices and let $N=\{1,2,\ldots,n\}$. For $A=(a_{ij})\in\mathbb{C}^{n\times n}$ ($n\ge 2$), write

\[ P_i(A)=\sum_{j\in N,\,j\ne i}|a_{ij}|,\qquad S_i(A)=\sum_{j\in N,\,j\ne i}|a_{ji}|,\qquad i=1,2,\ldots,n. \]

Set

\[ N_r(A)=\bigl\{i \mid i\in N,\ |a_{ii}|>P_i(A)\bigr\},\qquad N_c(A)=\bigl\{j \mid j\in N,\ |a_{jj}|>S_j(A)\bigr\}. \]
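These quantities are easy to evaluate numerically. The following sketch (our own helper functions, using NumPy and 0-based indexing rather than the 1-based indexing of the text) computes $P_i(A)$, $S_i(A)$, $N_r(A)$ and $N_c(A)$.

```python
import numpy as np

def row_col_sums(A):
    """Return (P, S): off-diagonal absolute row sums P_i(A) and column sums S_i(A)."""
    absA = np.abs(A)
    d = np.diag(absA)
    P = absA.sum(axis=1) - d   # P_i(A) = sum_{j != i} |a_ij|
    S = absA.sum(axis=0) - d   # S_i(A) = sum_{j != i} |a_ji|
    return P, S

def dominance_sets(A):
    """Return (N_r, N_c): indices with |a_ii| > P_i(A), resp. |a_ii| > S_i(A)."""
    P, S = row_col_sums(A)
    d = np.abs(np.diag(A))
    return set(np.where(d > P)[0]), set(np.where(d > S)[0])
```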

Let us recall that $A$ is a (row) diagonally dominant matrix ($A\in D_n$) if

\[ |a_{ii}|\ge P_i(A),\qquad i\in N; \]
(1.1)

$A$ is a doubly diagonally dominant matrix ($A\in DD_n$) if

\[ |a_{ii}||a_{jj}|\ge P_i(A)P_j(A),\qquad i\ne j,\ i,j\in N; \]
(1.2)

$A$ is a $\gamma$-diagonally dominant matrix ($A\in D_n^{\gamma}$) if there exists $\gamma\in[0,1]$ such that

\[ |a_{ii}|\ge \gamma P_i(A)+(1-\gamma)S_i(A),\qquad i\in N; \]
(1.3)

$A$ is a product $\gamma$-diagonally dominant matrix ($A\in PD_n^{\gamma}$) if there exists $\gamma\in[0,1]$ such that

\[ |a_{ii}|\ge \bigl[P_i(A)\bigr]^{\gamma}\bigl[S_i(A)\bigr]^{1-\gamma},\qquad i\in N. \]
(1.4)

If all the inequalities in (1.1)-(1.4) are strict, then $A$ is said to be a strictly (row) diagonally dominant matrix ($SD_n$), a strictly doubly diagonally dominant matrix ($SDD_n$), a strictly $\gamma$-diagonally dominant matrix ($SD_n^{\gamma}$) and a strictly product $\gamma$-diagonally dominant matrix ($SPD_n^{\gamma}$), respectively.
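For a concrete matrix, membership in the strict versions of these classes can be checked directly from definitions (1.1)-(1.4); the sketch below is one possible implementation (the function name and the default $\gamma$ are ours, not from the cited references).

```python
import numpy as np

def strict_classes(A, gamma=0.5):
    """Test A for membership in SD_n, SDD_n, SD_n^gamma and SPD_n^gamma (strict inequalities)."""
    absA = np.abs(A)
    d = np.diag(absA)
    P = absA.sum(axis=1) - d                      # P_i(A)
    S = absA.sum(axis=0) - d                      # S_i(A)
    n = len(d)
    sd   = np.all(d > P)                                          # strict version of (1.1)
    sdd  = all(d[i]*d[j] > P[i]*P[j]
               for i in range(n) for j in range(n) if i != j)     # strict version of (1.2)
    sdg  = np.all(d > gamma*P + (1 - gamma)*S)                    # strict version of (1.3)
    spdg = np.all(d > P**gamma * S**(1 - gamma))                  # strict version of (1.4)
    return {"SD": bool(sd), "SDD": bool(sdd), "SD_gamma": bool(sdg), "SPD_gamma": bool(spdg)}
```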

Liu and Zhang in [14] pointed out the following fact: if $A\in SDD_n$ but $A\notin SD_n$, then there exists a unique index $i_0$ such that

\[ |a_{i_0 i_0}|\le P_{i_0}(A). \]
(1.5)

As in [1, 2], for $1\le i\le n$ and $\gamma\in[0,1]$, we call $|a_{ii}|-P_i(A)$, $|a_{ii}|-\gamma P_i(A)-(1-\gamma)S_i(A)$ and $|a_{ii}|-[P_i(A)]^{\gamma}[S_i(A)]^{1-\gamma}$ the $i$th (row) dominant degree, $\gamma$-dominant degree and product $\gamma$-dominant degree of $A$, respectively.

The comparison matrix of $A$, denoted by $\mu(A)=(t_{ij})$, is defined by

\[ t_{ij}=\begin{cases} |a_{ij}|, & \text{if } i=j,\\ -|a_{ij}|, & \text{if } i\ne j. \end{cases} \]

A matrix $A$ is an M-matrix if it can be written in the form $A=mI-P$ with $P$ nonnegative and $m>\rho(P)$, where $\rho(P)$ denotes the spectral radius of $P$. A matrix $A$ is an H-matrix if $\mu(A)$ is an M-matrix. We denote by $H_n$ and $M_n$ the sets of $n\times n$ H- and M-matrices, respectively.
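Numerically, the H-matrix property can be tested through the comparison matrix; the sketch below uses the standard criterion that, for a Z-matrix written as $mI-P$ with $P\ge 0$ and $m$ the largest diagonal entry, the matrix is a nonsingular M-matrix exactly when $\rho(P)<m$ (the helper names are ours).

```python
import numpy as np

def comparison_matrix(A):
    """mu(A): |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_h_matrix(A):
    """A is an H-matrix iff mu(A) is a (nonsingular) M-matrix."""
    M = comparison_matrix(A)
    m = np.max(np.diag(M))
    P = m * np.eye(M.shape[0]) - M        # nonnegative by construction of mu(A)
    return np.max(np.abs(np.linalg.eigvals(P))) < m
```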

For $\alpha\subseteq N$, denote by $|\alpha|$ the cardinality of $\alpha$ and let $\alpha'=N\setminus\alpha$. If $\alpha,\beta\subseteq N$, then $A(\alpha,\beta)$ is the submatrix of $A$ lying in the rows indexed by $\alpha$ and the columns indexed by $\beta$; in particular, $A(\alpha,\alpha)$ is abbreviated to $A(\alpha)$. Assume that $A(\alpha)$ is nonsingular. Then

\[ A/\alpha=A/A(\alpha)=A(\alpha')-A(\alpha',\alpha)\bigl[A(\alpha)\bigr]^{-1}A(\alpha,\alpha') \]

is called the Schur complement of $A$ with respect to $A(\alpha)$.
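Computationally, $A/\alpha$ amounts to one block elimination; a minimal NumPy sketch (our own helper, with 0-based index sets) is:

```python
import numpy as np

def schur_complement(A, alpha):
    """A/alpha = A(alpha') - A(alpha', alpha) [A(alpha)]^{-1} A(alpha, alpha')."""
    alpha = list(alpha)
    comp = [i for i in range(A.shape[0]) if i not in alpha]   # alpha' = N \ alpha
    A_aa = A[np.ix_(alpha, alpha)]
    return (A[np.ix_(comp, comp)]
            - A[np.ix_(comp, alpha)] @ np.linalg.solve(A_aa, A[np.ix_(alpha, comp)]))
```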

The paper is organized as follows. In Section 2, we give several new estimates of the diagonally dominant degree of the Schur complement, which improve some related results. In Section 3, as an application of these estimates, we obtain some distributions of the eigenvalues of the Schur complement. In Section 4, a numerical example is given to show the advantages of the derived results.

2 The diagonally dominant degree for the Schur complement

In this section, we give several new estimates of the diagonally dominant degree of the Schur complement, which improve some related results.

Lemma 1 [4]

If $A$ is an H-matrix, then $[\mu(A)]^{-1}\ge \bigl|A^{-1}\bigr|$.

Lemma 2 [4]

If $A\in SD_n$ or $A\in SDD_n$, then $\mu(A)\in M_n$, i.e., $A\in H_n$.

Lemma 3 [11]

If $A\in SD_n$ or $A\in SDD_n$ and $\alpha\subset N$, then the Schur complement $A/\alpha$ is in $SD_{|\alpha'|}$ or $SDD_{|\alpha'|}$, respectively, where $\alpha'=N\setminus\alpha$ is the complement of $\alpha$ in $N$ and $|\alpha'|$ is the cardinality of $\alpha'$.

Lemma 4 [1]

Let $a>b$, $c>b$, $b>0$ and $0\le r\le 1$. Then

\[ a^{r}c^{1-r}\ge (a-b)^{r}(c-b)^{1-r}+b. \]
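For instance, with $a=3$, $c=2$, $b=1$ and $r=\tfrac12$ the lemma reads $\sqrt{6}\approx 2.449\ge\sqrt{2}+1\approx 2.414$.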

Theorem 1 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$, and denote $A/\alpha=(a'_{ts})$. Then for all $1\le t\le l$,

\[ |a'_{tt}|-P_t(A/\alpha)\ge |a_{j_t j_t}|-P_{j_t}(A)+w_{j_t}\ge |a_{j_t j_t}|-P_{j_t}(A) \]
(2.1)

and

\[ |a'_{tt}|+P_t(A/\alpha)\le |a_{j_t j_t}|+P_{j_t}(A)-w_{j_t}\le |a_{j_t j_t}|+P_{j_t}(A), \]
(2.2)

where

\[ w_{j_t}=\min_{1\le \omega\le k}\frac{|a_{i_\omega i_\omega}|-P_{i_\omega}(A)}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}\sum_{u=1}^{k}|a_{j_t i_u}|. \]
(2.3)
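To make (2.1)-(2.3) concrete, the following sketch (our own helper, assuming $\alpha\subseteq N_r(A)$ as in the theorem and using 0-based indices) computes $w_{j_t}$ and both sides of (2.1) for every row of $A/\alpha$.

```python
import numpy as np

def check_theorem1(A, alpha):
    """Return (j, lhs, rhs) with lhs = |a'_tt| - P_t(A/alpha) and rhs = |a_jj| - P_j(A) + w_j."""
    n = A.shape[0]
    alpha = list(alpha)
    comp = [j for j in range(n) if j not in alpha]                       # alpha'
    A_aa = A[np.ix_(alpha, alpha)]
    Schur = A[np.ix_(comp, comp)] - A[np.ix_(comp, alpha)] @ np.linalg.solve(A_aa, A[np.ix_(alpha, comp)])
    absA = np.abs(A)
    P = absA.sum(axis=1) - np.diag(absA)                                 # P_i(A)
    # the minimum in (2.3) does not depend on t
    ratio = min((absA[i, i] - P[i]) / (absA[i, i] - sum(absA[i, u] for u in alpha if u != i))
                for i in alpha)
    results = []
    for t, j in enumerate(comp):
        w = ratio * sum(absA[j, u] for u in alpha)                       # w_{j_t} from (2.3)
        lhs = abs(Schur[t, t]) - (np.abs(Schur[t]).sum() - abs(Schur[t, t]))
        rhs = absA[j, j] - P[j] + w
        results.append((j, lhs, rhs))                                    # Theorem 1: lhs >= rhs
    return results
```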

Proof According to Lemmas 1 and 2, we have $\{\mu[A(\alpha)]\}^{-1}\ge \bigl|[A(\alpha)]^{-1}\bigr|$. Thus, for any $\varepsilon>0$ and $t=1,2,\ldots,l$,

(2.4)

Denote

\[ B_t=\begin{pmatrix} x & -|a_{j_t i_1}| & \cdots & -|a_{j_t i_k}| \\ -\sum_{u=1}^{l}|a_{i_1 j_u}| & & & \\ \vdots & & \mu[A(\alpha)] & \\ -\sum_{u=1}^{l}|a_{i_k j_u}| & & & \end{pmatrix}. \]
(2.5)

If

\[ x>\max_{1\le\omega\le k}\frac{\sum_{u=1}^{l}|a_{i_\omega j_u}|}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}\sum_{u=1}^{k}|a_{j_t i_u}|, \]

then

\[ \frac{x}{\sum_{u=1}^{k}|a_{j_t i_u}|}>\max_{1\le\omega\le k}\frac{\sum_{u=1}^{l}|a_{i_\omega j_u}|}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}. \]

Choose $\varepsilon_t\in\mathbb{R}_{+}$ such that

\[ \frac{x}{\sum_{u=1}^{k}|a_{j_t i_u}|}>\varepsilon_t>\max_{1\le\omega\le k}\frac{\sum_{u=1}^{l}|a_{i_\omega j_u}|}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}, \]

where we set $\frac{x}{\sum_{u=1}^{k}|a_{j_t i_u}|}=\infty$ if $\sum_{u=1}^{k}|a_{j_t i_u}|=0$. Set $D=\operatorname{diag}(y_1,y_2,\ldots,y_{k+1})$, where

\[ y_i=\begin{cases} 1, & i=1,\\ \varepsilon_t, & i=2,3,\ldots,k+1. \end{cases} \]

Denote $C_t=B_tD=(c_{sv})$. If $s=1$, then

\[ |c_{ss}|-\sum_{v\ne s}|c_{sv}|=|c_{11}|-\sum_{v=2}^{k+1}|c_{1v}|=x-\sum_{v=1}^{k}\varepsilon_t|a_{j_t i_v}|=x-\varepsilon_t\sum_{v=1}^{k}|a_{j_t i_v}|>0; \]

otherwise, for $s=\omega+1$ with $1\le\omega\le k$,

\begin{align*} |c_{ss}|-\sum_{v\ne s}|c_{sv}| &=\varepsilon_t|a_{i_\omega i_\omega}|-\sum_{v=1,v\ne\omega}^{k}\varepsilon_t|a_{i_\omega i_v}|-\sum_{u=1}^{l}|a_{i_\omega j_u}| \\ &=\varepsilon_t\Bigl(|a_{i_\omega i_\omega}|-\sum_{v=1,v\ne\omega}^{k}|a_{i_\omega i_v}|\Bigr)-\sum_{u=1}^{l}|a_{i_\omega j_u}| \\ &>\frac{\sum_{u=1}^{l}|a_{i_\omega j_u}|}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}\Bigl(|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|\Bigr)-\sum_{u=1}^{l}|a_{i_\omega j_u}|=0. \end{align*}

Therefore, we have $C_t\in SD_{k+1}$, and so $B_t\in H_{k+1}$. Note that $B_t=\mu(B_t)$. So,

\[ \det B_t>0. \]
(2.6)

Take $x=\sum_{u=1}^{k}|a_{j_t i_u}|-w_{j_t}+\varepsilon$ in (2.5). Noting that $\det\{\mu[A(\alpha)]\}>0$, by (2.4) and (2.6) we have

\[ |a'_{tt}|-P_t(A/\alpha)\ge |a_{j_t j_t}|-P_{j_t}(A)+w_{j_t}-\varepsilon. \]

Letting $\varepsilon\to 0$, we get (2.1). Similarly, we obtain (2.2). □

Remark 1 Observe that, for each $1\le\omega\le k$,

\[ \frac{P_{i_\omega}(A)}{|a_{i_\omega i_\omega}|}\ge \frac{\sum_{u=1}^{l}|a_{i_\omega j_u}|}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}. \]

This means that Theorem 1 improves Theorem 1 of [14].

Theorem 2 Let $A\in SDD_n$, $\alpha=\{i_0,i_1,i_2,\ldots,i_k\}$ with the index $i_0$ satisfying (1.5), $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$, and denote $A/\alpha=(a'_{ts})$. Then for all $1\le t\le l$,

\[ |a'_{tt}|-P_t(A/\alpha)\ge |a_{j_t j_t}|-P_{j_t}(A)+\biggl(1-\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\biggr)\sum_{v=1}^{k}|a_{j_t i_v}| \ge |a_{j_t j_t}|-\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\,P_{j_t}(A) \]
(2.7)

and

\[ |a'_{tt}|+P_t(A/\alpha)\le |a_{j_t j_t}|+P_{j_t}(A)-\biggl(1-\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\biggr)\sum_{v=1}^{k}|a_{j_t i_v}| \le |a_{j_t j_t}|+\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\,P_{j_t}(A). \]
(2.8)

Proof For all $1\le t\le l$, we have

\begin{align*} |a'_{tt}|-P_t(A/\alpha) &=|a'_{tt}|-\sum_{s=1,s\ne t}^{l}|a'_{ts}| \\ &=\Bigl|a_{j_t j_t}-(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr| -\sum_{s=1,s\ne t}^{l}\Bigl|a_{j_t j_s}-(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_s},\ldots,a_{i_k j_s})^{T}\Bigr| \\ &\ge |a_{j_t j_t}|-P_{j_t}(A)+\biggl(1-\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\biggr)\sum_{v=1}^{k}|a_{j_t i_v}| \\ &\quad+\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\sum_{v=1}^{k}|a_{j_t i_v}| -\sum_{s=1}^{l}(|a_{j_t i_1}|,\ldots,|a_{j_t i_k}|)\bigl\{\mu[A(\alpha)]\bigr\}^{-1}\bigl(|a_{i_1 j_s}|,\ldots,|a_{i_k j_s}|\bigr)^{T}. \end{align*}

Take $x=\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\sum_{v=1}^{k}|a_{j_t i_v}|$ in (2.5). By (2.6), it is not difficult to see that (2.7) follows. Similarly, we obtain (2.8). □

It is known that the Schur complements of diagonally dominant matrices are diagonally dominant (see [12, 13]). However, this property is not always true for γ-diagonally dominant matrices and for product γ-diagonally dominant matrices, as shown in [1].

In the sequel, we obtain some disc separations for the $\gamma$-diagonally and product $\gamma$-diagonally dominant degrees of the Schur complement, from which we show that, under some restrictive conditions, the Schur complement of a $\gamma$-diagonally (respectively, product $\gamma$-diagonally) dominant matrix is again $\gamma$-diagonally (respectively, product $\gamma$-diagonally) dominant.

Theorem 3 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)\cap N_c(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$, and denote $A/\alpha=(a'_{ts})$. Then for all $1\le t\le l$,

\[ |a'_{tt}|-P_t^{\gamma}(A/\alpha)S_t^{1-\gamma}(A/\alpha)>|a_{j_t j_t}|-\bigl(P_{j_t}(A)-w_t\bigr)^{\gamma}\bigl(S_{j_t}(A)-w_t^{T}\bigr)^{1-\gamma}>|a_{j_t j_t}|-P_{j_t}^{\gamma}(A)S_{j_t}^{1-\gamma}(A) \]
(2.9)

and

\[ |a'_{tt}|+P_t^{\gamma}(A/\alpha)S_t^{1-\gamma}(A/\alpha)<|a_{j_t j_t}|+\bigl(P_{j_t}(A)-w_t\bigr)^{\gamma}\bigl(S_{j_t}(A)-w_t^{T}\bigr)^{1-\gamma}<|a_{j_t j_t}|+P_{j_t}^{\gamma}(A)S_{j_t}^{1-\gamma}(A), \]
(2.10)

where

\[ w_t=\min_{1\le\omega\le k}\frac{|a_{i_\omega i_\omega}|-P_{i_\omega}(A)}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_\omega i_u}|}\sum_{u=1}^{k}|a_{j_t i_u}|,\qquad w_t^{T}=\min_{1\le\omega\le k}\frac{|a_{i_\omega i_\omega}|-S_{i_\omega}(A)}{|a_{i_\omega i_\omega}|-\sum_{u=1,u\ne\omega}^{k}|a_{i_u i_\omega}|}\sum_{u=1}^{k}|a_{i_u j_t}|. \]

Proof For all $1\le t\le l$, we have

(2.11)

Similarly to the proof of Theorem 1, we obtain

(2.12)

Similarly,

(2.13)

Set

\[ h=(|a_{j_t i_1}|,\ldots,|a_{j_t i_k}|)\bigl\{\mu[A(\alpha)]\bigr\}^{-1}\bigl(|a_{i_1 j_t}|,\ldots,|a_{i_k j_t}|\bigr)^{T}. \]

From (2.11)-(2.13) and Lemma 4, we get (2.9). Similarly, we have (2.10). □

Remark 2 Observe that $w_t\ge 0$ and $w_t^{T}\ge 0$, so that

\[ \bigl(P_{j_t}(A)-w_t\bigr)^{\gamma}\bigl(S_{j_t}(A)-w_t^{T}\bigr)^{1-\gamma}\le P_{j_t}^{\gamma}(A)S_{j_t}^{1-\gamma}(A). \]

This means that Theorem 3 improves Theorem 2 of [1].

In a similar way to the proof of Theorem 3, we get the following theorem immediately.

Theorem 4 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)\cap N_c(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$, and denote $A/\alpha=(a'_{ts})$. Then for all $1\le t\le l$,

\[ |a'_{tt}|-\gamma P_t(A/\alpha)-(1-\gamma)S_t(A/\alpha)>|a_{j_t j_t}|-\gamma\bigl(P_{j_t}(A)-w_t\bigr)-(1-\gamma)\bigl(S_{j_t}(A)-w_t^{T}\bigr)>|a_{j_t j_t}|-\gamma P_{j_t}(A)-(1-\gamma)S_{j_t}(A) \]
(2.14)

and

\[ |a'_{tt}|+\gamma P_t(A/\alpha)+(1-\gamma)S_t(A/\alpha)<|a_{j_t j_t}|+\gamma\bigl(P_{j_t}(A)-w_t\bigr)+(1-\gamma)\bigl(S_{j_t}(A)-w_t^{T}\bigr)<|a_{j_t j_t}|+\gamma P_{j_t}(A)+(1-\gamma)S_{j_t}(A), \]
(2.15)

where $w_t$ and $w_t^{T}$ are as in Theorem 3.

Corollary 1 Let $A\in D_n^{\gamma}$ and $N_r(A)\cap N_c(A)\ne\emptyset$. Then for any $\alpha\subseteq N_r(A)\cap N_c(A)$ with $|\alpha|<n$,

\[ A/\alpha\in SD_{n-|\alpha|}^{\gamma}. \]

Proof By (2.14), we have

\[ |a'_{tt}|-\gamma P_t(A/\alpha)-(1-\gamma)S_t(A/\alpha)>|a_{j_t j_t}|-\gamma P_{j_t}(A)-(1-\gamma)S_{j_t}(A)\ge 0. \]

 □

Corollary 2 Let $A\in PD_n^{\gamma}$ and $N_r(A)\cap N_c(A)\ne\emptyset$. Then for any $\alpha\subseteq N_r(A)\cap N_c(A)$ with $|\alpha|<n$,

\[ A/\alpha\in SPD_{n-|\alpha|}^{\gamma}. \]

3 Distribution of eigenvalues

In this section, as an application of our results in Section 2, we present some locations for the eigenvalues of the Schur complements.

Theorem 5 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$. Then for each eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

\[ |\lambda-a_{j_t j_t}|<P_{j_t}(A)-w_{j_t}. \]
(3.1)

Proof Set $A/\alpha=(a'_{ts})$. Using the Gerschgorin circle theorem, we know that there exists $1\le t\le l$ such that

\[ |\lambda-a'_{tt}|\le P_t(A/\alpha). \]

Thus

\begin{align*} 0&\ge |\lambda-a'_{tt}|-P_t(A/\alpha) \\ &=\Bigl|\lambda-a_{j_t j_t}+(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr| -\sum_{s=1,s\ne t}^{l}\Bigl|a_{j_t j_s}-(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_s},\ldots,a_{i_k j_s})^{T}\Bigr| \\ &\ge |\lambda-a_{j_t j_t}|-\sum_{s=1,s\ne t}^{l}|a_{j_t j_s}| -\sum_{s=1}^{l}\Bigl|(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_s},\ldots,a_{i_k j_s})^{T}\Bigr| \\ &\ge |\lambda-a_{j_t j_t}|-\sum_{s=1,s\ne t}^{l}|a_{j_t j_s}| -\sum_{s=1}^{l}(|a_{j_t i_1}|,\ldots,|a_{j_t i_k}|)\bigl\{\mu[A(\alpha)]\bigr\}^{-1}\bigl(|a_{i_1 j_s}|,\ldots,|a_{i_k j_s}|\bigr)^{T} \quad\text{(by Lemmas 1 and 2)} \\ &=|\lambda-a_{j_t j_t}|-P_{j_t}(A)+\sum_{u=1}^{k}|a_{j_t i_u}|+w_{j_t}-w_{j_t} -\sum_{s=1}^{l}(|a_{j_t i_1}|,\ldots,|a_{j_t i_k}|)\bigl\{\mu[A(\alpha)]\bigr\}^{-1}\bigl(|a_{i_1 j_s}|,\ldots,|a_{i_k j_s}|\bigr)^{T} \\ &=|\lambda-a_{j_t j_t}|-P_{j_t}(A)+w_{j_t}+\sum_{u=1}^{k}|a_{j_t i_u}|-w_{j_t} -(|a_{j_t i_1}|,\ldots,|a_{j_t i_k}|)\bigl\{\mu[A(\alpha)]\bigr\}^{-1}\Bigl(\sum_{s=1}^{l}|a_{i_1 j_s}|,\ldots,\sum_{s=1}^{l}|a_{i_k j_s}|\Bigr)^{T} \\ &=|\lambda-a_{j_t j_t}|-P_{j_t}(A)+w_{j_t}+\frac{1}{\det\{\mu[A(\alpha)]\}}\det \begin{pmatrix} \sum_{u=1}^{k}|a_{j_t i_u}|-w_{j_t} & -|a_{j_t i_1}| & \cdots & -|a_{j_t i_k}| \\ -\sum_{s=1}^{l}|a_{i_1 j_s}| & & & \\ \vdots & & \mu[A(\alpha)] & \\ -\sum_{s=1}^{l}|a_{i_k j_s}| & & & \end{pmatrix}. \end{align*}

Hence, it is not difficult to get by (2.6) that

\[ 0\ge |\lambda-a'_{tt}|-P_t(A/\alpha)>|\lambda-a_{j_t j_t}|-P_{j_t}(A)+w_{j_t}, \]

i.e.,

\[ |\lambda-a_{j_t j_t}|<P_{j_t}(A)-w_{j_t}. \]

 □

Remark 3 By Remark 2, it is obvious that Theorem 5 improves Theorem 3 of [1].

Corollary 3 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$. Then for each eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

\[ |\lambda-a_{j_t j_t}|<P_{j_t}(A). \]

In a similar way to the proof of Theorem 5, we obtain the following theorem according to Theorem 2.

Theorem 6 Let $A\in SDD_n$, $\alpha=\{i_0,i_1,i_2,\ldots,i_k\}$ with the index $i_0$ satisfying (1.5), $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$. Then for each eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

\[ |\lambda-a_{j_t j_t}|<P_{j_t}(A)-\biggl(1-\frac{\sum_{u=1}^{l}|a_{i_0 j_u}|}{|a_{i_0 i_0}|-\sum_{j\ne i_0,\,j\in\alpha}|a_{i_0 j}|}\biggr)\sum_{v=1}^{k}|a_{j_t i_v}|. \]
(3.2)

Next, we give some distributions of the eigenvalues of the Schur complement under $\gamma$-diagonally and product $\gamma$-diagonally dominant type conditions.

Lemma 5 [1]

Let $A\in\mathbb{C}^{n\times n}$ and $0\le\gamma\le 1$. Then for every eigenvalue $\lambda$ of $A$, there exists $1\le i\le n$ such that

\[ |\lambda-a_{ii}|\le P_i^{\gamma}(A)S_i^{1-\gamma}(A). \]
(3.3)

Theorem 7 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)\cap N_c(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$. Then for each eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

\[ |\lambda-a_{j_t j_t}|<\bigl(P_{j_t}(A)-w_t\bigr)^{\gamma}\bigl(S_{j_t}(A)-w_t^{T}\bigr)^{1-\gamma}. \]
(3.4)

Proof Set $A/\alpha=(a'_{ts})$. From Lemma 5, we know that for each eigenvalue $\lambda$ of $A/\alpha$ there exists $1\le t\le l$ such that

\[ |\lambda-a'_{tt}|\le P_t^{\gamma}(A/\alpha)S_t^{1-\gamma}(A/\alpha). \]
(3.5)

Hence,

\begin{align*} 0&\ge |\lambda-a'_{tt}|-P_t^{\gamma}(A/\alpha)S_t^{1-\gamma}(A/\alpha) =|\lambda-a'_{tt}|-\Bigl(\sum_{s=1,s\ne t}^{l}|a'_{ts}|\Bigr)^{\gamma}\Bigl(\sum_{s=1,s\ne t}^{l}|a'_{st}|\Bigr)^{1-\gamma} \\ &=\Bigl|\lambda-a_{j_t j_t}+(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr| \\ &\quad-\biggl[\sum_{s=1,s\ne t}^{l}\Bigl|a_{j_t j_s}-(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_s},\ldots,a_{i_k j_s})^{T}\Bigr|\biggr]^{\gamma} \biggl[\sum_{s=1,s\ne t}^{l}\Bigl|a_{j_s j_t}-(a_{j_s i_1},\ldots,a_{j_s i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr|\biggr]^{1-\gamma} \\ &\ge |\lambda-a_{j_t j_t}|-\Bigl|(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr| \\ &\quad-\biggl(\sum_{s=1,s\ne t}^{l}\Bigl[|a_{j_t j_s}|+\Bigl|(a_{j_t i_1},\ldots,a_{j_t i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_s},\ldots,a_{i_k j_s})^{T}\Bigr|\Bigr]\biggr)^{\gamma} \biggl(\sum_{s=1,s\ne t}^{l}\Bigl[|a_{j_s j_t}|+\Bigl|(a_{j_s i_1},\ldots,a_{j_s i_k})\bigl[A(\alpha)\bigr]^{-1}(a_{i_1 j_t},\ldots,a_{i_k j_t})^{T}\Bigr|\Bigr]\biggr)^{1-\gamma}. \end{align*}
(3.6)

From the proof of Theorem 3 and (3.6), we obtain

\[ 0\ge |\lambda-a'_{tt}|-P_t^{\gamma}(A/\alpha)S_t^{1-\gamma}(A/\alpha)>|\lambda-a_{j_t j_t}|-\bigl(P_{j_t}(A)-w_t\bigr)^{\gamma}\bigl(S_{j_t}(A)-w_t^{T}\bigr)^{1-\gamma}. \]

Thus (3.4) holds. □

Remark 4 By Remark 2, it is obvious that Theorem 7 improves Theorem 4 of [1].

Corollary 4 Let $A\in\mathbb{C}^{n\times n}$, $\alpha=\{i_1,i_2,\ldots,i_k\}\subseteq N_r(A)\cap N_c(A)$, $\alpha'=N\setminus\alpha=\{j_1,j_2,\ldots,j_l\}$, $|\alpha|<n$. Then for each eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

\[ |\lambda-a_{j_t j_t}|<P_{j_t}^{\gamma}(A)S_{j_t}^{1-\gamma}(A)\le\gamma P_{j_t}(A)+(1-\gamma)S_{j_t}(A). \]

4 A numerical example

In this section, we estimate bounds for the eigenvalues of the Schur complement using only the entries of the original matrix, in order to illustrate the advantages of our results.

Example Let

\[ A=\begin{pmatrix} 330 & 323 & 2 & 3 & 1\\ 321 & 328 & 3 & 1 & 2\\ 3 & 2 & 6 & 7 & 2\\ 2 & 1 & 1 & 2 & 6\\ 2 & 1 & 2 & 3 & 4 \end{pmatrix},\qquad \alpha=\{1,2\}. \]

If we estimated the bounds for the eigenvalues of $A/\alpha$ from the entries of $A/\alpha$ itself, we would first have to compute the Schur complement, which requires considerable work. Our results, however, use only the entries of $A$. Since $\alpha\subseteq N_r(A)$, according to Theorem 5 every eigenvalue $z$ of $A/\alpha$ satisfies

\[ z\in\{z:|z-6|\le 13.29\}\cup\{z:|z-2|\le 9.58\}\cup\{z:|z-4|\le 7.58\}=:G_1. \]
(4.1)

According to Theorem 3 in [1], every eigenvalue $z$ of $A/\alpha$ satisfies

\[ z\in\{z:|z-6|\le 13.98\}\cup\{z:|z-2|\le 9.99\}\cup\{z:|z-4|\le 7.99\}=:G_1'. \]
(4.2)

Further, we use Figure 1 to illustrate (4.1) and (4.2).

Figure 1 The red dotted line and the blue dashed line denote the discs of $G_1$ and $G_1'$, respectively.

It is clear from (4.1), (4.2) and Figure 1 that $G_1\subseteq G_1'$.

In addition, since $\alpha\subseteq N_r(A)\cap N_c(A)$, by taking $\gamma=\tfrac12$ in Theorem 7 every eigenvalue $z$ of $A/\alpha$ satisfies

\[ z\in\{z:|z-6|\le 9.64\}\cup\{z:|z-2|\le 11.24\}\cup\{z:|z-4|\le 8.87\}=:G_2. \]
(4.3)

According to Theorem 4 in [1], every eigenvalue $z$ of $A/\alpha$ satisfies

\[ z\in\{z:|z-6|\le 10.57\}\cup\{z:|z-2|\le 11.82\}\cup\{z:|z-4|\le 9.37\}=:G_2'. \]
(4.4)

Further, we use Figure 2 to illustrate (4.3) and (4.4).

Figure 2 The red dotted line and the blue dashed line denote the discs of $G_2$ and $G_2'$, respectively.

It is clear from (4.3), (4.4) and Figure 2 that $G_2\subseteq G_2'$.
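The radii of the discs in (4.1) and (4.3), as well as the eigenvalues of $A/\alpha$, can be reproduced with a short computation. The sketch below is our own script (0-based indices; the column analogue of (2.3) is used for $w_t^{T}$, as in Theorem 3), not part of the original example.

```python
import numpy as np

A = np.array([[330, 323,   2,   3,   1],
              [321, 328,   3,   1,   2],
              [  3,   2,   6,   7,   2],
              [  2,   1,   1,   2,   6],
              [  2,   1,   2,   3,   4]], dtype=float)
alpha, comp = [0, 1], [2, 3, 4]                  # alpha = {1, 2} and alpha' in 0-based indexing

absA = np.abs(A)
P = absA.sum(axis=1) - np.diag(absA)             # P_i(A)
S = absA.sum(axis=0) - np.diag(absA)             # S_i(A)

# row and column dominance ratios appearing in w_{j_t} (2.3) and in its column analogue
r_ratio = min((absA[i, i] - P[i]) / (absA[i, i] - sum(absA[i, u] for u in alpha if u != i)) for i in alpha)
c_ratio = min((absA[i, i] - S[i]) / (absA[i, i] - sum(absA[u, i] for u in alpha if u != i)) for i in alpha)

radii_G1 = [P[j] - r_ratio * sum(absA[j, u] for u in alpha) for j in comp]             # Theorem 5
radii_G2 = [np.sqrt((P[j] - r_ratio * sum(absA[j, u] for u in alpha)) *
                    (S[j] - c_ratio * sum(absA[u, j] for u in alpha))) for j in comp]  # Theorem 7, gamma = 1/2

A_aa = A[np.ix_(alpha, alpha)]
A_schur = A[np.ix_(comp, comp)] - A[np.ix_(comp, alpha)] @ np.linalg.solve(A_aa, A[np.ix_(alpha, comp)])

print(np.round(radii_G1, 2))       # approx [13.29, 9.57, 7.57]; the text rounds these up to 13.29, 9.58, 7.58
print(np.round(radii_G2, 2))       # approx [9.64, 11.24, 8.87], as in (4.3)
print(np.linalg.eigvals(A_schur))  # each eigenvalue lies in G_1 and in G_2
```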