1 Introduction

For a positive integer $n$, $N$ denotes the set $\{1,2,\ldots,n\}$. The set of all $n\times n$ complex matrices is denoted by $\mathbb{C}^{n\times n}$, and $\mathbb{R}^{n\times n}$ denotes the set of all $n\times n$ real matrices throughout.

Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $B=(b_{ij})\in\mathbb{R}^{n\times n}$. We write $A\ge B$ ($A>B$) if $a_{ij}\ge b_{ij}$ ($a_{ij}>b_{ij}$) for all $1\le i\le n$, $1\le j\le n$. If $A\ge 0$ ($A>0$), we say that $A$ is a nonnegative (positive) matrix. The spectral radius of $A$ is denoted by $\rho(A)$. Let $A$ be an irreducible nonnegative matrix. It is well known that there exists a positive vector $u$ such that $Au=\rho(A)u$, $u$ being called a right Perron eigenvector of $A$. This guarantees that $\rho(A)\in\sigma(A)$, where $\sigma(A)$ denotes the spectrum of $A$.

The set $Z_n\subseteq\mathbb{R}^{n\times n}$ is defined by

$$Z_n\equiv\bigl\{A=(a_{ij})\in\mathbb{R}^{n\times n}: a_{ij}\le 0\ \text{if } i\ne j,\ i,j=1,\ldots,n\bigr\}.$$

The simple sign pattern of the matrices in $Z_n$ has many striking consequences. Let $A=(a_{ij})\in Z_n$ and suppose $A=\alpha I-P$ with $\alpha\in\mathbb{R}$ and $P\ge 0$. Then $\alpha-\rho(P)$ is an eigenvalue of $A$, every eigenvalue of $A$ lies in the disc $\{z\in\mathbb{C}:|z-\alpha|\le\rho(P)\}$, and hence every eigenvalue $\lambda$ of $A$ satisfies $\operatorname{Re}\lambda\ge\alpha-\rho(P)$. In particular, a matrix $A\in Z_n$ is called an M-matrix if $\alpha\ge\rho(P)$. If $\alpha>\rho(P)$, we call $A$ a nonsingular M-matrix, and we denote the class of nonsingular M-matrices by $M_n$.

Let $A=(a_{ij})\in Z_n$; we denote $\min\{\operatorname{Re}(\lambda):\lambda\in\sigma(A)\}$ by $q(A)$. The following simple facts are needed in the proofs below (see Problems 16 and 19 in Section 2.5 of [1]):

  1. (i) $q(A)\in\sigma(A)$; $q(A)$ is called the minimum eigenvalue of $A$.

  2. (ii) If $A\in M_n$ and $\rho(A^{-1})$ is the Perron eigenvalue of the nonnegative matrix $A^{-1}$, then $q(A)=\frac{1}{\rho(A^{-1})}$ is a positive real eigenvalue of $A$.

Let A be an irreducible nonsingular M-matrix. It is well known that there exists a positive vector u such that Au=q(A)u, u being called a right Perron eigenvector of A.

If $A=(a_{ij})\in M_n$, we write $C_A=D_A-A$, where $D_A=\operatorname{diag}(a_{ii})$. Note that $a_{ii}>0$ for all $i\in N$ if $A\in M_n$. Thus we define the Jacobi iterative matrix of $A$ by $J_A=D_A^{-1}C_A$. It is easy to check that $J_A$ is nonnegative and $\rho(J_A)<1$ (see [2]).
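As a small illustration of these definitions (a minimal numerical sketch, assuming NumPy; the matrix below is an arbitrary nonsingular M-matrix chosen here for demonstration, not one from the paper), one can check that $J_A\ge 0$, $\rho(J_A)<1$ and $q(A)=1/\rho(A^{-1})$ as in fact (ii) above:

import numpy as np

# A small nonsingular M-matrix chosen for illustration:
# positive diagonal, nonpositive off-diagonal entries, strictly diagonally dominant.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -2.0,  4.0]])

D_A = np.diag(np.diag(A))              # D_A = diag(a_ii)
C_A = D_A - A                          # C_A = D_A - A is nonnegative
J_A = np.linalg.inv(D_A) @ C_A         # Jacobi iterative matrix J_A = D_A^{-1} C_A

rho_JA  = max(abs(np.linalg.eigvals(J_A)))                 # rho(J_A)
q_A     = min(np.linalg.eigvals(A).real)                   # q(A) = min Re(lambda)
rho_inv = max(abs(np.linalg.eigvals(np.linalg.inv(A))))    # rho(A^{-1})

print((J_A >= 0).all(), rho_JA < 1)      # J_A >= 0 and rho(J_A) < 1
print(np.isclose(q_A, 1.0 / rho_inv))    # q(A) = 1/rho(A^{-1}), cf. fact (ii)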

The Hadamard product of $A=(a_{ij})\in\mathbb{C}^{n\times n}$ and $B=(b_{ij})\in\mathbb{C}^{n\times n}$ is defined by $A\circ B\equiv(a_{ij}b_{ij})\in\mathbb{C}^{n\times n}$.

It has been noted [3, 4] that the Hadamard product $B\circ A^{-1}$ of an M-matrix $B$ and the inverse of an M-matrix $A$ is again an M-matrix.

In 1991, Horn et al. [[1], p. 375] showed the classical result: if $A=(a_{ij})\in M_n$, $B=(b_{ij})\in M_n$ and $B^{-1}=(\beta_{ij})$, then

$$q\bigl(A\circ B^{-1}\bigr)\ge q(A)\min_{1\le i\le n}\beta_{ii}.$$
(1.1)

Subsequently, Chen [5] improved the bound in (1.1) and obtained the following result:

$$q\bigl(A\circ B^{-1}\bigr)\ge q(A)q(B)\min_{1\le i\le n}\Bigl\{\Bigl(\frac{a_{ii}}{q(A)}+\frac{b_{ii}}{q(B)}-1\Bigr)\frac{\beta_{ii}}{b_{ii}}\Bigr\}.$$
(1.2)

In 2008, Huang [2] obtained the following result:

$$q\bigl(A\circ B^{-1}\bigr)\ge\frac{1-\rho(J_A)\rho(J_B)}{1+\rho^2(J_B)}\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}.$$
(1.3)

The bound in (1.3) improves the bound in (1.1) in some cases. For example, if

$$B=\begin{pmatrix}4&0\\0&3\end{pmatrix},\qquad A=\begin{pmatrix}3&-1\\0&2\end{pmatrix},$$

then $q(A\circ B^{-1})=\frac{1-\rho(J_A)\rho(J_B)}{1+\rho^2(J_B)}\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}=\frac{2}{3}>q(A)\min_{1\le i\le n}\beta_{ii}=\frac{1}{2}$. But $\frac{1-\rho(J_A)\rho(J_B)}{1+\rho^2(J_B)}\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}<q(A)\min_{1\le i\le n}\beta_{ii}$ in Example 2.1 of this paper.
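The arithmetic behind this $2\times 2$ comparison can be checked directly (a minimal sketch, assuming NumPy; note that for NumPy arrays the operator * is the entrywise, i.e. Hadamard, product):

import numpy as np

B = np.array([[4.0, 0.0],
              [0.0, 3.0]])
A = np.array([[3.0, -1.0],
              [0.0, 2.0]])

q   = lambda M: min(np.linalg.eigvals(M).real)      # minimum eigenvalue q(.)
rho = lambda M: max(abs(np.linalg.eigvals(M)))      # spectral radius
jac = lambda M: np.linalg.inv(np.diag(np.diag(M))) @ (np.diag(np.diag(M)) - M)   # Jacobi matrix

Binv = np.linalg.inv(B)
q_AB     = q(A * Binv)                                           # q(A o B^{-1})
bound_13 = (1 - rho(jac(A)) * rho(jac(B))) / (1 + rho(jac(B))**2) * min(np.diag(A) / np.diag(B))
bound_11 = q(A) * min(np.diag(Binv))

print(q_AB, bound_13, bound_11)   # approximately 0.6667, 0.6667 and 0.5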

In practice, a lower bound for $q(A\circ B^{-1})$ gives a rough estimate before the quantity is actually computed and can serve as a check of whether the solution technique used for it actually produced a valid solution. Besides, a good bound for $q(A\circ B^{-1})$ can also help to reduce the computational burden. Therefore, it is worthwhile to study such bounds. In this paper, we present some new lower bounds of the minimum eigenvalue $q(A\circ B^{-1})$ for the Hadamard product of M-matrices, which improve (1.1), (1.2) and (1.3) and generalize the corresponding result of Xiang [6].

2 Main results

In this section, we state and prove our main results. Firstly, we give some lemmas.

Lemma 2.1 (See [[7], Theorem 11])

Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$ with $n\ge 2$. If $\lambda$ is an eigenvalue of $A$, then there is a pair $(r,q)$ of positive integers with $r\ne q$ ($1\le r,q\le n$) such that

$$|\lambda-a_{rr}|\,|\lambda-a_{qq}|\le\Bigl(\sum_{k\ne r}|a_{rk}|\Bigr)\Bigl(\sum_{l\ne q}|a_{ql}|\Bigr).$$
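A quick numerical illustration of Lemma 2.1 (a sketch assuming NumPy; the complex test matrix is arbitrary): for every eigenvalue there should be at least one pair $(r,q)$, $r\ne q$, satisfying the Cassini-type inequality.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

row_sums = [sum(abs(A[r, k]) for k in range(n) if k != r) for r in range(n)]

for lam in np.linalg.eigvals(A):
    # Lemma 2.1: some pair (r, q), r != q, satisfies
    # |lam - a_rr| * |lam - a_qq| <= (sum_{k!=r}|a_rk|) * (sum_{l!=q}|a_ql|)
    ok = any(abs(lam - A[r, r]) * abs(lam - A[q, q]) <= row_sums[r] * row_sums[q]
             for r in range(n) for q in range(n) if r != q)
    print(ok)   # expected: True for each eigenvalue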

Lemma 2.2 (See [[8], Lemma 2.2])

  1. (a) If $A=(a_{ij})$ is an $n\times n$ strictly diagonally dominant matrix by row, that is, $|a_{ii}|>\sum_{j\ne i}|a_{ij}|$ for any $i\in N$, then $A^{-1}=(\beta_{ij})$ exists, and

$$|\beta_{ji}|\le\frac{\sum_{k\ne j}|a_{jk}|}{|a_{jj}|}|\beta_{ii}|\quad\text{for all } j\ne i.$$

  2. (b) If $A=(a_{ij})$ is an $n\times n$ strictly diagonally dominant matrix by column, that is, $|a_{ii}|>\sum_{j\ne i}|a_{ji}|$ for any $i\in N$, then $A^{-1}=(\beta_{ij})$ exists, and

$$|\beta_{ij}|\le\frac{\sum_{k\ne j}|a_{kj}|}{|a_{jj}|}|\beta_{ii}|\quad\text{for all } j\ne i.$$

Proof We give a simple proof of (a), which is different from that in [8]; (b) can be proved similarly. Firstly, we prove that $|\beta_{ji}|\le|\beta_{ii}|$ for all $j\ne i$. Suppose not; then $|\beta_{ji}|>|\beta_{ii}|$ for some $j\ne i$. We may assume that $|\beta_{ji}|\ge|\beta_{ki}|$ for all $k\in N$. Since $AA^{-1}=I$, we have $\sum_{k=1}^{n}a_{jk}\beta_{ki}=0$ for $j\ne i$. Thus

$$|a_{jj}\beta_{ji}|\le\sum_{k\ne j}|a_{jk}\beta_{ki}|\le\sum_{k\ne j}|a_{jk}|\,|\beta_{ji}|<|a_{jj}|\,|\beta_{ji}|,$$

which is a contradiction. Hence $|\beta_{ji}|\le|\beta_{ii}|$ holds for all $j\ne i$. Thus

$$|a_{jj}\beta_{ji}|\le\sum_{k\ne j}|a_{jk}\beta_{ki}|\le\sum_{k\ne j}|a_{jk}|\,|\beta_{ii}|\quad\text{for all } j\ne i,$$

that is,

$$|\beta_{ji}|\le\frac{\sum_{k\ne j}|a_{jk}|}{|a_{jj}|}|\beta_{ii}|\quad\text{for all } j\ne i.$$

 □
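Part (a) can be spot-checked numerically (a sketch assuming NumPy; the strictly row diagonally dominant matrix below is chosen only for illustration):

import numpy as np

# A strictly row diagonally dominant matrix: |a_ii| > sum_{j!=i} |a_ij| in every row.
A = np.array([[ 5.0, -1.0,  2.0],
              [ 1.0,  6.0, -3.0],
              [-2.0,  1.0,  4.0]])

Ainv = np.linalg.inv(A)                 # entries beta_ij
n = A.shape[0]
for i in range(n):
    for j in range(n):
        if j != i:
            lhs = abs(Ainv[j, i])
            rhs = sum(abs(A[j, k]) for k in range(n) if k != j) / abs(A[j, j]) * abs(Ainv[i, i])
            print(lhs <= rhs + 1e-12)   # expected: True for all j != i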

Theorem 2.3 Let $A=(a_{ij})\in M_n$, $B=(b_{ij})\in M_n$ and $B^{-1}=(\beta_{ij})$. Then

$$q\bigl(A\circ B^{-1}\bigr)\ge\min_{i\ne j}\frac12\Bigl\{a_{ii}\beta_{ii}+a_{jj}\beta_{jj}-\Bigl[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^2+4\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\Bigr]^{\frac12}\Bigr\}.$$
(2.1)

Proof First assume that both $A$ and $B$ are irreducible. Let $v=(v_i)$ and $y=(y_i)$ be right Perron eigenvectors of $B^T$ and $A$, respectively, i.e., $B^Tv=q(B^T)v=q(B)v$ and $Ay=q(A)y$. Define $C=B^TV$, where $V=\operatorname{diag}(v_1,v_2,\ldots,v_n)$. It is easy to check that $C$ is strictly diagonally dominant by row. It follows from Lemma 2.2(a) that, for all $i\ne j$,

$$\frac{\beta_{ij}}{v_j}\le\frac{\sum_{k\ne j}|v_kb_{kj}|}{v_jb_{jj}}\cdot\frac{\beta_{ii}}{v_i}=\frac{(b_{jj}-q(B))v_j}{b_{jj}v_j}\cdot\frac{\beta_{ii}}{v_i}.$$

Thus

$$\beta_{ij}\le\frac{(b_{jj}-q(B))v_j\beta_{ii}}{b_{jj}v_i}.$$

Let $s_j=\frac{(b_{jj}-q(B))v_j}{b_{jj}y_j}$ and $S=\operatorname{diag}(s_1,s_2,\ldots,s_n)$. Then $S>0$ and

$$S\bigl(A\circ B^{-1}\bigr)S^{-1}=\begin{pmatrix}a_{11}\beta_{11}&s_1a_{12}\beta_{12}/s_2&\cdots&s_1a_{1n}\beta_{1n}/s_n\\ s_2a_{21}\beta_{21}/s_1&a_{22}\beta_{22}&\cdots&s_2a_{2n}\beta_{2n}/s_n\\ \vdots&\vdots&\ddots&\vdots\\ s_na_{n1}\beta_{n1}/s_1&s_na_{n2}\beta_{n2}/s_2&\cdots&a_{nn}\beta_{nn}\end{pmatrix}.$$

Hence, $\sigma(A\circ B^{-1})=\sigma\bigl(S(A\circ B^{-1})S^{-1}\bigr)$. Since $q(A\circ B^{-1})$ is an eigenvalue of $A\circ B^{-1}$, we have

$$q\bigl(A\circ B^{-1}\bigr)\in\sigma\bigl(S\bigl(A\circ B^{-1}\bigr)S^{-1}\bigr).$$

Thus, by Lemma 2.1, there exists a pair $(i,j)$ of positive integers with $i\ne j$ ($1\le i,j\le n$) such that

$$\begin{aligned}
\bigl|q(A\circ B^{-1})-a_{ii}\beta_{ii}\bigr|\,\bigl|q(A\circ B^{-1})-a_{jj}\beta_{jj}\bigr|
&\le\Bigl(\sum_{k\ne i}|a_{ik}\beta_{ik}|\frac{s_i}{s_k}\Bigr)\Bigl(\sum_{l\ne j}|a_{jl}\beta_{jl}|\frac{s_j}{s_l}\Bigr)\\
&\le s_i\sum_{k\ne i}|a_{ik}|\frac{(b_{kk}-q(B))v_k\beta_{ii}}{b_{kk}v_i}\cdot\frac{b_{kk}y_k}{(b_{kk}-q(B))v_k}\cdot s_j\sum_{l\ne j}|a_{jl}|\frac{(b_{ll}-q(B))v_l\beta_{jj}}{b_{ll}v_j}\cdot\frac{b_{ll}y_l}{(b_{ll}-q(B))v_l}\\
&=\frac{s_i\beta_{ii}}{v_i}\sum_{k\ne i}|a_{ik}|y_k\cdot\frac{s_j\beta_{jj}}{v_j}\sum_{l\ne j}|a_{jl}|y_l\\
&=\frac{(b_{ii}-q(B))v_i}{b_{ii}y_i}\cdot\frac{\beta_{ii}}{v_i}\bigl(a_{ii}-q(A)\bigr)y_i\cdot\frac{(b_{jj}-q(B))v_j}{b_{jj}y_j}\cdot\frac{\beta_{jj}}{v_j}\bigl(a_{jj}-q(A)\bigr)y_j\\
&=\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr].
\end{aligned}$$

From the above inequality and $0\le q(A\circ B^{-1})\le a_{ii}\beta_{ii}$, $i\in N$, we have

$$\bigl(a_{ii}\beta_{ii}-q(A\circ B^{-1})\bigr)\bigl(a_{jj}\beta_{jj}-q(A\circ B^{-1})\bigr)\le\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr].$$
(2.2)

Thus, solving the quadratic inequality (2.2) for $q(A\circ B^{-1})$, we have

$$\begin{aligned}
q\bigl(A\circ B^{-1}\bigr)&\ge\frac12\Bigl\{a_{ii}\beta_{ii}+a_{jj}\beta_{jj}-\Bigl[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^2+4\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\Bigr]^{\frac12}\Bigr\}\\
&\ge\min_{i\ne j}\frac12\Bigl\{a_{ii}\beta_{ii}+a_{jj}\beta_{jj}-\Bigl[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^2+4\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\Bigr]^{\frac12}\Bigr\}.
\end{aligned}$$

Now assume that one of $A$ and $B$ is reducible. It is well known that a matrix in $Z_n$ is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [9]). If we denote by $D=(d_{ij})$ the $n\times n$ permutation matrix with $d_{12}=d_{23}=\cdots=d_{n-1,n}=d_{n1}=1$ and the remaining $d_{ij}$ zero, then both $A-tD$ and $B-tD$ are irreducible nonsingular M-matrices for any positive real number $t$ sufficiently small that all the leading principal minors of both $A-tD$ and $B-tD$ remain positive. Now we substitute $A-tD$ and $B-tD$ for $A$ and $B$, respectively, in the previous case; letting $t\to 0$, the result follows by continuity. □
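The bound (2.1) can be checked numerically on small examples (a minimal sketch, assuming NumPy; the M-matrices below are chosen only for illustration and are not from the paper):

import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -2.0,  4.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  3.0]])

q = lambda M: min(np.linalg.eigvals(M).real)     # minimum eigenvalue q(.)
Binv = np.linalg.inv(B)                          # beta_ij
qA, qB = q(A), q(B)
dA, dB, dBinv = np.diag(A), np.diag(B), np.diag(Binv)
n = A.shape[0]

def pair_term(i, j):
    # the expression minimised over i != j on the right-hand side of (2.1)
    t = (dBinv[i] * dBinv[j] / (dB[i] * dB[j])
         * (dB[i] - qB) * (dA[i] - qA) * (dB[j] - qB) * (dA[j] - qA))
    d = dA[i] * dBinv[i] - dA[j] * dBinv[j]
    return 0.5 * (dA[i] * dBinv[i] + dA[j] * dBinv[j] - np.sqrt(d**2 + 4 * t))

bound_21 = min(pair_term(i, j) for i in range(n) for j in range(n) if i != j)
print(q(A * Binv) >= bound_21)    # expected: True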

Using the ideas of the proof of Theorem 2.3, we next give a new proof of inequality (1.2) (inequality (2.2) in [5]). Arguing as in the proof of Theorem 2.3, by Gerschgorin's theorem there exists some $i\in N$ such that

$$\begin{aligned}
\bigl|q(A\circ B^{-1})-a_{ii}\beta_{ii}\bigr|
&\le\sum_{k\ne i}|a_{ik}\beta_{ik}|\frac{s_i}{s_k}\le s_i\sum_{k\ne i}|a_{ik}|\frac{(b_{kk}-q(B))v_k\beta_{ii}}{b_{kk}v_i}\cdot\frac{b_{kk}y_k}{(b_{kk}-q(B))v_k}\\
&=\frac{s_i\beta_{ii}}{v_i}\sum_{k\ne i}|a_{ik}|y_k=\frac{(b_{ii}-q(B))v_i}{b_{ii}y_i}\cdot\frac{\beta_{ii}}{v_i}\bigl(a_{ii}-q(A)\bigr)y_i\\
&=\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr].
\end{aligned}$$

From the above inequality and $0\le q(A\circ B^{-1})\le a_{ii}\beta_{ii}$, $i\in N$, we have

$$a_{ii}\beta_{ii}-q\bigl(A\circ B^{-1}\bigr)\le\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr].$$
(2.3)

Thus, from inequality (2.3), we have

$$\begin{aligned}
q\bigl(A\circ B^{-1}\bigr)&\ge a_{ii}\beta_{ii}-\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\\
&=q(A)q(B)\Bigl\{\Bigl(\frac{a_{ii}}{q(A)}+\frac{b_{ii}}{q(B)}-1\Bigr)\frac{\beta_{ii}}{b_{ii}}\Bigr\}\\
&\ge q(A)q(B)\min_{1\le i\le n}\Bigl\{\Bigl(\frac{a_{ii}}{q(A)}+\frac{b_{ii}}{q(B)}-1\Bigr)\frac{\beta_{ii}}{b_{ii}}\Bigr\}.
\end{aligned}$$

Remark 2.1 We next give a simple comparison between the lower bound in (2.1) and the lower bounds in (1.2) and (1.1). Without loss of generality, for $i\ne j$, assume that

$$a_{ii}\beta_{ii}-\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\le a_{jj}\beta_{jj}-\frac{\beta_{jj}}{b_{jj}}\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr].$$
(2.4)

Thus, we can write (2.4) equivalently as

$$\frac{\beta_{jj}}{b_{jj}}\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\le a_{jj}\beta_{jj}-a_{ii}\beta_{ii}+\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr].$$
(2.5)

Applying (2.5) to the term under the square root in (2.1), we have

$$\begin{aligned}
&(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^2+4\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\\
&\quad\le(a_{jj}\beta_{jj}-a_{ii}\beta_{ii})^2+4\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr](a_{jj}\beta_{jj}-a_{ii}\beta_{ii})+4\Bigl[\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\Bigr]^2\\
&\quad=\Bigl(a_{jj}\beta_{jj}-a_{ii}\beta_{ii}+2\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\Bigr)^2.
\end{aligned}$$

Thus, from (2.1), (2.5) and the above inequality, we have

$$\begin{aligned}
q\bigl(A\circ B^{-1}\bigr)&\ge\min_{i\ne j}\frac12\Bigl\{a_{ii}\beta_{ii}+a_{jj}\beta_{jj}-\Bigl[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^2+4\frac{\beta_{ii}\beta_{jj}}{b_{ii}b_{jj}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\bigl[b_{jj}-q(B)\bigr]\bigl[a_{jj}-q(A)\bigr]\Bigr]^{\frac12}\Bigr\}\\
&\ge\min_{i\ne j}\frac12\Bigl\{a_{ii}\beta_{ii}+a_{jj}\beta_{jj}-a_{jj}\beta_{jj}+a_{ii}\beta_{ii}-2\frac{\beta_{ii}}{b_{ii}}\bigl[b_{ii}-q(B)\bigr]\bigl[a_{ii}-q(A)\bigr]\Bigr\}\\
&\ge q(A)q(B)\min_{1\le i\le n}\Bigl\{\Bigl(\frac{a_{ii}}{q(A)}+\frac{b_{ii}}{q(B)}-1\Bigr)\frac{\beta_{ii}}{b_{ii}}\Bigr\}.
\end{aligned}$$

Hence, the bound in (2.1) is sharper than the bound in (1.2). According to Remark 2.4 in [5], we know

$$q(A)q(B)\min_{1\le i\le n}\Bigl\{\Bigl(\frac{a_{ii}}{q(A)}+\frac{b_{ii}}{q(B)}-1\Bigr)\frac{\beta_{ii}}{b_{ii}}\Bigr\}\ge q(A)\min_{1\le i\le n}\beta_{ii}.$$

So, the bound in (2.1) is sharper than the bound in (1.1).
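The chain of comparisons in this remark can also be observed numerically (a sketch assuming NumPy, with the same illustrative M-matrices as in the sketch after Theorem 2.3):

import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -2.0,  4.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  3.0]])

q = lambda M: min(np.linalg.eigvals(M).real)
Binv = np.linalg.inv(B)
qA, qB = q(A), q(B)
dA, dB, dBinv = np.diag(A), np.diag(B), np.diag(Binv)
n = A.shape[0]

bound_11 = qA * min(dBinv)                                       # (1.1)
bound_12 = qA * qB * min((dA / qA + dB / qB - 1) * dBinv / dB)   # (1.2)

def pair_term(i, j):                                             # term minimised in (2.1)
    t = (dBinv[i] * dBinv[j] / (dB[i] * dB[j])
         * (dB[i] - qB) * (dA[i] - qA) * (dB[j] - qB) * (dA[j] - qA))
    d = dA[i] * dBinv[i] - dA[j] * dBinv[j]
    return 0.5 * (dA[i] * dBinv[i] + dA[j] * dBinv[j] - np.sqrt(d**2 + 4 * t))

bound_21 = min(pair_term(i, j) for i in range(n) for j in range(n) if i != j)
print(bound_11 <= bound_12 <= bound_21 <= q(A * Binv))   # expected: True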

Theorem 2.4 Let $A=(a_{ij})\in M_n$ and $B=(b_{ij})\in M_n$. Then

$$q\bigl(A\circ B^{-1}\bigr)\ge\bigl(1-\rho(J_A)\rho(J_B)\bigr)\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}.$$
(2.6)

Proof First suppose that $A$ and $B$ are irreducible. Let $D_B$ be the diagonal part of $B$ and $C_B=D_B-B$. Then $D_B$ is a diagonal matrix with positive diagonal entries, $C_B$ is an irreducible nonnegative matrix, and $J=D_B^{-1}C_B^T$ is again an irreducible nonnegative matrix. Since the Jacobi iterative matrix of $B$ is $J_B=D_B^{-1}C_B$, we have

$$\rho(J_B)=\rho\bigl(D_B^{-1}C_B\bigr)=\rho\bigl(\bigl(D_B^{-1}C_B\bigr)^T\bigr)=\rho\bigl(C_B^TD_B^{-1}\bigr)=\rho\bigl(D_B^{-1}C_B^T\bigr)=\rho(J).$$
(2.7)

By the Perron-Frobenius theorem for irreducible nonnegative matrices, there is a positive eigenvector $x=(x_1,x_2,\ldots,x_n)^T$ such that $D_B^{-1}C_B^Tx=\rho(J)x$. That is,

$$\frac{\sum_{k\ne i}|b_{ki}|x_k}{b_{ii}}=\rho(J)x_i,\quad i\in N.$$
(2.8)

Thus, we can write (2.8) equivalently as

$$\frac{\sum_{k\ne i}|b_{ki}|x_k}{b_{ii}x_i}=\rho(J),\quad i\in N.$$

Set $X=\operatorname{diag}(x_1,x_2,\ldots,x_n)$ and $\tilde{B}=XB$. It is easy to check that $\tilde{B}$ is a strictly diagonally dominant matrix by column. Let $B^{-1}=(\beta_{ij})$. By Lemma 2.2(b), for all $i\ne j$ ($1\le i,j\le n$), we have

$$\beta_{ij}x_j^{-1}\le\frac{\sum_{k\ne j}|b_{kj}|x_k}{b_{jj}x_j}\beta_{ii}x_i^{-1}=\rho(J)\beta_{ii}x_i^{-1}.$$

Thus

$$\beta_{ij}\le\rho(J)\beta_{ii}\frac{x_j}{x_i},\quad i\ne j.$$
(2.9)

Combining (2.9) with (2.7), we get

$$\beta_{ij}\le\rho(J_B)\beta_{ii}\frac{x_j}{x_i}.$$
(2.10)

Since $B^{-1}B=I$, we obtain

$$\beta_{ii}b_{ii}=1+\sum_{k\ne i}\beta_{ik}|b_{ki}|\ge 1,\quad i\in N.$$

Thus

$$\beta_{ii}\ge\frac{1}{b_{ii}},\quad i\in N.$$
(2.11)

Let $y=(y_i)>0$ be a right Perron eigenvector of $J_A$, i.e., $J_Ay=\rho(J_A)y$. Set $S=\operatorname{diag}(\frac{x_1}{y_1},\frac{x_2}{y_2},\ldots,\frac{x_n}{y_n})$; then $S>0$. Hence, $\sigma(A\circ B^{-1})=\sigma\bigl(S(A\circ B^{-1})S^{-1}\bigr)$. Since $q(A\circ B^{-1})$ is an eigenvalue of $A\circ B^{-1}$, we have

$$q\bigl(A\circ B^{-1}\bigr)\in\sigma\bigl(S\bigl(A\circ B^{-1}\bigr)S^{-1}\bigr).$$

By Gerschgorin's theorem and (2.10), there exists some $i\in N$ such that

$$\begin{aligned}
\bigl|q(A\circ B^{-1})-a_{ii}\beta_{ii}\bigr|
&\le\sum_{k\ne i}|a_{ik}\beta_{ik}|\frac{x_iy_k}{y_ix_k}\le\frac{x_i}{y_i}\sum_{k\ne i}|a_{ik}|\rho(J_B)\beta_{ii}\frac{x_k}{x_i}\cdot\frac{y_k}{x_k}\\
&=\frac{\rho(J_B)\beta_{ii}}{y_i}\sum_{k\ne i}|a_{ik}|y_k=a_{ii}\beta_{ii}\rho(J_A)\rho(J_B).
\end{aligned}$$

From the above inequality and $0\le q(A\circ B^{-1})\le a_{ii}\beta_{ii}$, $i\in N$, we have

$$a_{ii}\beta_{ii}-q\bigl(A\circ B^{-1}\bigr)\le a_{ii}\beta_{ii}\rho(J_A)\rho(J_B).$$
(2.12)

Thus, from inequalities (2.11) and (2.12), we have

$$\begin{aligned}
q\bigl(A\circ B^{-1}\bigr)&\ge a_{ii}\beta_{ii}-a_{ii}\beta_{ii}\rho(J_A)\rho(J_B)=\bigl(1-\rho(J_A)\rho(J_B)\bigr)a_{ii}\beta_{ii}\\
&\ge\bigl(1-\rho(J_A)\rho(J_B)\bigr)\frac{a_{ii}}{b_{ii}}\ge\bigl(1-\rho(J_A)\rho(J_B)\bigr)\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}.
\end{aligned}$$

Now assume that one of $A$ and $B$ is reducible. As in the proof of Theorem 2.3 (using condition (E17) of Theorem 6.2.3 of [9]), let $D=(d_{ij})$ be the $n\times n$ permutation matrix with $d_{12}=d_{23}=\cdots=d_{n-1,n}=d_{n1}=1$ and the remaining $d_{ij}$ zero; then both $A-tD$ and $B-tD$ are irreducible nonsingular M-matrices for any positive real number $t$ sufficiently small that all the leading principal minors of both $A-tD$ and $B-tD$ remain positive. Substituting $A-tD$ and $B-tD$ for $A$ and $B$, respectively, in the previous case and letting $t\to 0$, the result follows by continuity. □
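The bound (2.6), together with its comparison with (1.3) discussed in Remark 2.2 below, can be checked numerically (a sketch assuming NumPy; the M-matrices are the same illustrative ones as before):

import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -2.0,  4.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  3.0]])

q   = lambda M: min(np.linalg.eigvals(M).real)      # minimum eigenvalue q(.)
rho = lambda M: max(abs(np.linalg.eigvals(M)))      # spectral radius
jac = lambda M: np.linalg.inv(np.diag(np.diag(M))) @ (np.diag(np.diag(M)) - M)   # Jacobi matrix

Binv = np.linalg.inv(B)
bound_26 = (1 - rho(jac(A)) * rho(jac(B))) * min(np.diag(A) / np.diag(B))   # (2.6)
bound_13 = bound_26 / (1 + rho(jac(B))**2)                                  # (1.3) = (2.6)/(1+rho^2(J_B))
print(bound_13 <= bound_26 <= q(A * Binv))   # expected: True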

Remark 2.2 If $B\in M_n$ is a diagonal matrix, then equality holds in (2.6). Thus the bound in (2.6) is sharp. Since $1+\rho^2(J_B)\ge 1$, we have

$$\bigl(1-\rho(J_A)\rho(J_B)\bigr)\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}\ge\frac{1-\rho(J_A)\rho(J_B)}{1+\rho^2(J_B)}\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}.$$

The bound in (2.6) is sharper than the bound in (1.3).

Taking $B=A$ in Theorem 2.4, we can deduce the following corollary.

Corollary 2.5 Let $B\in M_n$. Then

$$q\bigl(B\circ B^{-1}\bigr)\ge 1-\rho^2(J_B).$$

Remark 2.3 Corollary 2.5 is Theorem 2.8 of Xiang [6]. So, Theorem 2.4 generalizes Theorem 2.8 in [6].

If we apply Lemma 2.1 to $J=D_B^{-1}C_B^T$ and $J_B=D_B^{-1}C_B$, then we have

$$\rho^2(J)\le\max_{i\ne j}\frac{\sum_{k\ne i}|b_{ki}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{lj}|}{b_{jj}},\qquad \rho^2(J_B)\le\max_{i\ne j}\frac{\sum_{k\ne i}|b_{ik}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{jl}|}{b_{jj}}.$$

Since $\rho(J_B)=\rho(J)$, we have

$$\rho^2(J_B)\le\min\Bigl\{\max_{i\ne j}\frac{\sum_{k\ne i}|b_{ik}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{jl}|}{b_{jj}},\ \max_{i\ne j}\frac{\sum_{k\ne i}|b_{ki}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{lj}|}{b_{jj}}\Bigr\}.$$
(2.13)

From (2.13) and Corollary 2.5 we have the following corollary.

Corollary 2.6 Let $B=(b_{ij})\in M_n$. Then

$$q\bigl(B\circ B^{-1}\bigr)\ge 1-\min\Bigl\{\max_{i\ne j}\frac{\sum_{k\ne i}|b_{ik}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{jl}|}{b_{jj}},\ \max_{i\ne j}\frac{\sum_{k\ne i}|b_{ki}|}{b_{ii}}\cdot\frac{\sum_{l\ne j}|b_{lj}|}{b_{jj}}\Bigr\}.$$
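Corollary 2.6 can be spot-checked as follows (a sketch assuming NumPy; the non-symmetric M-matrix below is chosen only for illustration):

import numpy as np

# A non-symmetric nonsingular M-matrix chosen for illustration.
B = np.array([[ 4.0, -1.0, -2.0],
              [-1.0,  5.0, -1.0],
              [ 0.0, -3.0,  4.0]])

n = B.shape[0]
Binv = np.linalg.inv(B)
q_BB = min(np.linalg.eigvals(B * Binv).real)          # q(B o B^{-1})

row = [sum(abs(B[i, k]) for k in range(n) if k != i) / B[i, i] for i in range(n)]
col = [sum(abs(B[k, i]) for k in range(n) if k != i) / B[i, i] for i in range(n)]
max_row = max(row[i] * row[j] for i in range(n) for j in range(n) if i != j)
max_col = max(col[i] * col[j] for i in range(n) for j in range(n) if i != j)

bound = 1 - min(max_row, max_col)                     # right-hand side of Corollary 2.6
print(bound <= q_BB)                                  # expected: True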

Example 2.1 Let A and B be the same as in Example 2.1 in [10]:

$$B=\begin{pmatrix}4&-1&-1&-1\\-2&5&-1&-1\\0&-2&4&-1\\-1&-1&-1&4\end{pmatrix},\qquad A=\begin{pmatrix}1&-1/2&0&0\\-1/2&1&-1/2&0\\0&-1/2&1&-1/2\\0&0&-1/2&1\end{pmatrix}.$$

It is easy to check that $A,B\in M_4$. If we apply Theorem 5.7.31 of [1], we have

$$q\bigl(A\circ B^{-1}\bigr)\ge q(A)\min_{1\le i\le n}\beta_{ii}=0.07003.$$

If we apply Theorem 9 of [2], we have

$$q\bigl(A\circ B^{-1}\bigr)\ge\frac{1-\rho(J_A)\rho(J_B)}{1+\rho^2(J_B)}\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}=0.05229.$$

If we apply Theorem 2.1 of [10], we have

$$q\bigl(A\circ B^{-1}\bigr)\ge\min_{1\le i\le n}\Bigl\{\frac{a_{ii}-s_i\sum_{j\ne i}|a_{ji}|}{b_{ii}}\Bigr\}=0.08.$$

But if we apply Theorem 2.4, we have

$$q\bigl(A\circ B^{-1}\bigr)\ge\bigl(1-\rho(J_A)\rho(J_B)\bigr)\min_{1\le i\le n}\frac{a_{ii}}{b_{ii}}=0.08291.$$

In fact, $q(A\circ B^{-1})=0.21478$. Example 2.1 shows that the bound in (2.6) is better than the corresponding bounds in [1, 2, 10].
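For readers who wish to recompute the quantities compared in Example 2.1, the following minimal sketch (assuming NumPy, with $A$ and $B$ entered as above) evaluates $q(A\circ B^{-1})$ and the bounds (1.1), (1.3) and (2.6); the bound of [10] is omitted here because the quantity $s_i$ is defined in [10].

import numpy as np

B = np.array([[ 4.0, -1.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0, -1.0],
              [ 0.0, -2.0,  4.0, -1.0],
              [-1.0, -1.0, -1.0,  4.0]])
A = np.array([[ 1.0, -0.5,  0.0,  0.0],
              [-0.5,  1.0, -0.5,  0.0],
              [ 0.0, -0.5,  1.0, -0.5],
              [ 0.0,  0.0, -0.5,  1.0]])

q   = lambda M: min(np.linalg.eigvals(M).real)      # minimum eigenvalue q(.)
rho = lambda M: max(abs(np.linalg.eigvals(M)))      # spectral radius
jac = lambda M: np.linalg.inv(np.diag(np.diag(M))) @ (np.diag(np.diag(M)) - M)   # Jacobi matrix

Binv = np.linalg.inv(B)
print("q(A o B^{-1}) =", q(A * Binv))
print("bound (1.1)   =", q(A) * min(np.diag(Binv)))
print("bound (1.3)   =", (1 - rho(jac(A)) * rho(jac(B))) / (1 + rho(jac(B))**2) * min(np.diag(A) / np.diag(B)))
print("bound (2.6)   =", (1 - rho(jac(A)) * rho(jac(B))) * min(np.diag(A) / np.diag(B)))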