Abstract
Several convergent sequences of lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix are given. Numerical examples show that these sequences can reach the true value of the minimum eigenvalue in some cases. The bounds in this paper improve some existing results.
1 Introduction
For a positive integer n, N denotes the set \(\{1, 2, \ldots, n\}\), and \(\mathbb{R}^{n\times n}(\mathbb{C}^{n\times n})\) denotes the set of all \({n\times n}\) real (complex) matrices throughout.
It is well known that a matrix \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\) is called a nonsingular M-matrix if \(a_{ij}\leq0\), \(i,j\in N\), \(i\neq j\), A is nonsingular and \(A^{-1}\geq0\) (see [1, 2]). Denote by \(\mathcal{M}_{n}\) the set of all \(n\times n\) nonsingular M-matrices.
If A is a nonsingular M-matrix, then there exists a positive eigenvalue of A equal to \(\tau(A)\equiv[\rho(A^{-1})]^{-1}\), where \(\rho(A^{-1})\) is the Perron eigenvalue of the nonnegative matrix \(A^{-1}\). It is easy to prove that \(\tau(A)=\min\{|\lambda|:\lambda\in\sigma(A)\}\), where \(\sigma(A)\) denotes the spectrum of A (see [3]).
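The two characterizations of \(\tau(A)\) recalled above can be checked numerically. The sketch below, in Python with NumPy, uses a small symmetric M-matrix chosen purely for illustration (it is not from this paper).

```python
import numpy as np

# Illustrative M-matrix: A = 5I - J (J the all-ones matrix) has
# eigenvalues 2, 5, 5, so tau(A) = 2.
A = np.array([[ 4., -1., -1.],
              [-1.,  4., -1.],
              [-1., -1.,  4.]])

A_inv = np.linalg.inv(A)
assert np.all(A_inv >= 0)          # inverse-nonnegativity: A is a nonsingular M-matrix

# tau(A) as the minimum modulus over the spectrum of A ...
tau_spec = min(abs(lam) for lam in np.linalg.eigvals(A))
# ... and as the reciprocal of the Perron eigenvalue of A^{-1}
tau_perron = 1.0 / max(abs(np.linalg.eigvals(A_inv)))

print(tau_spec, tau_perron)        # both equal 2 (up to rounding)
```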
A matrix A is called reducible if there exists a nonempty proper subset \(I\subset N\) such that \(a_{ij}=0\), \(\forall i \in I\), \(\forall j\notin I\). If A is not reducible, then we call A irreducible (see [4]).
For two real matrices \(A=[a_{ij}]\) and \(B=[b_{ij}]\) of the same size, the Hadamard product of A and B is defined as the matrix \(A\circ B=[a_{ij}b_{ij}]\). If A and B are two nonsingular M-matrices, then it was proved in [3] that \(A\circ B^{-1}\) is also a nonsingular M-matrix.
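As a quick numerical check of the Fiedler and Markham closure result cited above, the sketch below forms \(A\circ B^{-1}\) for two small M-matrices (chosen for illustration only) and verifies the Z-sign pattern and inverse-nonnegativity.

```python
import numpy as np

A = np.array([[ 2., -1.],
              [ 0.,  2.]])
B = np.array([[ 3., -1.],
              [-1.,  3.]])

H = A * np.linalg.inv(B)        # Hadamard (entrywise) product A o B^{-1}

# Z-pattern: off-diagonal entries are nonpositive
assert H[0, 1] <= 0 and H[1, 0] <= 0
# inverse-nonnegativity: H is again a nonsingular M-matrix
assert np.all(np.linalg.inv(H) >= 0)

print(H)
```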
Throughout, let \(A=[a_{ij}]\) be an \(n\times n\) matrix whose diagonal entries are all nonzero. For \(i,j,k\in N\), \(j\neq i\), denote
In 2013, Zhou et al. [5] gave the following result: If \(A=[a_{ij}]\in\mathcal{M}_{n}\) is a strictly row diagonally dominant matrix, \(B=[b_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}=[\alpha_{ij}]\), then
In 2013, Cheng et al. [6] obtained the following result: If \(A=[a_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}=[\alpha_{ij}]\) is a doubly stochastic matrix, then
In this paper, we present several convergent sequences of lower bounds for \(\tau(B\circ A^{-1})\) and \(\tau(A\circ A^{-1})\), which improve (1) and (2). Numerical examples show that these sequences can reach the true value of \(\tau(A\circ A^{-1})\) in some cases.
2 Some lemmas and notations
In this section, we first introduce some notation that will be useful in the subsequent proofs.
Let \(A= [a_{ij}]\in\mathbb{R}^{n\times n}\). For \(i,j,k\in N\), \(j\neq i\), \(t=1,2,\ldots\) , denote
Lemma 1
If \(A=[a_{ij}]\in\mathcal{M}_{n}\) is strictly row diagonally dominant, then, for all \(i,j\in{N}\), \(j\neq{i}\), \(t=1,2,\ldots\) ,
(a) \(1>q_{ji}\geq{v^{(0)}_{ji}}\geq{p^{(1)}_{ji}}\geq{v^{(1)}_{ji}}\geq {p^{(2)}_{ji}}\geq{v^{(2)}_{ji}}\geq\cdots\geq{p^{(t)}_{ji}}\geq {v^{(t)}_{ji}}\geq\cdots\geq0\);

(b) \({1}\geq{h_{i}}\geq{0}\), \({1}\geq{h^{(t)}_{i}}\geq{0}\).
Proof
Since A is a strictly row diagonally dominant matrix, that is, \(|a_{jj}|>\sum_{k\neq j}{|a_{jk}|}=\sum_{k\neq j,i}{|a_{jk}|}+|a_{ji}|\), we have \(0\leq{r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum_{k \neq j, i} |a_{jk}| }}<1\). By the definition of \(r_{i}\), we obtain \(0\leq{r_{i}}<1\). Since \(r_{i}=\max_{j\neq i }\{r_{ji} \}\), we have \(r_{i}\geq{r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum_{k \neq j, i} |a_{jk}|}}\), i.e., \(r_{i}\geq{\frac{|a_{ji}|+\sum_{k \neq j, i} |a_{jk}|r_{i}}{|a_{jj}|}}\). Hence, by the definition of \({m_{ji}}\), we have \(1>{r_{i}}\geq{m_{ji}}\geq{0}\).
Since A is a strictly row diagonally dominant matrix, \(1>d_{j}\geq s_{ji}\geq0\). Then, by the definition of \(q_{ji}\), it is easy to see that \(0\leq q_{ji}<1\). Hence, if \(q_{ji}=s_{ji}\), then
otherwise, i.e., if \(q_{ji}=m_{ji}\), then
furthermore, from the definition of \(h_{i}\), we have \(0\leq{h_{i}}\leq{1}\).
Since
we have
By \(0\leq{h_{i}}\leq{1}\), we have \({q_{ji}}\geq{v^{(0)}_{ji}}\geq{0}\). From the definition of \({v^{(0)}_{ji}}\), \({p^{(1)}_{ji}}\), we have \({v^{(0)}_{ji}}\geq{p^{(1)}_{ji}}\geq{0}\).
Hence,
furthermore, by the definition of \(h^{(1)}_{i}\), we have \(0\leq h^{(1)}_{i}\leq1\), \(i \in N\).
Since
we have
By \(0\leq{h^{(1)}_{i}}\leq{1}\), we have \({p^{(1)}_{ji}}\geq{v^{(1)}_{ji}}\geq{0}\). From the definition of \({v^{(1)}_{ji}}\), \({p^{(2)}_{ji}}\), we obtain \({v^{(1)}_{ji}}\geq{p^{(2)}_{ji}}\geq{0}\).
In the same way as above, we can also prove that
The proof is completed. □
Using the same techniques as in the proofs of Lemma 2.2, Lemma 2.3, and Lemma 3.1 in [6], we can obtain Lemma 2, Lemma 3, and Lemma 4, respectively.
Lemma 2
If \(A=[a_{ij}]\in\mathcal{M}_{n}\) is a strictly row diagonally dominant matrix, then \(A^{-1}=[\alpha_{ij}]\) exists, and
Lemma 3
If \(A=[a_{ij}]\in\mathcal{M}_{n}\) is a strictly row diagonally dominant matrix, then \(A^{-1}=[\alpha_{ij}]\) exists, and
Lemma 4
If \(A\in\mathcal{M}_{n}\) and \(A^{-1}=[\alpha_{ij}]\) is a doubly stochastic matrix, then
Lemma 5
[7]
If \(A^{-1}\) is a doubly stochastic matrix, then \(A^{T}e=e\), \(Ae=e\), where \(e=(1, 1, \ldots, 1)^{T}\).
Lemma 6
[8]
Let \(A=[a_{ij}]\in\mathbb{C}^{n\times n}\) and \(x_{1}, x_{2}, \ldots, x_{n}\) be positive real numbers. Then all the eigenvalues of A lie in the region
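Lemma 6 (Varga's Gershgorin-type inclusion with positive weights) can be illustrated numerically. The sketch below assumes the standard weighted form of the region, \(\bigcup_{i}\{z\in\mathbb{C}:|z-a_{ii}|\leq\frac{1}{x_{i}}\sum_{j\neq i}x_{j}|a_{ij}|\}\); the matrix and the weights are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical matrix and positive weights (any positive x is admissible).
A = np.array([[ 4., -1.,  0.],
              [-2.,  5., -1.],
              [ 0., -1.,  3.]])
x = np.array([1.0, 2.0, 1.5])

n = A.shape[0]
# Weighted Gershgorin radii: R_i = (1/x_i) * sum_{j != i} x_j |a_ij|
R = np.array([sum(x[j] * abs(A[i, j]) for j in range(n) if j != i) / x[i]
              for i in range(n)])

# Every eigenvalue must lie in at least one weighted disc |z - a_ii| <= R_i.
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - A[i, i]) <= R[i] + 1e-12 for i in range(n))
```

The weighted discs are the ordinary Gershgorin discs of the similar matrix \(D^{-1}AD\) with \(D=\operatorname{diag}(x)\), which has the same spectrum as A.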
3 Main results
In this section, we give several sequences of the lower bounds for \(\tau(B\circ{A^{-1}})\) and \(\tau(A\circ{A^{-1}})\).
Theorem 1
Let \(A=[a_{ij}], B=[b_{ij}]\in\mathcal{M}_{n}\). Then, for \(t=1,2,\ldots\) ,
Proof
It is evident that the result holds with equality for \(n=1\).
We next assume that \(n\geq2\).
Since \(A\in\mathcal{M}_{n}\), there exists a positive diagonal matrix D such that \(D^{-1}AD\) is a strictly row diagonally dominant M-matrix, and
Therefore, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.
If A is irreducible, then \(0< p_{i}^{(t)}<1\), for any \(i\in N\). Let \(A^{-1}=[\alpha_{ij}]\). Since \(\tau(B\circ A^{-1})\) is an eigenvalue of \(B\circ A^{-1}\), by Lemma 2 and Lemma 6, there exists an i such that
By Lemma 3, inequality (4), and \(\tau(B\circ A^{-1})\leq b_{ii}\alpha_{ii}\) for all \(i\in N\), we have
If A is reducible, it is well known that a matrix in \(Z_{n}=\{A=[a_{ij}]\in\mathbb{R}^{n\times n}:a_{ij}\leq0,i\neq{j}\}\) is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by \(C=[c_{ij}]\) the \(n\times n\) permutation matrix with \(c_{12}=c_{23}=\cdots=c_{n-1,n}=c_{n1}=1\) and the remaining \(c_{ij}\) zero. Then \(A-{\varepsilon}C\) is an irreducible nonsingular M-matrix for any sufficiently small positive real number ε such that all the leading principal minors of \(A-{\varepsilon} C\) are positive. Now we substitute \(A-{\varepsilon} C\) for A in the previous case, and then, letting \({\varepsilon}\rightarrow0\), the result follows by continuity. □
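The perturbation argument in the reducible case can be checked numerically. The sketch below uses a hypothetical \(3\times3\) reducible M-matrix together with the cyclic permutation matrix C described above, testing irreducibility via the classical criterion that A is irreducible if and only if \((I+|A|)^{n-1}>0\) entrywise.

```python
import numpy as np

def is_irreducible(M):
    """Irreducibility test: M is irreducible iff (I + |M|)^(n-1) > 0 entrywise."""
    n = M.shape[0]
    P = np.linalg.matrix_power(np.eye(n) + np.abs(M), n - 1)
    return bool(np.all(P > 0))

# A reducible M-matrix: the zero entries below the diagonal decouple the digraph.
A = np.array([[2., -1., 0.],
              [0.,  1., 0.],
              [0.,  0., 1.]])
assert not is_irreducible(A)

# Cyclic permutation matrix C with c_12 = c_23 = c_31 = 1.
C = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

eps = 1e-6
B = A - eps * C
assert is_irreducible(B)    # the perturbation makes every index reachable

# Leading principal minors stay positive, so B is still a nonsingular M-matrix.
minors = [np.linalg.det(B[:k, :k]) for k in range(1, 4)]
assert all(m > 0 for m in minors)
```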
Theorem 2
The sequence \(\{\Omega_{t}\}\), \(t=1,2,\ldots\) obtained from Theorem 1 is monotone increasing with an upper bound \(\tau(B\circ A^{-1})\) and, consequently, is convergent.
Proof
By Lemma 1, we have \(p^{(t)}_{ji}\geq p^{(t+1)}_{ji}\geq 0\), \(t=1,2,\ldots\) , so, by the definition of \(p^{(t)}_{i}\), the sequence \(\{p^{(t)}_{i}\}\) is monotone decreasing. Then \(\{\Omega_{t}\}\) is a monotonically increasing sequence. Since it is bounded above by \(\tau(B\circ A^{-1})\), the sequence is convergent. □
Remark 1
We give a simple comparison between (1) and (3). According to Lemma 1, we know that \(q_{ji}=\min\{{s_{ji},m_{ji}}\}\geq{p^{(t)}_{ji}}\). Furthermore, by the definition of \(m_{i}\), \(p^{(t)}_{i}\), we have \(m_{i}\geq p^{(t)}_{i}\). Therefore for \(t=1,2,\ldots\) ,
So the bound in (3) is larger than the bound in (1).
Let \(A=[a_{ij}]\in\mathcal{M}_{n}\). By Lemma 5, we know that if \(A^{-1}\) is a doubly stochastic matrix, then \(A^{T}e=e\), \(Ae=e\), that is, \(a_{ii}=1+\sum_{j\neq i}|a_{ij}|=1+\sum_{j\neq i}|a_{ji}|\). So A is strictly diagonally dominant both by rows and by columns. By Lemma 4 and Theorem 1, we can get the following corollaries.
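The implications of Lemma 5 recalled above can be verified on a concrete matrix. The sketch below uses \(A=I+L\) with L the Laplacian of a path graph on three vertices, a construction (not from this paper) that produces an M-matrix whose inverse is doubly stochastic.

```python
import numpy as np

# I plus the Laplacian of a path graph on 3 vertices: row and column sums
# are all 1, so Ae = e and A^T e = e, and A is an M-matrix (Z-pattern,
# strictly diagonally dominant with positive diagonal).
A = np.array([[ 2., -1.,  0.],
              [-1.,  3., -1.],
              [ 0., -1.,  2.]])

e = np.ones(3)
assert np.allclose(A @ e, e) and np.allclose(A.T @ e, e)

A_inv = np.linalg.inv(A)
# A^{-1} is nonnegative with unit row and column sums: doubly stochastic.
assert np.all(A_inv >= 0)
assert np.allclose(A_inv @ e, e) and np.allclose(A_inv.T @ e, e)

# Row diagonal dominance with a_ii = 1 + sum_{j != i} |a_ij|, as in the text.
for i in range(3):
    assert np.isclose(A[i, i], 1 + sum(abs(A[i, j]) for j in range(3) if j != i))
```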
Corollary 1
Let \(A=[a_{ij}], B=[b_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,
Corollary 2
Let \(A=[a_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,
Remark 2
(i) The sequence \(\{\Gamma_{t}\}\), \(t=1,2,\ldots\) obtained from Corollary 2 is monotone increasing with an upper bound \(\tau(A\circ A^{-1})\) and, consequently, is convergent.
(ii) Next, we give a simple comparison between (2) and (5). By Lemma 1, we know that \(q_{ji}=\min\{{s_{ji},m_{ji}}\}\geq{v^{(0)}_{ji}}\), so \(u_{ji}=\frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|m_{ki}}{|a_{jj}|}\geq \frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|v_{ki}^{(0)}}{|a_{jj}|}=p_{ji}^{(1)}\geq p_{ji}^{(t)}\). Furthermore, by the definition of \(u_{i}\), \(p^{(t)}_{i}\), we have \(u_{i}\geq p^{(t)}_{i}\). Obviously,
So the bound in (5) is larger than the bound in (2).
Using the same technique as in the proof of Theorem 1, another lower bound for \(\tau(B\circ A^{-1})\) can be obtained.
Theorem 3
Let \(A=[a_{ij}], B=[b_{ij}]\in\mathcal{M}_{n}\). Then, for \(t=1,2,\ldots\) ,
Using the same method as in the proof of Theorem 2, the following theorem is obtained.
Theorem 4
The sequence \(\{\Delta_{t}\}\), \(t=1,2,\ldots\) obtained from Theorem 3 is monotone increasing with an upper bound \(\tau(B\circ A^{-1})\) and, consequently, is convergent.
Similarly, by Lemma 4 and Theorem 3, we can get the following corollaries.
Corollary 3
Let \(A=[a_{ij}],B=[b_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,
Corollary 4
Let \(A=[a_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,
Remark 3
The sequence \(\{\mathrm{T}_{t}\}\), \(t=1,2,\ldots\) , obtained from Corollary 4 is monotone increasing with an upper bound \(\tau(A\circ A^{-1})\) and, consequently, is convergent.
Let \(\Upsilon_{t}=\max\{\Gamma_{t},\mathrm{T}_{t}\}\). By Corollary 2 and Corollary 4, the following theorem is easily obtained.
Theorem 5
Let \(A=[a_{ij}]\in\mathcal{M}_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,
4 Numerical examples
In this section, several numerical examples are given to verify the theoretical results.
Example 1
Let
By \(Ae=e\), \(A^{T}e=e\), we know that A is strictly diagonally dominant by rows and by columns. Since \(A\in Z_{n}\), it is easy to see that A is a nonsingular M-matrix and \(A^{-1}\) is doubly stochastic. Numerical results are given in Table 1 for the total number of iterations \(T=10\). In fact, \(\tau(A\circ{A^{-1}})=0.9678\).
Remark 4
Numerical results in Table 1 show that:
(a) The lower bounds obtained from Theorem 5 are greater than the bound in Theorem 3.1 of [6].

(b) The sequence obtained from Theorem 5 is monotone increasing.

(c) The sequence obtained from Theorem 5 effectively approximates the true value of \(\tau(A\circ A^{-1})\), so we can use Theorem 5 to estimate \(\tau(A\circ A^{-1})\).
Example 2
A nonsingular M-matrix \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\) whose inverse is doubly stochastic is randomly generated by Matlab 7.1 (with entries drawn from the uniform distribution on \([0,1]\)).
The numerical results obtained from Theorem 5 for \(T=500\) are listed in Table 2, where T is defined as in Example 1.
Remark 5
Numerical results in Table 2 show that Theorem 5 is effective for estimating \(\tau(A\circ A^{-1})\) for matrices of large order.
Example 3
Let \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\), where \(a_{11}=a_{22}=\cdots=a_{n,n}=2\), \(a_{12}=a_{23}=\cdots=a_{n-1,n}=a_{n,1}=-1\), and \(a_{ij}=0\) elsewhere.
It is easy to see that A is a nonsingular M-matrix and \(A^{-1}\) is doubly stochastic. The results obtained from Theorem 5 for \(n=10,100\) and \(T=10\) are listed in Table 3, where T is defined in Example 1. In fact, \(\tau(A\circ{A^{-1}})=0.7507\) for \(n=10\) and \(\tau(A\circ{A^{-1}})=0.7500\) for \(n=100\).
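Since the matrix of Example 3 is fully specified, the reported value can be reproduced directly. The sketch below builds A for \(n=10\), confirms that \(A^{-1}\) is doubly stochastic, and computes \(\tau(A\circ A^{-1})\).

```python
import numpy as np

n = 10
# The circulant M-matrix of Example 3: 2 on the diagonal, -1 on the
# superdiagonal and in position (n, 1).
A = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1)

A_inv = np.linalg.inv(A)
e = np.ones(n)
assert np.all(A_inv >= 0)
assert np.allclose(A_inv @ e, e) and np.allclose(A_inv.T @ e, e)  # doubly stochastic

H = A * A_inv                                   # Hadamard product A o A^{-1}
tau = min(abs(lam) for lam in np.linalg.eigvals(H))
print(round(tau, 4))                            # 0.7507
```

Since H is itself circulant, its eigenvalues are \(h_{11}+h_{12}\omega^{k}\) with ω an nth root of unity, and the minimum modulus is attained at \(\omega^{0}=1\), matching the tabulated value.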
Remark 6
Numerical results in Table 3 show that the lower bound obtained from Theorem 5 can reach the true value of \(\tau (A\circ A^{-1})\) in some cases.
5 Further work
In Theorem 5, we present a convergent sequence \(\{\Upsilon_{t}\}\), \(t=1,2,\ldots\) , to approximate \(\tau(A\circ A^{-1})\). An interesting open problem is how accurately these bounds approximate the true value. At present, it is difficult for the authors to give an error analysis; we will continue to study this problem in the future.
References
Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
Horn, RA, Johnson, CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
Fiedler, M, Markham, TL: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 101, 1-8 (1988)
Chen, JL: Special Matrix. Tsinghua University Press, Beijing (2000)
Zhou, DM, Chen, GL, Wu, GX, Zhang, XY: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438, 1415-1426 (2013)
Cheng, GH, Tan, Q, Wang, ZD: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013, 65 (2013)
Sinkhorn, R: A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Stat. 35, 876-879 (1964)
Varga, RS: Minimal Gerschgorin sets. Pac. J. Math. 15(2), 719-729 (1965)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (11361074).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Zhao, J., Wang, F. & Sang, C. Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J Inequal Appl 2015, 92 (2015). https://doi.org/10.1186/s13660-015-0611-x