1 Introduction

Let \(\mathbb{C}^{n\times n}\) (\(\mathbb{R}^{n\times n}\)) denote the set of all \(n\times n\) complex (real) matrices, \(A=(a_{ij})\in \mathbb{C}^{n\times n}\), \(N= \{1, 2, \ldots, n\}\). We write \(A\geq 0\) if all \(a_{ij}\geq0\) (\(i,j\in N\)). A is called nonnegative if \(A\geq0\). Let \(Z_{n}\) denote the class of all \(n\times n\) real matrices all of whose off-diagonal entries are nonpositive. A matrix A is called an M-matrix [1] if \(A\in Z_{n}\) and the inverse of A, denoted by \(A^{-1}\), is nonnegative. \(M_{n}\) will be used to denote the set of all \(n\times n\) M-matrices.

Let A be an M-matrix. Then A has a positive eigenvalue equal to \(\tau(A) =\rho(A^{-1})^{-1}\), where \(\rho(A^{-1})\) is the spectral radius of the nonnegative matrix \(A^{-1}\); moreover, \(\tau(A) = \min\{|\lambda| : \lambda\in\sigma(A)\}\), where \(\sigma(A)\) denotes the spectrum of A. \(\tau(A)\) is called the minimum eigenvalue of A [2, 3].
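To make \(\tau(A)\) concrete, the following minimal sketch computes it both ways; it assumes Python with NumPy (our choice here; the numerical results in Section 3 were obtained with Matlab).

```python
import numpy as np

# A small M-matrix: nonpositive off-diagonal entries and a nonnegative inverse.
A = np.array([[ 2.0, -1.0],
              [-0.5,  1.0]])
assert np.all(np.linalg.inv(A) >= 0)        # A is indeed an M-matrix

# tau(A) as the smallest-modulus eigenvalue of A ...
tau_direct = min(abs(np.linalg.eigvals(A)))
# ... and as the reciprocal of the spectral radius of A^{-1}.
tau_via_inverse = 1.0 / max(abs(np.linalg.eigvals(np.linalg.inv(A))))
assert np.isclose(tau_direct, tau_via_inverse)
```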

The Hadamard product of two matrices \(A=(a_{ij})\in \mathbb{R}^{n\times n}\) and \(B=(b_{ij})\in\mathbb{R}^{n\times n}\) is the matrix \(A\circ B=(a_{ij}b_{ij})\in\mathbb{R}^{n\times n}\).

An \(n\times n\) matrix A is said to be reducible if there exists a permutation matrix P such that

$$P^{T}AP=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} A_{11}&0 \\ A_{21}&A_{22} \end{array}\displaystyle \right ), $$

where \(A_{11}\), \(A_{22}\) are square matrices of order at least one. We call A irreducible if it is not reducible. Note that any nonzero \(1\times1\) matrix is irreducible.
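Irreducibility can also be checked numerically. A classical criterion (background only; it is not used explicitly below) states that, for \(n\geq2\), A is irreducible if and only if \((I+|A|)^{n-1}\) is entrywise positive. A sketch, assuming NumPy:

```python
import numpy as np

def is_irreducible(A):
    """For n >= 2: A is irreducible iff (I + |A|)^(n-1) has no zero entry."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + np.abs(A), n - 1)
    return bool(np.all(M > 0))

# The block lower-triangular pattern P^T A P from the definition is reducible:
assert not is_irreducible(np.array([[1.0, 0.0],
                                    [1.0, 1.0]]))
```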

Estimating bounds for the minimum eigenvalue \(\tau(A)\) of an M-matrix A is an interesting subject in matrix theory, and it has important applications in many practical problems [4–6]. Hence, it is of interest to estimate bounds for \(\tau(A)\).

In [5], Shivakumar et al. obtained the following bounds for \(\tau(A)\): Let \(A = (a_{ij})\in \mathbb{R}^{n\times n}\) be a weakly chained diagonally dominant M-matrix, and let \(r(A)\), \(R(A)\), m, and M be the quantities defined in [5]. Then

$$ r(A) \leq\tau(A) \leq R(A), \qquad \tau(A) \leq\min _{ i\in N} a_{ii}, \qquad \frac{1}{M} \leq\tau(A) \leq\frac{1}{m}. $$
(1)

Subsequently, Tian and Huang [7] provided a lower bound for \(\tau(A)\) by using the spectral radius of the Jacobi iterative matrix \(J_{A}\) of A: Let \(A = (a_{ij})\in \mathbb{R}^{n\times n}\) be an M-matrix and \(A^{-1} = (\alpha_{ij})\). Then

$$ \tau(A)\geq\frac{1}{1+(n-1)\rho(J_{A})}\cdot\frac{1}{\max_{i\in N}\{\alpha_{ii}\}}. $$
(2)

Recently, Li et al. [8] improved (2) and gave the following result: Let \(B = (b_{ij})\in\mathbb{R}^{n\times n}\) be an M-matrix and \(B^{-1}= (\beta_{ij})\). Then

$$ \tau(B) \geq\frac{2}{\max_{i\neq j}\{ \beta_{ii}+\beta_{jj}+[(\beta_{ii}-\beta_{jj})^{2}+4(n-1)^{2}\beta_{ii}\beta _{jj}\rho^{2}(J_{B}) ]^{\frac{1}{2}}\}}. $$
(3)
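Both lower bounds are computable directly from the matrix and its inverse. A sketch, assuming NumPy and using the standard Jacobi iteration matrix \(J_{A}=D^{-1}(D-A)\) with \(D=\operatorname{diag}(a_{11},\ldots,a_{nn})\); the function names are ours:

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def jacobi_matrix(A):
    """J_A = D^{-1}(D - A); the diagonal of an M-matrix is positive, so D is invertible."""
    D = np.diag(np.diag(A))
    return np.linalg.inv(D) @ (D - A)

def bound_2(A):
    """Tian-Huang lower bound (2) for tau(A)."""
    n = A.shape[0]
    alpha_ii = np.diag(np.linalg.inv(A))
    return 1.0 / ((1 + (n - 1) * spectral_radius(jacobi_matrix(A))) * max(alpha_ii))

def bound_3(B):
    """Li et al. lower bound (3) for tau(B)."""
    n = B.shape[0]
    beta = np.diag(np.linalg.inv(B))
    rho_J = spectral_radius(jacobi_matrix(B))
    worst = max(beta[i] + beta[j]
                + ((beta[i] - beta[j])**2
                   + 4 * (n - 1)**2 * beta[i] * beta[j] * rho_J**2)**0.5
                for i in range(n) for j in range(n) if i != j)
    return 2.0 / worst
```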

In this paper, we continue to research the problems mentioned above. For an M-matrix B, we establish some new inequalities on the bounds for \(\tau(B)\). Finally, some examples are given to illustrate our results.

For convenience, we employ the following notations throughout. Let \(A=(a_{ij})\) be an \(n\times n\) matrix. For \(i, j, k \in N\), \(i\neq j\), denote

$$\begin{aligned}& r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum_{k \neq j, i} |a_{jk}| }, \qquad r_{i}=\max _{j\neq i } \{{r_{ji}}\}, \\& m_{ji}= \frac{|a_{ji}|+\sum_{k \neq j, i} |a_{jk}|r_{i}}{|a_{jj}|}, \qquad h_{i}=\max_{j\neq i } \biggl\{ \frac{|a_{ji}|}{|a_{jj}|m_{ji}-\sum_{k \neq j, i} |a_{jk}| m_{ki}} \biggr\} , \\& u_{ji}=\frac{|a_{ji}|+\sum_{k \neq j, i} |a_{jk}|m_{ki} h_{i} }{|a_{jj}|}, \qquad u_{i}=\max _{j\neq i } \{u_{ij}\}. \end{aligned}$$
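These quantities feed into every bound in Section 2, so we spell out one way to compute \(u_{1},\ldots,u_{n}\). The sketch below assumes NumPy and a strictly diagonally dominant A whose rows each contain a nonzero off-diagonal entry (so all denominators are positive); the helper name `u_vector` is ours:

```python
import numpy as np

def u_vector(A):
    """Compute u_1, ..., u_n from the definitions above.
    Assumes strict diagonal dominance: |a_jj| > sum_{k != j} |a_jk|."""
    A = np.abs(np.asarray(A, dtype=float))
    n = A.shape[0]
    off = A.sum(axis=1) - np.diag(A)            # sum_{k != j} |a_jk|
    r = np.zeros((n, n))
    m = np.zeros((n, n))
    u = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:                          # r_{ji}
                r[j, i] = A[j, i] / (A[j, j] - (off[j] - A[j, i]))
    r_i = np.array([r[:, i][np.arange(n) != i].max() for i in range(n)])
    for i in range(n):
        for j in range(n):
            if j != i:                          # m_{ji}
                m[j, i] = (A[j, i] + r_i[i] * (off[j] - A[j, i])) / A[j, j]
    h = np.array([max(A[j, i] /
                      (A[j, j] * m[j, i]
                       - sum(A[j, k] * m[k, i]
                             for k in range(n) if k not in (i, j)))
                      for j in range(n) if j != i)
                  for i in range(n)])            # h_i
    for i in range(n):
        for j in range(n):
            if j != i:                          # u_{ji}
                u[j, i] = (A[j, i]
                           + h[i] * sum(A[j, k] * m[k, i]
                                        for k in range(n) if k not in (i, j))
                           ) / A[j, j]
    # u_i = max_{j != i} u_{ij}
    return np.array([u[i, :][np.arange(n) != i].max() for i in range(n)])
```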

2 Main results

In this section, we present our main results. Firstly, we give some notations and lemmas.

Let \(A\geq0\) and \(D=\operatorname{diag}(a_{ii})\). Denote \(C = A-D\) and \(\mathcal{J}_{A}=D_{1}^{-1}C\), where \(D_{1}=\operatorname{diag}(d_{ii})\) with

$$d_{ii} = \left \{ \textstyle\begin{array}{l@{\quad}l} 1, & \mbox{if } a_{ii}=0, \\ a_{ii}, & \mbox{if }a_{ii}\neq0. \end{array}\displaystyle \right . $$

By the definition of \(\mathcal{J}_{A}\) (note that \(A^{T}\) has the same diagonal as A), and since the spectral radius is invariant under transposition and under similarity, we obtain

$$ \rho(\mathcal{J}_{A^{T}})=\rho\bigl(D_{1}^{-1}C^{T} \bigr)=\rho\bigl(CD_{1}^{-1}\bigr)=\rho\bigl( D_{1}^{-1}\bigl(CD_{1}^{-1} \bigr)D_{1} \bigr)=\rho\bigl(D_{1}^{-1}C\bigr)= \rho(\mathcal{J}_{A}). $$
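A direct implementation, together with a numerical sanity check of this identity (NumPy; the name `script_J` is ours):

```python
import numpy as np

def script_J(A):
    """J_A = D1^{-1} (A - D): divide row i of the off-diagonal part of A
    by d_ii, where d_ii = a_ii except that d_ii = 1 whenever a_ii = 0."""
    d = np.diag(A).astype(float)
    d[d == 0] = 1.0
    return (A - np.diag(np.diag(A))) / d[:, None]

A = np.array([[0.0, 2.0],
              [3.0, 5.0]])
rho = lambda M: max(abs(np.linalg.eigvals(M)))
assert np.isclose(rho(script_J(A.T)), rho(script_J(A)))   # rho(J_{A^T}) = rho(J_A)
```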

Lemma 1

[9]

Let \(A\in\mathbb{C}^{n\times n}\), and let \(x_{1}, x_{2},\ldots, x_{n}\) be positive real numbers. Then all the eigenvalues of A lie in the region

$$\bigcup_{i\in N} \biggl\{ z\in \mathbb{C} :|z-a_{ii}| \leq x_{i}\sum_{j\neq i}\frac{1}{x_{j}}|a_{ji}| \biggr\} . $$

Lemma 2

[3]

Let \(A\in\mathbb{C}^{n\times n}\), and let \(x_{1}, x_{2}, \ldots, x_{n}\) be positive real numbers. Then all the eigenvalues of A lie in the region

$$ \bigcup_{j\neq i} \biggl\{ z\in\mathbb{C}: |z-a _{ii}||z-a_{jj}| \leq \biggl( x_{i} \sum _{k \neq i}\frac{1}{x_{k}} |a_{ki}| \biggr) \biggl( x_{j} \sum_{l \neq j} \frac{1}{x_{l}} |a_{lj}| \biggr) \biggr\} . $$

Lemma 3

[3]

Let \(A, B\in\mathbb{R}^{n\times n}\), and let \(X, Y\in \mathbb{R}^{n\times n}\) be diagonal matrices. Then

$$X(A\circ B)Y=(XAY)\circ B=(XA)\circ(BY)=(AY)\circ(XB)=A\circ (XBY). $$

Lemma 4

[3]

Let \(A=(a_{ij})\in M_{n}\). Then there exists a positive diagonal matrix X such that \(X^{-1}AX\) is a strictly diagonally dominant M-matrix.
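Lemma 4 is existential, but for a nonsingular M-matrix one convenient choice of X can be written down explicitly: with \(x=A^{-1}e\) (e the all-ones vector) we have \(x>0\), and \((Ax)_{i}=1>0\) is precisely strict row dominance of \(X^{-1}AX\) because the off-diagonal entries of A are nonpositive. A sketch of this construction (ours, not necessarily the one in [3]):

```python
import numpy as np

def sdd_scaling(A):
    """Positive diagonal X such that X^{-1} A X is strictly row
    diagonally dominant, for a nonsingular M-matrix A: take x = A^{-1} e."""
    x = np.linalg.solve(A, np.ones(A.shape[0]))
    assert np.all(x > 0)    # holds since A^{-1} >= 0 with positive diagonal
    return np.diag(x)
```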

Lemma 5

[10]

Let \(A=(a_{ij})\in \mathbb{R}^{n\times n}\) be a strictly diagonally dominant matrix, and let \(A^{-1}=(\alpha_{ij})\). Then, for all \(i\in N\),

$$\alpha_{ji}\leq u_{ji} \alpha_{ii}, \quad j\in N, j\neq i. $$

Theorem 1

Let \(A=(a_{ij}) \geq0\), \(B=(b_{ij})\in M_{n}\), and let \(B^{-1}=(\beta_{ij})\). Then

$$ \rho\bigl(A\circ B^{-1}\bigr)\leq\max _{1\leq i \leq n } \bigl\{ \bigl(a_{ii}+u_{i} \rho( \mathcal{J}_{A}) d_{ii} \bigr)\beta_{ii} \bigr\} . $$
(4)

Proof

It is evident that the result holds with equality for \(n=1\).

We next assume that \(n\geq2\).

(i) First, we assume that A and B are irreducible matrices. Since B is an M-matrix, by Lemma 4 there exists a positive diagonal matrix X such that \(X^{-1}BX\) is a strictly row diagonally dominant M-matrix, and

$$\rho\bigl(A\circ B^{-1}\bigr)=\rho\bigl(X^{-1}\bigl(A\circ B^{-1}\bigr)X \bigr)=\rho\bigl(A\circ \bigl(X^{-1}BX \bigr)^{-1} \bigr). $$

Hence, for convenience and without loss of generality, we assume that B is a strictly diagonally dominant matrix.

On the other hand, since A is irreducible, so is the nonnegative matrix \(\mathcal{J}_{A^{T}}\). Hence, by the Perron–Frobenius theorem, there exists a positive vector \(x=(x_{i})\) such that \(\mathcal{J}_{A^{T}}x=\rho(\mathcal{J}_{A^{T}})x=\rho(\mathcal{J}_{A})x\); componentwise, this reads \(\sum_{j\neq i} a_{ji}x_{j}=\rho(\mathcal{J}_{A}) d_{ii}x_{i}\) for each \(i\in N\).

Let \(\widetilde{A}=(\tilde{a}_{ij})=XAX^{-1}\), where \(X=\operatorname{diag}(x_{1}, x_{2}, \ldots, x_{n})\) is the positive diagonal matrix built from this vector. Then we have

$$ \widetilde{A}=(\tilde{a}_{ij})=XAX^{-1} = \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} a_{11}&\frac{ a_{12}x_{1} }{ x_{2} }& \cdots& \frac{ a_{1n}x_{1} }{ x_{n} }\\ \frac{ a_{21}x_{2} }{ x_{1} }& a_{22} & \cdots& \frac{ a_{2n}x_{2} }{ x_{n} }\\ \vdots& \vdots& \ddots&\vdots\\ \frac{ a_{n1}x_{n} }{ x_{1} }& \frac{ a_{n2}x_{n} }{ x_{2} } & \cdots& a_{nn} \end{array}\displaystyle \right ). $$

From Lemma 3, we have

$$\widetilde{A}\circ B^{-1}=\bigl(XAX^{-1}\bigr)\circ B^{-1}=X\bigl(A\circ B^{-1}\bigr)X^{-1}. $$

Thus, we obtain \(\rho(\widetilde{A}\circ B^{-1})=\rho(A\circ B^{-1})\). Let \(\lambda=\rho(\widetilde{A}\circ B^{-1})\); since the spectral radius of a nonnegative matrix is at least each of its diagonal entries, \(\lambda\geq a_{ii}\beta_{ii}\) for all \(i \in N\). By Lemma 1, applied with the positive weights \(u_{1},\ldots,u_{n}\), there exists \(i_{0}\in N \) such that

$$\begin{aligned} |\lambda-a_{i_{0}i_{0}}\beta_{i_{0}i_{0}}| \leq& u_{i_{0}} \sum _{t \neq i_{0}} \frac{1}{u_{t}} \tilde{a}_{ti_{0}} \beta_{ti_{0}} \leq u_{i_{0}} \sum_{t \neq i_{0}} \frac{1}{u_{t}} \tilde{a}_{ti_{0}} u_{ti_{0}} \beta_{i_{0} i_{0}} \\ \leq& u_{i_{0}} \sum_{t \neq i_{0}} \tilde{a}_{ti_{0}} \beta_{i_{0} i_{0}} = u_{i_{0}} \beta_{i_{0} i_{0}} \sum_{t \neq i_{0}} \frac{ a_{ti_{0}} x_{t} }{ x_{i_{0}}} = u_{i_{0}} \rho(\mathcal{J}_{A}) d_{i_{0}i_{0}} \beta_{i_{0} i_{0}}. \end{aligned}$$

Therefore,

$$ \lambda\leq a_{i_{0}i_{0}}\beta_{i_{0}i_{0}}+ u_{i_{0}} \rho( \mathcal{J}_{A}) d_{i_{0}i_{0}}\beta_{i_{0} i_{0}} = \bigl(a_{i_{0}i_{0}}+ u_{i_{0}} \rho(\mathcal{J}_{A}) d_{i_{0}i_{0}}\bigr)\beta_{i_{0} i_{0}}, $$

i.e.,

$$\begin{aligned} \rho\bigl(A\circ B^{-1}\bigr) \leq&\bigl(a_{i_{0}i_{0}}+ u_{i_{0}} \rho(\mathcal{J}_{A}) d_{i_{0}i_{0}}\bigr) \beta_{i_{0} i_{0}} \\ \leq&\max_{1\leq i \leq n } \bigl\{ \bigl(a_{ii}+u_{i} \rho(\mathcal{J}_{A}) d_{ii} \bigr)\beta_{ii} \bigr\} . \end{aligned}$$

(ii) Now assume that at least one of A and B is reducible. It is well known that a matrix in \(Z_{n}\) is a nonsingular M-matrix if and only if all its leading principal minors are positive (see [1]). Let \(T=(t_{ij})\) denote the \(n\times n\) matrix with \(t_{12}=t_{23}=\cdots=t_{n-1,n}=t_{n1}=-1\) and all other entries zero. Then \(A-\varepsilon T\) and \(B+\varepsilon T\) are both irreducible for every positive real number ε, and for ε sufficiently small all the leading principal minors of \(B+\varepsilon T\) remain positive, so \(B+\varepsilon T\) is still an M-matrix. Substituting \(A-\varepsilon T\) and \(B+\varepsilon T\) for A and B, respectively, in the previous case and letting \(\varepsilon \rightarrow0\), the result follows by continuity. □
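The right-hand side of (4) is easy to evaluate. A sketch, reusing `u_vector` and `script_J` from the earlier sketches (our names) and assuming B is strictly diagonally dominant so that the \(u_{i}\) are defined (otherwise rescale first, e.g. with `sdd_scaling`, which leaves \(\rho(A\circ B^{-1})\) unchanged):

```python
import numpy as np
# assumes u_vector and script_J from the earlier sketches are in scope

def bound_4(A, B):
    """Right-hand side of (4): an upper bound for rho(A o B^{-1}).
    A >= 0; B a strictly diagonally dominant M-matrix."""
    u = u_vector(B)
    beta = np.diag(np.linalg.inv(B))
    a = np.diag(A).astype(float)
    d = np.where(a == 0, 1.0, a)
    rho_J = max(abs(np.linalg.eigvals(script_J(A))))
    return max((a + u * rho_J * d) * beta)

# compare with rho(A o B^{-1}) = max(abs(np.linalg.eigvals(A * np.linalg.inv(B))))
```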

Theorem 2

Let \(B=(b_{ij})\in M_{n}\) and \(B^{-1}=(\beta_{ij})\). Then

$$ \tau(B)\geq\frac{1}{ \max_{1\leq i \leq n } \{ (1+u_{i} (n-1 ))\beta_{ii} \} }. $$
(5)

Proof

Take A in (4) to be the matrix of all ones. Then \(a_{ii}=d_{ii}=1\) (\(\forall i\in N\)), \(\rho(\mathcal{J}_{A})=n-1\), and \(A\circ B^{-1}=B^{-1}\). Therefore, by (4), we have

$$ \tau(B)=\frac{1}{ \rho( B^{-1}) }\geq\frac{1}{ \max_{1\leq i \leq n } \{ (1+u_{i} (n-1 ))\beta_{ii} \} }. $$

The proof is completed. □
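Bound (5) then takes one line on top of the `u_vector` sketch (assumptions as before):

```python
import numpy as np
# assumes u_vector from the notation sketch in Section 1 is in scope

def bound_5(B):
    """Right-hand side of (5): a lower bound for tau(B)."""
    n = B.shape[0]
    beta = np.diag(np.linalg.inv(B))
    return 1.0 / max((1 + u_vector(B) * (n - 1)) * beta)
```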

Theorem 3

Let \(A=(a_{ij}) \geq0\), \(B=(b_{ij})\in M_{n}\), and let \(B^{-1}=(\beta_{ij})\). Then

$$ \rho\bigl(A\circ B^{-1}\bigr)\leq\frac{1}{2} \max _{i\neq j } \{ a_{ii}\beta_{ii}+ a_{jj} \beta_{jj}+\Delta_{ij} \}, $$
(6)

where \(\Delta_{ij}=[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^{2} +4u_{i}u_{j}\rho^{2}(\mathcal{J}_{A}) d_{ii}d_{jj}\beta_{ii}\beta_{jj}]^{\frac{1}{2}}\).

Proof

It is evident that the result holds with equality for \(n=1\).

We next assume that \(n\geq2\). For convenience and without loss of generality (by Lemma 4, as in the proof of Theorem 1), we assume that B is a strictly row diagonally dominant matrix.

(i) First, we assume that A and B are irreducible matrices. Since A is irreducible, so is the nonnegative matrix \(\mathcal{J}_{A^{T}}\). Hence there exists a positive vector \(y=(y_{i})\) such that \(\mathcal{J}_{A^{T}}y=\rho(\mathcal{J}_{A^{T}})y=\rho(\mathcal{J}_{A})y\); thus, componentwise, we obtain

$$\begin{aligned}& \sum_{k\neq i} a_{ki}y_{k}=\rho( \mathcal{J}_{A}) d_{ii}y_{i}, \\& \sum _{k\neq j} a_{kj}y_{k}=\rho( \mathcal{J}_{A}) d_{jj}y_{j}. \end{aligned}$$

Let \(\widehat{A}=(\hat{a}_{ij})=YAY^{-1}\), where \(Y=\operatorname{diag}(y_{1}, y_{2}, \ldots, y_{n})\) is the positive diagonal matrix built from this vector. Then we have

$$ \widehat{A}=(\hat{a}_{ij})=YAY^{-1} = \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} a_{11}&\frac{ a_{12}y_{1} }{ y_{2} }& \cdots& \frac{ a_{1n}y_{1} }{ y_{n} }\\ \frac{ a_{21}y_{2} }{ y_{1} }& a_{22} & \cdots& \frac{ a_{2n}y_{2} }{ y_{n} }\\ \vdots& \vdots& \ddots&\vdots\\ \frac{ a_{n1}y_{n} }{ y_{1} }& \frac{ a_{n2}y_{n} }{ y_{2} } & \cdots& a_{nn} \end{array}\displaystyle \right ). $$

From Lemma 3, we get

$$\widehat{A}\circ B^{-1}=\bigl(YAY^{-1}\bigr)\circ B^{-1}=Y\bigl(A\circ B^{-1}\bigr)Y^{-1}. $$

Thus, we obtain \(\rho(\widehat{A}\circ B^{-1})=\rho(A\circ B^{-1})\). Let \(\lambda=\rho(\widehat{A}\circ B^{-1})\); as in the proof of Theorem 1, \(\lambda\geq a_{ii}\beta_{ii}\) (\(\forall i \in N\)). By Lemma 2, applied with the positive weights \(u_{1},\ldots,u_{n}\), there exist \(i_{0}, j_{0}\in N \), \(i_{0}\neq j_{0}\), such that

$$ |\lambda-a_{i_{0}i_{0}}\beta_{i_{0}i_{0}}||\lambda-a_{j_{0}j_{0}} \beta_{j_{0}j_{0}}| \leq \biggl(u_{i_{0}}\sum _{k \neq i_{0}} \frac{1}{u_{k}} \hat{a}_{ki_{0}} \beta_{ki_{0}} \biggr) \biggl(u_{j_{0}}\sum _{k \neq j_{0}} \frac{1}{u_{k}} \hat{a}_{kj_{0}} \beta_{kj_{0}} \biggr). $$

Note that

$$\begin{aligned}& u_{i_{0}}\sum_{k \neq i_{0}} \frac{1}{u_{k}} \hat{a}_{ki_{0}}\beta_{ki_{0}} \leq u_{i_{0}}\sum _{k \neq i_{0}}\frac{1}{u_{k}}\hat{a}_{ki_{0}} u_{ki_{0}} \beta_{i_{0}i_{0}} \leq u_{i_{0}}\beta_{i_{0}i_{0}}\sum _{k \neq i_{0}}\hat{a}_{ki_{0}} = u_{i_{0}} \beta_{i_{0}i_{0}}\rho(\mathcal{J}_{A}) d_{i_{0}i_{0}}, \\& u_{j_{0}}\sum_{k \neq j_{0}} \frac{1}{u_{k}} \hat{a}_{kj_{0}}\beta_{kj_{0}} \leq u_{j_{0}}\sum _{k \neq j_{0}}\frac{1}{u_{k}}\hat{a}_{kj_{0}} u_{kj_{0}} \beta_{j_{0}j_{0}} \leq u_{j_{0}}\beta_{j_{0}j_{0}}\sum _{k \neq j_{0}}\hat{a}_{kj_{0}} = u_{j_{0}} \beta_{j_{0}j_{0}}\rho(\mathcal{J}_{A}) d_{j_{0}j_{0}}. \end{aligned}$$

Hence, combining the last three inequalities, and noting that both factors \(\lambda-a_{i_{0}i_{0}}\beta_{i_{0}i_{0}}\) and \(\lambda-a_{j_{0}j_{0}}\beta_{j_{0}j_{0}}\) are nonnegative, we may solve the resulting quadratic inequality for λ to obtain

$$ \lambda\leq\frac{1}{2} ( a_{i_{0}i_{0}}\beta_{i_{0}i_{0}}+ a_{j_{0}j_{0}}\beta_{j_{0}j_{0}}+\Delta_{i_{0}j_{0}} ), $$

i.e.,

$$ \rho\bigl(A\circ B^{-1}\bigr)\leq\frac{1}{2} ( a_{i_{0}i_{0}} \beta_{i_{0}i_{0}}+ a_{j_{0}j_{0}}\beta_{j_{0}j_{0}}+\Delta_{i_{0}j_{0}} ) \leq\frac{1}{2}\max_{i\neq j } \{ a_{ii} \beta_{ii}+ a_{jj}\beta_{jj}+\Delta_{ij} \} , $$

where \(\Delta_{ij}=[(a_{ii}\beta_{ii}-a_{jj}\beta_{jj})^{2} +4u_{i}u_{j}\rho^{2}(\mathcal{J}_{A})d_{ii}d_{jj}\beta_{ii}\beta_{jj}]^{\frac{1}{2}}\).

(ii) Now, assume that one of A and B is reducible. We substitute \(A-\varepsilon T\) and \(B+\varepsilon T\) for A and B, respectively, in the previous case (as in the proof of Theorem 1), and then letting \(\varepsilon\rightarrow0\), the result follows by continuity. □
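As with (4), the bound (6) is directly computable; a sketch under the same assumptions and helper names as before:

```python
import numpy as np
# assumes u_vector and script_J from the earlier sketches are in scope

def bound_6(A, B):
    """Right-hand side of (6): an upper bound for rho(A o B^{-1})."""
    n = B.shape[0]
    u = u_vector(B)
    beta = np.diag(np.linalg.inv(B))
    a = np.diag(A).astype(float)
    d = np.where(a == 0, 1.0, a)
    rho_J = max(abs(np.linalg.eigvals(script_J(A))))
    worst = max(a[i]*beta[i] + a[j]*beta[j]
                + ((a[i]*beta[i] - a[j]*beta[j])**2
                   + 4*u[i]*u[j]*rho_J**2*d[i]*d[j]*beta[i]*beta[j])**0.5
                for i in range(n) for j in range(n) if i != j)
    return 0.5 * worst
```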

Theorem 4

Let \(B=(b_{ij})\in M_{n}\) and \(B^{-1}=(\beta_{ij})\). Then

$$ \tau( B)\geq\frac{2}{ \max_{i\neq j } \{ \beta_{ii}+ \beta_{jj}+\Delta_{ij} \} }, $$
(7)

where \(\Delta_{ij}=[(\beta_{ii}-\beta_{jj})^{2} +4(n-1)^{2} u_{i}u_{j}\beta_{ii}\beta_{jj}]^{\frac{1}{2}}\).

Proof

Take A in (6) to be the matrix of all ones. Then \(A\circ B^{-1}=B^{-1}\), \(d_{ii}=1\), and

$$ a_{ii}=1\quad (\forall i\in N), \qquad \rho(\mathcal{J}_{A})=n-1, \qquad \Delta_{ij}=\bigl[(\beta_{ii}-\beta_{jj})^{2} +4(n-1)^{2} u_{i}u_{j}\beta_{ii} \beta_{jj}\bigr]^{\frac{1}{2}}. $$

Therefore, by (6), we have

$$ \tau(B)=\frac{1}{ \rho( B^{-1}) }\geq \frac{2}{ \max_{i\neq j } \{ \beta_{ii}+ \beta_{jj}+\Delta_{ij} \} }. $$

The proof is completed. □
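Bound (7) specializes the previous sketch to the all-ones A (same assumptions):

```python
import numpy as np
# assumes u_vector from the notation sketch is in scope

def bound_7(B):
    """Right-hand side of (7): a lower bound for tau(B)."""
    n = B.shape[0]
    u = u_vector(B)
    beta = np.diag(np.linalg.inv(B))
    worst = max(beta[i] + beta[j]
                + ((beta[i] - beta[j])**2
                   + 4*(n - 1)**2*u[i]*u[j]*beta[i]*beta[j])**0.5
                for i in range(n) for j in range(n) if i != j)
    return 2.0 / worst
```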

Remark 1

We next give a simple comparison between (4) and (6), and between (5) and (7). For convenience and without loss of generality (the expressions below are symmetric in i and j), we assume that, for \(i, j\in N\), \(i\neq j\),

$$ a_{jj}\beta_{jj}+u_{j} d_{jj} \beta_{jj}\rho(\mathcal{J}_{A})\leq a_{ii} \beta_{ii}+u_{i} d_{ii} \beta_{ii}\rho( \mathcal{J}_{A}), $$

i.e.,

$$ u_{j} d_{jj} \beta_{jj}\rho(\mathcal{J}_{A}) \leq a_{ii}\beta_{ii}-a_{jj}\beta_{jj}+u_{i} d_{ii} \beta_{ii}\rho(\mathcal{J}_{A}). $$

Hence,

$$\begin{aligned} \Delta_{ij} =&\bigl[(a_{ii}\beta_{ii}-a_{jj} \beta_{jj})^{2} +4u_{i}u_{j} \rho^{2}(\mathcal{J}_{A})d_{ii}d_{jj} \beta_{ii}\beta_{jj}\bigr]^{\frac {1}{2}} \\ \leq&\bigl[(a_{ii}\beta_{ii}-a_{jj} \beta_{jj})^{2} +4u_{i}\rho(\mathcal{J}_{A})d_{ii} \beta_{ii}\bigl( a_{ii}\beta_{ii}-a_{jj} \beta _{jj}+u_{i} d_{ii} \beta_{ii}\rho( \mathcal{J}_{A}) \bigr)\bigr]^{\frac{1}{2}} \\ =&a_{ii}\beta_{ii}-a_{jj}\beta_{jj}+2u_{i} d_{ii} \beta_{ii}\rho(\mathcal{J}_{A}). \end{aligned}$$

Further, we obtain

$$ a_{ii}\beta_{ii}+a_{jj}\beta_{jj}+ \Delta_{ij}\leq 2a_{ii}\beta_{ii}+2u_{i} d_{ii} \beta_{ii}\rho(\mathcal{J}_{A}), $$

i.e.,

$$ \rho\bigl(A\circ B^{-1}\bigr)\leq\frac{1}{2} \max _{i\neq j } \{ a_{ii}\beta_{ii}+ a_{jj} \beta_{jj}+\Delta_{ij} \} \leq \max_{1\leq i \leq n } \bigl\{ \bigl(a_{ii}+u_{i} \rho(\mathcal{J}_{A})d_{ii} \bigr)\beta_{ii} \bigr\} . $$

So the bound in (6) is sharper than the bound in (4). Similarly, one can show that the bound in (7) is sharper than the bound in (5).

3 Numerical examples

In this section, we present numerical examples to illustrate the advantages of our derived results.

Example 1

Let

$$ B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1.1 &-0.6 & -0.1 \\ -0.3 &1 & -0.6 \\ -0.2 &-0.4 &0.7 \end{array}\displaystyle \right ). $$

It is easy to see that B is an M-matrix. By calculations with Matlab 7.1, we have

$$\begin{aligned}& \tau(B)\geq0.10000000 \quad (\mbox{by (1)}), \qquad \tau(B)\geq 0.11396723 \quad (\mbox{by (2)}), \\& \tau(B)\geq 0.11582163 \quad (\mbox{by (3)}), \qquad \tau(B)\geq 0.11834016 \quad (\mbox{by (5)}), \\& \tau(B)\geq 0.13163534 \quad (\mbox{by (7)}), \end{aligned}$$

respectively. In fact, \(\tau(B)= 0.16213193\). It is obvious that the bound in (7) is the best result.
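The comparison above can be rerun with the sketches from the earlier sections (our helper names; we use NumPy where the paper used Matlab, so the digits may differ in the last places):

```python
import numpy as np
# assumes bound_2, bound_3, bound_5 and bound_7 from the earlier sketches

B = np.array([[ 1.1, -0.6, -0.1],
              [-0.3,  1.0, -0.6],
              [-0.2, -0.4,  0.7]])

tau = min(abs(np.linalg.eigvals(B)))       # reported above as 0.16213193
for f in (bound_2, bound_3, bound_5, bound_7):
    print(f.__name__, f(B))                # each value should be <= tau
```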

Example 2

Let

$$ B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 &-0.2 & -0.1 &-0.2 &-0.1 \\ -0.4 &1 & -0.2 &-0.1 &-0.1 \\ -0.3 &-0.2 &1 &-0.1 &-0.1 \\ -0.2 &-0.3 &-0.3 & 1 &-0.1 \\ -0.1 &-0.3 &-0.2 &-0.2 &1 \end{array}\displaystyle \right ). $$

It is easy to see that B is an M-matrix. By calculations with Matlab 7.1, we have

$$\begin{aligned}& \tau(B)\geq0.10000000 \quad (\mbox{by (1)}),\qquad \tau(B)\geq 0.16082517 \quad (\mbox{by (2)}), \\& \tau(B) \geq 0.16831778 \quad (\mbox{by (3)}), \qquad \tau(B)\geq 0.18147932 \quad (\mbox{by (5)}), \\& \tau(B)\geq 0.19169108 \quad (\mbox{by (7)}), \end{aligned}$$

respectively. In fact, \(\tau(B)= 0.25807710\). It is obvious that the bound in (7) is the best result.