1 Introduction

Throughout this paper, for a positive integer n, N denotes the set \(\{1, 2, \ldots, n\}\), and \(\mathbb{R}^{n\times n}\) (\(\mathbb{C}^{n\times n}\)) denotes the set of all \({n\times n}\) real (complex) matrices.

A matrix \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\) is called a nonsingular M-matrix if \(a_{ij}\leq0\) for all \(i,j\in N\) with \(i\neq j\), A is nonsingular, and \(A^{-1}\geq0\) (see [1]). Denote by \(M_{n}\) the set of all \(n\times n\) nonsingular M-matrices.

If A is a nonsingular M-matrix, then there exists a positive eigenvalue of A equal to \(\tau(A)\equiv[\rho(A^{-1})]^{-1}\), where \(\rho(A^{-1})\) is the Perron eigenvalue of the nonnegative matrix \(A^{-1}\). It is easy to prove that \(\tau(A)=\min\{|\lambda|:\lambda\in\sigma(A)\}\), where \(\sigma(A)\) denotes the spectrum of A (see [2]).
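
As an illustration of these two characterizations, the following minimal numerical sketch (with a hypothetical 3×3 test matrix, not taken from this paper) computes \([\rho(A^{-1})]^{-1}\) and \(\min\{|\lambda|:\lambda\in\sigma(A)\}\) and confirms that they coincide.

```python
import numpy as np

A = np.array([[ 3.0, -1.0, -1.0],   # a hypothetical 3 x 3 nonsingular M-matrix
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  3.0]])
Ainv = np.linalg.inv(A)
tau_from_inverse  = 1.0 / np.abs(np.linalg.eigvals(Ainv)).max()  # [rho(A^{-1})]^{-1}
tau_from_spectrum = np.abs(np.linalg.eigvals(A)).min()           # min{|lambda| : lambda in sigma(A)}
print(tau_from_inverse, tau_from_spectrum)                       # both equal 1 for this A (up to rounding)
```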

A matrix A is called reducible if there exists a nonempty proper subset \(I\subset N\) such that \(a_{ij}=0\), \(\forall i \in I\), \(\forall j\notin I\). If A is not reducible, then we call A irreducible (see [3]).

For two real matrices \(A=[a_{ij}]\) and \(B=[b_{ij}]\) of the same size, the Hadamard product of A and B is defined as the matrix \(A\circ B=[a_{ij}b_{ij}]\). If A and B are two nonsingular M-matrices, then it was proved in [4] that \(A\circ B^{-1}\) is also a nonsingular M-matrix.

Let \(A=[a_{ij}]\in M_{n}\). For \(i,j,k\in N\), \(j\neq i\), denote

$$\begin{aligned}& d_{i}=\frac{\sum_{j\neq i}|a_{ij}|}{|a_{ii}|},\qquad s_{ji}=\frac{|a_{ji}|+\sum_{k \neq j,i} |a_{jk}|d_{k}}{|a_{jj}|}, \qquad s_{i}=\max_{j\neq i } \{s_{ij}\}; \\& r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum_{k \neq j, i}|a_{jk}|},\qquad r_{i}=\max _{j\neq{i}}\{r_{ji}\},\qquad m_{ji}= \frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|r_{i}}{|a_{jj}|}. \end{aligned}$$
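
To make these definitions concrete, the following sketch (a hypothetical numpy script; the 3×3 test matrix is not from this paper) computes \(d_{i}\), \(s_{ji}\), \(s_{i}\), \(r_{ji}\), \(r_{i}\), and \(m_{ji}\); the array entry s[j, i] stands for \(s_{ji}\), and so on.

```python
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],    # hypothetical strictly row diagonally dominant M-matrix
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
n = A.shape[0]
absA = np.abs(A)
diag = np.abs(np.diag(A))
off = ~np.eye(n, dtype=bool)

# d_i = sum_{j != i} |a_ij| / |a_ii|
d = (absA.sum(axis=1) - diag) / diag

# s[j, i] = (|a_ji| + sum_{k != j,i} |a_jk| d_k) / |a_jj|
# r[j, i] = |a_ji| / (|a_jj| - sum_{k != j,i} |a_jk|)
s = np.zeros((n, n))
r = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i == j:
            continue
        ks = [k for k in range(n) if k not in (i, j)]
        s[j, i] = (absA[j, i] + sum(absA[j, k] * d[k] for k in ks)) / diag[j]
        r[j, i] = absA[j, i] / (diag[j] - sum(absA[j, k] for k in ks))

s_i = np.array([s[i, off[i]].max() for i in range(n)])        # s_i = max_{j != i} s_ij
r_i = np.array([r[off[:, i], i].max() for i in range(n)])     # r_i = max_{j != i} r_ji

# m[j, i] = (|a_ji| + sum_{k != j,i} |a_jk| * r_i) / |a_jj|
m = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i == j:
            continue
        ks = [k for k in range(n) if k not in (i, j)]
        m[j, i] = (absA[j, i] + sum(absA[j, k] for k in ks) * r_i[i]) / diag[j]

print("d   =", d)
print("s_i =", s_i)
print("r_i =", r_i)
```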

In 2015, Chen [5] gave the following result: Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\) be a doubly stochastic matrix. Then

$$\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\min_{i\neq j} \frac{1}{2} \biggl\{ \alpha _{ii}a_{ii}+ \alpha_{jj}a_{jj} \\ &{}- \biggl[(\alpha_{ii}a_{ii}- \alpha _{jj}a_{jj})^{2}+4 \biggl(u_{i} \sum_{k\neq i}|a_{ki}|\alpha_{ii} \biggr) \biggl(u_{j}\sum_{k\neq j}|a_{kj}| \alpha_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} , \end{aligned}$$
(1)

where

$$u_{ji}=\frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|s_{ki}}{|a_{jj}|},\qquad u_{i}=\max _{j\neq i } \{u_{ij}\}. $$

Soon after, Zhao et al. [6] obtained the following result: Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr)\geq\min_{i\in{N}} \biggl\{ \frac{b_{ii}-p^{(t)}_{i}\sum_{j\neq i}|b_{ji}|}{a_{ii} } \biggr\} , \end{aligned}$$
(2)

where

$$\begin{aligned}& q_{ji}=\min\{s_{ji},m_{ji}\},\qquad h_{i}=\max _{j\neq{i}} \biggl\{ \frac {|a_{ji}|}{|a_{jj}|q_{ji}-\sum_{k\neq{j,i}}|a_{jk}|q_{ki}} \biggr\} ,\\& v^{(0)}_{ji}= \frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|q_{ki}h_{i}}{|a_{jj}|}, \qquad p_{ji}^{(t)}=\frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|v_{ki}^{(t-1)}}{|a_{jj}|},\qquad p^{(t)}_{i}= \max_{j\neq{i}}\bigl\{ p^{(t)}_{ij}\bigr\} ,\\& h^{(t)}_{i}=\max_{j\neq{i}} \biggl\{ \frac {|a_{ji}|}{|a_{jj}|p^{(t)}_{ji}-\sum_{k\neq {j,i}}|a_{jk}|p^{(t)}_{ki}} \biggr\} , \qquad v^{(t)}_{ji}=\frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|p^{(t)}_{ki}h^{(t)}_{i}}{|a_{jj}|}. \end{aligned}$$
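
The following self-contained sketch (a hypothetical helper, not code from [6]) implements this iteration and evaluates the lower bound (2). It assumes that A is an irreducible, strictly row diagonally dominant M-matrix, so that the denominators appearing in \(h_{i}\) and \(h^{(t)}_{i}\) are positive; the 3×3 test matrix in the usage line is again hypothetical.

```python
import numpy as np

def p_t_and_bound(A, B, t_max):
    """Iterate q_ji, h_i, v_ji^(t), p_ji^(t) as defined above and return the lower
    bound (2) for tau(B o A^{-1}), the matrix p^(t)_ji, and the vector p^(t)_i."""
    assert t_max >= 1
    n = A.shape[0]
    absA, dA = np.abs(A), np.abs(np.diag(A))
    off = ~np.eye(n, dtype=bool)
    d = (absA.sum(axis=1) - dA) / dA                     # d_k

    def row_sum(w, j, i):                                # sum_{k != j,i} |a_jk| * w_k
        return sum(absA[j, k] * w[k] for k in range(n) if k not in (i, j))

    s = np.zeros((n, n)); r = np.zeros((n, n)); m = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                s[j, i] = (absA[j, i] + row_sum(d, j, i)) / dA[j]
                r[j, i] = absA[j, i] / (dA[j] - row_sum(np.ones(n), j, i))
    r_i = np.array([r[off[:, i], i].max() for i in range(n)])
    for j in range(n):
        for i in range(n):
            if i != j:
                m[j, i] = (absA[j, i] + row_sum(np.ones(n), j, i) * r_i[i]) / dA[j]

    q = np.minimum(s, m)                                 # q_ji = min{s_ji, m_ji}
    h = np.array([max(absA[j, i] / (dA[j] * q[j, i] - row_sum(q[:, i], j, i))
                      for j in range(n) if j != i) for i in range(n)])
    v = np.zeros((n, n))                                 # v^(0)_ji
    for j in range(n):
        for i in range(n):
            if i != j:
                v[j, i] = (absA[j, i] + row_sum(q[:, i], j, i) * h[i]) / dA[j]

    for _ in range(t_max):
        p = np.zeros((n, n))                             # p^(t)_ji
        for j in range(n):
            for i in range(n):
                if i != j:
                    p[j, i] = (absA[j, i] + row_sum(v[:, i], j, i)) / dA[j]
        h_t = np.array([max(absA[j, i] / (dA[j] * p[j, i] - row_sum(p[:, i], j, i))
                            for j in range(n) if j != i) for i in range(n)])
        for j in range(n):
            for i in range(n):
                if i != j:                               # v^(t)_ji
                    v[j, i] = (absA[j, i] + row_sum(p[:, i], j, i) * h_t[i]) / dA[j]

    p_i = np.array([p[i, off[i]].max() for i in range(n)])       # p^(t)_i = max_{j != i} p^(t)_ij
    col_B = np.abs(B).sum(axis=0) - np.abs(np.diag(B))           # sum_{j != i} |b_ji|
    bound = min((B[i, i] - p_i[i] * col_B[i]) / A[i, i] for i in range(n))
    return bound, p, p_i

# Hypothetical usage with B = A and t = 2:
A = np.array([[4.0, -1.0, -1.0], [-2.0, 5.0, -1.0], [-1.0, -1.0, 3.0]])
print(p_t_and_bound(A, A, t_max=2)[0])
```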

In this paper, we present some new convergent sequences of lower bounds for \(\tau(B\circ A^{-1})\) and \(\tau(A\circ A^{-1})\) that improve (1) and (2). Numerical examples show that these sequences can attain the true value of \(\tau(A\circ A^{-1})\) in some cases.

2 Some lemmas

In this section, we give several lemmas that will be used in the proofs of the main results.

Lemma 1

[6]

If \(A=[a_{ij}]\in M_{n}\) is strictly row diagonally dominant, then \(A^{-1}=[\alpha_{ij}]\) exists, and for all \(i,j\in{N}\), \(j\neq{i}\), \(t=1,2,\ldots\) ,

$$\begin{aligned}& (\mathrm{a})\quad 1>q_{ji}\geq{v^{(0)}_{ji}} \geq{p^{(1)}_{ji}}\geq {v^{(1)}_{ji}} \geq{p^{(2)}_{ji}}\geq{v^{(2)}_{ji}}\geq\cdots \geq {p^{(t)}_{ji}}\geq{v^{(t)}_{ji}}\geq \ldots\geq0; \\& (\mathrm{b})\quad {1}\geq{h_{i}}\geq{0}, \qquad {1} \geq{h^{(t)}_{i}}\geq{0}; \\& (\mathrm{c})\quad \alpha_{ji}\leq p_{ji}^{(t)} \alpha_{ii}; \\& (\mathrm{d})\quad \frac{1}{a_{ii}}\leq\alpha_{ii}. \end{aligned}$$

Lemma 2

[6]

If \(A\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\) is a doubly stochastic matrix, then

$$\alpha_{ii} \geq\frac{1 }{1+\sum_{j \neq i} p^{(t)}_{ji}}, \quad i,j \in N,t=1,2,\ldots. $$

Lemma 3

[7]

If \(A^{-1}\) is a doubly stochastic matrix, then \(A^{T}e=e\), \(Ae=e\), where \(e=(1, 1, \ldots, 1)^{T}\).

Lemma 4

[8]

Let \(A=[a_{ij}]\in\mathbb{C}^{n\times n}\) and \(x_{1}, x_{2}, \ldots, x_{n}\) be positive real numbers. Then all the eigenvalues of A lie in the region

$$\mathop{\bigcup_{i,j=1}}_{i\neq j}^{n} \biggl\{ z\in\mathbb{C} :|z-a_{ii}||z-a_{jj}| \leq \biggl(x_{i}\sum_{k \neq i} \frac{1}{x_{k}}|a_{ki}| \biggr) \biggl(x_{j}\sum _{k \neq j} \frac{1}{x_{k}}|a_{kj}| \biggr) \biggr\} . $$
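
A quick numerical sanity check of Lemma 4 (a hypothetical random test, not taken from [8]): for a random complex matrix and arbitrary positive weights \(x_{1},\ldots,x_{n}\), every eigenvalue should satisfy the displayed inequality for at least one pair \(i\neq j\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.uniform(0.5, 2.0, n)                # arbitrary positive weights x_1, ..., x_n

# c_i = x_i * sum_{k != i} |a_ki| / x_k  (the weighted off-diagonal column sums)
absA = np.abs(A)
c = np.array([x[i] * sum(absA[k, i] / x[k] for k in range(n) if k != i) for i in range(n)])

for z in np.linalg.eigvals(A):
    ok = any(abs(z - A[i, i]) * abs(z - A[j, j]) <= c[i] * c[j] + 1e-12
             for i in range(n) for j in range(n) if i != j)
    print(ok)                               # each line should print True
```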

3 Main results

In this section, we give several convergent sequences for \(\tau(B\circ{A^{-1}})\) and \(\tau(A\circ{A^{-1}})\).

Theorem 1

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr) \geq&\min _{i\neq j}\frac{1}{2} \biggl\{ \alpha _{ii}b_{ii}+ \alpha_{jj}b_{jj} \\ &{}- \biggl[ (\alpha_{ii}b_{ii}- \alpha _{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} =\Omega_{t}. \end{aligned}$$
(3)

Proof

It is evident that the result holds with equality for \(n=1\).

We next assume that \(n\geq2\).

Since \(A\in M_{n}\), there exists a positive diagonal matrix D such that \(D^{-1}AD\) is a strictly row diagonally dominant M-matrix, and

$$\tau\bigl(B\circ A^{-1}\bigr)=\tau\bigl(D^{-1}\bigl(B\circ A^{-1}\bigr)D\bigr)=\tau\bigl(B\circ \bigl(D^{-1}AD \bigr)^{-1}\bigr). $$

Therefore, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.

(a) First, we assume that A and B are irreducible matrices. Since A is irreducible, \(0< p_{i}^{(t)}<1\), for any \(i\in N\). Let \(\tau(B\circ A^{-1})=\lambda\). Since λ is an eigenvalue of \(B\circ A^{-1}\), \(0<\lambda< b_{ii}\alpha_{ii}\). By Lemma 1 and Lemma 4, there is a pair \((i,j)\) of positive integers with \(i\neq j\) such that

$$\begin{aligned} |\lambda-b_{ii}\alpha_{ii}|| \lambda-b_{jj}\alpha_{jj}| \leq& \biggl(p_{i}^{(t)} \sum_{k\neq i}\frac{1}{p_{k}^{(t)}}|b_{ki} \alpha_{ki}| \biggr) \biggl(p_{j}^{(t)}\sum _{k\neq j}\frac{1}{p_{k}^{(t)}}|b_{kj}\alpha_{kj}| \biggr) \\ \leq& \biggl(p_{i}^{(t)}\sum _{k\neq i}\frac{1}{p_{k}^{(t)}}\bigl|b_{ki}p_{ki}^{(t)} \alpha_{ii}\bigr| \biggr) \biggl(p_{j}^{(t)}\sum _{k\neq j}\frac{1}{p_{k}^{(t)}}\bigl|b_{kj}p_{kj}^{(t)} \alpha_{jj}\bigr| \biggr) \\ \leq& \biggl(p_{i}^{(t)}\sum_{k\neq i} \frac{1}{p_{k}^{(t)}}\bigl|b_{ki}p_{k}^{(t)} \alpha_{ii}\bigr| \biggr) \biggl(p_{j}^{(t)}\sum _{k\neq j}\frac{1}{p_{k}^{(t)}}\bigl|b_{kj}p_{k}^{(t)} \alpha_{jj}\bigr| \biggr) \\ =& \biggl( p_{i}^{(t)}\alpha_{ii}\sum _{k\neq i}|b_{ki}| \biggr) \biggl( p_{j}^{(t)} \alpha_{jj}\sum_{k\neq j}|b_{kj}| \biggr). \end{aligned}$$
(4)

From inequality (4), we have

$$\begin{aligned} (\lambda-b_{ii}\alpha_{ii}) ( \lambda-b_{jj}\alpha_{jj})\leq \biggl( p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl( p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|b_{kj}| \biggr). \end{aligned}$$
(5)

Solving the quadratic inequality (5) for λ, we obtain

$$\lambda\geq\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj}- \biggl[ (\alpha_{ii}b_{ii}- \alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)} \alpha _{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} . $$

That is,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr) \geq& \frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+\alpha _{jj}b_{jj}\\ &{}- \biggl[ (\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)}\alpha_{ii}\sum _{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)} \alpha_{jj}\sum_{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ \geq& \min_{i\neq j}\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}\\ &{}- \biggl[ (\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)}\alpha _{ii}\sum _{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)} \alpha_{jj}\sum_{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} . \end{aligned}$$

(b) Now assume that at least one of A and B is reducible. It is well known that a matrix in \(Z_{n}=\{A=[a_{ij}]\in\mathbb{R}^{n\times n}:a_{ij}\leq0,i\neq{j}\}\) is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by \(C=[c_{ij}]\) the \(n\times n\) permutation matrix with \(c_{12}=c_{23}=\cdots=c_{n-1,n}=c_{n1}=1\) and all other entries zero. Then both \(A-{\varepsilon}C\) and \(B-{\varepsilon}C\) are irreducible nonsingular M-matrices for any sufficiently small positive real number ε such that all the leading principal minors of \(A-{\varepsilon}C\) and \(B-{\varepsilon}C\) remain positive. Substituting \(A-{\varepsilon}C\) and \(B-{\varepsilon}C\) for A and B in the previous case and letting \({\varepsilon}\rightarrow0\), the result follows by continuity. □
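
The bound \(\Omega_{t}\) is easily evaluated numerically. The sketch below is a hypothetical helper (not part of the paper); it takes the vector of \(p^{(t)}_{i}\) as input, which can be produced, for instance, by the p_t_and_bound sketch given after the definitions in Section 1.

```python
import numpy as np

def omega_t(A, B, p_i):
    """Evaluate the right-hand side Omega_t of (3).  p_i is the vector of p^(t)_i
    (for example the third output of the p_t_and_bound sketch in Section 1)."""
    n = A.shape[0]
    alpha = np.diag(np.linalg.inv(A))                        # alpha_ii
    col_B = np.abs(B).sum(axis=0) - np.abs(np.diag(B))       # sum_{k != i} |b_ki|
    best = np.inf
    for i in range(n):
        for j in range(n):
            if i != j:
                ai, aj = alpha[i] * B[i, i], alpha[j] * B[j, j]
                prod = 4 * (p_i[i] * alpha[i] * col_B[i]) * (p_i[j] * alpha[j] * col_B[j])
                best = min(best, 0.5 * (ai + aj - np.sqrt((ai - aj) ** 2 + prod)))
    return best
```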

Theorem 2

The sequence \(\{\Omega_{t}\}\), \(t=1,2,\ldots\), obtained from Theorem 1 is monotonically increasing, bounded above by \(\tau(B\circ A^{-1})\), and therefore convergent.

Proof

By Lemma 1, we have \(p^{(t)}_{ji}\geq p^{(t+1)}_{ji}\geq 0\), \(t=1,2,\ldots\), so, by the definition of \(p^{(t)}_{i}\), the sequence \(\{p^{(t)}_{i}\}\) is monotonically decreasing for each \(i\in N\). Consequently, the products \(4 (p_{i}^{(t)}\alpha_{ii}\sum_{k\neq i}|b_{ki}| ) (p_{j}^{(t)}\alpha_{jj}\sum_{k\neq j}|b_{kj}| )\) under the square root in (3) do not increase with t, so each bracketed expression in (3), and hence \(\Omega_{t}\), is monotonically increasing in t. By Theorem 1, \(\Omega_{t}\leq\tau(B\circ A^{-1})\) for all t, so the sequence \(\{\Omega_{t}\}\) is bounded above and therefore convergent. □

Next, we give the following comparison theorem for (2) and (3).

Theorem 3

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\tau\bigl(B\circ A^{-1}\bigr)\geq\Omega_{t}\geq\min _{i\in{N}} \biggl\{ \frac {b_{ii}-p^{(t)}_{i}\sum_{j\neq i}|b_{ji}|}{a_{ii} } \biggr\} . $$

Proof

For any \(i\neq j\), assume without loss of generality (interchanging the roles of i and j if necessary) that

$$\begin{aligned} \alpha_{ii}b_{ii}-p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}|\leq \alpha_{jj}b_{jj}-p_{j}^{(t)} \alpha_{jj}\sum_{k\neq j}|b_{kj}|. \end{aligned}$$
(6)

Thus, (6) is equivalent to

$$\begin{aligned} p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|b_{kj}|\leq\alpha_{jj}b_{jj}- \alpha_{ii}b_{ii}+p_{i}^{(t)}\alpha _{ii}\sum_{k\neq i}|b_{ki}|. \end{aligned}$$
(7)

From (6), (7), and Lemma 1, we have

$$\begin{aligned} &\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj}- \biggl[ (\alpha_{ii}b_{ii}- \alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)} \alpha _{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad\geq \frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj} \\ &\qquad{} -\biggl[ (\alpha _{ii}b_{ii}- \alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl(\alpha_{jj}b_{jj}-\alpha_{ii}b_{ii}+p_{i}^{(t)} \alpha _{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad=\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj} \\ &\qquad{}- \biggl[ (\alpha_{ii}b_{ii}- \alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)} \alpha _{ii}\sum_{k\neq i}|b_{ki}| \biggr) (\alpha_{jj}b_{jj}-\alpha_{ii}b_{ii} )+4 \biggl(p_{i}^{(t)}\alpha_{ii}\sum _{k\neq i}|b_{ki}| \biggr)^{2} \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad=\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj}- \biggl[ \biggl(\alpha_{jj}b_{jj}- \alpha_{ii}b_{ii}+2p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr)^{2} \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad=\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+ \alpha_{jj}b_{jj}- \biggl(\alpha _{jj}b_{jj}- \alpha_{ii}b_{ii}+2p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggr\} \\ &\quad=\alpha_{ii}b_{ii}-p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|b_{ki}| \\ &\quad=\alpha_{ii} \biggl(b_{ii}-p_{i}^{(t)} \sum_{k\neq i}|b_{ki}| \biggr) \\ &\quad\geq\frac{ b_{ii}-p_{i}^{(t)}\sum_{k\neq i}|b_{ki}|}{a_{ii}}. \end{aligned}$$
(8)

Thus we have

$$\begin{aligned} \Omega_{t} =&\min_{i\neq j}\frac{1}{2} \biggl\{ \alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}\\ &{}- \biggl[ (\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj} )^{2}+4 \biggl(p_{i}^{(t)}\alpha_{ii}\sum_{k\neq i}|b_{ki}| \biggr) \biggl(p_{j}^{(t)}\alpha_{jj}\sum_{k\neq j}|b_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ \geq&\min_{i\in{N}} \biggl\{ \frac{ b_{ii}-p_{i}^{(t)}\sum_{j\neq i}|b_{ji}|}{a_{ii}} \biggr\} . \end{aligned}$$

The proof is completed. □

Applying Lemma 2 to (8), we immediately obtain the following corollary.

Corollary 1

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,

$$\tau\bigl(B\circ A^{-1}\bigr)\geq\Omega_{t}\geq\min _{i\in{N}} \biggl\{ \frac {b_{ii}-p^{(t)}_{i}\sum_{j\neq i}|b_{ji}|}{1+\sum_{j\neq i}p_{ji}^{(t)} } \biggr\} . $$

Remark 1

Theorem 3 and Corollary 1 show that the bound in (3) is sharper than the bound in (2) and the bound in Corollary 1 of [6].

Taking \(B=A\) in Theorem 1 and Corollary 1, we obtain the following corollaries, respectively.

Corollary 2

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\min _{i\neq j}\frac{1}{2} \biggl\{ \alpha _{ii}a_{ii}+ \alpha_{jj}a_{jj} \\ &{}- \biggl[(\alpha_{ii}a_{ii}- \alpha _{jj}a_{jj})^{2}+4 \biggl(p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}|a_{ki}| \biggr) \biggl(p_{j}^{(t)}\alpha_{jj}\sum _{k\neq j}|a_{kj}| \biggr) \biggr]^{\frac{1}{2}} \biggr\} =\Gamma_{t}. \end{aligned}$$
(9)

Corollary 3

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha _{ij}]\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,

$$\tau\bigl(A\circ A^{-1}\bigr)\geq\Gamma_{t}\geq\min _{i\in{N}} \biggl\{ \frac {a_{ii}-p^{(t)}_{i}\sum_{j\neq i}|a_{ji}|}{1+\sum_{j\neq i}p_{ji}^{(t)} } \biggr\} . $$

Remark 2

(a) We give a simple comparison between (1) and (9). According to Lemma 1, we know that \(s_{ji}\geq q_{ji}=\min\{{s_{ji},m_{ji}}\}\) and \(1\geq h_{i}\geq0\), so it is easy to see that \(u_{ji}\geq{v^{(0)}_{ji}}\geq p^{(t)}_{ji}\). Furthermore, by the definitions of \(u_{i}\) and \(p^{(t)}_{i}\), we have \(u_{i}\geq p^{(t)}_{i}\). Hence, for \(t=1,2,\ldots\), the bound in (9) is sharper than the bound in (1).

(b) Corollary 3 shows that the bound in Corollary 2 is sharper than the bound in Corollary 2 of [6].

Similarly to the proofs of Theorem 1, Theorem 2, and Theorem 3, we can obtain Theorem 4, Theorem 5, and Theorem 6, respectively.

Theorem 4

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr) \geq&\min_{i\neq j} \frac{1}{2} \biggl\{ \alpha _{ii}b_{ii}+ \alpha_{jj}b_{jj}\\ &{}- \biggl[ (\alpha_{ii}b_{ii}- \alpha _{jj}b_{jj} )^{2}+4 \biggl(s_{i} \alpha_{ii}\sum_{k\neq i}\frac {|b_{ki}|p_{ki}^{(t)}}{s_{k}} \biggr) \biggl(s_{j}\alpha_{jj}\sum _{k\neq j}\frac{|b_{kj}|p_{kj}^{(t)}}{s_{k}} \biggr) \biggr]^{\frac{1}{2}} \biggr\} =\Delta_{t}. \end{aligned}$$
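
As with \(\Omega_{t}\), the bound \(\Delta_{t}\) can be evaluated directly. The sketch below is a hypothetical helper (not part of the paper); it takes the matrix of \(p^{(t)}_{ji}\) and the vector of \(s_{i}\) as input, both computable as in the Section 1 sketches, and assumes \(s_{i}>0\), which holds for irreducible A.

```python
import numpy as np

def delta_t(A, B, p, s_i):
    """Evaluate Delta_t from Theorem 4.  p is the matrix of p^(t)_ji and s_i the
    vector of s_i; both can be computed as in the Section 1 sketches."""
    n = A.shape[0]
    alpha = np.diag(np.linalg.inv(A))                        # alpha_ii
    absB = np.abs(B)
    # c_i = s_i * alpha_ii * sum_{k != i} |b_ki| p^(t)_ki / s_k
    c = np.array([s_i[i] * alpha[i] * sum(absB[k, i] * p[k, i] / s_i[k]
                                          for k in range(n) if k != i) for i in range(n)])
    best = np.inf
    for i in range(n):
        for j in range(n):
            if i != j:
                ai, aj = alpha[i] * B[i, i], alpha[j] * B[j, j]
                best = min(best, 0.5 * (ai + aj - np.sqrt((ai - aj) ** 2 + 4 * c[i] * c[j])))
    return best
```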

Theorem 5

The sequence \(\{\Delta_{t}\}\), \(t=1,2,\ldots\), obtained from Theorem 4 is monotonically increasing, bounded above by \(\tau(B\circ A^{-1})\), and therefore convergent.

Theorem 6

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr)\geq\Delta_{t}\geq\min _{i\in{N}} \biggl\{ \frac {b_{ii}-s_{i}\sum_{j \neq i} \frac{|b_{ji}|p^{(t)}_{ji}}{s_{j}}}{a_{ii}} \biggr\} . \end{aligned}$$

Corollary 4

Let \(A=[a_{ij}],B=[b_{ij}]\in M_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(B\circ A^{-1}\bigr)\geq\Delta_{t}\geq \min _{i\in{N}} \biggl\{ \frac{b_{ii}-s_{i}\sum_{j \neq i} \frac{|b_{ji}|p^{(t)}_{ji}}{s_{j}}}{1+\sum_{j\neq i}p_{ji}^{(t)} } \biggr\} . \end{aligned}$$

Remark 3

Theorem 6 and Corollary 4 show that the bound in Theorem 4 is sharper than the bound in Theorem 3 of [6] and the bound in Corollary 3 of [6].

Taking \(B=A\) in Theorem 4 and Corollary 4, we obtain the following corollaries, respectively.

Corollary 5

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\min_{i\neq j} \frac{1}{2} \biggl\{ \alpha _{ii}a_{ii}+ \alpha_{jj}a_{jj}\\ &{}- \biggl[ (\alpha_{ii}a_{ii}- \alpha _{jj}a_{jj} )^{2}+4 \biggl(s_{i} \alpha_{ii}\sum_{k\neq i}\frac {|a_{ki}|p_{ki}^{(t)}}{s_{k}} \biggr) \biggl(s_{j}\alpha_{jj}\sum _{k\neq j}\frac{|a_{kj}|p_{kj}^{(t)}}{s_{k}} \biggr) \biggr]^{\frac{1}{2}} \biggr\} =\mathrm{T}_{t}. \end{aligned}$$

Corollary 6

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\mathrm{T}_{t}\geq \min _{i\in{N}} \biggl\{ \frac{a_{ii}-s_{i}\sum_{j \neq i} \frac{|a_{ji}|p^{(t)}_{ji}}{s_{j}}}{1+\sum_{j\neq i}p_{ji}^{(t)} } \biggr\} . \end{aligned}$$

Remark 4

Corollary 6 shows that the bound in Corollary 5 is sharper than the bound in Corollary 4 of [6].

Let \(\Upsilon_{t}=\max\{\Gamma_{t},\mathrm{T}_{t}\}\). From Corollary 2 and Corollary 5, the following theorem is immediate.

Theorem 7

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}\) be a doubly stochastic matrix. Then, for \(t=1,2,\ldots\) ,

$$\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\Upsilon_{t}. \end{aligned}$$

4 Numerical examples

In this section, several numerical examples are given to verify the theoretical results.

Example 1

Let

$$A = \begin{bmatrix} 20&-1&-2&-3&-4&-1&-1&-3&-2&-2\\ -1&18&-3&-1&-1&-4&-2&-1&-3&-1\\ -2&-1&10&-1&-1&-1&0&-1&-1&-1\\ -3&-1&0&16&-4&-2&-1&-1&-1&-2\\ -1&-3&0&-2&15&-1&-1&-1&-2&-3\\ -3&-2&-1&-1&-1&12&-2&0&-1&0\\ -1&-3&-1&-1&0&-1&9&0&-1&0\\ -3&-1&-1&-4&-1&0&0&12&0&-1\\ -2&-4&-1&-1&-1&0&-1&-3&14&0\\ -3&-1&0&-1&-1&-1&0&-1&-2&11 \end{bmatrix}. $$

Since \(A\in Z_{n}\), \(Ae=e\), and \(A^{T}e=e\), it is easy to see that A is a nonsingular M-matrix and that \(A^{-1}\) is doubly stochastic. Numerical results are given in Table 1 for the total number of iterations \(T=10\). In fact, \(\tau(A\circ{A^{-1}})=0.9678\).
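
The following short script is a hypothetical verification (not part of the paper): it checks \(Ae=e\) and \(A^{T}e=e\), confirms \(A^{-1}\geq0\), and computes \(\tau(A\circ A^{-1})\) as the smallest eigenvalue modulus of the Hadamard product, to be compared with the value 0.9678 above.

```python
import numpy as np

A = np.array([
    [20, -1, -2, -3, -4, -1, -1, -3, -2, -2],
    [-1, 18, -3, -1, -1, -4, -2, -1, -3, -1],
    [-2, -1, 10, -1, -1, -1,  0, -1, -1, -1],
    [-3, -1,  0, 16, -4, -2, -1, -1, -1, -2],
    [-1, -3,  0, -2, 15, -1, -1, -1, -2, -3],
    [-3, -2, -1, -1, -1, 12, -2,  0, -1,  0],
    [-1, -3, -1, -1,  0, -1,  9,  0, -1,  0],
    [-3, -1, -1, -4, -1,  0,  0, 12,  0, -1],
    [-2, -4, -1, -1, -1,  0, -1, -3, 14,  0],
    [-3, -1,  0, -1, -1, -1,  0, -1, -2, 11],
], dtype=float)

e = np.ones(10)
Ainv = np.linalg.inv(A)
print(np.allclose(A @ e, e), np.allclose(A.T @ e, e))    # Ae = e and A^T e = e
print((Ainv >= -1e-12).all())                            # A^{-1} >= 0, so A is a nonsingular M-matrix
tau = np.abs(np.linalg.eigvals(A * Ainv)).min()          # A o A^{-1} is the entrywise product
print(tau)                                               # compare with 0.9678
```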

Table 1 Lower bounds of \(\pmb{\tau(A\circ{A^{-1}})}\)

Remark 5

The numerical results in Table 1 show that:

  1. (a)

    The lower bounds obtained from Theorem 7 are larger than the corresponding bounds in [4–6, 9, 10].

  2. (b)

    The sequence obtained from Theorem 7 is monotone increasing.

  3. (c)

    The sequence obtained from Theorem 7 effectively approximates the true value of \(\tau(A\circ A^{-1})\), so we can estimate \(\tau(A\circ A^{-1})\) by Theorem 7.

Example 2

Let \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\), where \(a_{11}=a_{22}=\cdots=a_{n,n}=2\), \(a_{12}=a_{23}=\cdots=a_{n-1,n}=a_{n,1}=-1\), and \(a_{ij}=0\) elsewhere.

It is easy to see that A is a nonsingular M-matrix and \(A^{-1}\) is doubly stochastic. Applying Theorem 7 with \(t=1\) gives the lower bounds \(\tau(A\circ{A^{-1}})\geq0.7507\) for \(n=10\) and \(\tau(A\circ{A^{-1}})\geq0.7500\) for \(n=100\). In fact, \(\tau(A\circ{A^{-1}})=0.7507\) for \(n=10\) and \(\tau(A\circ{A^{-1}})=0.7500\) for \(n=100\).
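
The following hypothetical script builds A for \(n=10\) and \(n=100\), computes \(\tau(A\circ A^{-1})\) directly, and compares with the values quoted above.

```python
import numpy as np

def tau_hadamard(n):
    """tau(A o A^{-1}) for the n x n matrix of Example 2."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1)    # 2 on the diagonal, -1 on the superdiagonal
    A[n - 1, 0] = -1.0                      # and a_{n1} = -1
    Ainv = np.linalg.inv(A)
    return np.abs(np.linalg.eigvals(A * Ainv)).min()

print(tau_hadamard(10), tau_hadamard(100))  # compare with 0.7507 and 0.7500
```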

Remark 6

Numerical results in Example 2 show that the lower bound obtained from Theorem 7 can attain the true value of \(\tau(A\circ A^{-1})\) in some cases.

5 Further work

In this paper, we presented a new convergent sequence \(\{\Upsilon_{t}\}\), \(t=1,2,\ldots\), for approximating \(\tau(A\circ A^{-1})\), which is more accurate than the convergent sequence in Theorem 5 of [6]. However, we have not given an error analysis, i.e., an estimate of how closely these bounds approximate \(\tau(A\circ A^{-1})\); at present this is very difficult for the authors, and we will study this problem in future work.