1 Introduction

Let \({\mathbb {C}} ({\mathbb {R}}) \) be the set of all complex (real) numbers, and \({\mathbb {C}}^n ({\mathbb {R}}^n)\) be the set of all n-dimensional complex (real) vectors. An order m dimension n complex (real) tensor \({\mathcal {A}} = (a_{i_1 i_2...i_m}),\) denoted by \({\mathcal {A}} \in {\mathbb {C}}^{[m,n]}\) (\({\mathcal {A}} \in {\mathbb {R}}^{[m,n]},\) respectively), consists of \(n^m\) entries:

$$\begin{aligned} a_{i_1 i_2...i_m} \in {\mathbb {C}}({\mathbb {R}}), \quad \; \forall \; i_j = 1,\ldots ,n , \quad j = 1,\ldots ,m. \end{aligned}$$

A tensor \({\mathcal {A}} = (a_{i_1 i_2...i_m}) \in {\mathbb {R}}^{[m,n]}\) is called nonnegative (positive) if:

$$\begin{aligned} a_{i_1 i_2...i_m} \ge 0 \; (a_{i_1 i_2...i_m}>0 ), \quad \; \forall \; i_j = 1,\ldots ,n , \quad j = 1,\ldots ,m. \end{aligned}$$

Tensors share many similarities with matrices, and many results for matrices, such as those concerning eigenvalues and eigenvectors, can be extended to higher order tensors (e.g., see Qi and Luo 2017; Wei and Ding 2017).

For a vector \(x \in {\mathbb {C}}^n,\) we use \(x_i\) to denote its components and \(x^{[m-1]}\) to denote a vector in \({\mathbb {C}}^n\), such that:

$$\begin{aligned} x_{i}^{[m-1]}=x_{i}^{m-1} \quad {\mathrm{for\, all }} \;i. \end{aligned}$$

\({\mathcal {A}}x^{m-1}\) denotes a vector in \({\mathbb {C}}^n,\) whose ith component is:

$$\begin{aligned} ({\mathcal {A}}x^{m - 1} )_i = \sum \limits _{i_2 ,i_3 ,\ldots ,i_m = 1}^n {a_{ii_2 ...i_m } x_{i_2 } ...x_{i_m } }. \end{aligned}$$

A pair \((\lambda ,x)\in {\mathbb {C}}\times \left( {\mathbb {C}}^n \backslash \left\{ 0 \right\} \right) \) is called an eigenpair (eigenvalue–eigenvector pair) of \({\mathcal {A}}\) if they satisfy:

$$\begin{aligned} {\mathcal {A}}x^{m-1}=\lambda x^{[m-1]}. \end{aligned}$$

Specifically, \( (\lambda , x) \) is called an H-eigenpair if \( (\lambda , x)\in {\mathbb {R}} \times \left( {\mathbb {R}}^n \backslash \left\{ 0 \right\} \right) . \)
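
For readers who wish to experiment numerically, the following minimal NumPy sketch (ours, not part of the original development) evaluates \({\mathcal {A}}x^{m-1}\) by \(m-1\) successive contractions and checks the eigenpair equation for the order 3 dimension 2 tensor that reappears in Example 1 below; the helper name tensor_apply is our own.

```python
import numpy as np

def tensor_apply(A, x):
    """Return the vector A x^{m-1} for a tensor A of shape (n,)*m."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x                      # contract the last index with x
    return y

# Order 3 dimension 2 tensor used later in Example 1: a_111 = a_122 = a_211 = 1, a_222 = -1.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[0, 1, 1] = A[1, 0, 0] = 1.0
A[1, 1, 1] = -1.0

lam = np.sqrt(2.0)                                   # an H-eigenvalue of A (see Example 1)
x = np.array([np.sqrt(np.sqrt(2.0) + 1.0), 1.0])     # chosen so that x_1^2 / x_2^2 = lam + 1

print(np.allclose(tensor_apply(A, x), lam * x**2))   # True: A x^2 = lam x^[2]
```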

With the introduction of tensor eigenvalues, it is of interest to investigate eigenvalue inclusion regions, i.e., regions in the complex plane that contain all eigenvalues. The first such result is due to Qi: Qi (2005) gave an eigenvalue inclusion set for real symmetric tensors, which generalizes the well-known Gershgorin set of matrices (Horn and Johnson 2012). Subsequently, owing to its fundamental applications in various fields, many researchers have investigated eigenvalue inclusion regions for tensors, e.g., Bu et al. (2017), Li and Li (2016a), Li et al. (2014, 2020), Xu et al. (2019), etc.

In 1958, Fan (1958) obtained the famous Ky Fan theorem, which plays an important role in the study of the nonnegative eigenvalue problem. Li and Huang (2005) presented an improvement of the Ky Fan theorem for any weakly irreducible \(n \times n\) complex matrix A. Yang and Yang (2010) generalized the Ky Fan-type theorem from matrices to tensors. Later, Li and Ng (2015) improved it, and recently, Wang et al. (2020) proved some new Ky Fan-type theorems for weakly irreducible nonnegative tensors with a Brualdi-type eigenvalue inclusion set. In this paper, we provide some new improvements of the Ky Fan theorem, which improve the existing bounds.

2 Preliminaries

We begin this section with some definitions and statements that are needed for the main results of our work which are taken from Bu et al. (2017), Li and Li (2016a, 2016b), Li et al. (2014), Qi and Luo (2017), and Yang and Yang (2010).

Definition 1

Let \({\mathcal {A}}\) be a tensor of order m dimension n.

  1. (i)

    We denote by \(\sigma ({\mathcal {A}})\) the set of all eigenvalues of \({\mathcal {A}},\) and the spectral radius of \( {\mathcal {A}}\) is defined by:

    $$\begin{aligned} \rho ({\mathcal {A}}) =\max \left\{ |\lambda | \; : \; \lambda \in \sigma ({\mathcal {A}}) \right\} . \end{aligned}$$
  2. (ii)

    We call a tensor \( {\mathcal {A}}\) reducible if there exists a nonempty proper index subset \( I \subset \langle n \rangle :=\{1,2, ... , n \} \), such that:

    $$\begin{aligned} a_{i_1 i_2 ... i_m }= 0, \quad \forall \; i_1 \in I, \;\;\; i_2, ... , i_m \notin I. \end{aligned}$$

    If \( {\mathcal {A}} \) is not reducible, then we call \( {\mathcal {A}}\) irreducible.

  3. (iii)

    We call a nonnegative tensor \( {\mathcal {A}}\) weakly irreducible if, for any nonempty proper index subset \(I \subset \langle n \rangle ,\) there is at least one entry \( a_{i_1 i_2 ...i_m } > 0\) with \( i_1 \in I \) and \( i_j \notin I \) for some \( j \in \{2,...,m\};\) a computational test based on this condition is sketched after this definition.

  4. (iv)

    We denote by \( \delta _{i_1 i_2 ... i_m} \) the Kronecker symbol with m indices, that is:

    $$\begin{aligned} \delta _{i_1 i_2 ...i_m } = \left\{ {\begin{array}{*{20}l} {1 } &{}\quad {i_1 = i_2 = \cdots = i_m } \\ {0 } &{}\quad {\mathrm{{otherwise. }}} \\ \end{array}} \right. \end{aligned}$$
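
Checking the condition in Definition 1(iii) literally requires examining every nonempty proper subset I. Equivalently (this reformulation is our own remark, not stated in the paper), one can build the digraph with an edge \(i \rightarrow j\) whenever some entry \(a_{i i_2 ... i_m} > 0\) has j among \(i_2,...,i_m,\) and test whether that digraph is strongly connected. A small Python sketch of this test, with helper names of our choosing:

```python
import itertools
import numpy as np

def is_weakly_irreducible(A, tol=0.0):
    """Test Definition 1(iii) for a nonnegative tensor A of shape (n,)*m via strong connectivity."""
    n, m = A.shape[0], A.ndim
    adj = [set() for _ in range(n)]
    for idx in itertools.product(range(n), repeat=m):
        if A[idx] > tol:
            adj[idx[0]].update(idx[1:])          # edge i_1 -> i_j for every trailing index
    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            for j in adj[stack.pop()]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen
    return all(len(reachable(i)) == n for i in range(n))
```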

Let us recall the Perron–Frobenius theorem for irreducible nonnegative tensors given in Yang and Yang (2010).

Theorem 1

(see Theorem 2.2 of Yang and Yang 2010) Suppose that \({\mathcal {A}}\) is an irreducible nonnegative tensor of order m dimension n. Then, \( \rho ({\mathcal {A}})>0 \) is an eigenvalue of \( {\mathcal {A}} \) with a positive eigenvector x corresponding to it.

Remark 1

It is noted that the spectral radius \( \rho ({\mathcal {A}}) \) is the largest H-eigenvalue of a nonnegative tensor (Yang and Yang 2010).

Note that \( \rho ({\mathcal {A}}) \) and x in Theorem 1 are called the Perron root and the Perron vector of \( {\mathcal {A}}, \) respectively, and \( (\rho ({\mathcal {A}}),x ) \) is regarded as a Perron eigenpair.

Definition 2

If \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) is a real tensor of order m dimension n and D is an \(n \times n\) matrix, then the product of \( {\mathcal {A}} \) and D is defined as follows:

$$\begin{aligned} {\mathcal {A}}_D = {\mathcal {A}}\, . \, D^{ - (m - 1)} \, . \, \overbrace{D ... D}^{m - 1}, \end{aligned}$$

where

$$\begin{aligned} {({\mathcal {A}}_D)}_{i_1 i_2 ...i_m } = \sum \limits _{j_1 ,j_2 , ... ,j_m = 1}^n {a_{j_1 ...j_m } D_{i_1 ,j_1 }^{ - (m - 1)} D_{i_2 ,j_2 } ... D_{i_m ,j_m }}. \end{aligned}$$

If D is a diagonal matrix, then:

$$\begin{aligned} {({\mathcal {A}}_D)}_{i_1 i_2 ...i_m } = a_{i_1 ...i_m } D_{i_1 ,i_1 }^{ - (m - 1)} D_{i_2 ,i_2 } ... D_{i_m ,i_m }. \end{aligned}$$

Remark 2

(Yang and Yang 2010) In fact, if:

$$\begin{aligned} {\mathcal {A}}_D = {\mathcal {A}}\, . \, D^{ - (m - 1)} \, . \, \overbrace{D ... D}^{m - 1}, \end{aligned}$$

where D is a nonsingular diagonal matrix, then \({\mathcal {A}}_D \) and \( {\mathcal {A}}\) have the same eigenvalues. If \(\lambda \) is an eigenvalue of \({\mathcal {A}}_D\) with corresponding eigenvector x,  then \(\lambda \) is also an eigenvalue of \({\mathcal {A}}\) with corresponding eigenvector Dx;  if \(\mu \) is an eigenvalue of \({\mathcal {A}}\) with corresponding eigenvector y,  then \(\mu \) is an eigenvalue of \({\mathcal {A}}_D\) with corresponding eigenvector \(D^{-1}y.\)
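
The following short sketch (ours) illustrates Definition 2 and Remark 2 numerically for a diagonal D: it forms \({\mathcal {A}}_D\) entrywise and checks that \((\lambda , D^{-1}x)\) is an eigenpair of \({\mathcal {A}}_D\) whenever \((\lambda , x)\) is an eigenpair of \({\mathcal {A}}.\) The tensor and eigenpair are those of Example 1 below; the diagonal entries of D are an arbitrary choice of ours.

```python
import numpy as np

def diag_scale(A, d):
    """A_D for D = diag(d): (A_D)_{i1...im} = a_{i1...im} d_{i1}^{-(m-1)} d_{i2} ... d_{im}."""
    m = A.ndim
    out = A * (d ** (-(m - 1))).reshape(-1, *([1] * (m - 1)))   # factor d_{i1}^{-(m-1)}
    for k in range(1, m):
        shape = [1] * m
        shape[k] = -1
        out = out * d.reshape(shape)                            # factor d_{ik}, k = 2,...,m
    return out

def tensor_apply(A, x):
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x
    return y

# Check Remark 2 on the Example 1 tensor: (lambda, x) eigenpair of A -> (lambda, D^{-1}x) of A_D.
A = np.zeros((2, 2, 2)); A[0, 0, 0] = A[0, 1, 1] = A[1, 0, 0] = 1.0; A[1, 1, 1] = -1.0
lam = np.sqrt(2.0)
x = np.array([np.sqrt(np.sqrt(2.0) + 1.0), 1.0])
d = np.array([2.0, 3.0])                                        # arbitrary positive diagonal (ours)
AD = diag_scale(A, d)
y = x / d                                                       # D^{-1} x
print(np.allclose(tensor_apply(AD, y), lam * y**2))             # True
```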

In recent years, the spectral theory of tensors has attracted much attention. In 2005, Qi (2005) gave an eigenvalue localization set for real symmetric tensors (see also Qi and Luo 2017), which is a generalization of the well-known Gershgorin eigenvalue localization theorem for matrices (Horn and Johnson 2012; Varga 2004). This result has also been generalized to general tensors (Li et al. 2014; Yang and Yang 2010).

Theorem 2

(Li et al. 2014, Theorem 1.1) Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) be a tensor of order m dimension n. Then:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq {\varGamma }({\mathcal {A}}) =\bigcup \limits _{ i \in \langle n \rangle } {\varGamma _i({\mathcal {A}})}, \end{aligned}$$

where \(\varGamma _i({\mathcal {A}})= \left\{ z \in {\mathbb {C}}:\left| z - a_{i i ...i } \right| \le r_{i } ({\mathcal {A}}) \right\} ,\) and \(r_i ({\mathcal {A}}) = \sum \nolimits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{ii_2 ...i_m } = 0 \end{array}}^n {\left| {a_{ii_2 ...i_m } } \right| }.\)
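
As an illustration (ours, not from the paper), the radii \(r_i({\mathcal {A}})\) in Theorem 2 can be computed by summing \(|a_{i i_2 ... i_m}|\) over all index tuples and subtracting the diagonal entry; \(\varGamma ({\mathcal {A}})\) is then the union of the resulting disks.

```python
import numpy as np

def gershgorin_radii(A):
    """r_i(A): sum of |a_{i i_2 ... i_m}| over all tuples (i_2,...,i_m) except the diagonal one."""
    n, m = A.shape[0], A.ndim
    row_sums = np.abs(A).reshape(n, -1).sum(axis=1)
    diag = np.abs(A[(np.arange(n),) * m])
    return row_sums - diag

# Gamma(A) is the union of the disks |z - a_{ii...i}| <= r_i(A), i = 1,...,n.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[0, 1, 1] = A[1, 0, 0] = 1.0
A[1, 1, 1] = -1.0
print(gershgorin_radii(A))   # [1. 1.]: disks centred at 1 and -1 with radius 1 contain +/- sqrt(2)
```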

The well-known Brauer’s eigenvalue inclusion set of matrices was given in Brauer (1947), which is always contained in the Gershgorin set. Recently, Li et al. (2014) extended the Brauer’s eigenvalue inclusion set of matrices to tensors, and gave the following Brauer-type eigenvalue inclusion set for tensors.

Theorem 3

(Li et al. 2014, Theorem 2.1) Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) be a tensor of order m dimension n. Then:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq {\mathcal {F}}({\mathcal {A}}) = \bigcup \limits _{i,j \in \langle n \rangle , \;j \ne i} {{\mathcal {F}}_{i,j} ({\mathcal {A}})}, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {F}}_{i,j} = \left\{ z \in {\mathbb {C}}: \; \left( \left| {z - a_{ii...i} } \right| - r_i^j ({\mathcal {A}}) \right) \left| {z - a_{jj...j} } \right| \le \left| {a_{ij...j} } \right| r_j ({\mathcal {A}}) \right\} , \end{aligned}$$

and

$$\begin{aligned} r_i^j ({\mathcal {A}}) := \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{ii_2 ...i_m } = 0 \\ \scriptstyle \delta _{ji_2 ...i_m } = 0 \end{array}}^n {\left| {a_{ii_2 ...i_m } } \right| } = r_i ({\mathcal {A}}) - \left| {a_{ij...j} } \right| . \end{aligned}$$
(1)

Very recently, Bu et al. (2017) gave another Brauer-type eigenvalue localization set for tensors as follows, which is proved to be tighter than the sets \( \varGamma ({\mathcal {A}}) \) and \( {\mathcal {F}}({\mathcal {A}}). \)

Theorem 4

(Bu et al. 2017, Theorem 3.1) Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) be a tensor of order m dimension n. Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned} {\mathfrak {B}}({\mathcal {A}}) = \bigcup \limits _{\begin{array}{c} \scriptstyle i,j \in \langle n \rangle \\ \scriptstyle i \ne j \end{array}} { \left\{ {z \in {\mathbb {C}}: {\left| {z - a_{i i ...i } } \right| ^{m-1} \; \left| {z - a_{j j ...j } } \right| } \le {r_{i } ({\mathcal {A}})^{m-1}} \; r_{j } ({\mathcal {A}}) } \right\} }. \end{aligned}$$
(2)

Theorem 5

(Bu et al. 2017, Theorem 3.3) Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) be a tensor of order m dimension n,  and \(r_i ({\mathcal {A}}) \ne 0, \; \; i = 1,...,n.\) Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned} {\mathcal {Z}}({\mathcal {A}}) = \bigcup \limits _{\begin{array}{*{20}c} {a_{i_1 i_2 ...i_m } \ne 0} \\ {\delta _{i_1 i_2 ...i_m } = 0} \\ \end{array}} {\left\{ {z \in {\mathbb {C}}:\prod \limits _{j = 1}^m {\left| {z - a_{i_j i_j ...i_j } } \right| } \le \prod \limits _{j = 1}^m {r_{i_j } ({\mathcal {A}})} } \right\} }. \end{aligned}$$
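
A membership test for \({\mathcal {Z}}({\mathcal {A}})\) in Theorem 5 only has to enumerate the index tuples with \(a_{i_1 i_2 ...i_m } \ne 0\) and \(\delta _{i_1 i_2 ...i_m } = 0;\) the following brute-force sketch (ours, with helper names of our choosing) does this for small tensors.

```python
import itertools
import numpy as np

def in_Z(A, z):
    """Brute-force membership test for the set Z(A) of Theorem 5."""
    n, m = A.shape[0], A.ndim
    diag = A[(np.arange(n),) * m]
    r = np.abs(A).reshape(n, -1).sum(axis=1) - np.abs(diag)      # r_i(A)
    for idx in itertools.product(range(n), repeat=m):
        if A[idx] != 0 and len(set(idx)) > 1:                    # a_{i_1...i_m} != 0, delta = 0
            lhs = np.prod([abs(z - diag[i]) for i in idx])
            rhs = np.prod([r[i] for i in idx])
            if lhs <= rhs:
                return True
    return False
```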

For relationships among other Brauer-type eigenvalue localization sets, see Li and Li (2016b) and Xu et al. (2019).

3 Improvements of Ky Fan theorem

The Ky Fan theorem (e.g., see Theorem 8.2.12 of Horn and Johnson 2012) plays an important role in estimating the eigenvalues of matrices. Yang and Yang (2010, Theorem 4.1) gave the following Ky Fan-type inequality for tensors:

Theorem 6

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) and \({\mathcal {B}}=(b_{i_{1}i_{2}...i_{m}})\) be complex tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}. \) If

$$\begin{aligned} {\mathcal {K}}^{(i)}({\mathcal {A}}) = \{ z \in {\mathbb {C}}:\left| {z - a_{ii...i} } \right| \le \rho ({\mathcal {B}})- b_{ii...i}\}, \quad i=1,...,n, \end{aligned}$$

denotes the \( i\mathrm{{th}} \) Ky Fan disk for \( {\mathcal {A}},\) then, the union of these n Ky Fan disks contains all eigenvalues of \( {\mathcal {A}}, \) that is:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq {\mathcal {K}}({\mathcal {A}}) :=\bigcup \limits _{i = 1}^n {{\mathcal {K}}^{(i)}({\mathcal {A}}).} \end{aligned}$$
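
Evaluating the Ky Fan disks requires \(\rho ({\mathcal {B}}).\) For a nonnegative irreducible \({\mathcal {B}},\) this can be approximated by a power-type iteration; the sketch below (ours, not from the paper) uses the standard Ng–Qi–Zhou-style bracketing \(\min _i ({\mathcal {B}}x^{m-1})_i / x_i^{m-1} \le \rho ({\mathcal {B}}) \le \max _i ({\mathcal {B}}x^{m-1})_i / x_i^{m-1}\) as its stopping rule.

```python
import numpy as np

def tensor_apply(B, x):
    """Return B x^{m-1} by contracting the last m-1 indices with x."""
    y = B
    for _ in range(B.ndim - 1):
        y = y @ x
    return y

def spectral_radius_nqz(B, tol=1e-10, max_iter=1000):
    """Power-type (NQZ) iteration for rho(B); assumes B is nonnegative and irreducible."""
    n, m = B.shape[0], B.ndim
    x = np.ones(n)
    for _ in range(max_iter):
        y = tensor_apply(B, x)
        lo, hi = np.min(y / x**(m - 1)), np.max(y / x**(m - 1))
        if hi - lo < tol:
            break
        x = y**(1.0 / (m - 1))
        x /= x.max()
    return 0.5 * (lo + hi)

# Example 1's B = |A| (all nonzero entries equal to 1) gives rho(B) = 2.
B = np.zeros((2, 2, 2))
B[0, 0, 0] = B[0, 1, 1] = B[1, 0, 0] = B[1, 1, 1] = 1.0
print(spectral_radius_nqz(B))   # approx 2.0
```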

The Ky Fan-type theorem naturally extends from matrices to tensors (see Yang and Yang 2010). Later, Li and Ng (2015) improved it by exploiting the ratio of the entries in the Perron vector, and recently, Wang et al. (2020) proved some new Ky Fan-type theorems for weakly irreducible nonnegative tensors with Brualdi-type eigenvalue inclusion sets. In the following sections, we present some improvements of the Ky Fan theorem, together with examples showing that our results are sharp.

3.1 Improvements of Ky Fan theorem based on Brauer-type sets

In this section, an improvement of the Ky Fan theorem is obtained from the Brauer-type eigenvalue inclusion set \( {\mathcal {Z}}({\mathcal {A}}) \) defined in Theorem 5.

Theorem 7

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) and \({\mathcal {B}}=(b_{i_{1}i_{2}...i_{m}})\) be complex tensors of order m dimension n. If \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}, \) and \(r_i ({\mathcal {A}}) \ne 0, \; \; i = 1,...,n,\) then:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \bigcup \limits _{\begin{array}{c} \scriptstyle a_{i_1 i_2 ...i_m } \ne 0 \\ \scriptstyle \delta _{i_1 i_2 ...i_m } = 0 \end{array}} {\left\{ {z \in {\mathbb {C}}:\prod \limits _{j = 1}^m {\left| {z - a_{i_j i_j ...i_j } } \right| } \le \prod \limits _{j = 1}^m {(\rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } )} } \right\} } : = {\mathcal {Z}}_1({\mathcal {A}}). \end{aligned}$$

Proof

We may assume that \({\mathcal {B}}>0\) (if some entries of \({\mathcal {B}}\) are zero, we may consider \({\mathcal {B}}_\varepsilon \equiv (b_{i_1 i_2 ...i_m } + \varepsilon ),\) for \(\varepsilon > 0;\) \({\mathcal {B}}_\varepsilon > \left| {\mathcal {A}} \right| \) and \(\rho ({\mathcal {B}}_\varepsilon ) - (b_{ii...i} + \varepsilon ) \rightarrow \rho ({\mathcal {B}}) - b_{ii...i} \) as \(\varepsilon \rightarrow 0\) ). By the Perron–Frobenius theorem (Chang et al. 2008), there exists a positive vector \(x = (x_1 ,x_2 ,...,x_n )^T, \) such that:

$$\begin{aligned} {\mathcal {B}}x^{m - 1} = \rho ({\mathcal {B}})x^{\left[ {m - 1} \right] }. \end{aligned}$$
(3)

Let \( X: = diag(x_1 ,x_2 ,...,x_n ), \) and \( {\mathcal {A}}_X: = {\mathcal {A}}X^{ - (m - 1)} \overbrace{X...X}^{m - 1}. \) Then, by Remark 2, \(\sigma ({\mathcal {A}}) = \sigma ({\mathcal {A}}_X)\) and \( |({\mathcal {A}}_X)_{ii...i}| = | a_{ii...i} | \le b_{ii...i}, \) for all \(i \in \langle n \rangle .\)

By Theorem 5, for any eigenvalue \(\lambda \) of \({\mathcal {A}}_X,\) we have:

$$\begin{aligned} \prod \limits _{j = 1}^m {\left| {\lambda - a_{i_j i_j ...i_j } } \right| }&\le \prod \limits _{j = 1}^m {r_{i_j } ({\mathcal {A}}_X)} = \prod \limits _{j = 1}^m {\left( {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n {\frac{{\left| {a_{i_j i_2 ...i_m } } \right| x_{i_2 } ...x_{i_m } }}{{x_{i_j }^{m - 1} }}} } \right) } \\&\le \prod \limits _{j = 1}^m {\left( {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n {\frac{{b_{i_j i_2 ...i_m } x_{i_2 } ...x_{i_m } }}{{x_{i_j }^{m - 1} }}} } \right) } \\&= \prod \limits _{j = 1}^m {\left( {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n {\frac{{b_{i_j i_2 ...i_m } x_{i_2 } ...x_{i_m } }}{{x_{i_j }^{m - 1} }}} + b_{i_j i_j ...i_j } -b_{i_j i_j ...i_j }} \right) } \\&= \prod \limits _{j = 1}^m {\left( {\sum \limits _{i_2 ,...,i_m = 1}^n {\frac{{b_{i_j i_2 ...i_m } x_{i_2 } ...x_{i_m } }}{{x_{i_j }^{m - 1} }}} - b_{i_j i_j ...i_j } } \right) } \\&= \prod \limits _{j = 1}^m {\left( {\rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } } \right) .} \end{aligned}$$

Thus, we have:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \bigcup \limits _{\begin{array}{c} \scriptstyle a_{i_1 i_2 ...i_m } \ne 0 \\ \scriptstyle \delta _{i_1 i_2 ...i_m } = 0 \end{array}} {\left\{ {z \in {\mathbb {C}}:\prod \limits _{j = 1}^m {\left| {z - a_{i_j i_j ...i_j } } \right| } \le \prod \limits _{j = 1}^m {(\rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } )} } \right\} } : = {\mathcal {Z}}_1({\mathcal {A}}). \end{aligned}$$

The theorem is proved. \(\square \)

Remark 3

Note that the conclusion \( \sigma ({\mathcal {A}}) \subseteq {\mathcal {Z}}_1({\mathcal {A}}) \) of Theorem 7 remains valid if the condition \(r_i ({\mathcal {A}}) \ne 0, \; \; i = 1,...,n,\) is removed.

Similarly, using Theorem 4 and the method of proof employed in Theorem 7, we have the following:

Corollary 1

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) and \({\mathcal {B}}=(b_{i_{1}i_{2}...i_{m}})\) be complex tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}. \) Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned}&\bigcup \limits _{\begin{array}{c} \scriptstyle i,j = 1 \\ \scriptstyle i \ne j \end{array}}^n {\left\{ {z \in {\mathbb {C}}:\left| {z - a_{ii...i} } \right| ^{m-1} \left| {z - a_{jj...j} } \right| \le (\rho ({\mathcal {B}}) - b_{ii...i} )^{m-1}(\rho ({\mathcal {B}}) - b_{jj...j} )} \right\} }\\&:={\mathfrak {B}}_{1}({\mathcal {A}}). \end{aligned}$$

3.2 Improvements of Ky Fan theorem based on the ratio of the entries in the Perron vector

Li and Ng (2015) considered the problem of estimating a lower bound for the ratio \( \frac{{x_{\min }}}{{x_{\max } }}\) of the entries of a Perron vector x of an irreducible nonnegative tensor \( {\mathcal {A}}, \) as follows:

$$\begin{aligned} \kappa ({\mathcal {A}}) \le \frac{{x_{\min }}}{{x_{\max } }}, \end{aligned}$$
(4)

where \( x_{\min } = \mathop {\min }\nolimits _{1 \le i \le n} x_i \;\; \textit{and} \;\; x_{\max } = \mathop {\max }\nolimits _{1 \le i \le n} x_i,\) and:

$$\begin{aligned} \kappa ({\mathcal {A}}) = \mathop {\max }\limits _{2 \le k,k' \le m} \quad \mathop {\min }\limits _{\begin{array}{c} \scriptstyle 1 \le i_1 ,i_1 ^\prime \le n \\ \scriptstyle 1 \le i_k = i_k ^\prime \le n \end{array}} \;\;\frac{{\sum \limits _{\underbrace{i_2 ,...,i_m }_{\mathrm{except}\;\;i_k } = 1}^n {a_{i_1 ,...,i_m } } }}{{\sum \limits _{\underbrace{i_2 ^\prime ,...,i_m ^\prime }_{\mathrm{except}\;\;i_k ^\prime } = 1}^n {a_{i_1 ^\prime ,...,i_m ^\prime } } }}. \end{aligned}$$
(5)

Using a similar technique, the lower bound in (4) can be further improved as follows (see Lemma 4.1 in Wang et al. 2020):

$$\begin{aligned} \iota ({\mathcal {A}}) \le \left( \frac{{x_{\min }}}{{x_{\max } }}\right) ^2, \end{aligned}$$
(6)

where

$$\begin{aligned} \iota ({\mathcal {A}}) = \mathop {\max }\limits _{2 \le k,k',l,l' \le m} \quad \mathop {\min }\limits _{\begin{array}{c} \scriptstyle 1 \le i_1 ,i_1 ^\prime \le n \\ \scriptstyle 1 \le i_k = i_k ^\prime \le n \\ \scriptstyle 1 \le i_l = i_l ^\prime \le n \end{array}} \;\;\frac{{\sum \limits _{\underbrace{i_2 ,...,i_m }_{\mathrm{except}\;\;i_k ,\;i_l } = 1}^n {a_{i_1 ,...,i_m } } }}{{\sum \limits _{\underbrace{i_2 ^\prime ,...,i_m ^\prime }_{\mathrm{except}\;\;i_k ^\prime ,\;i_l ^\prime } = 1}^n {a_{i_1 ^\prime ,...,i_m ^\prime } } }}. \end{aligned}$$
(7)

Also, using a technique similar to that presented in Wang et al. (2020, Lemma 4.1) (or Li and Ng 2015, Lemma 3.2), we obtain the following lower bound for the ratio of the smallest and largest entries of a Perron vector.

Lemma 1

Let \({\mathcal {A}}\) be a weakly irreducible nonnegative tensor of order m dimension n with maximal eigenvector x. Then:

$$\begin{aligned} \tau ({\mathcal {A}}) =\mathop {\max }\limits _{1 \le k \le m-1} \tau ^{(k)}({\mathcal {A}}) \le \frac{{x_{min}}}{{x_{max} }}, \end{aligned}$$
(8)

where

$$\begin{aligned} \tau ^{(k)}({\mathcal {A}}) = \mathop {\max }\limits _{\begin{array}{c} \scriptstyle \left\{ {j_1 ,...,j_k } \right\} \subseteq \left\{ {2,...,m} \right\} \\ \scriptstyle \left\{ {j'_1 ,...,j'_k } \right\} \subseteq \left\{ {2,...,m} \right\} \end{array}} \quad \mathop {\min }\limits _{\begin{array}{c} \scriptstyle 1 \le i_1 ,i_1 ^\prime \le n \\ \scriptstyle 1 \le i_{j_t } = i'_{\begin{array}{c} j_t \end{array}} \le n \\ \scriptstyle 1 \le t \le k \end{array}} \;\;\left( {\frac{{\sum \limits _{\underbrace{i_2 ,...,i_m }_{except\;i_{j_1 } , \ldots ,\;i_{j_k } } = 1}^n {\quad a_{i_1 ,...,i_m } } }}{{\sum \limits _{\underbrace{i_2 ^\prime ,...,i'_m }_{except\;\;i'_{j_1 } ,...,\;i'_{j_k } } = 1}^n {\quad a_{i_1 ^\prime ,...,i'_m } } }}} \right) ^{{\textstyle {1 \over k}}}. \end{aligned}$$
(9)

Proof

Let \( x_{\alpha } = \min _i x_i \) and \( x_{\beta } = \max _i x_i. \) Then, by the eigenvalue equation \({\mathcal {A}}x^{m-1}=\rho ({\mathcal {A}})x^{[m-1]}\) (cf. (3)), we have:

$$\begin{aligned} \rho ({\mathcal {A}}) (x_{\alpha })^{m-1}&= \sum \limits _{i_2 ,...,i_m = 1}^n {a_{\alpha ,i_2 ,...,i_m } \;x_{i_2 } ...\;x_{i_m } } \\&\ge \left( \sum \limits _{i_2 ,...,i_m = 1}^n {a_{\alpha ,i_2 ,...,i_m } \;x_{i_{j_1} } ...\;x_{i_{j_k} } }\right) x_{\alpha }^{m-1-k}, \end{aligned}$$

where \( \left\{ {j_1 ,...,j_k } \right\} \subseteq \left\{ {2,...,m} \right\} . \) It implies that:

$$\begin{aligned} \rho ({\mathcal {A}}) (x_{\alpha })^{k} \ge \sum \limits _{i_2 ,...,i_m = 1}^n {a_{\alpha ,i_2 ,...,i_m } \;x_{i_{j_1} } ...\;x_{i_{j_k} } }. \end{aligned}$$

Similarly, we get:

$$\begin{aligned} \rho ({\mathcal {A}}) (x_{\beta })^{k} \le \sum \limits _{i_2 ,...,i_m = 1}^n {a_{\beta ,i_2 ,...,i_m } \;x_{i_{j_1} } ...\;x_{i_{j_k} } }. \end{aligned}$$

Therefore:

$$\begin{aligned} \left( \dfrac{x_{\alpha }}{x_{\beta }}\right) ^{k}&\ge \dfrac{ \sum \limits _{i_2 ,...,i_m = 1}^n {a_{\alpha ,i_2 ,...,i_m } \;x_{i_{j_1} } ...\;x_{i_{j_k} } }}{ \sum \limits _{i'_2 ,...,i'_m = 1}^n {a_{\beta ,i'_2 ,...,i'_m } \;x_{i'_{j_1} } ...\;x_{i'_{j_k} } }} \\&= \dfrac{ \sum \limits _{i_{j_1} ,...,i_{j_k} = 1}^n { \sum \limits _{\underbrace{i_2 ,...,i_m }_{except\;i_{j_1 } , \ldots ,\;i_{j_k } } = 1}^n a_{\alpha ,i_2 ,...,i_m } \;x_{i_{j_1} } ...\;x_{i_{j_k} } }}{ \sum \limits _{i'_{j_1} ,...,i'_{j_k} = 1}^n {\sum \limits _{\underbrace{i'_2 ,...,i'_m }_{except\;i'_{j_1 } , \ldots ,\;i'_{j_k } } = 1}^n a_{\beta ,i'_2 ,...,i'_m } \;x_{i'_{j_1} } ...\;x_{i'_{j_k} } }} \\&\ge \mathop {\min }\limits _{\begin{array}{c} \scriptstyle 1 \le i_{j_t } = i'_{j_t } \le n \\ \scriptstyle 1 \le t \le k \end{array}} \dfrac{ \sum \limits _{\underbrace{i_2 ,...,i_m }_{except\;i_{j_1 } , \ldots ,\;i_{j_k } } = 1}^n { a_{\alpha ,i_2 ,...,i_m } } }{ {\sum \limits _{\underbrace{i'_2 ,...,i'_m }_{except\;i'_{j_1 } , \ldots , \;i'_{j_k } } = 1}^n a_{\beta ,i'_2 ,...,i'_m } }} \\&\ge \mathop {\min }\limits _{1 \le i_1 = i'_1 \le n} \mathop {\min }\limits _{\begin{array}{c} \scriptstyle 1 \le i_{j_t } = i'_{j_t } \le n \\ \scriptstyle 1 \le t \le k \end{array}} \dfrac{ \sum \limits _{\underbrace{i_2 ,...,i_m }_{except\;i_{j_1 } , \ldots ,\;i_{j_k } } = 1}^n { a_{i_1 ,i_2 ,...,i_m } } }{ {\sum \limits _{\underbrace{i'_2 ,...,i'_m }_{except\;i'_{j_1 } , \ldots ,\;i'_{j_k } } = 1}^n a_{i'_1 ,i'_2 ,...,i'_m } }}, \end{aligned}$$

which proves the lemma. \(\square \)
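
For order 3 tensors, the quantity \(\tau ({\mathcal {B}})\) in (8)–(9) can be evaluated directly; the sketch below (ours) assumes a positive tensor B and enumerates the ratios in (9) for \(k = 1\) and \(k = 2.\) The particular B is the one used later in Example 3, for which this sketch returns \(\tau ({\mathcal {B}}) = \max (5/18, 1/2) = 1/2,\) so that \((\tau ({\mathcal {B}}))^{m-1} = 1/4,\) the factor used there.

```python
import numpy as np

def tau_order3(B):
    """tau(B) of Lemma 1 for an order-3 positive tensor (our sketch of formula (9), m = 3)."""
    n = B.shape[0]
    best = 0.0
    # k = 1: fix one of the two trailing indices, sum over the other.
    for axis in (1, 2):                       # fixed index position j_1 = 2 or j_1 = 3
        S = B.sum(axis=3 - axis)              # S[i1, c] = sum over the free trailing index
        ratios = [S[i1, c] / S[i1p, c]
                  for c in range(n) for i1 in range(n) for i1p in range(n)]
        best = max(best, min(ratios))
    # k = 2: fix both trailing indices (single entries), take the square root of the ratio.
    ratios = [(B[i1, i2, i3] / B[i1p, i2, i3]) ** 0.5
              for i2 in range(n) for i3 in range(n)
              for i1 in range(n) for i1p in range(n)]
    best = max(best, min(ratios))
    return best

# B from Example 3 below: tau(B) = max(5/18, 1/2) = 1/2, so tau(B)^{m-1} = 1/4.
B = np.array([[[1, 0.25], [0.25, 2]],
              [[4, 0.5], [0.5, 1.5]]])
print(tau_order3(B))   # 0.5
```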

Based on (4), Li and Ng (2015, Theorem 3.3) proposed the following theorem.

Theorem 8

Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be nonnegative tensors of order m dimension n. If \( {\mathcal {B}} \) is irreducible with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}, \) then, for any eigenvalue \( \lambda \) of \( {\mathcal {A}},\) there exists i, such that:

$$\begin{aligned} {\left| {\lambda - a_{i ...i } } \right| } \le \rho ({\mathcal {B}}) - b_{i i ...i} - (\kappa ({\mathcal {B}}))^{m-1} r_{i}({\mathcal {E}}), \end{aligned}$$
(10)

where \( {\mathcal {E}}={\mathcal {B}} - |{\mathcal {A}}| \) and \( \kappa ({\mathcal {B}}) \) is given in (5).

Recently, Wang et al. (2020, Theorem 4.6) improved the eigenvalue inclusion set (10), using (6) and \( \upsilon ({\mathcal {A}})=\dfrac{l}{\rho ({\mathcal {A}})-R_{\min }({\mathcal {A}}) +l}, \) where \( l=\mathop {\min }\nolimits _{\delta _{i_1 ...i_m } = 0} \;a_{i_1 ...i_m }, \) \( R_i({\mathcal {A}})=\sum \nolimits _{i_2 ,...,i_m = 1}^n {\left| {a_{i,i_2 ,...,i_m } } \right| }, \) and \( R_{\min }({\mathcal {A}})=\min _{i} R_i({\mathcal {A}}). \)

Theorem 9

Let \({\mathcal {A}}\) be a weakly irreducible tensor of order m dimension n,  and let \({\mathcal {B}}\) be a tensor of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}. \) Then, for any eigenvalue \( \lambda \) of \( {\mathcal {A}},\) there exists \(\gamma \in C({\mathcal {A}})\), such that:

$$\begin{aligned} \prod \limits _{i \in \gamma } {\left| {\lambda - a_{i ...i } } \right| } \le \prod \limits _{i \in \gamma } (\rho ({\mathcal {B}}) - b_{i i ...i}) - P^{|\gamma |} \prod \limits _{i \in \gamma } r_{i}({\mathcal {E}}), \end{aligned}$$
(11)

where \( P=\max \{ (\kappa ({\mathcal {B}}))^{m-1} , (\iota ({\mathcal {B}}))^{(m-1)/2} , \upsilon ({\mathcal {B}}) \}, \) \( {\mathcal {E}}={\mathcal {B}} - |{\mathcal {A}}|, \) \( C({\mathcal {A}})\) is the set of circuits of the digraph associated with \( {\mathcal {A}},\) and \( |\gamma | \) denotes the length of the circuit \( \gamma . \)

Based on Brauer-type eigenvalue inclusion sets (Theorem 5) and estimating the ratio of the smallest and largest values of a Perron vector (Lemma 1), we can obtain the following Ky Fan-type theorem.

Theorem 10

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) and \({\mathcal {B}}=(b_{i_{1}i_{2}...i_{m}})\) be complex tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}. \) Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \scriptstyle a_{i_1 i_2 ...i_m } \ne 0 \\ \scriptstyle \delta _{i_1 i_2 ...i_m } = 0 \end{array}} \left\{ {z \in {\mathbb {C}}:\prod \limits _{j = 1}^m {\left| {z - a_{i_j i_j ...i_j } } \right| } \le \prod \limits _{j = 1}^m { \left( \rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } - (\tau ({\mathcal {B}} ))^{m-1} r_{i_j}({\mathcal {E}}) \right) } } \right\} := {{\mathcal {Z}}}_2({\mathcal {A}}), \end{aligned}$$

where \( {\mathcal {E}}={\mathcal {B}} - |{\mathcal {A}}| \) and \(\tau ({\mathcal {B}} ) \) is given in (8).

Proof

The proof runs parallel to that of Theorem 7. We may assume that \({\mathcal {B}}>0. \) By the Perron–Frobenius theorem (Chang et al. 2008), there exists a positive vector \(x = (x_1 ,x_2 ,...,x_n )^T, \) such that:

$$\begin{aligned} {\mathcal {B}}x^{m - 1} = \rho ({\mathcal {B}})x^{\left[ {m - 1} \right] }. \end{aligned}$$

Let \( X: = diag(x_1 ,x_2 ,...,x_n ), \) \( {\mathcal {A}}_X: = {\mathcal {A}} X^{ - (m - 1)} \overbrace{X...X}^{m - 1}, \) and \( {\mathcal {B}}_X: = {\mathcal {B}} X^{ - (m - 1)} \overbrace{X...X}^{m - 1}. \) Then, by Remark 2, for all \(i \in \langle n \rangle \), we have:

$$\begin{aligned} \sigma ({\mathcal {A}})&= \sigma ({\mathcal {A}}_X), \quad ({\mathcal {A}}_X)_{ii...i} = a_{ii...i}, \\ \sigma ({\mathcal {B}})&= \sigma ({\mathcal {B}}_X), \quad ({\mathcal {B}}_X)_{ii...i} = b_{ii...i}. \end{aligned}$$

Let \( {\mathcal {E}}={\mathcal {B}} - |{\mathcal {A}}| \) and \( {\mathcal {E}}_X={\mathcal {B}}_X - |{\mathcal {A}}|_X. \) By Theorem 5, for any eigenvalue \(\lambda \) of \({\mathcal {A}}_X,\) we have:

$$\begin{aligned} \prod \limits _{j = 1}^m {\left| {\lambda - a_{i_j i_j ...i_j } } \right| }&\le \prod \limits _{j = 1}^m {r_{i_j } ({\mathcal {A}}_X)} = \prod \limits _{j = 1}^m {r_{i_j } ({\mathcal {B}}_X - {\mathcal {E}}_X) } \\&= \prod \limits _{j = 1}^m \left( r_{i_j } ( {\mathcal {B}}_X) - {r_{i_j } ({\mathcal {E}}_X)} \right) \\&= \prod \limits _{j = 1}^m \left( \rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } - \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n { \frac{{{\mathcal {E}}_{i_j i_2 ...i_m } x_{i_2 } ...x_{i_m } }}{{x_{i_j }^{m - 1} }} } \right) \\&\le \prod \limits _{j = 1}^m \left( \rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } - \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n { {\mathcal {E}}_{i_j i_2 ...i_m } \left( \frac{ x_{\min } }{ x_{\max } } \right) ^{m-1} } \right) \\&\le \prod \limits _{j = 1}^m \left( \rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } - \left( \tau ({\mathcal {B}}) \right) ^{m-1} \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{i_j i_2 ...i_m } = 0 \end{array}}^n { {\mathcal {E}}_{i_j i_2 ...i_m } } \right) \\&= \prod \limits _{j = 1}^m \left( \rho ({\mathcal {B}}) - b_{i_j i_j ...i_j } - \left( \tau ({\mathcal {B}}) \right) ^{m-1} {r_{i_j}({\mathcal {E}})} \right) , \end{aligned}$$

where the last inequality follows from (8). \(\square \)

Similarly, using Theorem 4 and the method of proof employed in Theorem 10, we can obtain an improvement of (2) as follows:

Corollary 2

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) and \({\mathcal {B}}=(b_{i_{1}i_{2}...i_{m}})\) be complex tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}. \) Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \scriptstyle i,j = 1 \\ \scriptstyle i \ne j \end{array}}^n\{ z :\left| {z - a_{i...i} } \right| ^{m-1}\left| {z - a_{j...j} } \right|&\le \left( \rho ({\mathcal {B}}) - b_{i...i} - (\tau ({\mathcal {B}} ))^{m-1} r_{i}({\mathcal {E}}) \right) ^{m-1} \\ \left( \rho ({\mathcal {B}}) - b_{j...j} - (\tau ({\mathcal {B}} ))^{m-1} r_{j}({\mathcal {E}}) \right) \}&:={{\mathfrak {B}}}_2({\mathcal {A}}). \\ \end{aligned}$$

In the following, we consider the relationships between Ky Fan theorem and its improvements.

Theorem 11

Let \({\mathcal {A}}\) be a tensor of order m dimension n. Then:

$$\begin{aligned} {\mathcal {Z}}_1({\mathcal {A}}) \subseteq {{\mathfrak {B}}}_1({\mathcal {A}}) \subseteq {\mathcal {K}}({\mathcal {A}}). \end{aligned}$$
(12)

Proof

First, consider the right-hand inclusion of (12). Fix any \(\, i , j \;(1 \le i, j \le n; \; i \ne j)\) and let z be any point of \({\mathfrak {B}}_{1}^{(i,j)}({\mathcal {A}}),\) where:

$$\begin{aligned} {\mathfrak {B}}_1^{\left( {i,j} \right) }({\mathcal {A}}) : = \left\{ z \in {\mathbb {C}} : \left| {z - a_{ii...i} } \right| ^{m-1}\left| {z - a_{jj...j} } \right| \le \left( {\rho \left( {\mathcal {B}} \right) - b_{ii...i} } \right) ^{m-1}\left( {\rho \left( {\mathcal {B}} \right) - b_{jj...j} } \right) \right\} . \end{aligned}$$

By Corollary 1, we have:

$$\begin{aligned} \left| {z - a_{i...i} } \right| ^{m-1}\left| {z - a_{j...j} } \right| \le \left( {\rho \left( {\mathcal {B}} \right) - b_{i...i} } \right) ^{m-1} \left( {\rho \left( {\mathcal {B}} \right) - b_{j...j} } \right) , \; \; i,j \in \left\langle n \right\rangle , \; i \ne j.\nonumber \\ \end{aligned}$$
(13)

If \(\left( {\rho \left( {\mathcal {B}} \right) - b_{ii...i} } \right) ^{m-1} \left( {\rho \left( {\mathcal {B}} \right) - b_{jj...j} } \right) =0,\) then:

$$\begin{aligned} z=a_{ii...i} \quad {\mathrm{or}} \quad z=a_{jj...j}. \end{aligned}$$

Since \(a_{ii...i} \in {\mathcal {K}}^{(i)} \left( {\mathcal {A}} \right) \) and \( a_{jj...j} \in {\mathcal {K}}^{(j)} \left( {\mathcal {A}} \right) , \) it follows that:

$$\begin{aligned} z \in {\mathcal {K}}^{(i)} \left( {\mathcal {A}} \right) \cup {\mathcal {K}}^{(j)} \left( {\mathcal {A}} \right) . \end{aligned}$$

If \(\left( {\rho \left( {\mathcal {B}} \right) - b_{ii...i} } \right) ^{m-1} \left( {\rho \left( {\mathcal {B}} \right) - b_{jj...j} } \right) >0, \) then by (13), we have:

$$\begin{aligned} \left( {\frac{{\left| {z - a_{ii....i} } \right| }}{{\rho ({\mathcal {B}}) - b_{ii...i} }}} \right) ^{m-1} \left( {\frac{{\left| {z - a_{jj...j} } \right| }}{{\rho ({\mathcal {B}}) - b_{jj...j} }}} \right) \le 1. \end{aligned}$$
(14)

Since the factors on the left-hand side of (14) cannot both exceed unity, at least one of them is at most unity, i.e., \(z \in {\mathcal {K}}^{(i)} \left( {\mathcal {A}} \right) \cup {\mathcal {K}}^{(j)} \left( {\mathcal {A}} \right) .\) Hence, in either case, \(z \in {\mathcal {K}}^{(i)} \left( {\mathcal {A}} \right) \cup {\mathcal {K}}^{(j)} \left( {\mathcal {A}} \right) . \) Therefore:

$$\begin{aligned} {\mathfrak {B}}_1 ({\mathcal {A}}) \subseteq \bigcup \limits _{i = 1}^n {{\mathcal {K}}^{(i)} ({\mathcal {A}})} = {\mathcal {K}}({\mathcal {A}}). \end{aligned}$$

Next, we prove that the left-hand inclusion of (12) is valid. Using the first part of Theorem 7, we have:

$$\begin{aligned} (\rho ({\mathcal {B}}) - b_{ii...i} ) > 0, \quad \forall i \in \left\langle n \right\rangle . \end{aligned}$$

Let z be any point of \( {\mathcal {Z}}_1 ({\mathcal {A}}),\) that is:

$$\begin{aligned} \prod \limits _{j = 1}^m {\left| {z - a_{i_j ...i_j } } \right| } \le \prod \limits _{j = 1}^m {(\rho ({\mathcal {B}}) - b_{i_j ...i_j } )}. \end{aligned}$$
(15)

Raising both sides of inequality (15) to the power m, we have:

$$\begin{aligned} \prod \limits _{j = 1}^m {\left| {z - a_{i_j ...i_j } } \right| ^{m} } \le \prod \limits _{j = 1}^m {(\rho ({\mathcal {B}}) - b_{i_j ...i_j } )^{m}}. \end{aligned}$$
(16)

Since \(\left( \rho ({\mathcal {B}}) - b_{i_j ...i_j } \right) \) are all positive for \( j=1,2,...,m, \) we may equivalently express the inequality (16) in the following form:

$$\begin{aligned}&\left( {\frac{{\left| {z - a_{i_1 ...i_1 } } \right| ^{m-1} \left| {z - a_{i_2 ...i_2 } } \right| }}{{ (\rho ({\mathcal {B}}) - b_{i_1 ...i_1 } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_2 ...i_2 } )}}} \right) \cdot \left( {\frac{{\left| {z - a_{i_2 ...i_2 } } \right| ^{m-1} \left| {z - a_{i_3 ...i_3 } } \right| }}{{(\rho ({\mathcal {B}}) - b_{i_2 ...i_2 } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_3 ...i_3 } )}}} \right) \nonumber \\&\qquad ...\left( {\frac{{\left| {z - a_{i_m ...i_m } } \right| ^{m-1} \left| {z - a_{i_1 ...i_1 } } \right| }}{{(\rho ({\mathcal {B}}) - b_{i_m ...i_m } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_1 ...i_1 } )}}} \right) \le 1. \end{aligned}$$
(17)

Since the factors on the left-hand side of (17) cannot all exceed unity, at least one of them is at most unity. That is:

$$\begin{aligned} \left( {\frac{{\left| {z - a_{i_1 ...i_1 } } \right| ^{m-1} \left| {z - a_{i_2 ...i_2 } } \right| }}{{ (\rho ({\mathcal {B}}) - b_{i_1 ...i_1 } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_2 ...i_2 } )}}} \right) \le 1 \end{aligned}$$

or

$$\begin{aligned} \left( {\frac{{\left| {z - a_{i_2 ...i_2 } } \right| ^{m-1} \left| {z - a_{i_3 ...i_3 } } \right| }}{{(\rho ({\mathcal {B}}) - b_{i_2 ...i_2 } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_3 ...i_3 } )}}} \right) \le 1 \end{aligned}$$

or

$$\begin{aligned} ... \end{aligned}$$

or

$$\begin{aligned} \left( {\frac{{\left| {z - a_{i_m ...i_m } } \right| ^{m-1} \left| {z - a_{i_1 ...i_1 } } \right| }}{{(\rho ({\mathcal {B}}) - b_{i_m ...i_m } )^{m-1} (\rho ({\mathcal {B}}) - b_{i_1 ...i_1 } )}}} \right) \le 1. \end{aligned}$$

Hence, there exists an \( \alpha \) with \( 1 \le \alpha \le m \) (where we set \( i_{m+1} := i_1 \)), such that:

$$\begin{aligned} \left| {z - a_{i_\alpha ...i_\alpha } } \right| ^{m-1}\left| {z - a_{i_{\alpha + 1} ...i_{\alpha + 1} } } \right| \le (\rho ({\mathcal {B}}) - b_{i_\alpha ...i_\alpha } )^{m-1}(\rho ({\mathcal {B}}) - b_{i_{\alpha + 1} ...i_{\alpha + 1} } ); \end{aligned}$$

that is, \(z \in {\mathfrak {B}}_{1}^{(i_\alpha , i_{\alpha +1})}({\mathcal {A}}). \) Therefore, by the definition of \({\mathfrak {B}}_1 ({\mathcal {A}}),\) we have \(z \in {\mathfrak {B}}_1 ({\mathcal {A}}).\) Thus, \({\mathcal {Z}}_1 ({\mathcal {A}}) \subseteq {\mathfrak {B}}_1 ({\mathcal {A}}),\) and the proof is complete. \(\square \)

Corollary 3

Let \({\mathcal {A}}\) be a tensor of order m dimension n. Then:

$$\begin{aligned} {{\mathcal {Z}}_2}({\mathcal {A}}) \subseteq {\mathcal {Z}}_1({\mathcal {A}}) \subseteq {{\mathfrak {B}}}_1({\mathcal {A}}) \subseteq {\mathcal {K}}({\mathcal {A}}). \end{aligned}$$
(18)

Remark 4

Using the method of proof employed in Theorem 11, we conclude:

$$\begin{aligned} {{\mathcal {Z}}_2}({\mathcal {A}}) \subseteq {{\mathfrak {B}}_2}({\mathcal {A}}). \end{aligned}$$

The following examples show the improvements of the bounds obtained in this section.

Example 1

Let \({\mathcal {A}} = (a_{ijk})\) be a tensor of order 3 dimension 2 with entries defined as follows:

$$\begin{aligned} a_{111}=a_{122}=a_{211}=1, \;\; a_{222}=-1 \;\; \mathrm{and}\;\; \mathrm{other }\; a_{ijk}=0. \end{aligned}$$

Let \({\mathcal {B}}=(b_{ijk})\) be a tensor of order 3 dimension 2,  where \({\mathcal {B}}= \left| {\mathcal {A}} \right| . \) By computation (Chang et al. 2008), we have \(\sigma ({\mathcal {A}})={ \{\sqrt{2},-\sqrt{2} \} },\) and \(\rho ({\mathcal {B}})=2.\)

By Ky Fan theorem (Theorem 6), we obtain:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq {\mathcal {K}}({\mathcal {A}}):=\left\{ z: \left| {z - 1} \right| \le (2-1) \right\} \cup \left\{ z: \left| {z + 1} \right| \le (2-1) \right\} . \end{aligned}$$

From Corollary 1, we obtain:

$$\begin{aligned}\sigma ({\mathcal {A}}) \subseteq {\mathfrak {B}}_1({\mathcal {A}}):=\left\{ z: \left| {z - 1} \right| ^2 \left| {z + 1} \right| \le (2-1)^3 \right\} \cup \left\{ z: \left| {z + 1} \right| ^2 \left| {z - 1} \right| \le (2-1)^3 \right\} . \end{aligned}$$

Also from Theorem  7, we obtain:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq {\mathcal {Z}}_1({\mathcal {A}}):=\left\{ z: \left| {z - 1} \right| \left| {z + 1} \right| \le (2-1)^2 \right\} . \end{aligned}$$

The blue line in Fig. 1 shows the boundary of \( {\mathcal {K}}({\mathcal {A}}) \) (Ky Fan theorem), the yellow line shows the boundary of \({\mathfrak {B}}_1({\mathcal {A}}) \) (Corollary 1), and the red line shows the boundary of \({\mathcal {Z}}_1({\mathcal {A}}) \) (Theorem  7). This example shows that \({\mathcal {Z}}_1({\mathcal {A}}) \subseteq {\mathfrak {B}}_1({\mathcal {A}}). \)
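
A direct numerical check (ours) of the three inclusions computed above, using the eigenvalues \(\pm \sqrt{2}\) and \(\rho ({\mathcal {B}}) = 2;\) a small tolerance absorbs the floating-point rounding in the boundary case \(|z-1||z+1| = 1.\)

```python
import numpy as np

tol = 1e-12
for z in (np.sqrt(2.0), -np.sqrt(2.0)):
    in_K  = min(abs(z - 1), abs(z + 1)) <= 1 + tol                  # Ky Fan disks (Theorem 6)
    in_B1 = min(abs(z - 1)**2 * abs(z + 1),
                abs(z + 1)**2 * abs(z - 1)) <= 1 + tol              # Corollary 1
    in_Z1 = abs(z - 1) * abs(z + 1) <= 1 + tol                      # Theorem 7
    print(f"z = {z:+.4f}: K {in_K}, B1 {in_B1}, Z1 {in_Z1}")        # all True
```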

Remark 5

In Example 1, \( {\mathcal {E}}=0 \; ({\mathcal {B}}=|{\mathcal {A}}|) \), and therefore, the eigenvalue inclusion sets \( { {\mathcal {Z}}_1}({\mathcal {A}}) \) and \( {\mathcal {Z}}_2({\mathcal {A}}) \) are equal. In the following example, we consider \( {\mathcal {B}} \gneqq |{\mathcal {A}}|. \)

Example 2

Let \({\mathcal {A}} = (a_{ijk})\) be an order 3 dimension 2 tensor with entries defined as follows:

$$\begin{aligned} a_{111}=a_{122}=a_{211}=1, \;\; a_{222}=-1 \;\; \textit{and other } a_{ijk}=0. \end{aligned}$$

Let \({\mathcal {B}}=(b_{ijk})\) be an order 3 dimension 2 tensor, where \(b_{i j k }=2 \) for any \( 1 \le i , j, k \le 2. \) By Yang and Yang (2010, Lemma 5.1), we have \(\rho ({\mathcal {B}})=8.\)

From Corollary 2, we obtain:

$$\begin{aligned} {{\mathfrak {B}}}_2({\mathcal {A}}):=&\left\{ z: \left| {z - 1} \right| ^2 \left| {z + 1} \right| \le (8-2- 5 \times 1)^2 (8-2- 5 \times 1)\right\} \\ \cup&\left\{ z: \left| {z + 1} \right| ^2 \left| {z - 1} \right| \le (8-2- 5 \times 1)^2 (8-2- 5 \times 1) \right\} . \end{aligned}$$

Also from Theorem 10, we obtain:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq { {\mathcal {Z}}_2}({\mathcal {A}}):=\left\{ z: \left| {z - 1} \right| \left| {z + 1} \right| \le (8-2- 5 \times 1)^2 \right\} . \end{aligned}$$

However, from Theorem 4.6 of Wang et al. (2020) (Theorem 9), we obtain:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \left\{ z: \left| {z - 1} \right| \left| {z + 1} \right| \le (8-2)^2 - (5 \times 1)^2 \right\} . \end{aligned}$$
(19)

It is clear that \( { {\mathcal {Z}}_2}({\mathcal {A}}) \) is strictly better than the bounds \( {{\mathfrak {B}}_2}({\mathcal {A}}) \) and (19).

Fig. 1 Comparisons of the Ky Fan theorem (Theorem 6), Corollary 1, and Theorem 7 for Example 1. The “\(*\)” symbols show the locations of the eigenvalues of \({\mathcal {A}}\)

3.3 New improvements of Ky Fan theorem

In this section, we obtain another improvement of Ky Fan theorem. First, we prove the following lemma that will be needed.

Lemma 2

Let \({\mathcal {A}}=(a_{i_{1}i_{2}...i_{m}})\) be a tensor of order m dimension n. For any eigenvalue \(\lambda \) of \({\mathcal {A}},\) there exists a nonzero \(x \in {\mathbb {C}}^n\) (an associated eigenvector), such that:

$$\begin{aligned} (\lambda - a_{s...s} )\left( {\lambda - \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} } \right) = a_{ts...s} \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }},} \end{aligned}$$
(20)

for all \(s \in \left\langle n \right\rangle \) and \( \, t \in T,\) where \(T = \{ t \in \left\langle n \right\rangle : \; t \ne s, \;x_t \ne 0\}.\)

Proof

Let \( \lambda \) be an eigenvalue of \({\mathcal {A}} \) with associated eigenvector x,  that is:

$$\begin{aligned} {\mathcal {A}}x^{m - 1} = \lambda x^{[m - 1]}. \end{aligned}$$

For each \( s \in \left\langle n \right\rangle \) and \(t \in T, \) we have:

$$\begin{aligned}&(\lambda - a_{ss...s} )x_s^{m - 1} = \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{si_2 ...i_m } x_{i_2 } ...x_{i_m } }, \end{aligned}$$
(21)
$$\begin{aligned}&(\lambda - a_{tt...t} )x_t^{m - 1} = \sum \limits _{\begin{array}{c} \scriptstyle \delta _{ti_2 ...i_m } = 0 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}} {a_{ti_2 ...i_m } x_{i_2 } ...x_{i_m } } + a_{ts...s} x_s^{m - 1}. \end{aligned}$$
(22)

If \( (\lambda - a_{ss...s} )\ne 0, \) then multiplying (22) by \( (\lambda - a_{ss...s} ) \) gives:

$$\begin{aligned}&(\lambda - a_{ss...s} )(\lambda - a_{tt...t} )x_t^{m - 1} - \sum \limits _{\begin{array}{c} \scriptstyle \delta _{ti_2 ...i_m } = 0 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}} {(\lambda - a_{ss...s} )a_{ti_2 ...i_m } x_{i_2 } ...x_{i_m } } \nonumber \\&\quad = (\lambda - a_{ss...s} )a_{ts...s} x_s^{m - 1}. \end{aligned}$$
(23)

Also, if \( a_{ts...s} \ne 0, \) then multiplying (21) by \( a_{ts...s} \) gives:

$$\begin{aligned} (\lambda - a_{ss...s} )a_{ts..s} x_s^{m - 1} = a_{ts...s} \left( \sum \limits _{\begin{array}{c} \scriptstyle \delta _{si_2 ...i_m } = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{si_2 ...i_m } x_{i_2 } ...x_{i_m } } + a_{st...t} x_t^{m - 1} \right) . \end{aligned}$$
(24)

By (23) and (24), we have:

$$\begin{aligned}&(\lambda - a_{ss...s} )\left( {(\lambda - a_{tt...t} ) - \sum \limits _{\begin{array}{c} \scriptstyle \delta _{si_2 ...i_m } = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} } \right) \\&\quad = a_{ts...s} \left( {\sum \limits _{\begin{array}{c} \scriptstyle \delta _{si_2 ...i_m } = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} + a_{st...t} } \right) , \end{aligned}$$

which is equivalent to:

$$\begin{aligned} (\lambda - a_{ss...s} )\left( {\lambda - \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} } \right) = a_{ts...s} \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}.} \end{aligned}$$

If \( \lambda - a_{ss...s} = 0 \) or \( a_{ts...s} = 0, \) then both sides of (20) vanish (using (21) and (22), respectively), so the relation is also valid in this case. Thus, the proof is complete.

\(\square \)

Theorem 12

Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}.\) Then:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \bigcap \limits _{s \in \left\langle n \right\rangle } {\left( {\bigcup \limits _{\begin{array}{c} \scriptstyle t \in \left\langle n \right\rangle \\ \scriptstyle s \ne t \end{array}} {{{\mathcal {O}} }_{s,t} } } \right) } : = {{\mathcal {K}} }_{1} ({\mathcal {A}}), \end{aligned}$$

where

$$\begin{aligned} {\mathcal {O}}_{s,t}&=\left\{ z \in {\mathbb {C}} :\left| {z - \dfrac{{a_{s...s} + a_{t...t} }}{2}} \right| \right. \\&\left. \le \frac{1}{2}\left| { \rho _t ({\mathcal {B}}) + \sqrt{(\left| {a_{s....s} - a_{t...t} } \right| + \rho _t ({{\mathcal {B}} }) )^2 + 4 b_{ts...s} \rho _s ({\mathcal {B}}) } } \right| \right\} \end{aligned}$$

\( \rho _t ({\mathcal {B}})= \rho ({\mathcal {B}}) - b_{tt...t} - b_{ts...s} (\frac{{u_s }}{{u_t }})^{m-1} -r_{t}^s({\mathcal {E}}) (\frac{{u_{min} }}{{u_t }})^{m-1}, \)

\( \rho _s ({\mathcal {B}})=( \rho ({\mathcal {B}}) - b_{ss...s} ) (\frac{{u_s }}{{u_t }})^{m-1} -r_{s}({\mathcal {E}}) (\frac{{u_{min} }}{{u_t }})^{m-1}, \) and u is a Perron vector of \({\mathcal {B}}.\)

Proof

As in the proof of Lemma 2, let \( \lambda \) be an eigenvalue of \( {\mathcal {A}} \) with associated eigenvector x, and define:

$$\begin{aligned} z_i = \frac{{\left| {x_i } \right| }}{{u_i }}, \quad i \in \left\langle n \right\rangle , \quad i \ne s. \end{aligned}$$

Let

$$\begin{aligned} T': = \left\{ {t \in \left\langle n \right\rangle ,\; t \ne s \; : \; z_t = \mathop {\max }\limits _{\begin{array}{c} \scriptstyle j \in \left\langle n \right\rangle \\ \scriptstyle j \ne s \end{array}} z_j } \right\} ; \end{aligned}$$

it is evident that \( T' \subseteq T.\) For all \(s \in \left\langle n \right\rangle \) and \( \, t \in T',\) we set:

$$\begin{aligned} \alpha _s = \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} \end{aligned}$$
(25)

and

$$\begin{aligned} \alpha _t = \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} = \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}}^n {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} + a_{tt...t}. \end{aligned}$$
(26)

Since u is a Perron vector of the positive tensor \({\mathcal {B}}, \) we have:

$$\begin{aligned} {\mathcal {B}}u^{m - 1} = \rho ({\mathcal {B}})u^{[m - 1]}. \end{aligned}$$

Let \( {\mathcal {E}} = {\mathcal {B}} -|{\mathcal {A}}|. \) Then the triangle inequality permits us to conclude that:

$$\begin{aligned} \left| {\alpha _s } \right|&\le \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {\left| {a_{si_2 ...i_m } } \right| \left| {\frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} \right| } \nonumber \\&= \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n { ( b_{si_2 ...i_m } - e_{si_2 ...i_m } ) \left| {\frac{{(z_{i_2 } u_{i_2 } )...(z_{i_m } u_{i_m } )}}{{(z_t u_t )^{m - 1} }}} \right| } \nonumber \\&\le \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {b_{si_2 ...i_m } \frac{{u_{i_2 } ...u_{i_m } }}{{u_t ^{m - 1} }}} - \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}}^n {e_{si_2 ...i_m } \frac{{u_{i_2 } ...u_{i_m } }}{{u_t ^{m - 1} }}}\nonumber \\&\le \frac{1}{{u_t^{m - 1} }}\sum \limits _{i_2 ,...,i_m = 1}^n {b_{si_2 ...i_m } u_{i_2 } ...u_{i_m } } - b_{ss...s} \frac{u_s^{m - 1} }{u_t^{m - 1} } - \left( \frac{u_{min} }{u_t } \right) ^{m-1} r_s( {\mathcal {E}}) \nonumber \\&= \left( \frac{u_s }{u_t } \right) ^{m-1} \left( {\rho \left( {\mathcal {B}} \right) - b_{ss...s} } \right) - \left( \frac{u_{min} }{u_t } \right) ^{m-1} r_s( {\mathcal {E}})\equiv \rho _s ({\mathcal {B}}), \end{aligned}$$
(27)

and

$$\begin{aligned} \left| {\alpha _t - a_{tt...t} } \right|&\le \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}}^n {\left| {a_{ti_2 ...i_m } } \right| \left| {\frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} \right| } \nonumber \\&= \sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}}^n { \left( b_{ti_2 ...i_m } -e_{ti_2 ...i_m } \right) \left| {\frac{{(z_{i_2 } u_{i_2 } )...(z_{i_m } u_{i_m } )}}{{(z_t u_t )^{m - 1} }}} \right| } \nonumber \\&\le {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}}^n {b_{ti_2 ...i_m } \frac{{u_{i_2 } ...u_{i_m } }}{{u_t^{m - 1} }}} } -\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}}^n {e_{ti_2 ...i_m } \frac{{u_{i_2 } ...u_{i_m } }}{{u_t^{m - 1} }}} \nonumber \\&\le {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ,...,i_m = 1 \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \end{array}}^n {b_{ti_2 ...i_m } \frac{{u_{i_2 } ...u_{i_m } }}{{u_t^{m - 1} }}} }- b_{tt...t} - \left( {\frac{{u_{min} }}{{u_t }}} \right) ^{m - 1} r_{t}^s({\mathcal {E}}) \nonumber \\&= {\rho \left( {\mathcal {B}} \right) - b_{tt...t} - b_{ts...s} \left( {\frac{{u_s }}{{u_t }}} \right) ^{m - 1} } - \left( {\frac{{u_{min} }}{{u_t }}} \right) ^{m - 1} r_{t}^s({\mathcal {E}}) \equiv \rho _t ({\mathcal {B}}), \end{aligned}$$
(28)

where \( r_{t}^s({\mathcal {E}}) \) is defined in (1). By Lemma 2, we have:

$$\begin{aligned} \left( {\lambda - a_{ss...s} } \right) \left( {\lambda - \alpha _t } \right) = a_{ts...s} \alpha _s . \end{aligned}$$

Therefore:

$$\begin{aligned} \lambda = \frac{{\left( {\alpha _t + a_{ss...s} } \right) \pm \sqrt{\left( {\alpha _t + a_{ss...s} } \right) ^2 - 4\left( {a_{ss...s} \alpha _t - a_{ts...s} \alpha _s } \right) } }}{2}. \end{aligned}$$

Using the inequalities (27) and (28), we obtain:

$$\begin{aligned} \left| {\lambda - \frac{{\left( {a_{ss...s} + a_{tt...t} } \right) }}{2}} \right|&= \left| {\frac{{\left( {\alpha _t - a_{tt...t} } \right) \pm \sqrt{\left( {a_{ss...s} - \alpha _t } \right) ^2 + 4a_{ts...s} \alpha _s } }}{2}} \right| \\&\le \frac{1}{2}\left| { \rho _t ({\mathcal {B}}) + \sqrt{(\left| {a_{s...s} - a_{t...t} } \right| + \rho _t ({{\mathcal {B}} }) )^2 + 4 b_{ts...s} \rho _s ({\mathcal {B}}) } } \right| . \end{aligned}$$

Since s is arbitrary, we have:

$$\begin{aligned} \sigma \left( {\mathcal {A}} \right) \subseteq \bigcap \limits _{s \in \left\langle n \right\rangle } {\left( {\bigcup \limits _{\begin{array}{c} \scriptstyle t \in \left\langle n \right\rangle \\ \scriptstyle t \ne s \end{array}} {{\mathcal {O}}_{s,t} } } \right) }. \end{aligned}$$

The proof of the theorem is complete. \(\square \)

Theorem 13

Let \({\mathcal {A}}, {\mathcal {B}}\) be tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}.\) Then:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \bigcap \limits _{s \in \left\langle n \right\rangle } {\left( {\bigcup \limits _{\begin{array}{c} \scriptstyle t \in \left\langle n \right\rangle \\ \scriptstyle s \ne t \end{array}} {{{\mathcal {M}} }_{s,t} } } \right) } : = {{\mathcal {K}} }_{2} ({\mathcal {A}}), \end{aligned}$$

where

$$\begin{aligned} {{\mathcal {M}}}_{s,t} = \left\{ z \in {\mathbb {C}} :\left| {z - a_{ss...s} } \right| \left| {z - a_{tt...t} } \right| \le \left| {z - a_{ss...s} } \right| \rho _t ({\mathcal {B}}) + b_{ts...s} \; \rho _s ({\mathcal {B}}) \right\} , \end{aligned}$$

and \(\rho _t ({\mathcal {B}}) , \; \rho _s ({\mathcal {B}}) \) are the same as those in Theorem 12.

Proof

First, for all \(s \in \left\langle n \right\rangle \) and \( \, t \in T',\) according to the definition of \({T'},\) we have \(T' \subseteq T.\) We write Eq. (20) in the form:

$$\begin{aligned}&\lambda ^2 - \lambda \left( {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ...i_m \in \left\langle n \right\rangle \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} + a_{tt...t} } \right) - a_{s...s} \lambda \\&\quad + a_{s...s} \left( {\sum \limits _{\begin{array}{c} \scriptstyle i_2 ...i_m \in \left\langle n \right\rangle \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} + a_{tt...t} } \right) \\&\quad = a_{ts...s} \sum \limits _{\begin{array}{c} \scriptstyle i_2 ...i_m \in \left\langle n \right\rangle \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}} {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}}. \end{aligned}$$

Therefore, we have:

$$\begin{aligned}&\left( {\lambda - a_{ss...s} } \right) \left( {\lambda - a_{tt...t} } \right) \nonumber \\&\quad = \left( {\lambda - a_{ss...s} } \right) \sum \limits _{\begin{array}{c} \scriptstyle i_2 ...i_m \in \left\langle n \right\rangle \\ \scriptstyle \delta _{\begin{array}{c} si_2 ...i_m \end{array}} = 0 \\ \scriptstyle \delta _{ti_2 ...i_m } = 0 \end{array}} {a_{ti_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}} + a_{ts...s} \sum \limits _{\begin{array}{c} \scriptstyle i_2 ...i_m \in \left\langle n \right\rangle \\ \scriptstyle \delta _{si_2 ...i_m } = 0 \end{array}} {a_{si_2 ...i_m } \frac{{x_{i_2 } ...x_{i_m } }}{{x_t^{m - 1} }}}. \end{aligned}$$

Using the inequalities (27) and (28), we have:

$$\begin{aligned} \left| {\lambda - a_{ss...s} } \right| \left| {\lambda - a_{tt...t} } \right| \le \left| {\lambda - a_{ss...s} } \right| \rho _t ( {\mathcal {B}} ) + \rho _s ( {\mathcal {B}} ) \; b_{ts...s}, \end{aligned}$$

i.e., \(\lambda \in {\mathcal {M}}_{s,t}.\) Since s is arbitrary, we have:

$$\begin{aligned} \sigma ({\mathcal {A}}) \subseteq \bigcap \limits _{s \in \left\langle n \right\rangle } {\left( {\bigcup \limits _{\begin{array}{c} \scriptstyle t \in \left\langle n \right\rangle \\ \scriptstyle s \ne t \end{array}} {{{\mathcal {M}} }_{s,t} } } \right) } : = {{\mathcal {K}} }_{2} ({\mathcal {A}}). \end{aligned}$$

\(\square \)

By estimating the ratio of the smallest component and the largest component of the Perron vector of \( {\mathcal {B}} \) (Lemma 1), the following result is obtained.

Theorem 14

Let \({\mathcal {A}} , \; {\mathcal {B}}\) be tensors of order m dimension n with \(\left| {\mathcal {A}} \right| \le {\mathcal {B}}.\) Then, every eigenvalue of \({\mathcal {A}}\) lies in the region:

$$\begin{aligned} \bigcap \limits _{i \in \left\langle n \right\rangle } {\left( {\bigcup \limits _{\begin{array}{c} \scriptstyle j \in \left\langle n \right\rangle \\ \scriptstyle i \ne j \end{array}} {\tilde{{\mathcal {M}}}_{i,j} } } \right) } : = \tilde{{\mathcal {K}}}_2 \left( {\mathcal {A}} \right) , \end{aligned}$$

where

$$\begin{aligned} \tilde{{\mathcal {M}}}_{i,j} : = \left\{ {z \in {\mathbb {C}}:\left| {z - a_{ii...i} } \right| \left| {z - a_{jj...j} } \right| \le \left| {z - a_{ii...i} } \right| {\tilde{\rho }}_j \left( {\mathcal {B}} \right) + b_{ji...i} {\tilde{\rho }}_i \left( {\mathcal {B}} \right) } \right\} , \end{aligned}$$

\( {\tilde{\rho }}_j ({\mathcal {B}})= \rho ({\mathcal {B}}) - b_{jj...j} - \left( \tau ({\mathcal {B}}) \right) ^{m-1} (b_{ji...i} +r_{j}^i({\mathcal {E}})), \)

\( {\tilde{\rho }}_i ({\mathcal {B}})=( \rho ({\mathcal {B}}) - b_{ii...i} ) (\frac{1}{\tau ({\mathcal {B}})})^{m-1} - \left( \tau ({\mathcal {B}}) \right) ^{m-1} r_{i}({\mathcal {E}}), \) and \( \tau ({\mathcal {B}}) \) is given in (8).

In Example 4.11, Wang et al. (2020) show that their bounds are better than the bound (10). Using the same example, we show that our bounds are better than the others given in the literature.

Example 3

(Wang et al. 2020, Example 4.11) Consider the order 3 dimension 2 tensors \({\mathcal {A}} = (a_{ijk}) \) and \( {\mathcal {B}} = (b_{ijk}) \) defined by:

$$\begin{aligned} \begin{array}{l} a_{ijk} = \left\{ {\begin{array}{*{20}c} {a_{111} = 0;\;\;\;a_{112} = 0;\;\;\;a_{121} = 0;\;\;\;a_{122} = - 1;} \\ {a_{211} = - 4;\;\;\;a_{212} = 0;\;\;\;a_{221} = 0;\;\;\;a_{222} = 0;} \\ \end{array}} \right. \\ \\ b_{ijk} = \left\{ {\begin{array}{*{20}c} {b_{111} = 1;\;\;\;b_{112} = {\textstyle {1 \over 4}};\;\;\;b_{121} = {\textstyle {1 \over 4}};\;\;\;b_{122} = 2;} \\ {b_{211} = 4;\;\;\;b_{212} = {\textstyle {1 \over 2}};\;\;\;b_{221} = {\textstyle {1 \over 2}};\;\;\;b_{222} = {\textstyle {3 \over 2}}.} \\ \end{array}} \right. \\ \end{array} \end{aligned}$$

Obviously, \({\mathcal {B}} \ge |{\mathcal {A}}|\) and \( {\mathcal {A}} \) is weakly irreducible. By computations (Liu and Chen 2019), we can calculate:

$$\begin{aligned} \left\{ \rho ({\mathcal {B}}) , x \right\} =\left\{ 4.8095, (0.6930, 0.8738) \right\} , \quad r_1({\mathcal {E}})=\frac{3}{2}, \;\; r_2({\mathcal {E}})=1, \end{aligned}$$

where \( {\mathcal {E}}={\mathcal {B}}-|{\mathcal {A}}|. \) From Theorem 3.3 of Li and Ng (2015), we may compute \( \kappa ({\mathcal {B}}) =\frac{5}{18}\) and:

$$\begin{aligned} |\lambda | \le 4.8095 -1 -\left( \frac{5}{18} \right) ^2 \times \frac{3}{2}=3.6938. \end{aligned}$$

From Theorem 4.6 of Wang et al. (2020), we may compute \( P=\max \left\{ \left( \frac{5}{18} \right) ^2 , \frac{1}{4} , 0.1603 \right\} =\frac{1}{4}\) and:

$$\begin{aligned} |\lambda | \le \sqrt{(4.8095 -1) \times \left( 4.8095 -\frac{3}{2} \right) - \left( \frac{1}{4} \right) ^2 \times \frac{3}{2} \times 1 } =3.5375. \end{aligned}$$

By Theorem 4.7 of Wang et al. (2020) (Theorem 7), we get:

$$\begin{aligned} |\lambda | \le \sqrt{(4.8095 -1) \times \left( 4.8095 -\frac{3}{2} \right) } =3.5507. \end{aligned}$$

From Theorem 4.9 of Wang et al. (2020), we obtain:

$$\begin{aligned} |\lambda | \le 4.8095 -1 -\frac{1}{4} \times \frac{3}{2}=3.4345. \end{aligned}$$

From Lemma  1, we may compute \( \tau ({\mathcal {B}})=\max \left\{ \frac{5}{18} , \frac{1}{2} \right\} =\frac{1}{2},\) so that \( (\tau ({\mathcal {B}}))^{m-1}=\frac{1}{4}, \) and therefore, by Theorem 10, we have:

$$\begin{aligned} |\lambda | \le \sqrt{ \left[ 4.8095 -1 - \left( \frac{1}{4} \times \frac{3}{2} \right) \right] \times \left[ 4.8095 -\frac{3}{2} - \left( \frac{1}{4} \times 1 \right) \right] } =3.2416. \end{aligned}$$

From Corollary 2, we obtain:

$$\begin{aligned} |\lambda | \le \left( \left[ 4.8095 -1 - \left( \frac{1}{4} \times \frac{3}{2} \right) \right] ^2 \times \left[ 4.8095 -\frac{3}{2} - \left( \frac{1}{4} \times 1 \right) \right] \right) ^{\frac{1}{3}} =3.3047. \end{aligned}$$

From Theorem  12 by \( s=2 , \; t=1, \) we may compute:

$$\begin{aligned} \rho _1({\mathcal {B}})&= (4.8095 -1) -2 \times \left( \frac{0.8738}{0.6930} \right) ^2 - \frac{1}{2} \times \left( \frac{0.6930}{0.6930} \right) ^2 =0.1298,\\ \rho _2({\mathcal {B}})&= \left( 4.8095 -\frac{3}{2} \right) \times \left( \frac{0.8738}{0.6930} \right) ^2 - 1 \times \left( \frac{0.6930}{0.6930} \right) ^2 =4.2616, \;\; \textit{and} \\ |\lambda |&\le \frac{1}{2} \left[ 0.1298 + \sqrt{ (0.1298)^2 + 4 \times 2 \times 4.2616 } \right] =2.9851. \end{aligned}$$

Also from Theorem  13 by \( s=2 , \; t=1, \) we obtain:

$$\begin{aligned} |\lambda |^2 \le |\lambda | \times 0.1298 + 2 \times 4.2616, \end{aligned}$$

and this eigenvalue inclusion set is again equivalent to \( |\lambda | \le 2.9851.\) This example shows that if we use the ratio of the smallest and largest entries of the Perron vector \((\tau ({\mathcal {B}})),\) then the bound of Theorem 10 is tighter than the other bounds. However, if the Perron vector of \( {\mathcal {B}} \) is available, then the bounds of Theorems 12 and 13 are tighter still.
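
For completeness, a short recomputation (ours) of the quantities used for Theorem 12 in this example; \(\rho ({\mathcal {B}}),\) the Perron vector, and the radii of \({\mathcal {E}}\) are the values quoted above, and the Python indices s = 1, t = 0 correspond to the paper's \(s=2, \; t=1.\)

```python
import numpy as np

# Quantities quoted in Example 3 (rho(B) and its Perron vector are taken from the text).
rho, u = 4.8095, np.array([0.6930, 0.8738])
b111, b122, b222 = 1.0, 2.0, 1.5
r1E, r2E, r1E_minus_e122 = 1.5, 1.0, 0.5           # r_1(E), r_2(E), r_1^2(E) = r_1(E) - e_122

s, t = 1, 0                                         # s = 2, t = 1 in the paper's indexing
rho_t = rho - b111 - b122 * (u[s] / u[t])**2 - r1E_minus_e122 * (u.min() / u[t])**2
rho_s = (rho - b222) * (u[s] / u[t])**2 - r2E * (u.min() / u[t])**2
bound = 0.5 * (rho_t + np.sqrt(rho_t**2 + 4 * b122 * rho_s))   # Theorem 12 with a_111 = a_222 = 0
print(rho_t, rho_s, bound)                          # approx 0.1298, 4.2616, 2.9851
```

Solving the Theorem 13 inequality \( |\lambda |^2 \le 0.1298\,|\lambda | + 2 \times 4.2616 \) for its positive root gives the same value, 2.9851, as noted above.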

4 Conclusion

In this paper, we proposed some new improvements of the Ky Fan theorem for tensors. We obtained a new lower bound for the ratio of the smallest and largest entries of the Perron vector using a technique due to Wang et al. (2020) and Li and Ng (2015). Based on this new lower bound, we obtained Theorems 10 and 12, which improve the bounds in Li and Ng (2015) and Wang et al. (2020). Each of these new bounds is always better than the existing ones. Several numerical examples are given to show that our bounds are better than others given in the literature.