Abstract

This paper develops a perturbation theory for the T-eigenvalues of third-order tensors under the tensor-tensor multiplication (t-product), a notion that underpins the extension of semidefinite programming to tensor fields, known as T-semidefinite programming. The first main part analyzes the sensitivity of the T-eigenvalues of third-order tensors with square frontal slices and extends three classical matrix results to the tensor setting. First, a Gershgorin disc theorem for tensors is established, showing that all T-eigenvalues lie in a union of Gershgorin discs. Next, generalizations of the Bauer-Fike theorem are given, covering both F-diagonalizable and non-F-diagonalizable tensors. Finally, a Kahan-type theorem is proved, which addresses the perturbation of a Hermitian tensor by an arbitrary tensor. Connections between the T-eigenvalue problem and several optimization problems are also established. The second main part is devoted to tensor pseudospectra theory: four equivalent definitions of the tensor \(\varepsilon \)-pseudospectrum are presented, their properties are analyzed and illustrated with visualizations, and tensor \(\varepsilon \)-pseudospectra are applied to identify more T-positive definite tensors.

Data availability

No data was used for the research described in the article.

References

1. Bauer, F.L., Fike, C.T.: Norms and exclusion theorems. Numer. Math. 2, 137–141 (1960)
2. Beik, F., Saad, Y.: On the tubular eigenvalues of third-order tensors. arXiv preprint arXiv:2305.06323 (2023)
3. Braman, K.: Third-order tensors as linear operators on a space of matrices. Linear Algebra Appl. 433(7), 1241–1253 (2010)
4. Brazell, M., Li, N., Navasca, C., Tamon, C.: Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 34(2), 542–570 (2013)
5. Cao, Z., Xie, P.: Perturbation analysis for t-product-based tensor inverse, Moore-Penrose inverse and tensor system. Commun. Appl. Math. Comput. 4(4), 1441–1456 (2022)
6. Cao, Z., Xie, P.: On some tensor inequalities based on the t-product. Linear Multilinear Algebra 71(3), 377–390 (2023)
7. Chang, S.Y., Wei, Y.: T-product tensors—Part II: tail bounds for sums of random T-product tensors. Comput. Appl. Math. 41(3), Paper No. 99, 32 pp. (2022)
8. Chang, S.Y., Wei, Y.: T-square tensors—Part I: inequalities. Comput. Appl. Math. 41(1), Paper No. 62, 27 pp. (2022)
9. Chen, C., Surana, A., Bloch, A.M., Rajapakse, I.: Multilinear control systems theory. SIAM J. Control Optim. 59(1), 749–776 (2021)
10. Chen, J., Ma, W., Miao, Y., Wei, Y.: Perturbations of Tensor-Schur decomposition and its applications to multilinear control systems and facial recognitions. Neurocomputing 547, Art. 126359 (2023)
11. Chu, K.-W.E.: Generalization of the Bauer-Fike theorem. Numer. Math. 49(6), 685–691 (1986)
12. Cui, Y.-N., Ma, H.-F.: The perturbation bound for the T-Drazin inverse of tensor and its application. Filomat 35(5), 1565–1587 (2021)
13. Davis, P.J.: Circulant Matrices, 2nd edn. Wiley, New York (1979)
14. Golub, G.H., Van Loan, C.F.: Matrix Computations, 4th edn. Johns Hopkins University Press, Baltimore (2013)
15. Greenbaum, A., Li, R.C., Overton, M.L.: First-order perturbation theory for eigenvalues and eigenvectors. SIAM Rev. 62(2), 463–482 (2020)
16. Hachimi, A.E., Jbilou, K., Ratnani, A., Reichel, L.: Spectral computation with third-order tensors using the t-product. Appl. Numer. Math. 193, 1–21 (2023)
17. Han, F., Miao, Y., Sun, Z., Wei, Y.: T-ADAF: adaptive data augmentation framework for image classification network based on tensor T-product operator. Neural Process. Lett. 55, 10993–11016 (2023)
18. Hao, N., Kilmer, M.E., Braman, K., Hoover, R.C.: Facial recognition using tensor-tensor decompositions. SIAM J. Imaging Sci. 6(1), 437–463 (2013)
19. Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)
20. Kato, T.: Perturbation Theory for Linear Operators. Springer-Verlag, New York (1966)
21. Kilmer, M.E., Braman, K., Hao, N.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. Technical Report 2011-01, Tufts University (2011). https://www.cs.tufts.edu/t/tr/techreps/TR-2011-01
22. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)
23. Kilmer, M.E., Horesh, L., Avron, H., Newman, E.: Tensor-tensor algebra for optimal representation and compression of multiway data. Proc. Natl. Acad. Sci. USA 118(28), Paper No. e2015851118, 12 pp. (2021)
24. Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435(3), 641–658 (2011)
25. Kilmer, M.E., Martin, C.D., Perrone, L.: A third-order generalization of the matrix SVD as a product of third-order tensors. Technical Report 2008-4, Tufts University (2008). https://www.cs.tufts.edu/t/tr/techreps/TR-2008-4
26. Kostić, V.R., Cvetković, Lj., Cvetković, D.Lj.: Pseudospectra localizations and their applications. Numer. Linear Algebra Appl. 23(2), 356–372 (2016)
27. Li, C., Liu, Q., Wei, Y.: Pseudospectra localizations for generalized tensor eigenvalues to seek more positive definite tensors. Comput. Appl. Math. 38(4), Paper No. 183, 22 pp. (2019)
28. Liu, W.-H., Jin, X.-Q.: A study on T-eigenvalues of third-order tensors. Linear Algebra Appl. 612, 357–374 (2021)
29. Liu, Y., Chen, L., Zhu, C.: Improved robust tensor principal component analysis via low-rank core matrix. IEEE J. Sel. Top. Signal Process. 12(6), 1378–1389 (2018)
30. Liu, Y., Ma, H.: Weighted generalized tensor functions based on the tensor-product and their applications. Filomat 36(18), 6403–6426 (2022)
31. Lu, C., Feng, J., Chen, Y., Liu, W., Lin, Z., Yan, S.: Tensor robust principal component analysis with a new tensor nuclear norm. IEEE Trans. Pattern Anal. Mach. Intell. 42(4), 925–938 (2019)
32. Lund, K.: The tensor t-function: a definition for functions of third-order tensors. Numer. Linear Algebra Appl. 27(3), e2288, 17 pp. (2020)
33. Lund, K., Schweitzer, M.: The Fréchet derivative of the tensor t-function. Calcolo 60(3), Paper No. 35, 34 pp. (2023)
34. Luo, Y.S., Zhao, X.L., Jiang, T.X., Chang, Y., Ng, M.K., Li, C.: Self-supervised nonlinear transform-based tensor nuclear norm for multi-dimensional image recovery. IEEE Trans. Image Process. 31, 3793–3808 (2022)
35. Miao, Y., Qi, L., Wei, Y.: Generalized tensor function via the tensor singular value decomposition based on the T-product. Linear Algebra Appl. 590, 258–303 (2020)
36. Miao, Y., Qi, L., Wei, Y.: T-Jordan canonical form and T-Drazin inverse based on the T-product. Commun. Appl. Math. Comput. 3(2), 201–220 (2021)
37. Miao, Y., Wang, T., Wei, Y.: Stochastic conditioning of tensor functions based on the tensor-tensor product. Pac. J. Optim. 19(2), 205–235 (2023)
38. Mo, C., Li, C., Wang, X., Wei, Y.: \(Z\)-eigenvalues based structured tensors: \(\cal{M}_z\)-tensors and strong \(\cal{M}_z\)-tensors. Comput. Appl. Math. 38(4), Paper No. 175, 25 pp. (2019)
39. Mo, C., Wang, X., Wei, Y.: Time-varying generalized tensor eigenanalysis via Zhang neural networks. Neurocomputing 407, 465–479 (2020)
40. Newman, E., Kilmer, M.E.: Nonnegative tensor patch dictionary approaches for image compression and deblurring applications. SIAM J. Imaging Sci. 13(3), 1084–1112 (2020)
41. Olson, B.J., Shaw, S.W., Shi, C., Pierre, C., Parker, R.G.: Circulant matrices and their application to vibration analysis. Appl. Mech. Rev. 66(4), 040803 (2014)
42. Pakmanesh, M., Afshin, H.: \(M\)-numerical ranges of odd-order tensors based on operators. Ann. Funct. Anal. 13(3), Paper No. 37, 22 pp. (2022)
43. Qi, L.: Eigenvalues of a real supersymmetric tensor. J. Symbolic Comput. 40(6), 1302–1324 (2005)
44. Qi, L., Zhang, X.: T-quadratic forms and spectral analysis of T-symmetric tensors. arXiv preprint arXiv:2101.10820 (2021)
45. Rayleigh, L.: The Theory of Sound, vol. I. Macmillan, London (1927)
46. Rellich, F.: Perturbation Theory of Eigenvalue Problems. Gordon and Breach Science Publishers, New York-London-Paris (1969)
47. Schrödinger, E.: Quantisierung als Eigenwertproblem. Annalen Phys. 386(18), 109–139 (1926)
48. Shi, X., Wei, Y.: A sharp version of Bauer-Fike's theorem. J. Comput. Appl. Math. 236(13), 3218–3227 (2012)
49. Stewart, G.W., Sun, J.G.: Matrix Perturbation Theory. Computer Science and Scientific Computing. Academic Press Inc., Boston, MA (1990)
50. Sun, J.: Matrix Perturbation Analysis (in Chinese). Academic Press, Beijing (1987)
51. Tang, L., Yu, Y., Zhang, Y., Li, H.: Sketch-and-project methods for tensor linear systems. Numer. Linear Algebra Appl. 30(2), Paper No. e2470, 32 pp. (2023)
52. Trefethen, L.N., Embree, M.: Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press, Princeton, NJ (2005)
53. Turatti, E.: On tensors that are determined by their singular tuples. SIAM J. Appl. Algebra Geom. 6(2), 319–338 (2022)
54. Wang, X., Che, M., Mo, C., Wei, Y.: Solving the system of nonsingular tensor equations via randomized Kaczmarz-like method. J. Comput. Appl. Math. 421, Paper No. 114856, 15 pp. (2023)
55. Wang, X., Wei, P., Wei, Y.: A fixed point iterative method for third-order tensor linear complementarity problems. J. Optim. Theory Appl. 197(1), 334–357 (2023)
56. Wang, Y., Yang, Y.: Hot-SVD: higher order t-singular value decomposition for tensors based on tensor-tensor product. Comput. Appl. Math. 41(8), Paper No. 394, 33 pp. (2022)
57. Wei, P., Wang, X., Wei, Y.: Neural network models for time-varying tensor complementarity problems. Neurocomputing 523, 18–32 (2023)
58. Wu, T.: Graph regularized low-rank representation for submodule clustering. Pattern Recognit. 100, Art. 107145 (2020)
59. Yang, Y., Zhang, J.: Perron-Frobenius type theorem for nonnegative tubal matrices in the sense of \(t\)-product. J. Math. Anal. Appl. 528(2), Paper No. 127541, 17 pp. (2023)
60. Zhao, X.L., Xu, W.H., Jiang, T.X., Wang, Y., Ng, M.K.: Deep plug-and-play prior for low-rank tensor completion. Neurocomputing 400, 137–149 (2020)
61. Zheng, M.M., Huang, Z.H., Wang, Y.: T-positive semidefiniteness of third-order symmetric tensors and T-semidefinite programming. Comput. Optim. Appl. 78(1), 239–272 (2021)

Acknowledgements

The authors would like to thank the handling editor and two referees for their very detailed comments. Changxin Mo acknowledges support from the National Natural Science Foundation of China (Grant No. 12201092), the Natural Science Foundation Project of CQ CSTC (Grant No. CSTB2022NSCQ-MSX0896), the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJQN202200512), the Chongqing Talents Project (Grant No. cstc2022ycjh-bgzxm0040), and the Research Foundation of Chongqing Normal University (Grant No. 21XLB040), P. R. of China. Weiyang Ding’s research is supported by the Science and Technology Commission of Shanghai Municipality under grants 23ZR1403000, 20JC1419500, and 2018SHZDZX0. Yimin Wei is supported by the National Natural Science Foundation of China under Grant 12271108, the Ministry of Science and Technology of China under grant G2023132005L and the Science and Technology Commission of Shanghai Municipality under grant 23JC1400501.

Corresponding author

Correspondence to Yimin Wei.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Communicated by Guoyin Li.

Appendix

Proof of Theorem 3

The theorem is evident when \(\mu \in \varLambda (\mathcal {A})\), as the left-hand sides of (12) and (13) vanish. Therefore, we assume that \(\mu \notin \varLambda (\mathcal {A})\). By Lemma 1, we can see that

$$\begin{aligned} \mu I_{mn} - {\text {bcirc}}(\mathcal {A} + \varepsilon \mathcal {B}) = \mu I_{mn} - {\text {bcirc}}(\mathcal {A}) - {\text {bcirc}}(\varepsilon \mathcal {B}), \end{aligned}$$

and moreover \(\mu I_{mn} - {\text {bcirc}}(\mathcal {A} + \varepsilon \mathcal {B}) \) is singular. This means that

$$\begin{aligned} (F_n^{\textrm{H}} \otimes I_m) {\text {bcirc}}(\mathcal {Q})^{-1} [ \mu I_{mn} - {\text {bcirc}}(\mathcal {A}) - {\text {bcirc}}(\varepsilon \mathcal {B})] {\text {bcirc}}(\mathcal {Q}) (F_n \otimes I_m) \end{aligned}$$
(16)

is also singular since the matrices multiplied on the left and right sides are nonsingular. Notice that

$$\begin{aligned}&(F_n^{\textrm{H}} \otimes I_m) {\text {bcirc}}(\mathcal {Q})^{-1} {\text {bcirc}}(\mathcal {A}) {\text {bcirc}}(\mathcal {Q}) (F_n \otimes I_m)\\&\quad = (F_n^{\textrm{H}} \otimes I_m) [ {\text {bcirc}}(\mathcal {D}) + {\text {bcirc}}(\mathcal {N}) ] (F_n \otimes I_m)\\&\quad = \left[ \begin{array}{llll} D^{(1)} &{} &{} &{} \\ &{} D^{(2)} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} D^{(n)} \end{array}\right] + \left[ \begin{array}{llll} N^{(1)} &{} &{} &{} \\ &{} N^{(2)} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} N^{(n)} \end{array}\right] \\&\quad := D+N. \end{aligned}$$

Therefore (16) can be rewritten as

$$\begin{aligned} \mu I_{mn} - D - N - (F_n^{\textrm{H}} \otimes I_m) {\text {bcirc}}(\mathcal {Q})^{-1} {\text {bcirc}}(\varepsilon \mathcal {B}) {\text {bcirc}}(\mathcal {Q}) (F_n \otimes I_m), \end{aligned}$$

and thus the following matrix

$$\begin{aligned} I_{mn} - (\mu I_{mn} - D - N)^{-1} (F_n^{\textrm{H}} \otimes I_m) {\text {bcirc}}(\mathcal {Q})^{-1} {\text {bcirc}}(\varepsilon \mathcal {B}) {\text {bcirc}}(\mathcal {Q}) (F_n \otimes I_m) \nonumber \\ \end{aligned}$$
(17)

is singular.

By the assumption that \(|N|^q = 0\), and noting that \(\mu I_{mn} - D \) is a nonsingular diagonal matrix, it follows that \(((\mu I_{mn} - D)^{-1}N)^q = 0\). Hence, the finite Neumann series expansion gives

$$\begin{aligned} ((\mu I_{mn}-D)-N)^{-1}=\sum _{k=0}^{q-1}\left( (\mu I_{mn}-D)^{-1} N\right) ^{k}(\mu I_{mn}-D)^{-1}, \end{aligned}$$

and

$$\begin{aligned} \Vert ((\mu I_{mn}-D)-N)^{-1}\Vert \le \frac{1}{ \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |} \sum _{k=0}^{q-1}\left( \frac{\Vert N\Vert }{ \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |}\right) ^{k} \end{aligned}$$

holds for the 1-, 2-, and \(\infty \)-norms.

If \(\min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |\ge 1\), then

$$\begin{aligned} \Vert ((\mu I_{mn}-D)-N)^{-1}\Vert \le \frac{1}{ \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |} \sum _{k=0}^{q-1}\Vert N\Vert ^{k}. \end{aligned}$$

If \(\min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |<1\), then

$$\begin{aligned} \Vert ((\mu I_{mn}-D)-N)^{-1}\Vert \le \frac{1}{ (\min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |)^q} \sum _{k=0}^{q-1}\Vert N\Vert ^{k}. \end{aligned}$$

Since the matrix in (17) is singular, the subtracted term must have norm at least one, that is,

$$\begin{aligned} \begin{aligned} 1&\le \left\| (\mu I_{mn} - D - N)^{-1} (F_n^{\textrm{H}} \otimes I_m) \cdot {\text {bcirc}}(\mathcal {Q})^{-1} {\text {bcirc}}(\varepsilon \mathcal {B}) {\text {bcirc}}(\mathcal {Q}) \cdot (F_n \otimes I_m)\right\| \\&\le \left\| (\mu I_{mn} - D - N)^{-1}\right\| \Vert {\text {bcirc}}(\varepsilon \mathcal {B}) \Vert \Vert F_n^{\textrm{H}} \otimes I_m\Vert \Vert F_n \otimes I_m\Vert \Vert {\text {bcirc}}(\mathcal {Q})^{-1} \Vert \Vert {\text {bcirc}}(\mathcal {Q}) \Vert . \end{aligned} \end{aligned}$$

Combining the above estimates, in the spectral-norm case we obtain

$$\begin{aligned} \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu | \le \Vert {\text {bcirc}}(\varepsilon \mathcal {B}) \Vert _2 \sum _{k=0}^{q-1}\Vert N\Vert _2^{k} \end{aligned}$$

or

$$\begin{aligned} (\min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |)^q \le \Vert {\text {bcirc}}(\varepsilon \mathcal {B}) \Vert _2 \sum _{k=0}^{q-1}\Vert N\Vert _2^{k} \end{aligned}$$

for

$$\begin{aligned} \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |\ge 1 \quad \text {or } \quad \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu |<1, \end{aligned}$$

respectively. Let \(\theta = \Vert {\text {bcirc}}(\varepsilon \mathcal {B}) \Vert _2 \sum _{k=0}^{q-1}\Vert N\Vert _2^{k} \). Then we obtain the result (12) for the spectral norm. As in the proof of Theorem 2, the Frobenius-norm case follows analogously.

For the 1- and \(\infty \)-norms, by using (17) again we get

$$\begin{aligned} \min _{\lambda \in \varLambda (\mathcal {A})}|\lambda -\mu | \le \max \{\theta _p, \theta _p^{1/q}\}, \end{aligned}$$

where

$$\begin{aligned} \theta _p = \Vert {\text {bcirc}}(\varepsilon \mathcal {B}) \Vert _p \kappa _{p}(\mathcal {Q}) \kappa _{p}(F_n \otimes I_m) \sum _{k=0}^{q-1}\Vert N\Vert _2^{k}. \end{aligned}$$

The proof is completed. \(\square \)
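
The key step in the proof above is that \(F_n \otimes I_m\) block-diagonalizes \({\text {bcirc}}(\mathcal {A})\), so the T-eigenvalues of \(\mathcal {A}\) are exactly the eigenvalues of the Fourier-domain frontal slices \(A^{(i)}\). The short NumPy sketch below is an illustration added here for concreteness, not part of the original proof; the random tensor, its dimensions, and all variable names (e.g. `bcirc`, `A_hat`) are assumptions chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3                         # illustrative sizes: m x m frontal slices, n of them
A = rng.standard_normal((m, m, n))  # A[:, :, k] is the k-th frontal slice

def bcirc(T):
    """Block-circulant matrix of a third-order (m x m x n) tensor."""
    m, _, n = T.shape
    B = np.zeros((m * n, m * n))
    for i in range(n):              # block row
        for j in range(n):          # block column
            B[i*m:(i+1)*m, j*m:(j+1)*m] = T[:, :, (i - j) % n]
    return B

# Fourier-domain frontal slices A^{(i)}: DFT along the third (tube) mode.
A_hat = np.fft.fft(A, axis=2)

# T-eigenvalues = eigenvalues of bcirc(A) = union of the eigenvalues of the A^{(i)}.
ev_bcirc  = np.linalg.eigvals(bcirc(A))
ev_slices = np.concatenate([np.linalg.eigvals(A_hat[:, :, i]) for i in range(n)])

# Every eigenvalue computed one way matches one computed the other way.
dist = np.abs(ev_bcirc[:, None] - ev_slices[None, :])
print(dist.min(axis=1).max() < 1e-10, dist.min(axis=0).max() < 1e-10)  # expected: True True
```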

Proof of Theorem 4

We only need to consider the case in which \(\mu \) is not a T-eigenvalue of \(\mathcal {A}\); hence \(\mu I_{mn} - \tilde{D}-\tilde{N}\) is nonsingular. Analogously to (17), the matrix

$$\begin{aligned} I_{mn} - (\mu I_{mn}- \tilde{D} - \tilde{N})^{-1} (F_n \otimes I_m) {\text {bcirc}}(\mathcal {X})^{-1} {\text {bcirc}}(\varepsilon \mathcal {B}) {\text {bcirc}}(\mathcal {X}) (F_n^{\textrm{H}} \otimes I_m) \end{aligned}$$

is singular. Following the same argument as in the proof of Theorem 3, we obtain the conclusion. \(\square \)

Proof of Theorem 8

To prove assertion (I), we use (14) and Remark 5, which allow us to exploit the properties of \(\varLambda _{\varepsilon }(A^{(i)})\) for each matrix \(A^{(i)}\). It is well known that \(\varLambda _{\varepsilon }(A^{(i)})\) is nonempty, open, and bounded, with at most \(m\) connected components, each containing one or more eigenvalues of \(A^{(i)}\) [52, Theorem 2.4]. By the above analysis, the same properties hold for the tensor \(\mathcal {A}\), and the number of connected components is bounded by \(nm\), owing to the relationship

$$\begin{aligned} \varLambda _{\varepsilon }(\mathcal {A})= \bigcup _{i=1}^{n} \varLambda _{\varepsilon }(A^{(i)}). \end{aligned}$$

Now, we proceed to part (II). Denote \(\left( F_{n}^{\textrm{H}} \otimes I_{m}\right) {\text {bcirc}}(\mathcal {A}) \left( F_{n}\otimes I_{m}\right) :=A\). First, note that

$$\begin{aligned} {\text {bcirc}}(\mathcal {A}+c\mathcal {E})&= \left[ \begin{array}{ccccc} {A_{1} + cI_m} &{} {A_{n}} &{} {A_{n-1}} &{} {\cdots } &{} {A_{2}} \\ {A_{2}} &{} {A_{1} + c I_m } &{} {A_{n}} &{} {\cdots } &{} {A_{3}} \\ {\vdots } &{} {\ddots } &{} {\ddots } &{} {\ddots } &{} {\vdots } \\ {A_{n}} &{} {A_{n-1}} &{} {\ddots } &{} {A_{2}} &{} {A_{1} + cI_m } \end{array}\right] \\&= {\text {bcirc}}(\mathcal {A}) + c\, {\text {bcirc}}(\mathcal {E}) \\&= \left( F_{n} \otimes I_{m}\right) A \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) + c\left[ \left( F_{n} \otimes I_{m}\right) I_{mn} \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) \right] \\&= \left( F_{n} \otimes I_{m}\right) (A+cI_{mn}) \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) \\&= \left( F_{n} \otimes I_{m}\right) \left[ \begin{array}{cccc} A^{(1)}+cI_m &{} &{} &{} \\ &{} A^{(2)}+cI_m &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} A^{(n)}+cI_m \end{array}\right] \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) . \end{aligned}$$

Therefore, for any \(c\in \mathbb {C}\), we have

$$\begin{aligned} \varLambda _{\varepsilon }(\mathcal {A}+c)= \bigcup _{i=1}^{n} \varLambda _{\varepsilon }(A^{(i)}+cI_m) = \bigcup _{i=1}^{n} [\varLambda _{\varepsilon }(A^{(i)}) +c] = c + \varLambda _{\varepsilon }(\mathcal {A}). \end{aligned}$$

This completes the proof of part (II).

For part (III), by Lemma 1, we know that

$$\begin{aligned} {\text {bcirc}}(c \mathcal {A})=c {\text {bcirc}}(\mathcal {A}) = c [(F_{n} \otimes I_{m}) A \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) ] = \left( F_{n} \otimes I_{m}\right) (cA) \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) , \end{aligned}$$

which implies that

$$\begin{aligned} \varLambda _{|c| \varepsilon }(c \mathcal {A}) = \bigcup _{i=1}^{n} \varLambda _{|c|\varepsilon }(cA^{(i)}) = \bigcup _{i=1}^{n}c \varLambda _{\varepsilon }(A^{(i)}) = c \bigcup _{i=1}^{n} \varLambda _{\varepsilon }(A^{(i)}), \end{aligned}$$

since for any nonzero \(c\in \mathbb {C}\) and matrix \(A\in \mathbb {C}^{m\times m}\), the following equality

$$\begin{aligned} \varLambda _{|c|\varepsilon }(cA) = c \varLambda _{\varepsilon }(A) \end{aligned}$$

holds [52, Theorem 2.4]. Thus we get the result that \(\varLambda _{|c| \varepsilon }(c \mathcal {A})=c \varLambda _{\varepsilon }(\mathcal {A})\) for any nonzero \(c \in \mathbb {C}\).

Now, we prove the last part of this theorem. By Lemma 1, we know that

$$\begin{aligned} {\text {bcirc}}\left( \mathcal {A}^{\textrm{H}}\right) = \left( F_{n} \otimes I_{m}\right) A^{\textrm{H}} \left( F_{n}^{\textrm{H}}\otimes I_{m}\right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \varLambda _{\varepsilon }\left( \mathcal {A}^{{\textrm{H}} }\right) = \bigcup _{i=1}^{n} \varLambda _{\varepsilon }((A^{(i)})^{\textrm{H}} ) = \bigcup _{i=1}^{n}\overline{ \varLambda _{\varepsilon }(A^{(i)})} = \overline{ \bigcup _{i=1}^{n}\varLambda _{\varepsilon }(A^{(i)})} = \overline{ \varLambda _{\varepsilon }\left( \mathcal {A}\right) }, \end{aligned}$$

where the second equality applies the fact that \(\varLambda _{\varepsilon }(A^{\textrm{H}}) = \overline{ \varLambda _{\varepsilon }(A)}\) in the 2-norm case for any matrix \(A\in \mathbb {C}^{m\times m}\) [52, Theorem 2.4]. \(\square \)
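
Part (I) also suggests a direct way to compute and plot tensor \(\varepsilon \)-pseudospectra: since \(\varLambda _{\varepsilon }(\mathcal {A})= \bigcup _{i=1}^{n} \varLambda _{\varepsilon }(A^{(i)})\), one can evaluate \(\sigma _{\min }(zI_m - A^{(i)})\) on a grid of points \(z\) and mark those where the minimum over \(i\) falls below \(\varepsilon \), using the standard 2-norm characterization of matrix pseudospectra [52]. The sketch below is illustrative only; the grid, the random tensor, and the function name are assumptions, not part of the paper.

```python
import numpy as np

def in_t_pseudospectrum(A, zs, eps):
    """Boolean mask: which complex grid points z lie in the tensor eps-pseudospectrum.
    Uses Lambda_eps(A) = union over i of Lambda_eps(A^{(i)}), with
    z in Lambda_eps(M)  <=>  sigma_min(z I - M) < eps  (2-norm case)."""
    m, _, n = A.shape
    A_hat = np.fft.fft(A, axis=2)      # Fourier-domain frontal slices A^{(i)}
    I = np.eye(m)
    smin = np.empty((len(zs), n))
    for j, z in enumerate(zs):
        for i in range(n):
            # smallest singular value of z*I - A^{(i)}
            smin[j, i] = np.linalg.svd(z * I - A_hat[:, :, i], compute_uv=False)[-1]
    return smin.min(axis=1) < eps

# Example: sample a square region of the complex plane (illustrative values).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4, 3))
x = np.linspace(-4.0, 4.0, 81)
Z = (x[:, None] + 1j * x[None, :]).ravel()
mask = in_t_pseudospectrum(A, Z, eps=0.5)
print(mask.sum(), "of", mask.size, "grid points lie in the 0.5-pseudospectrum")
```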

Proof of Theorem 9

If \(\lambda \) is a T-eigenvalue of the tensor \(\mathcal {A}\), then it is an eigenvalue of the matrix \({\text {bcirc}}(\mathcal {A})\). Therefore, for any \(\mu \in \mathbb {C}\), \(\lambda + \mu \) is an eigenvalue of \({\text {bcirc}}(\mathcal {A}) + \mu I\). Since \(\Vert \mu I\Vert = |\mu |\), the definition of tensor pseudospectra yields \(\lambda + \mu \in \varLambda _{\varepsilon }(\mathcal {A})\) for any \(|\mu | < \varepsilon \). This proves (15).

For the normal tensor case, by Lemmas 4 and 1, we obtain

$$\begin{aligned} {\text {bcirc}}(\mathcal {U})^{\textrm{H}} {\text {bcirc}}(\mathcal {A})({\text {bcirc}}(\mathcal {U})) = {\text {bcirc}}(\mathcal {D}), \end{aligned}$$

and

$$\begin{aligned} (F_n^{\textrm{H}} \otimes I_m) {\text {bcirc}}(\mathcal {D}) (F_n\otimes I_m) = \left[ \begin{array}{cccc} D^{(1)} &{} &{} &{} \\ &{} D^{(2)} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} D^{(n)} \end{array}\right] := D, \end{aligned}$$
(18)

in which \(D^{(i)}\) is diagonal for \(i = 1, \ldots , n\) by Lemma 3. Since \(\Vert \cdot \Vert =\Vert \cdot \Vert _{2}\) is unitarily invariant, we may assume directly that \(\mathcal {A}\) is F-diagonal. Therefore, the diagonal entries of \({\text {bcirc}}(\mathcal {A})\) are exactly the T-eigenvalues. It is well known that, for a normal matrix, the \(\varepsilon \)-pseudospectrum is the union of the open \(\varepsilon \)-balls about the points of the spectrum; equivalently, we have

$$\begin{aligned} \left\| (z-{\text {bcirc}}(\mathcal {A}))^{-1}\right\| _{2}=\frac{1}{{\text {dist}}(z, \varLambda ({\text {bcirc}}(\mathcal {A})))}, \end{aligned}$$

which implies

$$\begin{aligned} {\text {dist}}(z, \varLambda ({\text {bcirc}}(\mathcal {A}))) < \varepsilon \end{aligned}$$

by the definition of the tensor \(\varepsilon \)-pseudospectrum. The conclusion follows since \(\varLambda (\mathcal {A})+\varDelta _{\varepsilon }\) coincides with \(\{z: {\text {dist}}(z, \varLambda (\mathcal {A}))<\varepsilon \}\). \(\square \)
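
For completeness, the resolvent identity invoked in the normal case admits a one-line derivation (a standard argument, cf. [52]). Writing the unitary diagonalization \({\text {bcirc}}(\mathcal {A}) = U \,{\text {diag}}(\lambda _1, \ldots , \lambda _{mn})\, U^{\textrm{H}}\), which exists because \({\text {bcirc}}(\mathcal {A})\) is normal, and using the unitary invariance of \(\Vert \cdot \Vert _2\), we have

$$\begin{aligned} \left\| (z-{\text {bcirc}}(\mathcal {A}))^{-1}\right\| _{2} = \left\| {\text {diag}}\big ((z-\lambda _1)^{-1}, \ldots , (z-\lambda _{mn})^{-1}\big )\right\| _{2} = \max _{1\le i \le mn} \frac{1}{|z-\lambda _i|} = \frac{1}{{\text {dist}}(z, \varLambda ({\text {bcirc}}(\mathcal {A})))}, \end{aligned}$$

since the 2-norm of a diagonal matrix equals the largest modulus of its diagonal entries.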

Cite this article

Mo, C., Ding, W. & Wei, Y.: Perturbation Analysis on T-Eigenvalues of Third-Order Tensors. J. Optim. Theory Appl. (2024). https://doi.org/10.1007/s10957-024-02444-z