
Perturbations of the TCUR Decomposition for Tensor Valued Data in the Tucker Format

Published in: Journal of Optimization Theory and Applications

Abstract

The tensor CUR decomposition in the Tucker format is a special case of the Tucker decomposition with low multilinear rank, in which the factor matrices are obtained by selecting columns from the mode-n unfoldings of the tensor. We perform a thorough investigation of how these approximations behave in the presence of noise. We present two forms of the tensor CUR decomposition and derive the corresponding approximation errors. Numerical examples illustrate how the choice of columns from each mode-n unfolding affects the quality of the tensor CUR approximation.
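To make the construction concrete, the sketch below builds one simple CUR-type Tucker approximation of a noisy 3-way array: each mode is projected onto the span of columns sampled from the corresponding mode-n unfolding, so the selected columns play the role of the factor matrices. This is a minimal illustration under stated assumptions, not the algorithm analysed in the paper: the uniform sampling, the projection form A_hat = A x_n (C_n C_n^+), and all function names (unfold, fold, mode_product, cur_tucker) are choices made only for this example.

```python
# Minimal sketch of a column-selection (CUR-type) Tucker approximation.
# Assumptions (not from the paper): a 3-way NumPy array, uniformly sampled
# column indices per mode, and the projection form A_hat = A x_n (C_n C_n^+).
import numpy as np

def unfold(A, n):
    """Mode-n unfolding: mode n becomes the rows, the remaining modes the columns."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_product(A, M, n):
    """Mode-n product A x_n M."""
    new_shape = tuple(M.shape[0] if i == n else s for i, s in enumerate(A.shape))
    return fold(M @ unfold(A, n), n, new_shape)

def cur_tucker(A, num_cols, rng):
    """CUR-type Tucker approximation: in each mode, project onto the span of
    columns sampled uniformly from the mode-n unfolding of A."""
    A_hat = A
    for n, k in enumerate(num_cols):
        An = unfold(A, n)
        idx = rng.choice(An.shape[1], size=k, replace=False)
        Cn = An[:, idx]                       # selected columns of the unfolding
        Pn = Cn @ np.linalg.pinv(Cn)          # projector onto span(Cn) (full column rank)
        A_hat = mode_product(A_hat, Pn, n)
    return A_hat

# Tiny example: a tensor of multilinear rank (3, 3, 3) corrupted by small noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3))            # core tensor
for n in range(3):
    A = mode_product(A, rng.standard_normal((20, 3)), n)
A_noisy = A + 1e-3 * rng.standard_normal(A.shape)
A_hat = cur_tucker(A_noisy, (6, 6, 6), rng)
print(np.linalg.norm(A_hat - A) / np.linalg.norm(A))   # relative approximation error
```

With the noise-free signal of exact multilinear rank (3, 3, 3), a few randomly selected columns per mode usually capture the signal subspaces and the relative error stays near the noise level, while a poor column choice degrades the approximation; this dependence on the selected columns is the kind of behaviour the paper's perturbation analysis and numerical examples quantify.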



Notes

  1. These two video datasets are available at http://trace.eas.asu.edu/yuv/.


Acknowledgements

The authors would like to thank the handling editor, Liqun Qi, the anonymous referees, and Professor Eric Chu for their valuable suggestions, which greatly helped us improve the manuscript. The first author is supported by the National Natural Science Foundation of China under grant 11901471. The second and third authors are supported by the National Natural Science Foundation of China under grant 11771099, the Innovation Program of Shanghai Municipal Education Commission, and the Shanghai Municipal Science and Technology Commission under grant 22WZ2501900.

Author information

Corresponding author

Correspondence to Yimin Wei.

Additional information

Communicated by Liqun Qi.



About this article


Cite this article

Che, M., Chen, J. & Wei, Y. Perturbations of the TCUR Decomposition for Tensor Valued Data in the Tucker Format. J Optim Theory Appl 194, 852–877 (2022). https://doi.org/10.1007/s10957-022-02051-w

