Volume 85, Issue 3, pp 169–188

Linear algebra for tensor problems

  • I. V. Oseledets
  • D. V. Savostyanov
  • E. E. Tyrtyshnikov


By a tensor problem we mean, in general, one in which all input and output data are given (exactly or approximately) in tensor formats, with the number of representation parameters much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with the data only in tensor formats and maintain the same small number of representation parameters, at the price of every computed result being contaminated by the approximation (recompression) performed in each operation. Since the approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to make recompression inexpensive and reliable. We present fast recompression procedures with sublinear complexity with respect to the size of the data, and we propose methods for basic linear algebra operations with all matrix operands in the Tucker format, mostly through calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
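The recompression step the abstract refers to amounts, at its core, to a Tucker approximation of an array, classically computed via the higher-order SVD (HOSVD). Below is a minimal dense NumPy sketch of that step, for illustration only: it touches every tensor entry, whereas the procedures proposed in the paper reach sublinear complexity by operating on the Tucker factors directly; all function names here are ours, not the paper's.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: axis `mode` becomes the rows, all other axes the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def tucker_compress(T, eps=1e-10):
    """HOSVD: T ~ core x_1 U[0] x_2 U[1] x_3 U[2] with orthonormal factors.

    The mode-k rank is chosen by truncating singular values of the mode-k
    unfolding below the relative tolerance eps."""
    factors = []
    for mode in range(T.ndim):
        U, s, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_mult(core, U.T, mode)  # project onto the factor bases
    return core, factors

def tucker_reconstruct(core, factors):
    """Expand a Tucker representation back into a full array."""
    T = core
    for mode, U in enumerate(factors):
        T = mode_mult(T, U, mode)
    return T

# Build an exactly low-rank 10 x 12 x 14 tensor with Tucker ranks (2, 3, 2),
# then recover that representation from the full array.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3, 2))
Us = [rng.standard_normal((n, r)) for n, r in [(10, 2), (12, 3), (14, 2)]]
T = tucker_reconstruct(G, Us)
core, factors = tucker_compress(T)
```

In the paper's setting the operands are never expanded to full arrays as done here; sums and products of tensors in the Tucker format are recompressed by working on small core tensors and factor matrices, which is where the level-3 BLAS/LAPACK calls enter.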


Keywords

Multidimensional arrays · Tucker decomposition · Tensor approximations · Low-rank approximations · Skeleton decompositions · Dimensionality reduction · Data compression · Large-scale matrices · Data-sparse methods

Mathematics Subject Classification (2000)

65F15 · 15A69





Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • I. V. Oseledets (1)
  • D. V. Savostyanov (1)
  • E. E. Tyrtyshnikov (1)

  1. Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia
