Tensor Completion in Hierarchical Tensor Representations

  • Holger Rauhut
  • Reinhold Schneider
  • Željka Stojanac
Chapter
Part of the Applied and Numerical Harmonic Analysis book series (ANHA)

Abstract

Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of low-rank matrices from incomplete information. Here we consider a further extension: the reconstruction of tensors of low multilinear rank, in recently introduced hierarchical tensor formats, from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation and have the advantage that the number of degrees of freedom of a low-rank tensor does not scale exponentially with the order of the tensor. While the corresponding tensor decompositions can be computed efficiently via successive applications of (matrix) singular value decompositions, some important properties of the singular value decomposition do not carry over from the matrix to the tensor case. This results in major computational and theoretical difficulties in designing and analyzing algorithms for low-rank tensor recovery. For instance, the tensor nuclear norm, a canonical analogue of the matrix nuclear norm, is NP-hard to compute in general, in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. One variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low-rank tensors back to this manifold. We provide a first partial convergence analysis based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, we give an estimate of the number of measurements that ensures the TRIP at a given tensor rank with high probability for Gaussian measurement maps.
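
The core iteration behind the schemes mentioned above is tensor iterative hard thresholding, X^{k+1} = H_r(X^k + mu * A^*(y - A(X^k))), where A is the linear measurement map and H_r truncates back to a fixed low-rank format. The following numpy sketch is illustrative only, not the chapter's algorithm: it realizes H_r as a truncated higher-order SVD (Tucker truncation) rather than the hierarchical (HSVD) truncation treated in the chapter, and all function names, the step size mu, and the problem sizes in the demo are assumptions made for this example.

    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding: bring `mode` to the front, flatten the remaining modes.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_multiply(T, M, mode):
        # Mode-n product T x_n M: contract the columns of M with the mode-n fibers of T.
        return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

    def truncate(T, ranks):
        # Hard-thresholding operator H_r: project each unfolding onto the span of
        # its top-r left singular vectors (sequentially truncated HOSVD).
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            T = mode_multiply(T, U[:, :r] @ U[:, :r].T, mode)
        return T

    def tensor_iht(A, y, shape, ranks, mu=1.0, iters=500):
        # Tensor IHT: X_{k+1} = H_r( X_k + mu * A^T (y - A X_k) ),
        # with A acting on the vectorized tensor.
        X = np.zeros(shape)
        for _ in range(iters):
            step = X.ravel() + mu * A.T @ (y - A @ X.ravel())
            X = truncate(step.reshape(shape), ranks)
        return X

    # Illustrative demo: recover a 10 x 10 x 10 tensor of multilinear rank (2, 2, 2)
    # from m = 400 Gaussian measurements (assumed sizes, not from the chapter).
    rng = np.random.default_rng(0)
    n, r, m = 10, 2, 400
    X0 = rng.standard_normal((r, r, r))
    for mode in range(3):
        X0 = mode_multiply(X0, rng.standard_normal((n, r)), mode)
    A = rng.standard_normal((m, n**3)) / np.sqrt(m)   # scaled so A^T A ~ identity
    y = A @ X0.ravel()
    X_hat = tensor_iht(A, y, (n, n, n), (r, r, r))
    print(np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))

Here m is roughly five times the number of degrees of freedom of the low-rank model (r^3 + 3nr = 68), the regime in which a TRIP-based analysis of the kind sketched in the abstract would suggest the relative error decays toward zero.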

Keywords

Hierarchical Tensor · Tensor Completion · Iterative Hard Thresholding (IHT) · Tensor Recovery · Nuclear Norm Minimization

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Holger Rauhut (1)
  • Reinhold Schneider (2)
  • Željka Stojanac (1)
  1. RWTH Aachen University, Aachen, Germany
  2. Technische Universität Berlin, Berlin, Germany
