
Tensor Completion in Hierarchical Tensor Representations

  • Chapter
Compressed Sensing and its Applications

Abstract

Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of low-rank matrices from incomplete information. Here we consider a further extension: the reconstruction of tensors of low multilinear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation, with the advantage that the number of degrees of freedom of a low-rank tensor does not scale exponentially with the order of the tensor. While the corresponding tensor decompositions can be computed efficiently via successive (matrix) singular value decompositions, some important properties of the singular value decomposition do not carry over from the matrix to the tensor case. This leads to major computational and theoretical difficulties in designing and analyzing algorithms for low-rank tensor recovery. For instance, a canonical analogue of the tensor nuclear norm is NP-hard to compute in general, in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. One variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low-rank tensors back to this manifold. We provide first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, we give an estimate of the number of measurements that ensures the TRIP for a given tensor rank with high probability for Gaussian measurement maps.
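The iterative hard thresholding schemes discussed in the abstract alternate a gradient step on the data-fidelity term with a rank truncation computed by successive matrix singular value decompositions. The following is a minimal sketch for the Tucker/HOSVD special case, not the chapter's actual algorithm for hierarchical formats; the Gaussian measurement map `A`, the normalized step size, and all dimensions are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: the mode-k fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Project onto multilinear rank <= ranks via successive matrix SVDs of
    the unfoldings (truncated HOSVD). Unlike the matrix case, this
    truncation is only quasi-optimal, not exactly optimal."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    X = T
    for mode, U in enumerate(factors):
        # project mode `mode` onto span(U): apply U @ U.T along that mode
        X = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X

def tensor_iht(A, y, shape, ranks, n_iter=300):
    """Tensor iterative hard thresholding (sketch): gradient step on
    0.5 * ||A vec(X) - y||^2, then HOSVD rank truncation."""
    X = np.zeros(shape)
    for _ in range(n_iter):
        G = (A.T @ (A @ X.ravel() - y)).reshape(shape)
        num, den = np.sum(G * G), np.linalg.norm(A @ G.ravel()) ** 2
        if den < 1e-15:  # gradient numerically zero: stationary point
            break
        X = hosvd_truncate(X - (num / den) * G, ranks)  # exact line search
    return X

# usage: attempt to recover a multilinear rank-(1,1,1) tensor in R^{3x3x3}
# from 25 < 27 Gaussian measurements
rng = np.random.default_rng(0)
X_true = np.einsum('i,j,k->ijk', *(rng.standard_normal(3) for _ in range(3)))
A = rng.standard_normal((25, 27)) / np.sqrt(25)
y = A @ X_true.ravel()
X_hat = tensor_iht(A, y, (3, 3, 3), (1, 1, 1))
```

The truncation step illustrates the point made above: each unfolding is handled by an ordinary matrix SVD, but the combined projection loses the exact optimality enjoyed by matrix SVD truncation.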



Author information

Correspondence to Reinhold Schneider.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Rauhut, H., Schneider, R., Stojanac, Ž. (2015). Tensor Completion in Hierarchical Tensor Representations. In: Boche, H., Calderbank, R., Kutyniok, G., Vybíral, J. (eds) Compressed Sensing and its Applications. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-16042-9_14
