
On the Best Low Multilinear Rank Approximation of Higher-order Tensors*

  • Mariya Ishteva
  • P.-A. Absil
  • Sabine Van Huffel
  • Lieven De Lathauwer
Conference paper

Summary

This paper deals with the best low multilinear rank approximation of higher-order tensors: given a tensor, we seek another tensor, as close as possible to the given one, whose multilinear rank is bounded by prescribed values. Higher-order tensors are used in higher-order statistics, signal processing, telecommunications, and many other fields. In particular, the best low multilinear rank approximation is used as a tool for dimensionality reduction and signal subspace estimation.
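The multilinear rank of a tensor is the tuple of ranks of its mode-n unfoldings (matricizations). As a minimal illustration, and not part of the paper itself, the following NumPy sketch (the helper names unfold and multilinear_rank are ours) computes the multilinear rank of a small third-order tensor constructed to have rank (2, 2, 2):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the mode-n fibers of T become the columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multilinear_rank(T, tol=1e-10):
    # The multilinear rank is the tuple of ranks of the mode-n unfoldings.
    return tuple(np.linalg.matrix_rank(unfold(T, n), tol=tol) for n in range(T.ndim))

# A 5 x 6 x 7 tensor built from a 2 x 2 x 2 core and orthonormal factors,
# hence of multilinear rank (2, 2, 2).
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
U = [np.linalg.qr(rng.standard_normal((dim, 2)))[0] for dim in (5, 6, 7)]
A = np.einsum('abc,ia,jb,kc->ijk', core, *U)
print(multilinear_rank(A))  # (2, 2, 2)
```

The best rank-(R1, R2, ..., RN) approximation then replaces each mode-n subspace by an Rn-dimensional one so that the Frobenius-norm error is minimized.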

Computing the best low multilinear rank approximation is a nontrivial task. Higher-order generalizations of the singular value decomposition lead to suboptimal solutions. The higher-order orthogonal iteration is a widely used, linearly convergent algorithm for further refinement; we aim for conceptually faster algorithms. However, applying standard optimization algorithms directly is not advisable: the cost function depends only on the column spaces of the factor matrices, so there are infinitely many equivalent solutions. Good convergence properties can only be expected when the solutions are isolated, and this invariance can be removed by working on quotient manifolds. We discuss three algorithms, based on Newton's method, the trust-region scheme, and conjugate gradients. We also comment on the local minima of the problem.
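For reference, the baseline mentioned above can be sketched as follows for a third-order tensor. This is our own illustrative code, not the authors' implementation: the factors are initialized with a truncated higher-order SVD and refined by the higher-order orthogonal iteration, whereas the paper's Newton, trust-region, and conjugate gradient methods replace this alternating scheme with optimization on quotient manifolds.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding of a third-order tensor.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(A, ranks):
    # Dominant left singular vectors of each unfolding:
    # a good but in general suboptimal starting point.
    return [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :ranks[n]]
            for n in range(3)]

def hooi(A, ranks, n_iter=50):
    # Higher-order orthogonal iteration: alternately update each factor as the
    # dominant subspace of the tensor compressed along the other two modes.
    U1, U2, U3 = truncated_hosvd(A, ranks)
    for _ in range(n_iter):
        U1 = np.linalg.svd(unfold(np.einsum('ijk,jb,kc->ibc', A, U2, U3), 0),
                           full_matrices=False)[0][:, :ranks[0]]
        U2 = np.linalg.svd(unfold(np.einsum('ijk,ia,kc->ajc', A, U1, U3), 1),
                           full_matrices=False)[0][:, :ranks[1]]
        U3 = np.linalg.svd(unfold(np.einsum('ijk,ia,jb->abk', A, U1, U2), 2),
                           full_matrices=False)[0][:, :ranks[2]]
    core = np.einsum('ijk,ia,jb,kc->abc', A, U1, U2, U3)
    approx = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
    return (U1, U2, U3), core, approx

# Note: the approximation depends only on the column spaces of U1, U2, U3;
# right-multiplying any factor by an orthogonal matrix changes nothing.
# This is the invariance that the quotient-manifold algorithms remove.
```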

Keywords

Multilinear algebra · Steepest descent direction · Nonlinear conjugate gradient method · Quotient manifold



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Mariya Ishteva (1)
  • P.-A. Absil (1)
  • Sabine Van Huffel (2)
  • Lieven De Lathauwer (2, 3)
  1. Department of Mathematical Engineering, Université catholique de Louvain, Louvain-la-Neuve, Belgium
  2. Department of Electrical Engineering - ESAT/SCD, K.U.Leuven, Leuven, Belgium
  3. Group Science, Engineering and Technology, K.U.Leuven Campus Kortrijk, Kortrijk, Belgium
