
Rank Properties and Computational Methods for Orthogonal Tensor Decompositions

Journal of Scientific Computing

Abstract

The orthogonal decomposition factorizes a tensor into a sum of an orthogonal list of rank-one tensors; the corresponding rank is called the orthogonal rank. We present several properties of orthogonal rank, which differ from those of tensor rank in many respects. For instance, a subtensor may have a larger orthogonal rank than the whole tensor. To fit the orthogonal decomposition, we propose an algorithm based on the augmented Lagrangian method. The gradient of the objective function has a convenient structure, which motivates the use of gradient-based optimization methods. Orthogonality is guaranteed by a novel orthogonalization process. Numerical experiments show that the proposed method has a clear advantage over existing methods for strongly orthogonal decompositions in terms of approximation error.
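The orthogonality notion underlying the abstract can be illustrated with a small NumPy sketch (not taken from the paper; the tensor sizes and factor names are ours). Two rank-one tensors are orthogonal under the Frobenius inner product whenever their factors are orthogonal in at least one mode, since the inner product factorizes mode by mode; orthogonality of the terms then makes the squared norms additive.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_one(a, b, c):
    # Outer product a ∘ b ∘ c of three vectors, an order-3 rank-one tensor.
    return np.einsum('i,j,k->ijk', a, b, c)

# Two rank-one terms whose mode-1 factors are orthogonal. Because
# <a1∘b1∘c1, a2∘b2∘c2> = <a1,a2><b1,b2><c1,c2>, the full tensors are
# orthogonal even though the mode-2 and mode-3 factors are arbitrary.
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b1, b2 = rng.standard_normal(3), rng.standard_normal(3)
c1, c2 = rng.standard_normal(4), rng.standard_normal(4)

T1, T2 = rank_one(a1, b1, c1), rank_one(a2, b2, c2)
T = T1 + T2  # a tensor whose orthogonal rank is at most 2

# Frobenius inner product over all entries; every summand carries the
# factor a1[i] * a2[i] = 0, so this is exactly zero.
inner = np.vdot(T1, T2)
print(inner)

# Orthogonality makes the energies additive (a Pythagorean identity),
# which is what ties the approximation error to the discarded terms.
print(np.isclose(np.linalg.norm(T)**2,
                 np.linalg.norm(T1)**2 + np.linalg.norm(T2)**2))
```

This is only the definition at work, not the paper's fitting algorithm; the augmented-Lagrangian method described above is what enforces such orthogonality constraints during optimization.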


Data Availability

The datasets analysed during the current study are publicly available, and the URLs are provided where the datasets are used.

Notes

  1. Strongly orthogonal decomposition has a different definition in Ref. [17].

  2. Such tensors exist. See [9, Lemma 4.7] for an example.

  3. The hyperspectral image data were used in [36] and are available at https://rslab.ut.ac.ir/data.

  4. The video data are from the video trace library [29] and are available at http://trace.eas.asu.edu/yuv/.

  5. A Matlab implementation, adapted by Dianne P. O’Leary, is available at http://www.cs.umd.edu/users/oleary/software/.

References

  1. Acar, E., Dunlavy, D.M., Kolda, T.G.: A scalable optimization approach for fitting canonical tensor decompositions. J. Chemom. 25(2), 67–86 (2011)

  2. Bader, B.W., Kolda, T.G., et al.: MATLAB Tensor Toolbox, Version 3.0-dev. Available online (2017)

  3. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, Cambridge (1982)

  4. Carroll, J.D., Chang, J.-J.: Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition. Psychometrika 35(3), 283–319 (1970)

  5. Chen, J., Saad, Y.: On the tensor SVD and the optimal low rank orthogonal approximation of tensors. SIAM J. Matrix Anal. Appl. 30(4), 1709–1734 (2008)

  6. Comon, P.: Independent component analysis, a new concept? Signal Process. 36(3), 287–314 (1994)

  7. Conn, A.R., Gould, N., Sartenaer, A., Toint, P.L.: Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6(3), 674–703 (1996)

  8. De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  9. De Silva, V., Lim, L.-H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 30(3), 1084–1127 (2008)

  10. De Sterck, H., Howse, A.J.: Nonlinearly preconditioned L-BFGS as an acceleration mechanism for alternating least squares with application to tensor decomposition. Numer. Linear Algebra Appl. 25(6), e2202 (2018)

  11. Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218 (1936)

  12. Espig, M., Hackbusch, W.: A regularized Newton method for the efficient approximation of tensors represented in the canonical tensor format. Numer. Math. 122(3), 489–525 (2012)

  13. Guan, Y., Chu, D.: Numerical computation for orthogonal low-rank approximation of tensors. SIAM J. Matrix Anal. Appl. 40(3), 1047–1065 (2019)

  14. Harshman, R.A., et al.: Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multimodal factor analysis (1970)

  15. Håstad, J.: Tensor rank is NP-complete. J. Algorithms 11(4), 644–654 (1990)

  16. Hillar, C.J., Lim, L.-H.: Most tensor problems are NP-hard. J. ACM 60(6), 45 (2013)

  17. Kolda, T.G.: Orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl. 23(1), 243–255 (2001)

  18. Kolda, T.G.: A counterexample to the possibility of an extension of the Eckart-Young low-rank approximation theorem for the orthogonal rank tensor decomposition. SIAM J. Matrix Anal. Appl. 24(3), 762–767 (2003)

  19. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  20. Krijnen, W.P., Dijkstra, T.K., Stegeman, A.: On the non-existence of optimal solutions and the occurrence of “degeneracy” in the CANDECOMP/PARAFAC model. Psychometrika 73(3), 431–439 (2008)

  21. Kruskal, J.B.: Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 18(2), 95–138 (1977)

  22. Li, Z., Nakatsukasa, Y., Soma, T., Uschmajew, A.: On orthogonal tensors and best rank-one approximation ratio. SIAM J. Matrix Anal. Appl. 39(1), 400–425 (2018)

  23. Lim, L.-H., Comon, P.: Blind multilinear identification. IEEE Trans. Inf. Theory 60(2), 1260–1280 (2013)

  24. Martin, C.D.M., Van Loan, C.F.: A Jacobi-type method for computing orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl. 30(3), 1219–1232 (2008)

  25. Moré, J.J., Thuente, D.J.: Line search algorithms with guaranteed sufficient decrease. ACM Trans. Math. Softw. 20(3), 286–307 (1994)

  26. Nazih, M., Minaoui, K., Comon, P.: Using the proximal gradient and the accelerated proximal gradient as a canonical polyadic tensor decomposition algorithms in difficult situations. Signal Process. 171, 107472 (2020)

  27. Nocedal, J., Wright, S.: Numerical Optimization. Springer Science & Business Media, New York (2006)

  28. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer Science & Business Media, New York (2009)

  29. Seeling, P., Reisslein, M.: Video transport evaluation with H.264 video traces. IEEE Commun. Surv. Tutor. 14(4), 1142–1165 (2011)

  30. Sidiropoulos, N.D., Bro, R.: On the uniqueness of multilinear decomposition of N-way arrays. J. Chemom. 14(3), 229–239 (2000)

  31. Sørensen, M., De Lathauwer, L., Comon, P., Icart, S., Deneire, L.: Canonical polyadic decomposition with a columnwise orthonormal factor matrix. SIAM J. Matrix Anal. Appl. 33(4), 1190–1213 (2012)

  32. Sterck, H.D.: A nonlinear GMRES optimization algorithm for canonical tensor decomposition. SIAM J. Sci. Comput. 34(3), A1351–A1379 (2012)

  33. Sun, W., Yuan, Y.-X.: Optimization Theory and Methods: Nonlinear Programming. Springer Optimization and Its Applications. Springer Science & Business Media, New York (2010)

  34. Wang, L., Chu, M.T., Yu, B.: Orthogonal low rank tensor approximation: alternating least squares method and its global convergence. SIAM J. Matrix Anal. Appl. 36(1), 1–19 (2015)

  35. Yang, Y.: The epsilon-alternating least squares for orthogonal low-rank tensor approximation and its global convergence. SIAM J. Matrix Anal. Appl. 41(4), 1797–1825 (2020)

  36. Zhu, F., Wang, Y., Fan, B., Xiang, S., Meng, G., Pan, C.: Spectral unmixing via data-guided sparsity. IEEE Trans. Image Process. 23(12), 5412–5427 (2014)

Acknowledgements

The author is extremely grateful to the two anonymous referees for their valuable feedback, which improved this paper significantly.

Funding

This work was partially supported by the National Natural Science Foundation of China (12201319).

Author information

Contributions

CZ is the sole author of the manuscript and is responsible for this work.

Corresponding author

Correspondence to Chao Zeng.

Ethics declarations

Conflict of interest

The author declares that he has no financial interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zeng, C. Rank Properties and Computational Methods for Orthogonal Tensor Decompositions. J Sci Comput 94, 6 (2023). https://doi.org/10.1007/s10915-022-02054-9

