On global convergence of alternating least squares for tensor approximation

Computational Optimization and Applications

Abstract

Alternating least squares (ALS) is a classic, easily implemented, and widely used method for tensor canonical polyadic (CP) approximation. Its subsequential and global convergence is guaranteed if the partial Hessians of the blocks are uniformly positive definite along the whole iterate sequence. This paper shows that this positive definiteness assumption can be weakened in two ways. First, if the smallest positive eigenvalues of the partial Hessians are uniformly bounded away from zero and the solutions of the subproblems are properly chosen, then global convergence holds; this allows the partial Hessians to be merely positive semidefinite. Second, if the partial Hessians are positive definite at a limit point, then global convergence also holds. We also discuss the connection between this assumption and the uniqueness of the exact CP decomposition.
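To make the objects in the abstract concrete: for a rank-\(R\) CP approximation of a third-order tensor, the ALS subproblem for the first factor \(A\) is the linear least-squares problem \(\min_A \frac{1}{2}\|T_{(1)} - A(C \odot B)^{\mathsf{T}}\|_F^2\), whose partial Hessian is \([(C^{\mathsf{T}}C) \ast (B^{\mathsf{T}}B)] \otimes I\), where \(\odot\) denotes the Khatri-Rao product and \(\ast\) the Hadamard product; this Hessian is positive definite exactly when \(C \odot B\) has full column rank. The sketch below is a minimal illustrative ALS loop in Python/NumPy, not the algorithm analyzed in the paper: the function names and stopping rule are our own, and the minimum-norm solution returned by np.linalg.lstsq is one natural reading of a "properly chosen" subproblem solution when the partial Hessian is only positive semidefinite.

    import numpy as np

    def khatri_rao(U, V):
        # Khatri-Rao (column-wise Kronecker) product of U (m x R) and V (n x R):
        # row (i, j) of the result, with j varying fastest, equals U[i, :] * V[j, :].
        m, R = U.shape
        n, _ = V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

    def als_cp(T, R, iters=500, tol=1e-10, seed=0):
        # Rank-R CP approximation of a third-order tensor T by plain ALS.
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        # Mode-n unfoldings; with C-order reshapes, T1 = A @ khatri_rao(B, C).T, etc.
        T1 = T.reshape(I, J * K)
        T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
        T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
        prev_err = np.inf
        for _ in range(iters):
            # Each block update is a linear least-squares solve. The Gram matrix
            # of khatri_rao(X, Y) is (X.T @ X) * (Y.T @ Y) (Hadamard product),
            # i.e. the partial Hessian of the abstract; lstsq returns the
            # minimum-norm solution even when that Hessian is singular.
            A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
            B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
            C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
            err = np.linalg.norm(T1 - A @ khatri_rao(B, C).T)
            if abs(prev_err - err) <= tol * max(1.0, err):
                break
            prev_err = err
        return A, B, C

On a tensor that admits an exact rank-\(R\) decomposition, the fit error typically decreases to numerical zero; on a generic tensor, the iterates approach a stationary point of the fit, which is the regime the paper's assumptions address.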


Data availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.


Acknowledgements

The author thanks the anonymous reviewer for the insightful comments and suggestions that helped improve this manuscript. The author was supported by the National Natural Science Foundation of China Grants 11801100 and 12171105, and the Fok Ying Tong Education Foundation Grant 171094.

Author information

Corresponding author

Correspondence to Yuning Yang.

Ethics declarations

Conflict of interest

The author declares that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yang, Y. On global convergence of alternating least squares for tensor approximation. Comput Optim Appl 84, 509–529 (2023). https://doi.org/10.1007/s10589-022-00428-1
