
Journal of Fourier Analysis and Applications, Volume 25, Issue 6, pp 2957–2972

Analysis of Singular Value Thresholding Algorithm for Matrix Completion

  • Yunwen Lei
  • Ding-Xuan Zhou
Article

Abstract

This paper analyzes the convergence of the singular value thresholding algorithm for solving matrix completion and affine rank minimization problems arising in compressive sensing, signal processing, machine learning, and related areas. A necessary and sufficient condition for convergence of the algorithm with respect to the Bregman distance is given in terms of the step size sequence \(\{\delta _k\}_{k\in \mathbb {N}}\), namely \(\sum _{k=1}^{\infty }\delta _k=\infty \). Concrete convergence rates in terms of Bregman distances and Frobenius norms of matrices are presented. Our analysis rests on an identity expressing the Bregman distance as the excess of gradient descent objective function values, and on an error decomposition obtained by viewing the algorithm as a mirror descent algorithm with a non-differentiable mirror map.
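The algorithm studied here is the singular value thresholding (SVT) iteration of Cai, Candès and Shen: alternately shrink the singular values of a dual iterate and take a step of size \(\delta_k\) on the observed entries. The following is a minimal NumPy sketch of that iteration, not the paper's analysis; the threshold `tau`, the constant step size `delta`, the matrix sizes, and the function name `svt_complete` are illustrative choices, and a constant step size trivially satisfies the divergence condition \(\sum_k \delta_k = \infty\).

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, delta=1.2, n_iters=200):
    """Singular value thresholding for matrix completion (sketch).

    M    : matrix whose observed entries we want to match
    mask : boolean array marking the observed entries
    tau  : singular value shrinkage threshold
    delta: step size (constant here, so sum_k delta_k diverges)
    """
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iters):
        # Soft-threshold the singular values of the dual iterate Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient-type step on the observed entries only.
        Y = Y + delta * mask * (M - X)
    return X

# Hypothetical example: recover a rank-1 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = rng.random(M.shape) < 0.6
X_hat = svt_complete(M, mask)
```

The soft-thresholding step is exactly the proximal operator of the nuclear norm, which is what connects this iteration to the mirror descent viewpoint taken in the paper.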

Keywords

Matrix completion · Singular value thresholding · Mirror descent · Bregman distance

Mathematics Subject Classification

68Q32 · 93E35


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong
  2. School of Data Science and Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong
