
Foundations of Computational Mathematics, Volume 11, Issue 2, pp. 183–210

Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization

  • Donald Goldfarb
  • Shiqian Ma

Abstract

The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, and low-dimensional embedding. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., doi:10.1007/s10107-009-0306-5, 2009). By incorporating an approximate singular value decomposition technique into this algorithm, one usually obtains a solution to the matrix rank minimization problem itself. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. We also propose heuristics for determining the rank of the matrix when its true rank is not known. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms applied to affinely constrained matrix rank minimization problems are reported.
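
To make the algorithm under study concrete, here is a minimal sketch of one fixed-point continuation loop for the nuclear norm problem min_X mu*||X||_* + 0.5*||A(X) - b||_2^2: a gradient step on the least-squares term followed by the matrix shrinkage operator (soft-thresholding of singular values), with mu driven down geometrically. This is an illustration under stated assumptions, not the authors' implementation: the linear map is represented as a dense matrix A acting on vec(X), a full SVD stands in for the paper's approximate SVD, and the function names and parameters (tau, eta, mu_target) are illustrative choices.

```python
import numpy as np

def shrink(Y, nu):
    # Matrix shrinkage operator S_nu: soft-threshold the singular values of Y.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - nu, 0.0)) @ Vt

def fpc(A, b, m, n, mu_target=1e-4, tau=1.0, eta=0.25,
        inner_tol=1e-5, max_inner=500):
    # Fixed-point continuation sketch for
    #     min_X  mu*||X||_* + 0.5*||A vec(X) - b||_2^2,
    # with mu decreased geometrically from a large starting value to mu_target.
    # tau must satisfy tau < 2 / lambda_max(A^T A) for the inner loop to converge.
    X = np.zeros((m, n))
    mu = np.linalg.norm(A.T @ b)       # common heuristic for the initial mu
    while mu > mu_target:
        mu = max(eta * mu, mu_target)  # continuation: shrink mu toward the target
        for _ in range(max_inner):
            # Gradient step on the least-squares term, then singular value shrinkage.
            grad = (A.T @ (A @ X.ravel() - b)).reshape(m, n)
            X_new = shrink(X - tau * grad, tau * mu)
            done = np.linalg.norm(X_new - X, 'fro') <= \
                   inner_tol * max(1.0, np.linalg.norm(X, 'fro'))
            X = X_new
            if done:
                break
    return X
```

On a small random instance (e.g., A a p-by-mn Gaussian matrix and b = A vec(M) for a rank-r matrix M), this sketch recovers M when p is large enough relative to r(m+n-r). The variants analyzed in the paper replace the full SVD with a fast approximate one and add rank-detection heuristics.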

Keywords

Matrix rank minimization · Matrix completion · Greedy algorithm · Fixed-point method · Restricted isometry property · Singular value decomposition

Mathematics Subject Classification (2000)

90C59 · 15B52 · 15A18


References

  1. T. Blumensath, M.E. Davies, Gradient pursuits, IEEE Trans. Signal Process. 56(6), 2370–2382 (2008).
  2. T. Blumensath, M.E. Davies, Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009).
  3. J.M. Borwein, A.S. Lewis, Convex Analysis and Nonlinear Optimization (Springer, Berlin, 2003).
  4. J. Cai, E.J. Candès, Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim. 20(4), 1956–1982 (2010).
  5. E.J. Candès, Y. Plan, Matrix completion with noise, Proc. IEEE 98(6), 925–936 (2010).
  6. E.J. Candès, B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math. 9, 717–772 (2009).
  7. E.J. Candès, J. Romberg, ℓ1-MAGIC: Recovery of sparse signals via convex programming, Tech. rep., Caltech, 2005.
  8. E.J. Candès, T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010).
  9. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory 52, 489–509 (2006).
  10. Rice compressed sensing website. http://dsp.rice.edu/cs.
  11. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009).
  12. D. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
  13. D.L. Donoho, Y. Tsaig, Fast solution of ℓ1-norm minimization problems when the solution may be sparse, IEEE Trans. Inf. Theory 54(11), 4789–4812 (2008).
  14. D. Donoho, Y. Tsaig, I. Drori, J.-L. Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, Tech. rep., Stanford University, 2006.
  15. P. Drineas, R. Kannan, M.W. Mahoney, Fast Monte Carlo algorithms for matrices II: Computing low-rank approximations to a matrix, SIAM J. Comput. 36, 158–183 (2006).
  16. M. Fazel, H. Hindi, S. Boyd, A rank minimization heuristic with application to minimum order system approximation, in Proceedings of the American Control Conference, vol. 6 (2001), pp. 4734–4739.
  17. M. Fazel, H. Hindi, S. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, in Proceedings of the American Control Conference (2003), pp. 2156–2162.
  18. M. Fazel, H. Hindi, S. Boyd, Rank minimization and applications in system theory, in Proceedings of the American Control Conference (2004), pp. 3273–3278.
  19. M.A.T. Figueiredo, R.D. Nowak, S.J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007).
  20. L.E. Ghaoui, P. Gahinet, Rank minimization under LMI constraints: A framework for output feedback problems, in Proceedings of the European Control Conference (1993).
  21. E.T. Hale, W. Yin, Y. Zhang, Fixed-point continuation for ℓ1-minimization: Methodology and convergence, SIAM J. Optim. 19(3), 1107–1130 (2008).
  22. J.-B. Hiriart-Urruty, C. Lemaréchal, Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods (Springer, New York, 1993).
  23. R.H. Keshavan, A. Montanari, S. Oh, Matrix completion from noisy entries (2009). arXiv:0906.2027.
  24. R.H. Keshavan, A. Montanari, S. Oh, Matrix completion from a few entries, IEEE Trans. Inf. Theory 56, 2980–2998 (2010).
  25. S.J. Kim, K. Koh, M. Lustig, S. Boyd, D. Gorinevsky, An interior-point method for large-scale ℓ1-regularized least squares, IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007).
  26. R.M. Larsen, PROPACK—software for large and sparse SVD calculations. Available from http://sun.stanford.edu/~rmunk/PROPACK.
  27. K. Lee, Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation (2009). arXiv:0905.0044.
  28. K. Lee, Y. Bresler, Efficient and guaranteed rank minimization by atomic decomposition (2009). arXiv:0901.1898v1.
  29. K. Lee, Y. Bresler, Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint (2009). arXiv:0903.4742.
  30. N. Linial, E. London, Y. Rabinovich, The geometry of graphs and some of its algorithmic applications, Combinatorica 15, 215–245 (1995).
  31. Y. Liu, D. Sun, K.-C. Toh, An implementable proximal point algorithmic framework for nuclear norm minimization, Preprint, National University of Singapore, 2009.
  32. Z. Liu, L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2009).
  33. S. Ma, D. Goldfarb, L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program. (2009). doi:10.1007/s10107-009-0306-5.
  34. R. Meka, P. Jain, I.S. Dhillon, Guaranteed rank minimization via singular value projection (2009). arXiv:0909.5457.
  35. B.K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24, 227–234 (1995).
  36. D. Needell, J.A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. 26, 301–321 (2009).
  37. Netflix Prize website. http://www.netflixprize.com/.
  38. B. Recht, M. Fazel, P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev. 52(3), 471–501 (2010).
  39. E. Sontag, Mathematical Control Theory (Springer, New York, 1998).
  40. N. Srebro, Learning with Matrix Factorizations, PhD thesis, Massachusetts Institute of Technology, 2004.
  41. N. Srebro, T. Jaakkola, Weighted low-rank approximations, in Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003) (2003).
  42. R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. B 58, 267–288 (1996).
  43. K.-C. Toh, S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, Pac. J. Optim. 6, 615–640 (2010).
  44. K.-C. Toh, M.J. Todd, R.H. Tütüncü, SDPT3—a Matlab software package for semidefinite programming, Optim. Methods Softw. 11, 545–581 (1999).
  45. J. Tropp, Just relax: Convex programming methods for identifying sparse signals, IEEE Trans. Inf. Theory 52(3), 1030–1051 (2006).
  46. E. van den Berg, M.P. Friedlander, Probing the Pareto frontier for basis pursuit solutions, SIAM J. Sci. Comput. 31(2), 890–912 (2008).
  47. W. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing, SIAM J. Imaging Sci. 1(1), 143–168 (2008).

Copyright information

© SFoCM 2011

Authors and Affiliations

  1. Department of Industrial Engineering and Operations Research, Columbia University, New York, USA
