Mathematical Programming

Mathematical Programming, Volume 128, Issue 1–2, pp 321–353

Fixed point and Bregman iterative methods for matrix rank minimization

Full Length Paper, Series A

Abstract

The linearly constrained matrix rank minimization problem arises in many fields, such as control, signal processing and system identification. Its tightest convex relaxation is linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, that approach is computationally expensive when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem, and we prove convergence of the first of these algorithms. By combining a homotopy approach with an approximate singular value decomposition procedure, we obtain a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^{-5} in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves such good recoverability. Numerical experiments on online recommendation, DNA microarray and image inpainting problems demonstrate the effectiveness of our algorithms.
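To fix notation for the abstract's claims, the rank minimization problem and its nuclear norm relaxation are (in our paraphrase of the standard formulation, not quoted from the paper):

```latex
\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X)
\quad \text{s.t.} \quad \mathcal{A}(X) = b,
\qquad \text{relaxed to} \qquad
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_{*}
\quad \text{s.t.} \quad \mathcal{A}(X) = b,
```

where $\mathcal{A}$ is a linear map, $b$ a given vector, and the nuclear norm $\|X\|_{*}$ is the sum of the singular values of $X$. In matrix completion, $\mathcal{A}$ samples a subset $\Omega$ of the entries of an unknown matrix $M$.

The following is a minimal NumPy sketch of the fixed-point-shrinkage-plus-continuation idea the abstract describes, specialized to matrix completion. It is not the authors' implementation (which is available at the URL above): it uses a full SVD where FPCA uses an approximate one, and the function names (`svt`, `fpc_complete`), the initialization of `mu`, the step size `tau`, the continuation factor `eta` and the stopping rule are all illustrative assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage: soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fpc_complete(M_obs, mask, mu_final=1e-4, tau=1.0, eta=0.25,
                 max_inner=200, tol=1e-5):
    """Fixed point continuation sketch for matrix completion:
    approximately minimize  mu*||X||_* + 0.5*||mask*(X - M_obs)||_F^2,
    driving mu toward mu_final along a decreasing (homotopy) path."""
    X = np.zeros_like(M_obs, dtype=float)
    mu = eta * np.linalg.norm(M_obs)      # initial mu: a simple heuristic (assumption)
    while True:
        for _ in range(max_inner):
            grad = mask * (X - M_obs)     # gradient of the quadratic data-fit term
            X_new = svt(X - tau * grad, tau * mu)
            done = np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X))
            X = X_new
            if done:
                break
        if mu <= mu_final:
            return X
        mu = max(eta * mu, mu_final)      # continuation: shrink mu and re-solve

# Example: recover a random rank-5, 100 x 100 matrix from 40% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
mask = rng.random(M.shape) < 0.4
X = fpc_complete(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative recovery error
```

Each inner iteration takes a gradient step on the data-fitting term and then shrinks the singular values; the outer loop is the homotopy, solving a sequence of problems with decreasing mu, each warm-started at the previous solution. Replacing the full SVD in `svt` with a fast approximate SVD is what distinguishes FPCA and makes very large problems tractable.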

Keywords

Matrix rank minimization · Matrix completion problem · Nuclear norm minimization · Fixed point iterative method · Bregman distances · Singular value decomposition

Mathematics Subject Classification (2000)

65K05 · 90C25 · 90C06 · 93C41 · 68Q32


References

  1. Bach, F.R.: Consistency of trace norm minimization. J. Mach. Learn. Res. 9, 1019–1048 (2008)
  2. Bertalmío, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proceedings of SIGGRAPH 2000, New Orleans, USA (2000)
  3. Borwein, J.M., Lewis, A.S.: Convex Analysis and Nonlinear Optimization. Springer, New York (2003)
  4. Bregman, L.: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)
  5. Burer, S., Monteiro, R.D.C.: A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Program. Ser. B 95, 329–357 (2003)
  6. Burer, S., Monteiro, R.D.C.: Local minima and convergence in low-rank semidefinite programming. Math. Program. 103(3), 427–444 (2005)
  7. Cai, J., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. Preprint, http://arxiv.org/abs/0810.3286 (2008)
  8. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. (2009)
  9. Candès, E.J., Romberg, J.: ℓ1-MAGIC: recovery of sparse signals via convex programming. Technical Report, Caltech (2005)
  10. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
  11. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. Preprint, http://arxiv.org/abs/0903.1476 (2009)
  12. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing: closing the gap between performance and complexity. Preprint, arXiv:0803.0811 (2008)
  13. Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
  14. Donoho, D.L., Tsaig, Y.: Fast solution of ℓ1-norm minimization problems when the solution may be sparse. Technical Report, Department of Statistics, Stanford University (2006)
  15. Donoho, D., Tsaig, Y., Drori, I., Starck, J.-L.: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory (2006, submitted)
  16. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices II: computing low-rank approximations to a matrix. SIAM J. Comput. 36, 158–183 (2006)
  17. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
  18. Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, vol. 6, pp. 4734–4739 (2001)
  19. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4) (2007)
  20. Ghaoui, L.E., Gahinet, P.: Rank minimization under LMI constraints: a framework for output feedback problems. In: Proceedings of the European Control Conference (1993)
  21. Goldberg, K., Roeder, T., Gupta, D., Perkins, C.: Eigentaste: a constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001)
  22. Goldfarb, D., Ma, S.: Convergence of fixed point continuation algorithms for matrix rank minimization. Technical Report, Department of IEOR, Columbia University (2009)
  23. Hale, E.T., Yin, W., Zhang, Y.: A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report CAAM TR07-07 (2007)
  24. Hiriart-Urruty, J.B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer, New York (1993)
  25. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)
  26. Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from a few entries. Preprint, http://arxiv.org/abs/0901.3150 (2009)
  27. Kim, S.J., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: A method for large-scale ℓ1-regularized least-squares. IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007)
  28. Linial, N., London, E., Rabinovich, Y.: The geometry of graphs and some of its algorithmic applications. Combinatorica 15, 215–245 (1995)
  29. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. Preprint, http://www.ee.ucla.edu/~vandenbe/publications/nucnrm.pdf (2008)
  30. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
  31. Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. SIAM Multiscale Model. Simul. 4(2), 460–489 (2005)
  32. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. Preprint, http://arxiv.org/abs/0706.4138 (2007)
  33. Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proceedings of the International Conference on Machine Learning (2005)
  34. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)
  35. Spellman, P.T., Sherlock, G., Zhang, M.Q., Iyer, V.R., Anders, K., Eisen, M.B., Brown, P.O., Botstein, D., Futcher, B.: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 9, 3273–3297 (1998)
  36. Srebro, N.: Learning with matrix factorizations. Ph.D. thesis, Massachusetts Institute of Technology (2004)
  37. Srebro, N., Jaakkola, T.: Weighted low-rank approximations. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003) (2003)
  38. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11–12, 625–653 (1999)
  39. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267–288 (1996)
  40. Tropp, J.: Just relax: convex programming methods for identifying sparse signals. IEEE Trans. Inf. Theory 52, 1030–1051 (2006)
  41. Troyanskaya, O., Cantor, M., Sherlock, G., Brown, P., Hastie, T., Tibshirani, R., Botstein, D., Altman, R.B.: Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520–525 (2001)
  42. Tütüncü, R.H., Toh, K.C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. Ser. B 95, 189–217 (2003)
  43. van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
  44. Wen, Z., Yin, W., Goldfarb, D., Zhang, Y.: A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. Technical Report, Department of IEOR, Columbia University (2009)
  45. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)

Copyright information

© Springer and Mathematical Programming Society 2009

Authors and Affiliations

  1. Department of Industrial Engineering and Operations Research, Columbia University, New York, USA
