Efficient algorithms for robust and stable principal component pursuit problems

  • Necdet Serhat Aybat
  • Donald Goldfarb
  • Shiqian Ma

Abstract

The problem of recovering a low-rank matrix from a set of observations corrupted by gross sparse errors is known as robust principal component analysis (RPCA) and has many applications in computer vision, image processing, and web data ranking. It has been shown that, under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to gross sparse errors, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints, such as foreground extraction from surveillance video, shadow and specularity removal from face images, and video denoising of heavily corrupted data, show that our algorithms are competitive with current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.
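For concreteness, the convex problems mentioned in the abstract are usually written as follows; this is a sketch of the standard formulations from the RPCA/SPCP literature rather than the paper's own statement of the problems, and the symbols D (observed matrix), X (low-rank component), S (sparse component), ξ (trade-off weight), and δ (noise level) are notation assumed here for illustration.

RPCP:   \min_{X,S} \; \|X\|_{*} + \xi \|S\|_{1} \quad \text{s.t.} \quad X + S = D

SPCP:   \min_{X,S} \; \|X\|_{*} + \xi \|S\|_{1} \quad \text{s.t.} \quad \|X + S - D\|_{F} \le \delta

Here \|X\|_{*} denotes the nuclear norm (the sum of the singular values of X), a convex surrogate for rank, and \|S\|_{1} denotes the elementwise l1 norm, a convex surrogate for sparsity. Under suitable incoherence and sparsity conditions (the "certain conditions" referred to above), solving RPCP recovers the low-rank matrix exactly, while solving SPCP recovers it with an error proportional to the noise level δ.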

Keywords

Principal component analysis · Compressed sensing · Matrix completion · Convex optimization · Smoothing · Alternating linearization method · Alternating direction augmented Lagrangian method · Accelerated proximal gradient method · Iteration complexity


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Necdet Serhat Aybat (1)
  • Donald Goldfarb (2)
  • Shiqian Ma (3)
  1. Department of Industrial Engineering, Penn State University, University Park, USA
  2. Department of Industrial Engineering and Operations Research, Columbia University, New York, USA
  3. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong
