Robust PCA Using Nonconvex Rank Approximation and Sparse Regularizer

  • Jing Dong
  • Zhichao Xue
  • Wenwu Wang


Abstract

We consider the robust principal component analysis (RPCA) problem, where the observed data are decomposed into a low-rank component and a sparse component. Conventionally, the matrix rank in RPCA is approximated by the nuclear norm. Recently, RPCA has been formulated with the nonconvex \(\ell _{\gamma }\)-norm, which approximates the matrix rank more closely than the nuclear norm. However, the low-rank component is generally also sparse, especially in a transform domain. In this paper, a sparsity-based regularization term modeled with the \(\ell _1\)-norm is introduced into the formulation, and an iterative optimization algorithm is developed to solve the resulting problem. Experiments on synthetic and real data validate the performance of the proposed method.


Keywords

Robust principal component analysis · \(\ell _{\gamma }\)-norm · Sparse prior · Low-rank



This work was supported by the National Natural Science Foundation of China (61906087), the Natural Science Foundation of Jiangsu Province of China (BK20180692), and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province of China (17KJB510025). The authors thank the associate editor and the anonymous reviewers for their contributions to improving the quality of the paper.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, China
  2. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
