Robust principal component analysis via truncated nuclear norm minimization

Journal of Shanghai Jiaotong University (Science)

Abstract

Robust principal component analysis (PCA) is widely used in many applications, such as image processing, data mining and bioinformatics. Existing methods for robust PCA are mostly based on nuclear norm minimization (NNM). These methods minimize all the singular values simultaneously, so the matrix rank cannot be well approximated in practice. We extend the idea of truncated nuclear norm regularization (TNNR) to robust PCA and consider truncated nuclear norm minimization (TNNM) instead of NNM. This method minimizes only the smallest N − r singular values, thereby preserving the low-rank components, where N is the number of singular values and r is the matrix rank. Moreover, we propose an effective way to determine r via the shrinkage operator. We then develop an effective iterative algorithm based on the alternating direction method to solve this optimization problem. Experimental results demonstrate the efficiency and accuracy of the TNNM method, which is also much more robust in terms of the rank of the reconstructed matrix and the sparsity of the error.
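The two quantities named in the abstract can be sketched concretely: the truncated nuclear norm (the sum of the smallest N − r singular values, which TNNM minimizes in place of the full nuclear norm) and the singular value shrinkage operator (soft-thresholding of the singular values, as in singular value thresholding algorithms). This is a minimal NumPy illustration; the function names `truncated_nuclear_norm` and `svt` are ours, not the paper's, and the paper's full alternating direction algorithm is not reproduced here.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the smallest N - r singular values of X.

    This is the TNNM objective: the r largest singular values,
    which carry the low-rank structure, are left unpenalized.
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()

def svt(X, tau):
    """Singular value shrinkage (soft-thresholding) operator.

    Shrinks every singular value of X toward zero by tau, setting
    those below tau to exactly zero; the basic building block of
    nuclear-norm-based solvers.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

For an exactly rank-r matrix, `truncated_nuclear_norm(X, r)` is (numerically) zero, which is why it approximates the rank constraint better than the full nuclear norm: the penalty vanishes once the trailing singular values do.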



Author information

Corresponding author

Correspondence to Jichang Guo  (郭继昌).

Additional information

Foundation item: the Doctoral Program of Higher Education of China (No. 20120032110034)


Cite this article

Zhang, Y., Guo, J., Zhao, J. et al. Robust principal component analysis via truncated nuclear norm minimization. J. Shanghai Jiaotong Univ. (Sci.) 21, 576–583 (2016). https://doi.org/10.1007/s12204-016-1765-5

