
Principal component analysis based on block-norm minimization

  • Jian-Xun Mi
  • Quanwei Zhu
  • Jia Lu
Article

Abstract

Principal Component Analysis (PCA) has attracted considerable interest for years in image recognition research. Several state-of-the-art PCA-based robust feature extraction techniques have been proposed, such as PCA-L1 and R1-PCA. Because those methods treat an image as its vectorized form, they lose latent information carried by the image and overlook its spatial structure. To exploit both kinds of information and to improve robustness to outliers, we propose principal component analysis based on block-norm minimization (Block-PCA), which employs a block-norm to measure the distance between an image and its reconstruction. The block-norm imposes an L2-norm constraint within each local group of pixel blocks and an L1-norm constraint across groups. When parts of an image are corrupted, Block-PCA effectively suppresses the influence of the corrupted blocks and makes full use of the rest of the image. In addition, we propose an alternating iterative algorithm to solve the Block-PCA model. Performance is evaluated on several datasets and compared with that of other PCA-based methods.
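The block-norm described above can be sketched in a few lines: partition the residual between an image and its reconstruction into non-overlapping blocks, take the L2 (Frobenius) norm within each block, and sum the per-block norms (an L1 combination across blocks). This is a minimal illustration under assumed square, non-overlapping blocks; the function name and block size are illustrative, not the authors' implementation.

```python
import numpy as np

def block_norm(residual, block_size=4):
    """Block-norm of a 2-D residual image.

    L2 norm is taken inside each non-overlapping block_size x block_size
    block; the per-block norms are then summed (L1 across blocks), so a
    single corrupted block contributes only its own L2 norm instead of
    dominating the whole measure.
    """
    h, w = residual.shape
    total = 0.0
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = residual[i:i + block_size, j:j + block_size]
            total += np.linalg.norm(block)  # Frobenius norm of this block
    return total
```

Two limiting cases show how the measure interpolates between familiar norms: with a single block covering the image it reduces to the Frobenius (L2) norm, and with 1x1 blocks it reduces to the entrywise L1 norm.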

Keywords

Principal component analysis · Block-norm · Image recognition

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (under Grant Nos. 61601070 and 61472055) and sponsored by the Natural Science Foundation of Chongqing (under Grant Nos. cstc2018jcyjAX0532 and cstc2014jcyjA40011).

References

  1. Aanæs H, Fisker R, Astrom K, Carstensen JM (2002) Robust factorization. IEEE Trans Pattern Anal Mach Intell 24(9):1215–1225
  2. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdiscip Rev Comput Stat 2(4):433–459
  3. Baccini A, Besse P, Falguerolles A (1996) A L1-norm PCA and a heuristic approach. Ordinal Symb Data Anal 1(1):359–368
  4. Brooks JP, Dulá J, Boone EL (2013) A pure L1-norm principal component analysis. Comput Stat Data Anal 61:83–98
  5. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 58(3):11
  6. De La Torre F, Black MJ (2003) A framework for robust subspace learning. Int J Comput Vis 54(1–3):117–142
  7. Ding C, Zhou D, He X, Zha H (2006) R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In: Proceedings of the 23rd international conference on machine learning. ACM, pp 281–288
  8. Georghiades A, Belhumeur P, Kriegman D (2001) From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans Pattern Anal Mach Intell 23(6):643–660
  9. Gottumukkal R, Asari VK (2004) An improved face recognition technique based on modular PCA approach. Pattern Recogn Lett 25(4):429–436
  10. Ke Q, Kanade T (2005) Robust L1-norm factorization in the presence of outliers and missing data by alternative convex programming. In: IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), vol 1. IEEE, pp 739–746
  11. Kumar N, Singh S, Kumar A (2017) Random permutation principal component analysis for cancelable biometric recognition. Appl Intell. https://doi.org/10.1007/s10489-017-1117-7
  12. Kwak N (2008) Principal component analysis based on L1-norm maximization. IEEE Trans Pattern Anal Mach Intell 30(9):1672–1680
  13. Li BN, Yu Q, Wang R, Xiang K, Wang M, Li X (2016) Block principal component analysis with nongreedy L1-norm maximization. IEEE Trans Cybern 46(11):2543–2547. https://doi.org/10.1109/TCYB.2015.2479645
  14. Luo M, Nie F, Chang X, Yang Y, Hauptmann AG, Zheng Q (2017) Avoiding optimal mean L2,1-norm maximization-based robust PCA for reconstruction. Neural Comput 29(4):1124–1150
  15. Luo T, Yang Y, Yi D, Ye J (2017) Robust discriminative feature learning with calibrated data reconstruction and sparse low-rank model. Appl Intell. https://doi.org/10.1007/s10489-017-1060-7
  16. Martinez AM (1998) The AR face database. CVC Technical Report 24
  17. Mi JX, Luo Z, Fu Q, He A (2018) Double direction matrix based sparse representation for face recognition. In: International conference on security, pattern analysis, and cybernetics, pp 660–665
  18. Mi JX, Sun Y, Lu J (2018) Robust face recognition based on supervised sparse representation. In: International conference on intelligent computing, pp 253–259
  19. Nie F, Huang H (2016) Non-greedy L21-norm maximization for principal component analysis. arXiv preprint arXiv:1603.08293
  20. Nie F, Huang H, Cai X, Ding C (2010) Efficient and robust feature selection via joint L2,1-norms minimization. In: Advances in neural information processing systems, pp 1813–1821
  21. Nie F, Huang H, Ding C, Luo D, Wang H (2011) Robust principal component analysis with non-greedy L1-norm maximization. In: Proceedings of the international joint conference on artificial intelligence (IJCAI), vol 22, p 1433
  22. Nie F, Yuan J, Huang H (2014) Optimal mean robust principal component analysis. In: International conference on machine learning, pp 1062–1070
  23. Ren CX, Dai DQ, Yan H (2012) Robust classification using L2,1-norm based regression model. Pattern Recognit 45(7):2708–2718
  24. Sim T, Baker S, Bsat M (2002) The CMU pose, illumination, and expression (PIE) database. In: Fifth IEEE international conference on automatic face and gesture recognition. IEEE, pp 53–58
  25. Skocaj D, Leonardis A (2003) Weighted and robust incremental method for subspace learning. In: ICCV, pp 1494–1501
  26. Turk M, Pentland A (1991) Eigenfaces for recognition. J Cogn Neurosci 3(1):71–86
  27. Wang H (2012) Block principal component analysis with L1-norm for image analysis. Pattern Recogn Lett 33(5):537–542. https://doi.org/10.1016/j.patrec.2011.11.029
  28. Yi S, Lai Z, He Z, Cheung YM, Liu Y (2017) Joint sparse principal component analysis. Pattern Recognit 61:524–536. https://doi.org/10.1016/j.patcog.2016.08.025
  29. Zainuddin N, Selamat A, Ibrahim R (2018) Hybrid sentiment classification on Twitter aspect-based sentiment analysis. Appl Intell 48(5):1218–1232. https://doi.org/10.1007/s10489-017-1098-6
  30. Zia Uddin M, Lee JJ, Kim TS (2010) Independent shape component-based human activity recognition via hidden Markov model. Appl Intell 33(2):193–206. https://doi.org/10.1007/s10489-008-0159-2

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science, Chongqing University of Posts and Telecommunications, Chongqing, China
  2. Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
  3. College of Computer and Information Sciences, Chongqing Normal University, Chongqing, China