Soft Computing

Volume 21, Issue 9, pp 2347–2356

Marginal patch alignment for dimensionality reduction

Methodologies and Application

Abstract

The patch alignment (PA) framework provides a useful way to obtain an explicit mapping for dimensionality reduction. Under the PA framework, we propose marginal patch alignment (MPA) for dimensionality reduction. MPA performs the optimization from the parts to the whole. In the patch optimization phase, the marginal between-class and within-class local neighborhoods of each training sample are selected to build local marginal patches. Performing the patch optimization has two benefits: on the one hand, the contribution of each sample to the selection of the optimal subspace is distinguished; on the other hand, the marginal structure information is exploited to extract discriminative features, so that the marginal distance between two different categories is enlarged in the low-dimensional transformed subspace. In the whole-alignment phase, an alignment trick unifies all of the local patches into a globally linear system, which gives MPA its global optimization. Experimental results on the Yale face database, the UCI Wine dataset, the Yale-B face database, and the AR face database show the effectiveness and efficiency of MPA.
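To make the two phases concrete, the following is a minimal Python sketch of a generic PA-style procedure in the spirit described above: for each training sample a patch is built from its nearest within-class and between-class ("marginal") neighbors, a part-optimization matrix is formed for the patch, all patches are accumulated into one global alignment matrix, and a linear projection is obtained from an eigendecomposition. The function name marginal_patch_alignment, the parameters k_w, k_b and beta, and the specific coefficient scheme are illustrative assumptions, not the authors' exact MPA formulation.

```python
import numpy as np

def marginal_patch_alignment(X, y, d, k_w=3, k_b=3, beta=1.0):
    """Hypothetical PA-style sketch.
    X: (D, n) data matrix with samples as columns; y: (n,) class labels;
    d: target dimension; k_w / k_b: within-/between-class neighbors per patch;
    beta: assumed trade-off between the two terms."""
    D, n = X.shape
    L = np.zeros((n, n))  # global alignment matrix
    for i in range(n):
        dist = np.linalg.norm(X - X[:, [i]], axis=0)
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        # nearest within-class and nearest between-class (marginal) neighbors
        nw = same[np.argsort(dist[same])[:k_w]]
        nb = diff[np.argsort(dist[diff])[:k_b]]
        idx = np.concatenate(([i], nw, nb))
        # coefficients: pull within-class neighbors, push between-class neighbors
        w = np.concatenate((np.ones(len(nw)), -beta * np.ones(len(nb))))
        # part-optimization matrix of this patch
        Li = np.zeros((len(idx), len(idx)))
        Li[0, 0] = w.sum()
        Li[0, 1:] = -w
        Li[1:, 0] = -w
        Li[1:, 1:] = np.diag(w)
        # whole alignment: accumulate the patch into the global matrix
        L[np.ix_(idx, idx)] += Li
    # linear projection: minimize tr(U^T X L X^T U) subject to U^T U = I
    M = X @ L @ X.T
    vals, vecs = np.linalg.eigh(M)
    U = vecs[:, np.argsort(vals)[:d]]  # eigenvectors of the d smallest eigenvalues
    return U
```

A new sample x (a length-D vector) would then be embedded as U.T @ x. How MPA actually weights the marginal neighbors within each patch is defined in the paper itself; the sketch only illustrates the part-optimization and whole-alignment structure of the PA framework.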

Keywords

Patch alignment framework · Dimensionality reduction · Margin · Classification

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Faculty of Automation, Guangdong University of Technology, Guangzhou, People’s Republic of China
  2. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
  3. School of Mathematics and Statistics, Shaoguan University, Shaoguan, China