Marginal patch alignment for dimensionality reduction
The patch alignment (PA) framework provides a useful way to obtain an explicit mapping for dimensionality reduction. Under the PA framework, we propose marginal patch alignment (MPA) for dimensionality reduction. MPA performs the optimization from the part to the whole. In the patch optimization phase, the marginal between-class and within-class local neighborhoods of each training sample are selected to build local marginal patches. Performing the patch optimization has two effects: on the one hand, the contribution of each sample to the selection of the optimal subspace is distinguished; on the other hand, the marginal structure information is exploited to extract discriminative features, so that the marginal distance between different categories is enlarged in the low-dimensional transformed subspace. In the whole alignment phase, an alignment trick unifies all of the local patches into a globally linear system, which gives MPA its whole optimization. Experimental results on the Yale face database, the UCI Wine dataset, the Yale-B face database, and the AR face database show the effectiveness and efficiency of MPA.
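The part-to-whole procedure described above can be illustrated with a minimal sketch. This is not the authors' exact MPA formulation: the neighborhood sizes (`k_within`, `k_between`), the push/pull weighting `beta`, and the specific patch coefficients are illustrative assumptions; only the overall structure (per-sample marginal patches, then alignment into one global matrix, then a linear eigendecomposition-based projection) follows the abstract.

```python
import numpy as np

def mpa_sketch(X, y, k_within=3, k_between=3, beta=1.0, dim=2):
    """Illustrative patch-alignment sketch (hypothetical parameters, not the
    published MPA objective). X: (n_samples, n_features), y: class labels."""
    n = X.shape[0]
    L = np.zeros((n, n))                                   # global alignment matrix
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise squared distances

    # Phase 1: patch optimization -- one local marginal patch per sample.
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        nw = same[np.argsort(d2[i, same])[:k_within]]      # nearest within-class neighbors
        nb = diff[np.argsort(d2[i, diff])[:k_between]]     # nearest between-class (marginal) neighbors
        idx = np.concatenate(([i], nw, nb))
        # Coefficients: pull within-class neighbors close, push marginal
        # between-class neighbors away (sign encodes the margin objective).
        c = np.concatenate((np.ones(len(nw)), -beta * np.ones(len(nb))))
        Li = np.empty((len(idx), len(idx)))
        Li[0, 0] = c.sum()
        Li[0, 1:] = -c
        Li[1:, 0] = -c
        Li[1:, 1:] = np.diag(c)
        # Phase 2: whole alignment -- accumulate the patch into the global system.
        L[np.ix_(idx, idx)] += Li

    # Linear projection from the smallest eigenvectors of X^T L X.
    _, vecs = np.linalg.eigh(X.T @ L @ X)
    return vecs[:, :dim]
```

The alignment step (`L[np.ix_(idx, idx)] += Li`) is the standard PA trick: each local objective, written over patch indices, is scattered into one global matrix so that a single eigendecomposition yields the whole optimization.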
Keywords: Patch alignment framework · Dimensionality reduction · Margin · Classification
This work was partially supported by the National Natural Science Foundation of China (Grant nos. 61305036, 61322306, 61333013, and 61273192), the China Postdoctoral Science Foundation funded project (Grant nos. 2014M560657 and 2015T80898), the Scientific Funds approved in 2013 for Higher Level Talents by Guangdong Provincial universities, and a project supported by GDHVPS 2014.
Compliance with ethical standards
Conflict of interest
Jie Xu, Shengli Xie and Wenkang Zhu, their immediate family, and any research foundation with which they are affiliated did not receive any financial payments or other benefits from any commercial entity related to the subject of this article.