Encyclopedia of Biometrics

Living Edition
Editors: Stan Z. Li, Anil K. Jain

Linear Dimension Reduction Techniques

  • Wei-Shi Zheng
  • Jian-Huang Lai
  • Pong C. Yuen
Living reference work entry
DOI: https://doi.org/10.1007/978-3-642-27733-7_9220-2

Definition

A linear dimension reduction technique reduces the dimensionality of biometric data by means of a linear transform, which is typically learned by optimizing a criterion. The data are then projected onto the range space of this transform, and all subsequent processing is performed in the resulting lower-dimensional space.
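
As a minimal sketch of this definition, assuming NumPy: the data are projected through a transform W into the lower-dimensional space. Here W is random purely for illustration; in practice it would be learned by optimizing a criterion such as PCA or LDA, and the shapes below are illustrative assumptions.

```python
import numpy as np

# Assume the n x d transform W (d < n) has already been learned by
# optimizing some criterion (e.g., PCA or LDA); here W is random
# purely for illustration.
n, d, N = 1024, 32, 200
rng = np.random.default_rng(0)
W = rng.standard_normal((n, d))

X = rng.standard_normal((N, n))  # N biometric samples, each n-dimensional
Y = X @ W                        # coordinates of the projection onto the range space of W
print(Y.shape)                   # (200, 32): the lower-dimensional representation
```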

Introduction

In biometrics, data are typically represented as vectors, and their dimensionality is often very high. Processing such data directly would be computationally expensive for many algorithms. Moreover, it is often desirable to extract robust, informative, or discriminative information from the data. For these reasons, a lower-dimensional subspace is commonly sought in which the most important information in the data is retained by a linear representation. Among subspace learning techniques, linear dimension reduction methods are particularly popular.

Suppose we are given a set of \(N\) data samples \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{N}\}\), where each \(\mathbf{x}_{i} \in \mathbb{R}^{n}\) is an \(n\)-dimensional vector.
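
As a concrete, hedged illustration of how a transform can be learned from such a sample set, the sketch below implements PCA, one classical linear reduction criterion, with NumPy; the function names, shapes, and the choice of PCA are assumptions for illustration.

```python
import numpy as np

def learn_pca_transform(X, d):
    """Learn an n x d transform whose columns span the d directions of
    maximal variance (PCA), one classical linear reduction criterion."""
    mu = X.mean(axis=0)                        # mean of the N samples
    Xc = X - mu                                # center the data
    # Right singular vectors of the centered data matrix are the
    # eigenvectors of the sample covariance matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T, mu                        # W is n x d

# N = 100 samples x_1, ..., x_N, each an n = 4096 dimensional vector,
# stacked as the rows of X (synthetic data purely for illustration).
X = np.random.default_rng(1).random((100, 4096))
W, mu = learn_pca_transform(X, d=50)
Y = (X - mu) @ W                               # N x d reduced representation
print(Y.shape)                                 # (100, 50)
```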

Keywords

Linear Discriminant Analysis, Independent Component Analysis, Dimension Reduction, Nonnegative Matrix Factorization

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. School of Information Science and Technology, Sun Yat-sen University, Guangzhou, People’s Republic of China
  2. School of Information Science and Technology, Sun Yat-sen University, Guangzhou, People’s Republic of China
  3. Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong