Nonlinear PCA & Feature Extraction

Abstract

Principal component analysis (PCA) was proposed by Pearson in 1901. As an important statistical method, it was applied by Hotelling in 1933 to the study of correlation structures of multivariate random variables. In 1982, Oja proposed a fast algorithm for computing the eigenvalues and eigenvectors of the covariance matrix, together with a neural network model based on the Hebbian learning rule for the adaptive extraction of the maximum principal component [1, 2]. Afterwards, Sanger proposed a neural network model for the adaptive extraction of multiple principal components, and Kung proposed the APEX (Adaptive Principal Component Extraction) learning algorithm for multiple principal components [3], which improved the efficiency of the PCA method. PCA is a linear transform technique. Its basic idea is to obtain a group of new features from the original features through a linear transform, such that the number of features is unchanged but the first few new features carry the main information of the original ones. By keeping only this main information, the number of features is reduced and the signal subspace is separated from the noise subspace, so part of the noise can be removed. Specifically, using the orthogonal eigenvector matrix of the sample autocorrelation matrix, we can reconstruct the signals and remove their correlations, and then find the component with the largest variance, which also carries the highest energy.
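
The following is a minimal sketch of the linear PCA procedure the abstract describes: eigendecomposition of the sample covariance (autocorrelation) matrix and projection onto the highest-variance, decorrelated directions. The function and variable names are illustrative, not from the chapter.

```python
# Minimal PCA sketch (illustrative; not the chapter's own code).
import numpy as np

def pca(X, n_components):
    """Project samples X (n_samples x n_features) onto the leading
    principal components of the sample covariance matrix."""
    # Center the data so the covariance matrix captures variance only.
    Xc = X - X.mean(axis=0)
    # Sample covariance matrix of the centered data.
    C = Xc.T @ Xc / (Xc.shape[0] - 1)
    # C is symmetric, so eigh returns an orthogonal eigenvector matrix
    # with eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]        # sort descending by variance
    W = eigvecs[:, order[:n_components]]     # leading eigenvectors
    # The projected features are uncorrelated; the first direction
    # has the largest variance (highest energy).
    return Xc @ W, eigvals[order[:n_components]]

# Usage: keep the 2 highest-variance directions of random 5-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z, variances = pca(X, n_components=2)
print(Z.shape, variances)
```

Because the eigenvector matrix of the symmetric covariance matrix is orthogonal, the projection both decorrelates the features and preserves the energy ordering, which is exactly the separation of signal and noise subspaces described above.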

Keywords

Support Vector Machine · Hide Layer · Blind Source Separation · Kernel Principal Component Analysis · Principal Component Analysis Method
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

[1] Oja E (1982) A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology 15: 267–273
[2] Oja E (1992) Principal components, minor components, and linear neural networks. Neural Networks 5: 927–935
[3] Diamantaras K I, Kung S Y (1996) Principal component neural networks: theory and applications. Wiley, New York
[4] Oja E (1997) The nonlinear PCA learning rule in independent component analysis. Neurocomputing 17(1): 25–46
[5] Karhunen J, Pajunen P, Oja E (1998) The nonlinear PCA criterion in blind source separation: relations with other approaches. Neurocomputing 22(1): 5–20
[6] Xu L (1993) Least mean square error reconstruction principle for self-organizing neural nets. Neural Networks 6: 627–648
[7] Jolliffe I (1986) Principal component analysis. Springer, New York
[8] Schölkopf B, Smola A, Müller K (1998) Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 10(5): 1299–1319
[9] Schölkopf B, Smola A, Müller K (1999) Kernel principal component analysis. In: Advances in Kernel Methods. The MIT Press, Cambridge, MA
[10] Vapnik V (1995) The nature of statistical learning theory. Springer, New York
[11] Romdhani S, Gong S, Psarrou A (1999) A multi-view nonlinear active shape model using kernel PCA. In: Proceedings of the British Machine Vision Conference (BMVC), Nottingham, 1999, pp 483–492
[12] Hsieh W W (2001) Nonlinear principal component analysis by neural networks. Tellus 53A: 599–615
[13] Kim T, Adali T (2002) Fully complex multi-layer perceptron network for nonlinear signal processing. Journal of VLSI Signal Processing 32(1): 29–43
[14] Rattan S P, Hsieh W W (2005) Complex-valued neural networks for nonlinear complex principal component analysis. Neural Networks 18(1): 61–69
[15] Nitta T (1997) An extension of the back-propagation algorithm to complex numbers. Neural Networks 10(8): 1391–1415

Copyright information

© Shanghai Jiao Tong University Press, Shanghai and Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

Institute of Vibration, Shock & Noise, Shanghai Jiao Tong University, Shanghai, China
