Principal component analysis (PCA) was first proposed by Pearson in 1901. As an important statistical method, it was applied by Hotelling in 1933 to study the correlation structure of random multivariate data. In 1982, Oja proposed a fast algorithm for computing the eigenvalues and eigenvectors of the covariance matrix, together with a neural network model based on the Hebbian learning rule for adaptively extracting the maximum principal component [1, 2]. Sanger later proposed a neural network model for the adaptive extraction of multiple principal components, and Kung proposed the APEX (Adaptive Principal Component Extraction) learning algorithm for multiple principal components, which improved the efficiency of the PCA method. The basic idea of PCA is a linear transform: from the original features, a new group of features, equal in number, is obtained by a linear transform, such that the first few new features carry the main information of the original data. By retaining only these leading features, the number of features decreases while the main information is preserved; the signal subspace is separated from the noise subspace, so part of the noise can be removed. Specifically, using the orthogonal eigenvector matrix of the autocorrelation matrix of the samples, the signals can be reconstructed and decorrelated, and the component with the largest variance, and hence the highest energy, can be identified.
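The linear transform described above, and the Hebbian rule attributed to Oja, can be sketched in NumPy. This is a minimal illustration, not the chapter's implementation: the data matrix, mixing coefficients, learning rate, and number of passes below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (an assumption): 500 samples of 3 correlated features.
X = rng.normal(size=(500, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.5, 0.2, 0.1]])

# Center the samples and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(Xc) - 1)

# Orthogonal eigenvector matrix of the covariance matrix;
# eigh returns eigenvalues in ascending order, so reverse to descending.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Linear transform to the new features: same number of features,
# mutually decorrelated, ordered by decreasing variance (energy).
Y = Xc @ eigvecs
variances = Y.var(axis=0, ddof=1)

# Oja's Hebbian rule, sketched for the maximum principal component:
# w is updated sample by sample and converges toward the top eigenvector.
w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.001                          # assumed learning rate
for x in np.tile(Xc, (20, 1)):       # 20 passes over the centered data
    y = w @ x
    w += eta * y * (x - y * w)
w /= np.linalg.norm(w)
```

Keeping only the first column of `Y` reduces the three features to one while retaining most of the variance, which is the dimensionality-reduction step the text describes.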
Keywords: Support Vector Machine; Hidden Layer; Blind Source Separation; Kernel Principal Component Analysis; Principal Component Analysis Method
- Scholkopf B, Smola A, Muller K (1999) Kernel principal component analysis. In: Advances in Kernel Methods. The MIT Press, Cambridge, MA
- Romdhani S, Gong S, Psarrou A (1999) A multi-view nonlinear active shape model using kernel PCA. In: Proceedings of the British Machine Vision Conference (BMVC), Nottingham, pp 483–492
- Hsieh WW (2001) Nonlinear principal component analysis by neural networks. Tellus 53A:599–615