Kernel Relative Principal Component Analysis for Pattern Recognition

  • Yoshikazu Washizawa
  • Kenji Hikida
  • Toshihisa Tanaka
  • Yukihiko Yamashita
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3138)


Principal component analysis (PCA) is widely used in signal processing, pattern recognition, and related fields. PCA has been extended to relative PCA (RPCA), which extracts the principal components of a signal while suppressing the effects of other signals. PCA has also been extended to kernel PCA (KPCA): by means of a mapping from the original space to a higher-dimensional feature space and its associated kernel, PCA can be performed in that feature space. In this paper, we propose kernel RPCA (KRPCA) and derive its solution. As in KPCA, the matrices that must be computed for the solution have order equal to the number of samples; this is the 'kernel trick'. We present experimental results on a pattern recognition task that demonstrate the advantages of KRPCA over KPCA.
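The KRPCA solution itself is given in the full paper. As a minimal illustration of the kernel trick the abstract refers to, the sketch below implements ordinary kernel PCA (not KRPCA) with an RBF kernel, following Schölkopf et al. (1998): the eigenproblem is solved on the n × n Gram matrix, so the size of the matrices depends only on the number of samples n, never on the dimension of the feature space. The function name and the choice of kernel are illustrative assumptions, not part of the paper.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Illustrative kernel PCA with an RBF kernel.

    All computation involves only the n x n Gram matrix, where n is
    the number of samples -- the 'kernel trick' the abstract mentions.
    """
    n = X.shape[0]
    # RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center the kernel matrix, i.e. center the data in feature space
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the symmetric centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize eigenvectors so feature-space axes have unit norm
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    # Project the training samples onto the leading principal axes
    return Kc @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2)  # shape (50, 2)
```

RPCA and KRPCA modify this setup by suppressing a second, interfering signal; the point retained here is only that the eigenproblem's size is the sample count.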



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Yoshikazu Washizawa (1)
  • Kenji Hikida (2)
  • Toshihisa Tanaka (3, 4)
  • Yukihiko Yamashita (5)
  1. Toshiba Solutions Corporation, Tokyo, Japan
  2. Wireless Terminals, Texas Instruments Japan Limited, Tokyo, Japan
  3. Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
  4. Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan
  5. Graduate School of Science and Engineering, Tokyo Institute of Technology, Tokyo, Japan