
Kernel principal component analysis

  • Bernhard Schölkopf
  • Alexander Smola
  • Klaus-Robert Müller
Part IV: Signal Processing: Blind Source Separation, Vector Quantization, and Self-Organization
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)

Abstract

A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible d-pixel products in images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
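
In essence, the computation the abstract describes reduces to an eigenvalue problem on the kernel (Gram) matrix of the training data. The sketch below illustrates that formulation in NumPy with a polynomial kernel k(x, y) = (x · y)^d, matching the d-pixel-product example; it is a minimal illustration under those assumptions, and the function and parameter names are ours, not the paper's.

    import numpy as np

    def kernel_pca(X, X_test, degree=2, n_components=2):
        # Gram matrix of the training data under the polynomial kernel
        # k(x, y) = (x . y)^degree; this yields dot products in the
        # feature space of degree-wise pixel products without ever
        # constructing that space explicitly.
        n = X.shape[0]
        K = (X @ X.T) ** degree

        # Center the data in feature space by centering the Gram matrix:
        # Kc = K - 1n K - K 1n + 1n K 1n, where 1n is the n x n matrix
        # with all entries 1/n.
        one_n = np.ones((n, n)) / n
        Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

        # Nonlinear principal components come from the eigenvectors
        # alpha of the centered Gram matrix.
        eigvals, eigvecs = np.linalg.eigh(Kc)           # ascending order
        idx = np.argsort(eigvals)[::-1][:n_components]  # leading components
        lam, alpha = eigvals[idx], eigvecs[:, idx]

        # Rescale alpha so each feature-space eigenvector has unit norm:
        # lambda_k * (alpha_k . alpha_k) = 1. The small floor guards
        # against vanishing eigenvalues.
        alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))

        # Project test points: evaluate the kernel against the training
        # set, apply the same centering, and take inner products with alpha.
        K_t = (X_test @ X.T) ** degree
        one_m = np.ones((X_test.shape[0], n)) / n
        K_tc = K_t - one_m @ K - K_t @ one_n + one_m @ K @ one_n
        return K_tc @ alpha

Extracting, say, the first two nonlinear components of the training set itself is then kernel_pca(X, X, degree=2, n_components=2); only kernel evaluations on input-space vectors are ever required.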

Keywords

Support Vector Machine · Feature Space · Independent Component Analysis · Kernel Principal Component Analysis · Standard Principal Component Analysis

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Bernhard Schölkopf (1)
  • Alexander Smola (2)
  • Klaus-Robert Müller (2)
  1. Max-Planck-Institut für biologische Kybernetik, Tübingen, Germany
  2. GMD FIRST, Berlin, Germany
