Machine Learning, Volume 20, Issue 3, pp 273–297

Support-Vector Networks

  • Corinna Cortes
  • Vladimir Vapnik

Abstract

The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data.
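As a sketch of this construction in now-standard notation (the paper's own notation and derivation differ in detail): given training pairs $(x_i, y_i)$ with $y_i \in \{-1, +1\}$, a non-linear map $\varphi$ into the feature space, and slack variables $\xi_i$ that permit errors on non-separable data, the decision surface is the hyperplane solving

\[
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i
\quad \text{subject to} \quad
y_i\bigl(w \cdot \varphi(x_i) + b\bigr) \ge 1 - \xi_i, \qquad \xi_i \ge 0 .
\]

The resulting classifier depends on $\varphi$ only through inner products, $f(x) = \operatorname{sign}\bigl(\sum_i \alpha_i y_i K(x_i, x) + b\bigr)$ with $K(u, v) = \varphi(u) \cdot \varphi(v)$, and only the support vectors receive non-zero coefficients $\alpha_i$.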

High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
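To get a concrete feel for a polynomial-kernel support-vector classifier, the following is a minimal sketch using scikit-learn; the library, the small 8x8 digits dataset, and the hyperparameter values are illustrative assumptions, not the paper's original OCR benchmark setup or data.

```python
# Minimal sketch: soft-margin support-vector classifier with a polynomial
# kernel on a small digits dataset. Illustrative only; the benchmark in the
# paper used larger handwritten-digit data and its own experimental protocol.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small 8x8 digit images standing in for OCR data (assumption).
X, y = datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Polynomial kernel of degree 3; C controls the soft-margin penalty on errors.
clf = SVC(kernel="poly", degree=3, C=10.0, gamma="scale")
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("support vectors per class:", clf.n_support_)
```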

Keywords: pattern recognition, efficient learning algorithms, neural networks, radial basis function classifiers, polynomial classifiers


Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Corinna Cortes (1)
  • Vladimir Vapnik (1)

  1. AT&T Bell Labs, Holmdel, USA
