Support-vector networks

Similar articles being viewed by others

  • Nonlinear optimization and support vector machines. Veronica Piccialli & Marco Sciandrone (23 May 2018)
  • Nonlinear optimization and support vector machines. Veronica Piccialli & Marco Sciandrone (14 April 2022)
  • Kernel classification using a linear programming approach. Alexander M. Malyscheff & Theodore B. Trafalis (14 June 2019)
  • Applying an Approximation of the Anderson Discriminant Function and Support Vector Machines for Solving Some Classification Tasks. V. V. Zenkov (01 January 2020)
  • Multi-step Training of a Generalized Linear Classifier. Kanishka Tyagi & Michael Manry (29 September 2018)
  • Entropy-Based Estimation in Classification Problems. Yu. A. Dubnov (01 March 2019)
  • A support vector approach based on penalty function method. Songfeng Zheng (17 December 2021)
  • Evaluation of the Posterior Probability of a Class with a Series of Anderson Discriminant Functions. V. V. Zenkov (01 March 2019)
  • A new trigonometric kernel function for support vector machine. Sajad Fathi Hafshejani & Zahra Moaberfard (14 December 2022)
  • Published: September 1995

Support-vector networks

  • Corinna Cortes1 &
  • Vladimir Vapnik1 

Machine Learning volume 20, pages 273–297 (1995)


Abstract

The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.

High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
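The mechanism the abstract describes, evaluating a linear decision surface in a high-dimensional feature space without ever forming that space explicitly, rests on the kernel identity for polynomial input transformations. The following is a minimal illustrative sketch (not the paper's notation; `poly_kernel` and `phi` are names chosen here): for a degree-2 polynomial kernel on 2-d inputs, computing K(x, y) = (x·y + 1)² in input space yields exactly the inner product of an explicit 6-dimensional non-linear feature map.

```python
import math

def poly_kernel(x, y, d=2):
    # Kernel evaluated directly in the low-dimensional input space:
    # K(x, y) = (x . y + 1)^d
    return (sum(a * b for a, b in zip(x, y)) + 1) ** d

def phi(x):
    # Explicit degree-2 feature map for a 2-d input x = (x1, x2):
    # phi(x) = (1, sqrt(2) x1, sqrt(2) x2, x1^2, sqrt(2) x1 x2, x2^2),
    # chosen so that phi(x) . phi(y) = (x . y + 1)^2.
    x1, x2 = x
    r2 = math.sqrt(2.0)
    return (1.0, r2 * x1, r2 * x2, x1 * x1, r2 * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(x, y)                               # input-space evaluation
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))      # feature-space inner product
assert abs(lhs - rhs) < 1e-9                          # both equal 4.0 here
```

A decision surface that is linear in the coordinates of `phi` is quadratic in the original inputs, which is why training with kernel evaluations alone suffices; for higher degrees the explicit map grows combinatorially while the kernel remains a single dot product and a power.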



Author information

Authors and Affiliations

  1. AT&T Bell Labs., 07733, Holmdel, NJ, USA

    Corinna Cortes & Vladimir Vapnik



About this article

Cite this article

Cortes, C., Vapnik, V. Support-vector networks. Mach Learn 20, 273–297 (1995). https://doi.org/10.1007/BF00994018


  • Received: 15 May 1993

  • Accepted: 20 February 1995

  • Issue Date: September 1995

  • DOI: https://doi.org/10.1007/BF00994018


Keywords

  • pattern recognition
  • efficient learning algorithms
  • neural networks
  • radial basis function classifiers
  • polynomial classifiers