Support Vector Machines

Encyclopedia of Machine Learning and Data Mining

Abstract

Support vector machines (SVMs) are a class of linear algorithms that can be used for classification, regression, density estimation, novelty detection, and related tasks. In the simplest case of two-class classification, an SVM finds the hyperplane that separates the two classes of data with as wide a margin as possible. This yields good generalization accuracy on unseen data, and the underlying optimization problem admits specialized methods that allow SVMs to learn from large amounts of data.
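The maximal-margin idea can be illustrated with a minimal sketch: a soft-margin linear SVM trained by subgradient descent on the regularized hinge loss. The toy data, hyperparameters, and the plain gradient loop here are assumptions made for the example; this is not one of the specialized large-scale solvers (such as SMO, Platt 1999a) referenced in this entry.

```python
import numpy as np

# Toy two-class data: linearly separable points in 2-D.
X = np.array([[ 2.0,  2.0], [ 2.5,  3.0], [ 3.0,  2.5],    # class +1
              [-2.0, -2.0], [-2.5, -3.0], [-3.0, -2.5]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1], dtype=float)

def train_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize  lam/2 * ||w||^2 + mean(max(0, 1 - y*(Xw + b)))
    by subgradient descent (soft-margin linear SVM, primal form)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # points violating the margin
        grad_w = lam * w - (y[active] @ X[active]) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_svm(X, y)
preds = np.sign(X @ w + b)
print(preds)  # all six points classified correctly
```

At the solution, only the points closest to the separating hyperplane remain "active" in the hinge loss; these are the support vectors that give the method its name.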


Notes

  1. A comprehensive treatment of SVMs can be found in Schölkopf and Smola (2002) and Shawe-Taylor and Cristianini (2004). Some important recent developments of SVMs for structured outputs are collected in Bakir et al. (2007). For applications, see Lampert (2009) for computer vision and Schölkopf et al. (2004) for bioinformatics. Finally, Vapnik (1998) provides the details of statistical learning theory.

Recommended Reading

  • Bakir G, Hofmann T, Schölkopf B, Smola A, Taskar B, Vishwanathan SVN (2007) Predicting structured data. MIT Press, Cambridge

  • Borgwardt KM (2007) Graph kernels. Ph.D. thesis, Ludwig-Maximilians-University, Munich

  • Boser B, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. In: Haussler D (ed) Proceedings of the annual conference on computational learning theory, Pittsburgh. ACM Press, pp 144–152

  • Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297

  • Haussler D (1999) Convolution kernels on discrete structures. Technical report UCSC-CRL-99-10, UC Santa Cruz

  • Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Proceedings of the European conference on machine learning. Springer, Berlin, pp 137–142

  • Jordan MI, Bartlett PL, McAuliffe JD (2003) Convexity, classification, and risk bounds. Technical report 638, UC Berkeley

  • Lampert CH (2009) Kernel methods in computer vision. Found Trends Comput Graph Vis 4(3):193–285

  • Mangasarian OL (1965) Linear and nonlinear separation of patterns by linear programming. Oper Res 13:444–452

  • Platt JC (1999a) Fast training of support vector machines using sequential minimal optimization. In: Advances in kernel methods—support vector learning. MIT Press, Cambridge, MA, pp 185–208

  • Platt JC (1999b) Probabilities for SV machines. In: Smola AJ, Bartlett PL, Schölkopf B, Schuurmans D (eds) Advances in large margin classifiers. MIT Press, Cambridge, MA, pp 61–74

  • Schölkopf B, Smola A (2002) Learning with kernels. MIT Press, Cambridge, MA

  • Schölkopf B, Tsuda K, Vert J-P (2004) Kernel methods in computational biology. MIT Press, Cambridge, MA

  • Shawe-Taylor J, Bartlett PL, Williamson RC, Anthony M (1998) Structural risk minimization over data-dependent hierarchies. IEEE Trans Inf Theory 44(5):1926–1940

  • Shawe-Taylor J, Cristianini N (2000) Margin distribution and soft margin. In: Smola AJ, Bartlett PL, Schölkopf B, Schuurmans D (eds) Advances in large margin classifiers. MIT Press, Cambridge, MA, pp 349–358

  • Shawe-Taylor J, Cristianini N (2004) Kernel methods for pattern analysis. Cambridge University Press, Cambridge, UK

  • Smola A, Vishwanathan SVN, Le Q (2007) Bundle methods for machine learning. In: Koller D, Singer Y (eds) Advances in neural information processing systems, vol 20. MIT Press, Cambridge, MA

  • Taskar B (2004) Learning structured prediction models: a large margin approach. Ph.D. thesis, Stanford University

  • Tsochantaridis I, Joachims T, Hofmann T, Altun Y (2005) Large margin methods for structured and interdependent output variables. J Mach Learn Res 6:1453–1484

  • Vapnik V (1998) Statistical learning theory. John Wiley, New York

  • Wahba G (1990) Spline models for observational data. CBMS-NSF regional conference series in applied mathematics, vol 59. SIAM, Philadelphia

Author information

Correspondence to Xinhua Zhang.

Copyright information

© 2017 Springer Science+Business Media New York

Cite this entry

Zhang, X. (2017). Support Vector Machines. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_810
