Machine Learning, Volume 46, Issue 1, pp 21–52

Bayesian Methods for Support Vector Machines: Evidence and Predictive Class Probabilities

  • Peter Sollich

DOI: 10.1023/A:1012489924661

Cite this article as:
Sollich, P. Machine Learning (2002) 46: 21. doi:10.1023/A:1012489924661

Abstract

I describe a framework for interpreting Support Vector Machines (SVMs) as maximum a posteriori (MAP) solutions to inference problems with Gaussian Process priors. This probabilistic interpretation can provide intuitive guidelines for choosing a ‘good’ SVM kernel. Beyond this, it allows Bayesian methods to be used for tackling two of the outstanding challenges in SVM classification: how to tune hyperparameters—the misclassification penalty C, and any parameters specifying the kernel—and how to obtain predictive class probabilities rather than the conventional deterministic class label predictions. Hyperparameters can be set by maximizing the evidence; I explain how the latter can be defined and properly normalized. Both analytical approximations and numerical methods (Monte Carlo chaining) for estimating the evidence are discussed. I also compare different methods of estimating class probabilities, ranging from simple evaluation at the MAP or at the posterior average to full averaging over the posterior. A simple toy application illustrates the various concepts and techniques.
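For a concrete handle on the MAP interpretation sketched above, the following Python fragment minimizes the standard SVM objective read as a negative log-posterior: a Gaussian Process prior term plus C times the summed hinge losses. This is a minimal sketch, not the paper's properly normalized probabilistic model; the RBF kernel, the toy data, the value C = 2.0, and the generic optimizer are all assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, gamma=1.0):
    # Squared-exponential (RBF) kernel; gamma is an assumed hyperparameter.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def neg_log_posterior(f, K_inv, y, C):
    # GP prior term 0.5 * f^T K^{-1} f plus C * sum of hinge losses:
    # the usual SVM objective, read as a negative log-posterior over the
    # latent values f (ignoring the normalization issues the paper treats).
    prior = 0.5 * f @ K_inv @ f
    hinge = np.maximum(0.0, 1.0 - y * f).sum()
    return prior + C * hinge

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))            # toy inputs
y = np.sign(X[:, 0] + X[:, 1])          # toy labels in {-1, +1}

K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
K_inv = np.linalg.inv(K)

# MAP estimate of the latent function values at the training points.
# The hinge term is non-differentiable at the margin, but a quasi-Newton
# method with numerical gradients is adequate for this rough illustration.
res = minimize(neg_log_posterior, np.zeros(len(X)), args=(K_inv, y, 2.0),
               method="L-BFGS-B")
f_map = res.x
print("training accuracy:", np.mean(np.sign(f_map) == y))
```

Taking sign(f_map) recovers the conventional deterministic SVM labels; how to go beyond this point estimate, to normalized evidence for tuning C and the kernel parameters and to calibrated predictive class probabilities, is the subject of the paper.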

Keywords: Support vector machines · Gaussian processes · Bayesian inference · evidence · hyperparameter tuning · probabilistic predictions

Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Peter Sollich
    Department of Mathematics, King's College London, Strand, London, UK