Support vector machine learning algorithm and transduction

Summary

The paper first reviews a recently developed method, the Support Vector Machine. The central idea of the method is to map the original input vectors into a high-dimensional feature space and then to construct a linear regression function or separating hyperplane in that space. The mapping is usually performed implicitly, by applying the kernel technique. The paper then shows that the same kernel technique can be applied to classical algorithms such as Ridge Regression. In conclusion, we present a new transductive learning algorithm that also allows us to compute confidence levels for its predictions.
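To make the kernel technique concrete, here is a minimal sketch in Python of Ridge Regression in dual variables, in the spirit of [2]: the solution is expressed entirely through the kernel matrix, so the high-dimensional feature space is never constructed explicitly. The RBF kernel, the ridge parameter a and all function names are illustrative choices, not prescriptions of the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * sq_dists)

def fit_dual_ridge(X, y, a=1.0, gamma=1.0):
    """Ridge Regression in dual variables: alpha = (K + a*I)^(-1) y,
    where K is the kernel matrix of the training set."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + a * np.eye(len(X)), y)

def predict_dual_ridge(X_train, alpha, X_new, gamma=1.0):
    """Prediction f(x) = sum_i alpha_i k(x_i, x): a function that is
    linear in the (implicit) feature space induced by the kernel."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Since the data enter the computation only through inner products k(x_i, x_j), replacing the kernel changes the implicit feature space while leaving the algorithm itself unchanged.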

Notes

  1. Transduction is inference from particular to particular. Here we deal with a problem of transduction in the sense that we are interested only in the labelling of a particular example, rather than in a general inductive rule for classifying future examples [3]. Transduction is naturally related to the set of algorithms known as instance-based or case-based learning; perhaps the best-known algorithm in this class is the k-nearest neighbour algorithm. The transductive algorithm described in this paper, however, is not based on similarities between examples (as most instance-based techniques are), but relies on the selection of support vectors. Using the support vectors allows us to deal with high-dimensional problems and to introduce the confidence and credibility measures. A minimal sketch of this scheme follows.
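The sketch below illustrates the general transductive confidence mechanism of [4] and [5]: each candidate label for the new example is tried in turn, every example in the completed set receives a strangeness score, and the fraction of examples at least as strange as the new one gives a p-value for that completion. Credibility is the largest p-value and confidence is one minus the second-largest. For brevity, the strangeness used here is a toy measure (distance to the example's own class mean) standing in for the support-vector-based scores of the paper; all function names are illustrative.

```python
import numpy as np

def strangeness(X, y):
    """Strangeness of every example: distance to the mean of its own class
    (a toy stand-in for the support-vector-based scores of the paper)."""
    scores = np.empty(len(X))
    for label in np.unique(y):
        mask = (y == label)
        centre = X[mask].mean(axis=0)
        scores[mask] = np.linalg.norm(X[mask] - centre, axis=1)
    return scores

def transductive_predict(X_train, y_train, x_new, labels):
    """Try each candidate label for x_new and compute a p-value for the
    completed set; return (predicted label, confidence, credibility)."""
    p_values = {}
    for label in labels:
        X = np.vstack([X_train, x_new])
        y = np.append(y_train, label)
        s = strangeness(X, y)
        # p-value: fraction of examples at least as strange as x_new
        p_values[label] = np.mean(s >= s[-1])
    ranked = sorted(p_values.values(), reverse=True)
    best = max(p_values, key=p_values.get)
    return best, 1.0 - ranked[1], ranked[0]
```

For a binary problem, transductive_predict(X, y, x, labels=[0, 1]) predicts the label whose completion looks least strange; high credibility means the predicted label fits the training set well, and high confidence means every alternative label received a low p-value. (At least two candidate labels are assumed.)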

References

  1. Vapnik, V. N. (1998), Statistical Learning Theory. Wiley, New York.

  2. Saunders, C., Gammerman, A. & Vovk, V. (1998), Ridge Regression Learning Algorithm in Dual Variables. Machine Learning, Proceedings of the Fifteenth International Conference (ICML ’98), pp. 515–521, edited by J. Shavlik, Morgan Kaufmann Publishers, San Francisco.

  3. Gammerman, A. J. (1996), Machine Learning: Progress and Prospects. ISBN 0 0900145 93 5, University of London.

  4. Gammerman, A., Vovk, V. & Vapnik, V. (1998), Learning by Transduction. Uncertainty in Artificial Intelligence, Proceedings of the Fourteenth Conference, pp. 148–155, edited by G. F. Cooper and S. Moral, Morgan Kaufmann Publishers, San Francisco.

  5. Vovk, V., Gammerman, A. & Saunders, C. (1999), Machine Learning Applications of Algorithmic Randomness. Machine Learning, Proceedings of the Sixteenth International Conference (ICML ’99), pp. 444–453, edited by I. Bratko and S. Dzeroski, Morgan Kaufmann Publishers, San Francisco.

  6. Wahba, G. (1978), Improper priors, spline smoothing and the problem of guarding against model errors in regression. J. Roy. Stat. Soc. Ser. B, 40:364–372.

  7. Li, M. & Vitanyi, P. (1997), An Introduction to Kolmogorov Complexity and Its Applications. 2nd edition, Springer, New York.

Additional information

This work is partially supported by EPSRC through grants GR/L35812 (“Support Vector and Bayesian learning algorithms”), GR/M14937 (“Predictive complexity: recursion-theoretic variants”) and GR/M16856 (“Comparison of Support Vector Machine and Minimum Message Length methods for induction and prediction”).

Cite this article

Gammerman, A. Support vector machine learning algorithm and transduction. Computational Statistics 15, 31–39 (2000). https://doi.org/10.1007/s001800050034
