Classifier Complexity Reduction by Support Vector Pruning in Kernel Matrix Learning

  • V. Vijaya Saradhi
  • Harish Karnick
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4507)

Abstract

This paper presents an algorithm for reducing a classifier's complexity by pruning support vectors while learning the kernel matrix. The proposed algorithm retains the 'best' support vectors, chosen so that the span of the support vectors, as defined by Vapnik and Chapelle, is as small as possible. Experiments on real-world data sets show that the number of support vectors can be reduced, in some cases by as much as 85%, with little degradation in generalization performance.
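
For intuition, the span criterion can be sketched concretely. For support vectors that are not at the bound, the Vapnik-Chapelle span admits the closed form S_p^2 = 1/(K̃^{-1})_{pp}, where K̃ is the support-vector kernel matrix bordered by a row and column of ones. The Python sketch below ranks support vectors by this estimate and retains those with the smallest span. It is an illustrative assumption-laden toy, not the paper's algorithm: the RBF kernel, the gamma/C values, the 15% retention ratio, and the helper name span_estimates are all made up for the example, all support vectors are treated as unbounded, and the paper instead performs this selection as part of learning the kernel matrix itself.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def span_estimates(K_sv):
    """Vapnik-Chapelle span S_p for each (unbounded) support vector.

    Uses the closed form S_p^2 = 1 / (K_tilde^{-1})_{pp}, where K_tilde is
    the support-vector kernel matrix bordered by a row and column of ones.
    """
    n = K_sv.shape[0]
    K_tilde = np.ones((n + 1, n + 1))
    K_tilde[:n, :n] = K_sv
    K_tilde[n, n] = 0.0
    # Clip guards against numerically tiny diagonal entries of the inverse.
    inv_diag = np.clip(np.diag(np.linalg.inv(K_tilde))[:n], 1e-12, None)
    return np.sqrt(1.0 / inv_diag)

# Toy two-class problem; kernel and parameters are arbitrary for the sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])

clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)
sv = clf.support_vectors_
spans = span_estimates(rbf_kernel(sv, gamma=1.0))

# Retain the support vectors with the smallest span (15% kept here, echoing
# the paper's report that up to 85% can be pruned with little loss).
keep = np.argsort(spans)[: max(1, int(0.15 * len(spans)))]
print(f"{len(sv)} support vectors -> {len(keep)} retained")
```

In practice one would retrain on the retained set and, since pruning changes which vectors remain unbounded, recompute the spans between rounds; the smaller the total span of the retained support vectors, the tighter the Vapnik-Chapelle bound on expected error.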

Keywords

Kernel Matrix Learning, Span of Support Vectors, Classifier Complexity

References

  1. Burges, C.J.C.: Simplified support vector decision rules. In: 13th International Conference on Machine Learning, p. 71 (1996)
  2. Burges, C.J.C.: Improving the accuracy and speed of support vector machines. In: Neural Information Processing Systems (1997)
  3. Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.: Choosing kernel parameters for support vector machines. Machine Learning 46(1-3), 131 (2001)
  4. Downs, T., Gates, K.E., Masters, A.: Exact simplification of support vector solutions. Journal of Machine Learning Research 2, 293 (2001)
  5. Keerthi, S.S., Chapelle, O., DeCoste, D.: Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research 7 (2006)
  6. Lanckriet, G.R.G., Cristianini, N., Bartlett, P., El Ghaoui, L., Jordan, M.I.: Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research 5, 27 (2004)
  7. Lee, Y.-J., Mangasarian, O.L.: RSVM: reduced support vector machines. In: CD Proceedings of the First SIAM International Conference on Data Mining, Chicago (2001)
  8. Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004), available from http://control.ee.ethz.ch/~joloef/yalmip.php
  9. Nguyen, D., Ho, T.: An efficient method for simplifying support vector machines. In: 22nd International Conference on Machine Learning, Bonn, Germany, pp. 617–624 (2005)
  10. Rätsch, G.: Benchmark repository. Technical report, Intelligent Data Analysis Group, Fraunhofer-FIRST (2005)
  11. Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002)
  12. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11-12, 625–653 (1999)
  13. Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, 211 (2001)
  14. Vapnik, V.: Statistical Learning Theory. John Wiley and Sons, New York (1998)
  15. Vapnik, V., Chapelle, O.: Bounds on error expectation for SVM. Neural Computation 12, 2013 (2000)
  16. Wu, M., Schölkopf, B., Bakir, G.: Building sparse large margin classifiers. In: 22nd International Conference on Machine Learning, Bonn, Germany (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • V. Vijaya Saradhi 1
  • Harish Karnick 1

  1. Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, India