Efficient Large Scale Linear Programming Support Vector Machines

  • Suvrit Sra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)

Abstract

This paper presents a decomposition method for efficiently constructing ℓ1-norm Support Vector Machines (SVMs). The decomposition algorithm introduced in this paper possesses many desirable properties. For example, it is provably convergent, scales well to large datasets, is easy to implement, and can be extended to handle support vector regression and other SVM variants. We demonstrate the efficiency of our algorithm by training on (dense) synthetic datasets of sizes up to 20 million points (in ℝ32). The results show our algorithm to be several orders of magnitude faster than a previously published method for the same task. We also present experimental results on real data sets—our method is seen to be not only very fast, but also highly competitive against the leading SVM implementations.
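The ℓ1-norm SVM referred to in the abstract replaces the usual quadratic ℓ2 regularizer with ‖w‖₁, which turns training into a linear program. As a hedged illustration of that underlying LP only (not the paper's decomposition algorithm, which is designed to scale far beyond what a generic LP solver handles), here is a minimal sketch using SciPy's `linprog`; the function name `l1_svm_lp` and the choice of solver are assumptions of this sketch, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def l1_svm_lp(X, y, C=1.0):
    """Train an l1-norm soft-margin SVM by solving its standard LP form.

    min  ||w||_1 + C * sum(xi)
    s.t. y_i (w . x_i + b) >= 1 - xi_i,   xi >= 0

    |w| is linearized by splitting w = wp - wn with wp, wn >= 0
    (and likewise b = bp - bn), a standard LP reformulation.
    """
    n, d = X.shape
    # Variable order: [wp (d), wn (d), bp, bn, xi (n)], all >= 0.
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    Y = y[:, None] * X  # rows are y_i * x_i
    # Margin constraints rewritten as A_ub @ z <= b_ub:
    #   -y_i((wp - wn) . x_i + bp - bn) - xi_i <= -1
    A_ub = np.hstack([-Y, Y, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    wp, wn = res.x[:d], res.x[d:2 * d]
    b = res.x[2 * d] - res.x[2 * d + 1]
    return wp - wn, b

# Tiny separable example: the LP recovers a sparse separating hyperplane.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = l1_svm_lp(X, y, C=10.0)
```

This dense formulation has O(n) constraints and O(n + d) variables, which is exactly why the paper pursues a decomposition scheme instead of handing the full LP to a general-purpose solver for datasets with millions of points.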

Keywords

Support Vector Machine · Decomposition Method · Training Point · Decomposition Procedure · Machine Learning Research
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Suvrit Sra
  1. Dept. of Computer Sciences, The University of Texas at Austin, Austin, USA