On the Complexity of Working Set Selection

  • Conference paper
Algorithmic Learning Theory (ALT 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3244)

Abstract

The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with Support Vector Machines (SVM-optimization). A key issue in this approach is the policy for working set selection. We would like to find policies that realize (as well as possible) three goals simultaneously: “(fast) convergence to an optimal solution”, “efficient procedures for working set selection”, and “a high degree of generality” (including typical variants of SVM-optimization as special cases). In this paper, we study a general policy for working set selection that was proposed quite recently and is known to lead to convergence for any convex quadratic optimization problem. Here, we investigate its computational complexity when it is used for SVM-optimization. We show that it poses an NP-complete working set selection problem, but that a slight variation of it (sharing the convergence properties of the original policy) can be solved in polynomial time. We show furthermore that so-called “rate certifying pairs” (introduced by Hush and Scovel) can be found in linear time, which leads to a quite efficient decomposition method with a polynomial convergence rate for SVM-optimization.
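To make the setting concrete: a decomposition method repeatedly optimizes the SVM dual over a small working set while freezing all other variables. The sketch below is an illustration only, not the paper's selection policy: it uses size-two working sets chosen as the classic "most violating pair", which a single O(n) pass over the gradient finds, in the spirit of the linear-time selection of rate-certifying pairs mentioned in the abstract. The solver name and the toy data are made up for the example.

```python
import numpy as np

def solve_svm_dual(Q, y, C, tol=1e-6, max_iter=1000):
    """Minimize 1/2 a^T Q a - sum(a)  s.t.  y^T a = 0,  0 <= a <= C,
    by a decomposition method with working sets of size two
    (illustrative sketch; the working set is the most violating pair)."""
    n = len(y)
    alpha = np.zeros(n)
    grad = -np.ones(n)                 # gradient of the dual objective at alpha = 0
    for _ in range(max_iter):
        # Indices whose variable may still move "up" / "down" along y
        # without leaving the box [0, C].
        up = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))
        dn = ((y < 0) & (alpha < C)) | ((y > 0) & (alpha > 0))
        if not up.any() or not dn.any():
            break
        F = -y * grad                  # KKT violation scores, one O(n) pass
        i = np.where(up)[0][np.argmax(F[up])]
        j = np.where(dn)[0][np.argmin(F[dn])]
        if F[i] - F[j] < tol:          # KKT conditions hold up to tol
            break
        # Analytic solution of the two-variable subproblem on the pair (i, j).
        denom = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
        delta = (F[i] - F[j]) / max(denom, 1e-12)
        # Clip the step so both variables stay inside [0, C].
        if y[i] > 0:
            delta = min(delta, C - alpha[i])
        else:
            delta = min(delta, alpha[i])
        if y[j] > 0:
            delta = min(delta, alpha[j])
        else:
            delta = min(delta, C - alpha[j])
        alpha[i] += y[i] * delta
        alpha[j] -= y[j] * delta
        grad += delta * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return alpha

# Toy 1-D problem: points at +/-1 (the eventual support vectors) and +/-2.
y = np.array([1.0, 1.0, -1.0, -1.0])
x = np.array([1.0, 2.0, -1.0, -2.0])
Q = np.outer(y * x, y * x)             # linear kernel: Q[t,s] = y_t y_s x_t x_s
alpha = solve_svm_dual(Q, y, C=10.0)
print(alpha)                           # only the two closest points get nonzero weight
```

The point of the complexity questions studied in the paper is precisely the cost of the selection step: here each iteration spends O(n) finding the pair, whereas richer selection policies can make that step itself a hard combinatorial problem.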

This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors’ views. This work was furthermore supported by the Deutsche Forschungsgemeinschaft Grant SI 498/7-1.



References

  1. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pp. 144–152. ACM Press, New York (1992)

  2. Chang, C.-C., Hsu, C.-W., Lin, C.-J.: The analysis of decomposition methods for support vector machines. IEEE Transactions on Neural Networks 11(4), 248–250 (2000)

  3. Hsu, C.-W., Lin, C.-J.: A simple decomposition method for support vector machines. Machine Learning 46(1–3), 291–314 (2002)

  4. Hush, D., Scovel, C.: Polynomial-time decomposition algorithms for support vector machines. Machine Learning 51, 51–71 (2003)

  5. Ibarra, O.H., Kim, C.E.: Fast approximation algorithms for knapsack and sum of subset problem. Journal of the Association for Computing Machinery 22(4), 463–468 (1975)

  6. Joachims, T.: Making large scale SVM learning practical. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 169–184. MIT Press, Cambridge (1998)

  7. Karp, R.M.: Reducibility among combinatorial problems. In: Miller, R.E., Thatcher, J.W. (eds.) Complexity of Computer Computations, pp. 85–103. Plenum Press, New York (1972)

  8. Keerthi, S.S., Gilbert, E.G.: Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning 46, 351–360 (2002)

  9. Keerthi, S.S., Shevade, S., Bhattacharyya, C., Murthy, K.: Improvements to SMO algorithm for SVM regression. IEEE Transactions on Neural Networks 11(5), 1188–1193 (2000)

  10. Keerthi, S.S., Shevade, S., Bhattacharyya, C., Murthy, K.: Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Computation 13, 637–649 (2001)

  11. Laskov, P.: An improved decomposition algorithm for regression support vector machines. Machine Learning 46, 315–350 (2002)

  12. Liao, S.-P., Lin, H.-T., Lin, C.-J.: A note on the decomposition methods for support vector regression. Neural Computation 14, 1267–1281 (2002)

  13. Lin, C.-J.: On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks 12, 1288–1298 (2001)

  14. Lin, C.-J.: Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks 13, 248–250 (2002)

  15. Lin, C.-J.: A formal analysis of stopping criteria of decomposition methods for support vector machines. IEEE Transactions on Neural Networks 13, 1045–1052 (2002)

  16. List, N., Simon, H.U.: A general convergence theorem for the decomposition method. In: Proceedings of the 17th Annual Conference on Computational Learning Theory, pp. 363–377. Springer, Heidelberg (2004)

  17. Mangasarian, O.L., Musicant, D.R.: Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks 10, 1032–1037 (1999)

  18. Mangasarian, O.L., Musicant, D.R.: Active support vector machine classification. Advances in Neural Information Processing Systems 12, 577–583 (2000)

  19. Mangasarian, O.L., Musicant, D.R.: Lagrangian support vector machines. Journal of Machine Learning Research 1, 161–177 (2001)

  20. Osuna, E.E., Freund, R., Girosi, F.: Training support vector machines: an application to face detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 130–136 (1997)

  21. Platt, J.C.: Fast training of support vector machines using sequential minimal optimization. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 185–208. MIT Press, Cambridge (1998)

  22. Saunders, C., Stitson, M.O., Weston, J., Bottou, L., Schölkopf, B., Smola, A.J.: Support vector machine reference manual. Technical Report CSD-TR-98-03, Royal Holloway, University of London, Egham, UK (1998)

  23. Vapnik, V.: Statistical Learning Theory. Series on Adaptive and Learning Systems for Signal Processing, Communications, and Control. John Wiley & Sons, Chichester (1998)


Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Simon, H.U. (2004). On the Complexity of Working Set Selection. In: Ben-David, S., Case, J., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science, vol 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_25

  • DOI: https://doi.org/10.1007/978-3-540-30215-5_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23356-5

  • Online ISBN: 978-3-540-30215-5

  • eBook Packages: Springer Book Archive
