
Improved Working Set Selection for LaRank

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 6854)

Abstract

LaRank is a multi-class support vector machine training algorithm for approximate online and batch learning based on sequential minimal optimization. For batch learning, LaRank performs one or more learning epochs over the training set. One epoch sequentially tests all currently excluded training examples for inclusion in the dual optimization problem, with intermittent reprocess optimization steps on examples currently included. Working set selection for one reprocess step chooses the most violating pair among the variables corresponding to a random example. We propose a new working set selection scheme that exploits the gradient update which necessarily follows an optimization step; this makes the selection computationally more efficient. Among a set of candidate examples, we pick the one yielding the maximum gain between either of the classes being updated and a randomly chosen third class. Experiments demonstrate faster convergence on three of four benchmark datasets and no significant difference on the fourth.
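
To make the two selection rules concrete, the following is a minimal Python sketch, assuming a Crammer-Singer style multi-class dual with one variable per (example, class) pair, a precomputed kernel matrix, and a maintained gradient table. The names SolverState, gradient, kernel, and updated_classes are illustrative assumptions, not the authors' implementation; feasibility (box) constraints and the actual SMO update on the chosen pair are omitted.

    # Illustrative sketch only; all identifiers are assumptions, not the paper's code.
    import random
    import numpy as np


    class SolverState:
        """Hypothetical bookkeeping for the dual solver."""

        def __init__(self, gram, num_classes):
            self.gram = gram                      # precomputed kernel (Gram) matrix, an assumption
            self.num_classes = num_classes
            n = gram.shape[0]
            # gradient[i, c]: partial derivative of the dual objective w.r.t. alpha[i, c]
            self.gradient = np.zeros((n, num_classes))

        def kernel(self, i, j):
            return self.gram[i, j]


    def pair_gain(state, i, c_up, c_down):
        """Unconstrained dual gain of shifting weight between classes c_up and c_down
        on example i: the usual SMO closed form (g_up - g_down)^2 / (4 * k_ii),
        with clipping against the feasible region omitted for brevity."""
        k_ii = state.kernel(i, i)
        diff = state.gradient[i, c_up] - state.gradient[i, c_down]
        return diff * diff / (4.0 * k_ii)


    def reprocess_most_violating_pair(state, i):
        """Original LaRank-style reprocess selection: the most violating pair of
        classes for one randomly drawn support pattern i."""
        g = state.gradient[i]
        return int(np.argmax(g)), int(np.argmin(g))


    def reprocess_max_gain(state, candidates, updated_classes, rng=random):
        """Sketch of the proposed rule: among a set of candidate examples, pick the
        one whose pairing of a just-updated class (whose gradients the preceding
        optimization step refreshed anyway) with a randomly chosen third class
        yields the maximum gain."""
        third = rng.choice(
            [c for c in range(state.num_classes) if c not in updated_classes]
        )
        best = None
        for i in candidates:
            for c in updated_classes:
                gain = pair_gain(state, i, c, third)
                if best is None or gain > best[0]:
                    best = (gain, i, c, third)
        _, i, c_up, c_down = best
        return i, c_up, c_down

The intended advantage, as the abstract describes it, is that the gradients of the classes touched by the preceding optimization step are already up to date, so evaluating the gain criterion over a small candidate set adds little extra cost.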




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tuma, M., Igel, C. (2011). Improved Working Set Selection for LaRank. In: Real, P., Diaz-Pernil, D., Molina-Abril, H., Berciano, A., Kropatsch, W. (eds) Computer Analysis of Images and Patterns. CAIP 2011. Lecture Notes in Computer Science, vol 6854. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23672-3_40

  • DOI: https://doi.org/10.1007/978-3-642-23672-3_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23671-6

  • Online ISBN: 978-3-642-23672-3

  • eBook Packages: Computer Science, Computer Science (R0)
