Nesterov Acceleration for the SMO Algorithm

Conference paper. In: Artificial Neural Networks and Machine Learning – ICANN 2016 (ICANN 2016).

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9887)

Abstract

We revise Nesterov’s Accelerated Gradient (NAG) procedure for the SVM dual problem and propose a strictly monotone version of NAG that is capable of accelerating the second-order version of the SMO algorithm. The computational cost of the resulting Nesterov Accelerated SMO (NA–SMO) is about twice that of SMO, so the reduction in the number of iterations is unlikely to translate into time savings for most problems. However, understanding NAG is currently an area of active research, and some of the resulting ideas may offer avenues towards even faster versions of NA–SMO.
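To make the momentum mechanism concrete, here is a minimal sketch of the standard NAG update combined with a simple monotone safeguard, applied to a toy quadratic. It only illustrates the kind of accelerated, strictly monotone iteration the abstract refers to; it is not the authors’ NA–SMO, and the objective f, its gradient, and the step size 1/L are assumptions introduced purely for the example.

```python
# Illustrative sketch only: generic NAG with a simple monotone safeguard on a
# toy quadratic. This is NOT the authors' NA-SMO and does not touch the SVM
# dual; f, grad and the step size 1/L below are assumptions for this example.
import numpy as np

def nag_monotone(grad, f, x0, L, iters=200):
    """NAG with a monotone safeguard: keep the accelerated iterate only if it
    does not increase f; otherwise fall back to a plain 1/L gradient step."""
    x_prev = x = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum look-ahead point
        z = y - grad(y) / L                          # gradient step from y
        x_prev, t = x, t_next
        x = z if f(z) <= f(x) else x - grad(x) / L   # monotone safeguard
    return x

# Toy strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x, L = lambda_max(A).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(nag_monotone(grad, f, np.zeros(3), L=100.0))   # converges to A^{-1} b
```

The safeguard ensures the sequence of objective values never increases, mirroring the strict monotonicity requirement mentioned in the abstract.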

Acknowledgments

This work was partially supported by Spain’s grants TIN2013-42351-P and S2013/ICE-2845 CASI-CAM-CM, and by the Cátedra UAM–ADIC in Data Science and Machine Learning. The first author is also supported by the FPU–MEC grant AP-2012-5163. The authors gratefully acknowledge the use of the facilities of the Centro de Computación Científica (CCC) at UAM.

Author information

Corresponding author

Correspondence to Alberto Torres-Barrán.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Torres-Barrán, A., Dorronsoro, J.R. (2016). Nesterov Acceleration for the SMO Algorithm. In: Villa, A., Masulli, P., Pons Rivero, A. (eds) Artificial Neural Networks and Machine Learning – ICANN 2016. ICANN 2016. Lecture Notes in Computer Science, vol 9887. Springer, Cham. https://doi.org/10.1007/978-3-319-44781-0_29

  • DOI: https://doi.org/10.1007/978-3-319-44781-0_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44780-3

  • Online ISBN: 978-3-319-44781-0

  • eBook Packages: Computer Science (R0)
