Online Learning Meets Optimization in the Dual

  • Shai Shalev-Shwartz
  • Yoram Singer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4005)


We describe a novel framework for the design and analysis of online learning algorithms based on the notion of duality in constrained optimization. We cast a sub-family of universal online bounds as an optimization problem. Using the weak duality theorem we reduce the process of online learning to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress. We are thus able to tie together the primal objective value, the number of prediction mistakes, and the increase in the dual. The end result is a general framework for designing and analyzing old and new online learning algorithms in the mistake bound model.
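The dual-increase view can be illustrated on the classical Perceptron, which fits the family of algorithms the paper analyzes: each prediction mistake can be read as a dual-ascent step on an SVM-like primal problem, and weak duality turns the accumulated dual increase into a mistake bound. The sketch below is an illustrative reconstruction, not the paper's pseudocode; the function name and the specific dual bookkeeping are our own, under the assumption of a primal of the form min_w 0.5·||w||² + C·Σ hinge-loss with C ≥ 1.

```python
import numpy as np

def perceptron_dual_ascent(examples):
    """Perceptron viewed as incremental dual ascent (illustrative sketch).

    Primal (assumed): min_w 0.5*||w||^2 + C * sum_t max(0, 1 - y_t*<w, x_t>),
    whose dual is D(alpha) = sum_t alpha_t - 0.5*||sum_t alpha_t*y_t*x_t||^2
    with 0 <= alpha_t <= C.  On each mistake the Perceptron sets alpha_t = 1
    (feasible when C >= 1), which increases the dual by
        Delta_t = 1 - y_t*<w, x_t> - 0.5*||x_t||^2,
    and Delta_t >= 1 - 0.5*||x_t||^2 since a mistake means y_t*<w, x_t> <= 0.
    By weak duality, the running dual value lower-bounds the primal optimum,
    so the number of mistakes is bounded in terms of the primal objective.
    """
    dim = len(examples[0][0])
    w = np.zeros(dim)   # primal vector, always w = sum_t alpha_t * y_t * x_t
    dual = 0.0          # running dual objective value
    mistakes = 0
    for x, y in examples:
        if y * np.dot(w, x) <= 0:              # prediction mistake
            mistakes += 1
            # dual increase from raising alpha_t from 0 to 1
            dual += 1.0 - y * np.dot(w, x) - 0.5 * np.dot(x, x)
            w += y * x                          # the dual step updates the primal
    return w, mistakes, dual
```

With instances normalized so that ||x_t|| <= 1, each mistake increases the dual by at least 1/2, recovering the familiar bound that the number of mistakes is at most twice the primal optimum.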


Keywords: Online Learning, Online Algorithm, Dual Solution, Dual Objective, Bregman Divergence





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Shai Shalev-Shwartz (1)
  • Yoram Singer (1, 2)
  1. School of Computer Science & Engineering, The Hebrew University, Jerusalem, Israel
  2. Google Inc., Mountain View, USA
