Logarithmic Regret Algorithms for Online Convex Optimization

  • Elad Hazan
  • Adam Kalai
  • Satyen Kale
  • Amit Agarwal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4005)


In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space from a fixed feasible set. After each point is chosen, a (possibly unrelated) convex cost function is revealed and the corresponding cost is incurred. Zinkevich [Zin03] introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover’s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret \(O(\sqrt{T})\), for an arbitrary sequence of T convex cost functions (with bounded gradients), with respect to the best single decision in hindsight.
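Zinkevich's algorithm is simple to state: play the current point, observe the gradient of the revealed cost at that point, take a gradient step, and project back onto the feasible set. The following is a minimal sketch of that template; the unit-ball feasible set and the step-size schedule eta_t = 1/sqrt(t) are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius (our feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(gradients, dim, eta=lambda t: 1.0 / np.sqrt(t)):
    """Play x_t, observe the gradient of f_t at x_t, step, and project.

    `gradients` is a sequence of callables g_t(x) returning the gradient
    of the t-th cost at x; returns the list of points played.
    """
    x = np.zeros(dim)
    plays = []
    for t, grad_fn in enumerate(gradients, start=1):
        plays.append(x.copy())
        x = project_to_ball(x - eta(t) * grad_fn(x))
    return plays
```

With this decaying step size, the standard analysis bounds the regret of the points played against the best fixed point in hindsight by \(O(\sqrt{T})\) for any gradient-bounded convex costs.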

In this paper, we give algorithms that achieve regret \(O(\log T)\) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of Prediction from Expert Advice by Kivinen and Warmuth [KW99], and Universal Portfolios by Cover [Cov91]. We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement.
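To illustrate why curvature helps, the same gradient-descent template with the more aggressive step size 1/(H·t), where H is a strong-convexity parameter, is the standard way to obtain O(log T) regret; the sketch below is an illustration of that schedule, not a reproduction of the paper's algorithms, and the box feasible set is an assumed choice.

```python
import numpy as np

def ogd_strongly_convex(gradients, H, dim,
                        project=lambda x: np.clip(x, -1.0, 1.0)):
    """Online gradient descent with step size 1/(H*t), the schedule that
    yields O(log T) regret when every cost function is H-strongly convex.

    The default projection clips onto the box [-1, 1]^dim, an
    illustrative choice of feasible set.
    """
    x = np.zeros(dim)
    plays = []
    for t, grad_fn in enumerate(gradients, start=1):
        plays.append(x.copy())
        x = project(x - (1.0 / (H * t)) * grad_fn(x))
    return plays
```

For example, losses of the form f_t(x) = (x - c_t)^2 are 2-strongly convex, and the cumulative regret against the best fixed point in hindsight stays within the classical G^2/(2H)·(1 + ln T) bound, where G bounds the gradient norms.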

The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection to the follow-the-leader method, and builds on the recent work of Agarwal and Hazan [AH05]. We also analyze other algorithms, which tie together several different previous approaches, including follow-the-leader, exponential weighting, Cover’s algorithm, and gradient descent.
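A hedged sketch of the flavor of such a Newton-style update: maintain A_t = eps·I + Σ_s g_s g_sᵀ and step along A_t⁻¹ g_t, keeping the inverse current with the Sherman-Morrison rank-one identity. The parameters gamma and eps below are illustrative, and the generalized projection back onto the feasible set is omitted for brevity (we assume the iterates stay feasible), so this is not the paper's exact algorithm.

```python
import numpy as np

def online_newton_step(gradients, dim, gamma=0.5, eps=1.0):
    """Sketch of a Newton-style online update.

    Maintains A_t = eps*I + sum_s g_s g_s^T and steps along A_t^{-1} g_t.
    The inverse is kept up to date in O(dim^2) per round via the
    Sherman-Morrison rank-one formula, avoiding a fresh matrix inversion.
    """
    x = np.zeros(dim)
    A_inv = np.eye(dim) / eps          # inverse of the initial matrix eps*I
    plays = []
    for grad_fn in gradients:
        plays.append(x.copy())
        g = grad_fn(x)
        # Rank-one update of A^{-1} after A <- A + g g^T (Sherman-Morrison).
        Ag = A_inv @ g
        A_inv = A_inv - np.outer(Ag, Ag) / (1.0 + g @ Ag)
        x = x - (1.0 / gamma) * (A_inv @ g)
    return plays
```

Because A_t accumulates outer products of the observed gradients, the effective step size shrinks adaptively in the directions where much gradient mass has been seen, which is the mechanism behind the logarithmic regret guarantees.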


Keywords: Gradient Descent · Projection Step · Convex Cost Function · Convex Loss Function · Random Walk Analysis

These keywords were added by machine and not by the authors; the process is experimental and the keywords may be updated as the learning algorithm improves.




  1. [AH05] Agarwal, A., Hazan, E.: Efficient algorithms for online game playing and universal portfolio management. ECCC, TR06-033 (2005)
  2. [BK97] Blum, A., Kalai, A.: Universal portfolios with and without transaction costs. In: COLT 1997: Proceedings of the Tenth Annual Conference on Computational Learning Theory, pp. 309–313. ACM Press, New York (1997)
  3. [Bro05] Brookes, M.: The Matrix Reference Manual (2005), available online
  4. [CBL06] Cesa-Bianchi, N., Lugosi, G.: Prediction, Learning, and Games. Cambridge University Press, Cambridge (2006)
  5. [Cov91] Cover, T.: Universal portfolios. Math. Finance 1, 1–19 (1991)
  6. [FKM05] Flaxman, A., Kalai, A.T., McMahan, H.B.: Online convex optimization in the bandit setting: gradient descent without a gradient. In: Proceedings of the 16th SODA, pp. 385–394 (2005)
  7. [Han57] Hannan, J.: Approximation to Bayes risk in repeated play. In: Dresher, M., Tucker, A.W., Wolfe, P. (eds.) Contributions to the Theory of Games, vol. III, pp. 97–139 (1957)
  8. [Kak05] Kakade, S.: Personal communication (2005)
  9. [KV03] Kalai, A., Vempala, S.: Efficient algorithms for universal portfolios. J. Mach. Learn. Res. 3, 423–440 (2003)
  10. [KV05] Kalai, A., Vempala, S.: Efficient algorithms for online decision problems. Journal of Computer and System Sciences 71(3), 291–307 (2005)
  11. [KW99] Kivinen, J., Warmuth, M.K.: Averaging expert predictions. In: Fischer, P., Simon, H.U. (eds.) EuroCOLT 1999. LNCS, vol. 1572, pp. 153–167. Springer, Heidelberg (1999)
  12. [LV03a] Lovász, L., Vempala, S.: The geometry of logconcave functions and an O*(n^3) sampling algorithm. Technical Report MSR-TR-2003-04, Microsoft Research (2003)
  13. [LV03b] Lovász, L., Vempala, S.: Simulated annealing in convex bodies and an O*(n^4) volume algorithm. In: Proceedings of the 44th Symposium on Foundations of Computer Science (FOCS), pp. 650–659 (2003)
  14. [Rie91] Riedel, K.: A Sherman-Morrison-Woodbury identity for rank augmenting matrices with application to centering. SIAM J. Matrix Anal. Appl. 12(1), 80–95 (1991)
  15. [Spa03] Spall, J.: Introduction to Stochastic Search and Optimization. John Wiley & Sons, New York (2003)
  16. [Vai96] Vaidya, P.M.: A new algorithm for minimizing convex functions over convex sets. Math. Program. 73(3), 291–341 (1996)
  17. [Zin03] Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML), pp. 928–936 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Elad Hazan (1)
  • Adam Kalai (2)
  • Satyen Kale (1)
  • Amit Agarwal (1)
  1. Princeton University
  2. TTI-Chicago
