Logarithmic regret algorithms for online convex optimization

Abstract

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space from a fixed feasible set. After each point is chosen, a (possibly unrelated) convex cost function is revealed, and the decision-maker incurs the corresponding cost. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover’s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret \(O(\sqrt{T})\), for an arbitrary sequence of \(T\) convex cost functions (of bounded gradients), with respect to the best single decision in hindsight.
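
To fix ideas, here is a minimal sketch of projected online gradient descent with the \(\eta_t \propto 1/\sqrt{t}\) step size of Zinkevich's analysis. The ball-shaped feasible set, the bounds G and R, and the quadratic example costs are illustrative assumptions, not details from the paper.

```python
# Sketch of projected online gradient descent, assuming a Euclidean-ball
# feasible set of radius R and gradients bounded in norm by G.
import numpy as np

def project_ball(x, R=1.0):
    """Euclidean projection onto the ball of radius R."""
    norm = np.linalg.norm(x)
    return x if norm <= R else (R / norm) * x

def online_gradient_descent(grads, x0, G=1.0, R=1.0):
    """Play x_t, observe the gradient of f_t at x_t, take a projected step.

    grads: callables, grads[t](x) = gradient of the (t+1)-th cost at x.
    With eta_t = R / (G * sqrt(t)) the regret is O(sqrt(T)).
    """
    x, plays = np.asarray(x0, dtype=float), []
    for t, grad in enumerate(grads, start=1):
        plays.append(x.copy())
        eta = R / (G * np.sqrt(t))     # decreasing step size ~ 1/sqrt(t)
        x = project_ball(x - eta * grad(x), R)
    return plays

# Illustrative usage: quadratic costs f_t(x) = ||x - z_t||^2, gradient 2(x - z_t).
rng = np.random.default_rng(0)
targets = [rng.uniform(-1, 1, size=2) for _ in range(100)]
plays = online_gradient_descent(
    [lambda x, z=z: 2 * (x - z) for z in targets], x0=np.zeros(2), G=5.0)
```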

In this paper, we give algorithms that achieve regret \(O(\log T)\) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1–19, 1991). We propose several algorithms achieving logarithmic regret, which, besides being more general, are also much more efficient to implement.
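
One route to the logarithmic bound analyzed in the paper is gradient descent itself with a much more aggressive step size: when every cost is H-strongly convex, \(\eta_t = 1/(Ht)\) suffices. A minimal sketch under the same illustrative ball-feasible-set assumption as above:

```python
# Sketch of online gradient descent with step size 1/(H*t); for H-strongly
# convex costs (Hessian >= H*I) this schedule yields O(log T) regret.
import numpy as np

def project_ball(x, R=1.0):
    """Euclidean projection onto the ball of radius R (as above)."""
    norm = np.linalg.norm(x)
    return x if norm <= R else (R / norm) * x

def ogd_strongly_convex(grads, x0, H, R=1.0):
    """Same protocol as before, but eta_t = 1/(H*t) instead of ~1/sqrt(t)."""
    x, plays = np.asarray(x0, dtype=float), []
    for t, grad in enumerate(grads, start=1):
        plays.append(x.copy())
        x = project_ball(x - grad(x) / (H * t), R)
    return plays
```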

The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover’s algorithm and gradient descent.
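
The following is a minimal, unconstrained sketch of that Newton-style update: the matrix \(A_t\) accumulates rank-one gradient outer products and preconditions each step. The full algorithm in the paper also projects back onto the feasible set in the norm induced by \(A_t\); that projection is omitted here, and the parameters gamma and eps are assumption-level placeholders.

```python
# Unconstrained sketch of a Newton-style online update; not the paper's
# complete algorithm (the generalized projection step is omitted).
import numpy as np

def online_newton_step(grads, x0, gamma=0.5, eps=1.0):
    """Maintain A_t = eps*I + sum of gradient outer products and take a
    quasi-Newton step preconditioned by A_t^{-1}."""
    x = np.asarray(x0, dtype=float)
    A = eps * np.eye(x.size)          # regularized second-order information
    plays = []
    for grad in grads:
        plays.append(x.copy())
        g = grad(x)
        A += np.outer(g, g)           # rank-one update of A_t
        x = x - np.linalg.solve(A, g) / gamma
    return plays
```

In practice the inverse of \(A_t\) can be maintained directly with a Sherman–Morrison rank-one update (cf. reference 18 below), bringing the per-round cost down to \(O(n^2)\).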

References

  1. Blum, A., & Kalai, A. (1997). Universal portfolios with and without transaction costs. In COLT ’97: proceedings of the tenth annual conference on computational learning theory (pp. 309–313). New York: ACM.

  2. Brookes, M. (2005). The matrix reference manual. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html.

  3. Boyd, S., & Vandenberghe, L. (2004). Convex optimization. New York: Cambridge University Press.

  4. Cesa-Bianchi, N., & Lugosi, G. (2006). Prediction, learning, and games. Cambridge: Cambridge University Press.

  5. Cover, T. (1991). Universal portfolios. Mathematical Finance, 1, 1–19.

  6. Gaivoronski, A. A., & Stella, F. (2000). Stochastic nonstationary optimization for finding universal portfolios. Annals of Operations Research, 100, 165–188.

  7. Hannan, J. (1957). Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, & P. Wolfe (Eds.), Contributions to the theory of games (Vol. III, pp. 97–139). Princeton: Princeton University Press.

  8. Hazan, E. (2006). Efficient algorithms for online convex optimization and their applications. PhD thesis, Princeton University.

  9. Kakade, S. (2005). Personal communication.

  10. Kalai, A., & Vempala, S. (2003). Efficient algorithms for universal portfolios. Journal of Machine Learning Research, 3, 423–440.

  11. Kalai, A., & Vempala, S. (2005). Efficient algorithms for on-line optimization. Journal of Computer and System Sciences, 71(3), 291–307.

  12. Kivinen, J., & Warmuth, M. K. (1998). Relative loss bounds for multidimensional regression problems. In M. I. Jordan, M. J. Kearns, & S.A. Solla (Eds.), Advances in neural information processing systems (Vol. 10). Cambridge: MIT.

  13. Kivinen, J., & Warmuth, M. K. (1999). Averaging expert predictions. In Computational learning theory: 4th European conference (EuroCOLT ’99) (pp. 153–167). Berlin: Springer.

  14. Lovász, L., & Vempala, S. (2003a). The geometry of logconcave functions and an o *(n 3) sampling algorithm. Technical Report MSR-TR-2003-04, Microsoft Research.

  15. Lovász, L., & Vempala, S. (2003b). Simulated annealing in convex bodies and an 0*(n 4) volume algorithm. In Proceedings of the 44th symposium on foundations of computer science (FOCS) (pp. 650–659).

  16. Lobo, M. S., Vandenberghe, L., Boyd, S., & Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284(1–3), 193–228.

  17. Merhav, N., & Feder, M. (1992). Universal sequential learning and decision from individual data sequences. In COLT ’92: Proceedings of the fifth annual workshop on computational learning theory (pp. 413–427). New York: ACM.

  18. Riedel, K. (1992). A Sherman–Morrison–Woodbury identity for rank augmenting matrices with application to centering. SIAM Journal on Matrix Analysis and Applications, 13(2), 659–662.

  19. Vaidya, P. M. (1996). A new algorithm for minimizing convex functions over convex sets. Mathematical Programming, 73(3), 291–341.

  20. Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the twentieth international conference on machine learning (ICML) (pp. 928–936).

Author information

Corresponding author

Correspondence to Elad Hazan.

Additional information

Editors: Hans Ulrich Simon, Gabor Lugosi, Avrim Blum.

E. Hazan and S. Kale were supported by Sanjeev Arora’s NSF grants MSPA-MCS 0528414, CCF 0514993, and ITR 0205594.

Cite this article

Hazan, E., Agarwal, A. & Kale, S. Logarithmic regret algorithms for online convex optimization. Mach Learn 69, 169–192 (2007). https://doi.org/10.1007/s10994-007-5016-8

Keywords

  • Online learning
  • Online optimization
  • Regret minimization
  • Portfolio management