Extracting certainty from uncertainty: regret bounded by variation in costs
Prediction from expert advice is a fundamental problem in machine learning. A major pillar of the field is the existence of learning algorithms whose average loss approaches that of the best expert in hindsight (in other words, whose average regret approaches zero). Traditionally, the regret of online algorithms has been bounded in terms of the number of prediction rounds.
Cesa-Bianchi, Mansour and Stoltz (Mach. Learn. 66(2–3):321–352, 2007) posed the question of whether it is possible to bound the regret of an online algorithm by the variation of the observed costs. In this paper we resolve this question in the affirmative, and prove such bounds in the fully adversarial setting, in two important online learning scenarios: prediction from expert advice, and online linear optimization.
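To make the expert-advice setting concrete, the sketch below runs the classical multiplicative-weights (Hedge) learner, the standard baseline whose regret is bounded in the number of rounds; it is not the variation-based algorithm of this paper, and the cost sequence, learning rate, and function names are our own illustration. It shows the quantity being bounded: the learner's cumulative expected cost minus that of the best expert in hindsight.

```python
import math
import random

def hedge(costs, eta=0.5):
    """Multiplicative-weights (Hedge) learner for prediction with
    expert advice.  costs[t][i] in [0, 1] is the cost of expert i
    at round t.  Returns (learner's expected loss, best expert's loss)."""
    n = len(costs[0])
    weights = [1.0] * n
    total_loss = 0.0
    for round_costs in costs:
        w_sum = sum(weights)
        probs = [w / w_sum for w in weights]
        # Learner's expected cost this round under its current distribution.
        total_loss += sum(p * c for p, c in zip(probs, round_costs))
        # Exponential update: downweight experts in proportion to their cost.
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, round_costs)]
    best_expert_loss = min(sum(c[i] for c in costs) for i in range(n))
    return total_loss, best_expert_loss

# Two experts; expert 0 is usually cheaper, so the learner should track it.
random.seed(0)
T = 1000
costs = [[0.1 if random.random() < 0.8 else 0.9,
          random.random()] for _ in range(T)]
learner, best = hedge(costs)
regret = learner - best
print(round(regret / T, 4))  # average regret per round; shrinks as T grows
```

A variation-based bound, as studied in this paper, replaces the dependence on the number of rounds T with a dependence on how much the observed cost vectors actually fluctuate, so nearly constant costs yield nearly constant regret.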
Keywords: Individual sequences · Prediction with expert advice · Online learning · Regret minimization
- Allenberg-Neeman, C., & Neeman, B. (2004). Full information game with gains and losses. In 15th international conference on algorithmic learning theory.
- Cesa-Bianchi, N., Mansour, Y., & Stoltz, G. (2007). Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2–3), 321–352.
- Hannan, J. (1957). Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, & P. Wolfe (Eds.), Contributions to the theory of games (Vol. III, pp. 97–139).
- Hazan, E., & Kale, S. (2009a). On stochastic and worst-case models for investing. In Advances in neural information processing systems (NIPS) (Vol. 22).
- Hazan, E., & Kale, S. (2009b). Better algorithms for benign bandits. In ACM-SIAM symposium on discrete algorithms (SODA09).
- Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In ICML (pp. 928–936).