Lipschitz Bandits without the Lipschitz Constant
We consider the setting of stochastic bandit problems with a continuum of arms indexed by [0,1]^d. We first point out that the strategies considered so far in the literature only provide theoretical guarantees of the form: given some tuning parameters, the regret is small with respect to a class of environments that depends on these parameters. This is, however, not the right perspective, as it is the strategy that should adapt to the specific bandit environment at hand, and not the other way round. Put differently, this raises an issue of adaptation. We solve it for the special case of environments whose mean-payoff functions are globally Lipschitz. More precisely, we show that the minimax optimal order of magnitude L^{d/(d+2)} T^{(d+1)/(d+2)} of the regret bound over T time instances against an environment whose mean-payoff function f is Lipschitz with constant L can be achieved without knowing L or T in advance. This is in contrast to all previously known strategies, which require to some extent the knowledge of L to achieve this performance guarantee.
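The regret notion and the bound stated above can be written out explicitly. The following is a standard formulation (it is a sketch consistent with the abstract, not a quotation from the paper): the cumulative regret compares the payoff of the best fixed arm with the expected payoff accumulated by the strategy.

```latex
% Cumulative regret of a strategy choosing arms X_1, \dots, X_T in [0,1]^d
% against an environment with mean-payoff function f, assumed Lipschitz with
% constant L. Standard formulation; the exact definition in the paper may differ.
R_T \;=\; T \, \sup_{x \in [0,1]^d} f(x) \;-\; \mathbb{E}\!\left[\, \sum_{t=1}^{T} f(X_t) \right],
\qquad
R_T \;=\; O\!\left( L^{d/(d+2)} \, T^{(d+1)/(d+2)} \right).
```

The contribution highlighted in the abstract is that a strategy can attain this order of magnitude while being given neither the Lipschitz constant L nor the horizon T in advance.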
- [ABL11] Audibert, J.-Y., Bubeck, S., Lugosi, G.: Minimax policies for combinatorial prediction games. In: Proceedings of the 24th Annual Conference on Learning Theory. Omnipress (2011)
- [BM10] Bubeck, S., Munos, R.: Open-loop optimistic planning. In: Proceedings of the 23rd Annual Conference on Learning Theory. Omnipress (2010)
- [BSY11] Bubeck, S., Stoltz, G., Yu, J.Y.: Lipschitz bandits without the Lipschitz constant (2011), http://arxiv.org/pdf/1105.5041
- [DHK08] Dani, V., Hayes, T.P., Kakade, S.M.: Stochastic linear optimization under bandit feedback. In: Proceedings of the 21st Annual Conference on Learning Theory, pp. 355–366. Omnipress (2008)
- [Hor06] Horn, M.: Optimal algorithms for global optimization in case of unknown Lipschitz constant. Journal of Complexity 22(1) (2006)
- [Kle04] Kleinberg, R.: Nearly tight bounds for the continuum-armed bandit problem. In: Advances in Neural Information Processing Systems, pp. 697–704 (2004)
- [KSU08] Kleinberg, R., Slivkins, A., Upfal, E.: Multi-armed bandits in metric spaces. In: Proceedings of the 40th ACM Symposium on Theory of Computing (2008)
- [WAM09] Wang, Y., Audibert, J.-Y., Munos, R.: Algorithms for infinitely many-armed bandits. In: Advances in Neural Information Processing Systems, pp. 1729–1736 (2009)
- [YM11] Yu, J.Y., Mannor, S.: Unimodal bandits. In: Proceedings of the 28th International Conference on Machine Learning (2011)