Lipschitz Bandits without the Lipschitz Constant

  • Sébastien Bubeck
  • Gilles Stoltz
  • Jia Yuan Yu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6925)

Abstract

We consider the setting of stochastic bandit problems with a continuum of arms indexed by \([0,1]^d\). We first point out that the strategies considered so far in the literature only provide theoretical guarantees of the form: given some tuning parameters, the regret is small with respect to a class of environments that depends on these parameters. This is, however, not the right perspective: it is the strategy that should adapt to the specific bandit environment at hand, not the other way round. Put differently, an adaptation issue is raised. We solve it for the special case of environments whose mean-payoff functions are globally Lipschitz. More precisely, we show that the minimax-optimal order of magnitude \(L^{d/(d+2)} T^{(d+1)/(d+2)}\) of the regret over \(T\) time instances against an environment whose mean-payoff function \(f\) is Lipschitz with constant \(L\) can be achieved without knowing \(L\) or \(T\) in advance. This is in contrast to all previously known strategies, which require some knowledge of \(L\) to achieve this performance guarantee.
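
To make the tuning issue concrete, here is a minimal sketch of the standard non-adaptive baseline alluded to above, not the adaptive strategy proposed in the paper: a UCB1 index policy run over a uniform discretization of \([0,1]^d\), whose resolution K is tuned using both L and T. The function name discretized_ucb, the mean_payoff argument, and the Bernoulli reward model are illustrative assumptions, not taken from the paper.

```python
import math
import random

def discretized_ucb(mean_payoff, L, T, d=1):
    """UCB1 over a uniform grid of K^d cells of [0,1]^d; returns total reward.

    mean_payoff: assumed L-Lipschitz with values in [0,1].
    Balancing the discretization error, of order L*T/K, against the
    multi-armed regret over K^d arms, of order sqrt(K^d * T) up to logs,
    gives K ~ (L^2 * T)^(1/(d+2)) and hence regret of order
    L^{d/(d+2)} * T^{(d+1)/(d+2)} -- but only if L and T are known.
    """
    K = max(1, math.ceil((L ** 2 * T) ** (1.0 / (d + 2))))

    # Arms are the centers of the K^d grid cells.
    arms = []
    def centers(prefix):
        if len(prefix) == d:
            arms.append(tuple(prefix))
            return
        for i in range(K):
            centers(prefix + [(i + 0.5) / K])
    centers([])

    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    total = 0.0
    for t in range(1, T + 1):
        # UCB1 index: empirical mean plus exploration bonus.
        def ucb_index(i):
            if counts[i] == 0:
                return float("inf")
            return sums[i] / counts[i] + math.sqrt(2.0 * math.log(t) / counts[i])
        i = max(range(len(arms)), key=ucb_index)
        # Illustrative Bernoulli reward with mean f(x) at the chosen cell center.
        reward = 1.0 if random.random() < mean_payoff(arms[i]) else 0.0
        counts[i] += 1
        sums[i] += reward
        total += reward
    return total

# Example usage (illustrative): f(x) = 1 - |x - 0.3| is Lipschitz with L = 1.
f = lambda x: 1.0 - abs(x[0] - 0.3)
total_reward = discretized_ucb(f, L=1.0, T=10_000, d=1)
```

The point of the paper is precisely that the grid resolution K above cannot be chosen without knowing L and T; the authors show how to attain the same order of regret when neither is given in advance.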

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sébastien Bubeck (1)
  • Gilles Stoltz (2, 3)
  • Jia Yuan Yu (2, 3)
  1. Centre de Recerca Matemàtica, Barcelona, Spain
  2. Ecole normale supérieure, CNRS, Paris, France
  3. HEC Paris, CNRS, Jouy-en-Josas, France
