Part of the book series: Wireless Networks (WN)

Abstract

Sequential learning provides a rigorous framework for addressing the trade-off between exploration and exploitation in the face of uncertainty, vividly captured by the multi-armed bandit problem. In this chapter, we provide an overview of the sequential learning framework and a taxonomy of the types of problems to which the framework applies.
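
To make the exploration-exploitation trade-off concrete, the following is a minimal simulation sketch (illustrative only, not taken from the chapter) of an epsilon-greedy learner on a stochastic Bernoulli multi-armed bandit. The arm means, horizon, and value of epsilon are arbitrary assumptions chosen for the example.

# Illustrative sketch (not from the chapter): epsilon-greedy learning on a
# stochastic Bernoulli multi-armed bandit. The learner must balance exploring
# poorly sampled arms against exploiting the arm with the best current estimate.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])   # unknown to the learner (assumed values)
n_arms = len(true_means)
horizon = 5000
epsilon = 0.1                            # exploration probability (assumed)

counts = np.zeros(n_arms)                # number of pulls per arm
estimates = np.zeros(n_arms)             # empirical mean reward per arm
total_reward = 0.0

for t in range(horizon):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if rng.random() < epsilon:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(estimates))
    reward = float(rng.random() < true_means[arm])             # Bernoulli reward
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    total_reward += reward

# Approximate regret: shortfall relative to always playing the best arm.
regret = horizon * true_means.max() - total_reward
print(f"estimated means: {np.round(estimates, 3)}, regret ~ {regret:.1f}")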

Author information

Corresponding author

Correspondence to Rong Zheng.

Copyright information

© 2016 Springer International Publishing AG

About this chapter

Cite this chapter

Zheng, R., Hua, C. (2016). Introduction. In: Sequential Learning and Decision-Making in Wireless Resource Management. Wireless Networks. Springer, Cham. https://doi.org/10.1007/978-3-319-50502-2_1

  • DOI: https://doi.org/10.1007/978-3-319-50502-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-50501-5

  • Online ISBN: 978-3-319-50502-2

  • eBook Packages: Computer Science, Computer Science (R0)