Towards a Fast Detection of Opponents in Repeated Stochastic Games

  • Conference paper

Autonomous Agents and Multiagent Systems (AAMAS 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10642)

Abstract

Multi-agent algorithms aim to find the best response in strategic interactions. While many state-of-the-art algorithms assume repeated interaction with a fixed set of opponents (or even self-play), a learner in the real world is more likely to encounter the same strategic situation with changing counterparties. This article presents a formal model of such sequential interactions and a corresponding algorithm that combines two established frameworks: Pepper and Bayesian policy reuse. In each interaction, the algorithm faces a repeated stochastic game with an unknown (small) number of repetitions against a random opponent drawn from a population, without observing the opponent’s identity. Our algorithm consists of two main steps: first, it draws inspiration from multiagent algorithms to obtain acting policies in the stochastic game; second, it computes a belief over the possible opponents that is updated as the interaction unfolds. This allows the agent to quickly select the appropriate policy against the opponent. Our results show fast detection of the opponent from its behavior, yielding higher average rewards than the state-of-the-art baseline Pepper in repeated stochastic games.
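
The second step above (maintaining a belief over candidate opponent models and reusing the policy that looks best under that belief) follows the Bayesian policy reuse idea. Below is a minimal sketch in Python with hypothetical names and a simplified observation model; it illustrates only the belief update and policy selection, and is not the paper's implementation.

    import numpy as np

    # Hypothetical names throughout: a sketch of a Bayesian belief update over
    # known opponent models and greedy policy selection under that belief.

    def update_belief(belief, likelihoods):
        """One Bayesian update of the belief over candidate opponent models.

        belief      -- prior probability assigned to each known opponent model
        likelihoods -- probability of the observed signal (e.g. the episode's
                       reward or the opponent's actions) under each model
        """
        posterior = belief * likelihoods
        total = posterior.sum()
        if total == 0.0:
            return belief           # signal impossible under every model: keep the prior
        return posterior / total

    def select_policy(belief, expected_utility):
        """Pick the stored policy with the highest expected utility under the belief.

        expected_utility[i, j] -- assumed performance of policy j against opponent model i
        """
        return int(np.argmax(belief @ expected_utility))

    # Example: three candidate opponents, three stored policies.
    belief = np.full(3, 1.0 / 3.0)
    performance = np.array([[1.0, 0.2, 0.1],
                            [0.3, 0.9, 0.2],
                            [0.1, 0.3, 0.8]])
    belief = update_belief(belief, likelihoods=np.array([0.7, 0.2, 0.1]))
    print(select_policy(belief, performance))   # index of the most promising policy

In this sketch the likelihoods would come from whatever signal the agent observes during the interaction (left abstract here), and the posterior belief serves as the prior for the next episode against the same, still unidentified, opponent.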


Notes

  1. We present experiments with a single opponent; however, our approach could be generalized to multiple opponents by treating the Cartesian product of all opponents as one composite opponent.

  2. Recall that \(R(s,a)\) is initialized to \(R_{max}\), so it is likely to decrease in early episodes but will eventually become pseudo-stationary.
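
The optimistic initialization in Note 2 follows the R-max idea (Brafman and Tennenholtz, reference 10 below): every state-action reward estimate starts at the upper bound \(R_{max}\) and is pulled toward the rewards actually observed. A minimal sketch in Python, with assumed names, purely for illustration:

    from collections import defaultdict

    class OptimisticRewardEstimate:
        """R(s, a) estimates initialised optimistically to r_max (R-max style)."""

        def __init__(self, r_max=1.0):         # r_max: assumed upper bound on one-step reward
            self.r_max = r_max
            self.totals = defaultdict(float)   # sum of observed rewards per (s, a)
            self.counts = defaultdict(int)     # number of observations per (s, a)

        def value(self, s, a):
            """Estimated R(s, a); stays at r_max until the pair has been visited."""
            n = self.counts[(s, a)]
            return self.r_max if n == 0 else self.totals[(s, a)] / n

        def update(self, s, a, reward):
            """Incorporate one observed reward for the state-action pair (s, a)."""
            self.totals[(s, a)] += reward
            self.counts[(s, a)] += 1

As the note says, early estimates can only move down from \(R_{max}\); once every pair has been visited often enough, the averages become pseudo-stationary.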

References

  1. Albrecht, S.V., Crandall, J.W., Ramamoorthy, S.: Belief and truth in hypothesised behaviours. Artif. Intell. 235, 63–94 (2016)

  2. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2/3), 235–256 (2002)

  3. Banerjee, B., Stone, P.: General game learning using knowledge transfer. In: International Joint Conference on Artificial Intelligence, pp. 672–677 (2007)

  4. Barrett, S., Stone, P.: Cooperating with unknown teammates in complex domains: a robot soccer case study of ad hoc teamwork. In: Proceedings of the 29th Conference on Artificial Intelligence, pp. 2010–2016. Austin, Texas, USA (2014)

  5. Bednar, J., Chen, Y., Liu, T.X., Page, S.: Behavioral spillovers and cognitive load in multiple games: an experimental study. Games Econ. Behav. 74(1), 12–31 (2012)

  6. Bellman, R.: A Markovian decision process. J. Math. Mech. 6(5), 679–684 (1957)

  7. Bloembergen, D., Tuyls, K., Hennes, D., Kaisers, M.: Evolutionary dynamics of multi-agent learning: a survey. J. Artif. Intell. Res. 53, 659–697 (2015)

  8. Boutsioukis, G., Partalas, I., Vlahavas, I.: Transfer learning in multi-agent reinforcement learning domains. In: Sanner, S., Hutter, M. (eds.) EWRL 2011. LNCS (LNAI), vol. 7188, pp. 249–260. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29946-9_25

  9. Bowling, M., Veloso, M.: Multiagent learning using a variable learning rate. Artif. Intell. 136(2), 215–250 (2002)

  10. Brafman, R.I., Tennenholtz, M.: R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res. 3, 213–231 (2003)

  11. Brunskill, E., Li, L.: PAC-inspired option discovery in lifelong reinforcement learning. In: Proceedings of the 22nd Conference on Artificial Intelligence, pp. 1599–1610 (2014)

  12. Busoniu, L., Babuska, R., De Schutter, B.: A comprehensive survey of multiagent reinforcement learning. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 38(2), 156–172 (2008)

  13. Chakraborty, D., Stone, P.: Multiagent learning in the presence of memory-bounded agents. Auton. Agents Multi-Agent Syst. 28(2), 182–213 (2013)

  14. Conitzer, V., Sandholm, T.: AWESOME: a general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Mach. Learn. 67(1–2), 23–43 (2006)

  15. Crandall, J.W.: Just add Pepper: extending learning algorithms for repeated matrix games to repeated Markov games. In: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pp. 399–406. Valencia, Spain (2012)

  16. Crandall, J.W.: Towards minimizing disappointment in repeated games. J. Artif. Intell. Res. 49(1), 111–142 (2014)

  17. Crandall, J.W.: Robust learning for repeated stochastic games via meta-gaming. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pp. 3416–3422. Buenos Aires, Argentina (2015)

  18. Da Silva, B.C., Basso, E.W., Bazzan, A.L., Engel, P.M.: Dealing with non-stationary environments using context detection. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 217–224. Pittsburgh, Pennsylvania (2006)

  19. De Hauwere, Y.M., Vrancx, P., Nowe, A.: Learning multi-agent state space representations. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pp. 715–722. Toronto, Canada (2010)

  20. Elidrisi, M., Johnson, N., Gini, M., Crandall, J.W.: Fast adaptive learning in repeated stochastic games by game abstraction. In: Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, pp. 1141–1148. Paris, France (2014)

  21. Fernández, F., Veloso, M.: Probabilistic policy reuse in a reinforcement learning agent. In: Proceedings of the 5th International Conference on Autonomous Agents and Multiagent Systems, pp. 720–727. ACM, Hakodate, Hokkaido, Japan (2006)

  22. Fudenberg, D., Tirole, J.: Game Theory. The MIT Press, Cambridge (1991)

  23. Hernandez-Leal, P., Munoz de Cote, E., Sucar, L.E.: A framework for learning and planning against switching strategies in repeated games. Connect. Sci. 26(2), 103–122 (2014)

  24. Hernandez-Leal, P., Kaisers, M.: Learning against sequential opponents in repeated stochastic games. In: The 3rd Multi-disciplinary Conference on Reinforcement Learning and Decision Making, Ann Arbor (2017)

  25. Hernandez-Leal, P., Taylor, M.E., Rosman, B., Sucar, L.E., Munoz de Cote, E.: Identifying and tracking switching, non-stationary opponents: a Bayesian approach. In: Multiagent Interaction without Prior Coordination Workshop at AAAI, Phoenix, AZ, USA (2016)

  26. Hernandez-Leal, P., Zhan, Y., Taylor, M.E., Sucar, L.E., Munoz de Cote, E.: Efficiently detecting switches against non-stationary opponents. Auton. Agents Multi-Agent Syst. 31(4), 767–789 (2017)

  27. Langford, J., Zhang, T.: The epoch-greedy algorithm for multi-armed bandits with side information. In: Advances in Neural Information Processing Systems, pp. 817–824 (2008)

  28. Lazaric, A., Ghavamzadeh, M.: Bayesian multi-task reinforcement learning. In: Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel (2010)

  29. Lazaric, A., Restelli, M., Bonarini, A.: Transfer of samples in batch reinforcement learning. In: International Conference on Machine Learning, pp. 544–551. ACM, Helsinki, Finland (2008)

  30. Rosman, B., Hawasly, M., Ramamoorthy, S.: Bayesian policy reuse. Mach. Learn. 104(1), 99–127 (2016)

  31. Taylor, M.E., Stone, P., Liu, Y.: Transfer learning via inter-task mappings for temporal difference learning. J. Mach. Learn. Res. 8, 2125–2167 (2007)

  32. Taylor, M.E., Stone, P.: Transfer learning for reinforcement learning domains: a survey. J. Mach. Learn. Res. 10, 1633–1685 (2009)

  33. Watkins, C.J.C.H.: Learning from delayed rewards. Ph.D. thesis, King’s College, Cambridge, UK (1989)

Acknowledgments

This research has received funding through the ERA-Net Smart Grids Plus project Grid-Friends, with support from the European Union’s Horizon 2020 research and innovation programme.

Author information

Corresponding author

Correspondence to Pablo Hernandez-Leal.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Hernandez-Leal, P., Kaisers, M. (2017). Towards a Fast Detection of Opponents in Repeated Stochastic Games. In: Sukthankar, G., Rodriguez-Aguilar, J. (eds) Autonomous Agents and Multiagent Systems. AAMAS 2017. Lecture Notes in Computer Science, vol 10642. Springer, Cham. https://doi.org/10.1007/978-3-319-71682-4_15

  • DOI: https://doi.org/10.1007/978-3-319-71682-4_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-71681-7

  • Online ISBN: 978-3-319-71682-4
