Abstract
Multi-agent algorithms aim to find the best response in strategic interactions. While many state-of-the-art algorithms assume repeated interaction with a fixed set of opponents (or even self-play), a learner in the real world is more likely to encounter the same strategic situation with changing counterparties. This article presents a formal model of such sequential interactions, and a corresponding algorithm that combines two established frameworks: Pepper and Bayesian policy reuse. In each interaction, the algorithm faces a repeated stochastic game with an unknown (small) number of repetitions against a random opponent from a population, without observing the opponent’s identity. Our algorithm consists of two main steps: first, it draws inspiration from multi-agent algorithms to obtain acting policies in stochastic games; second, it maintains a belief over the possible opponents that is updated as the interaction unfolds. This allows the agent to quickly select the appropriate policy for the opponent at hand. Our results show fast detection of the opponent from its behavior, yielding higher average rewards than the state-of-the-art baseline Pepper in repeated stochastic games.
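To make the belief step concrete, the following is a minimal sketch of a Bayesian-policy-reuse-style belief update and policy selection, in the spirit of Rosman et al. (2016). The opponent types, policies, observation model, and utilities below are toy assumptions for illustration, not values or code from the paper.

```python
import numpy as np

# Toy observation model P(signal | opponent type, policy), assumed to be
# estimated offline from play against each known opponent. Two signals {0, 1}.
obs_model = {
    "opp_A": {"pi_0": [0.7, 0.3], "pi_1": [0.2, 0.8]},
    "opp_B": {"pi_0": [0.4, 0.6], "pi_1": [0.9, 0.1]},
}
# Expected episode return of each policy against each type (also offline).
utility = {"opp_A": {"pi_0": 1.0, "pi_1": 0.2},
           "opp_B": {"pi_0": 0.1, "pi_1": 0.9}}

types = list(obs_model)
belief = np.full(len(types), 1.0 / len(types))  # uniform prior over types

def update_belief(belief, policy, signal):
    """One Bayes step: belief(tau) is proportional to
    P(signal | tau, policy) * belief(tau)."""
    likelihood = np.array([obs_model[t][policy][signal] for t in types])
    posterior = likelihood * belief
    return posterior / posterior.sum()

def select_policy(belief):
    """Greedy selection: maximize expected utility under the belief."""
    policies = utility[types[0]].keys()
    return max(policies,
               key=lambda pi: sum(b * utility[t][pi]
                                  for b, t in zip(belief, types)))

policy = select_policy(belief)
for signal in (1, 1, 0):        # signals observed in early episodes
    belief = update_belief(belief, policy, signal)
    policy = select_policy(belief)
    print(policy, belief.round(3))
```

After only a few signals the belief concentrates on the type whose model best explains the observed play, and the selected policy switches accordingly; this is the mechanism behind the fast opponent detection reported in the abstract.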
Notes
- 1. We present experiments with a single opponent; however, our approach could be generalized to multiple opponents by treating the Cartesian product of all opponents as a single compound opponent (see the first sketch after these notes).
- 2. Recall that \(R(s, a)\) is initialized to \(R_{\max}\), so it is likely to decrease in early episodes but will eventually become pseudo-stationary (see the second sketch after these notes).
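As a hypothetical illustration of Note 1, the compound-opponent construction is a plain Cartesian product of the individual type sets; the type names below are made up for the example.

```python
from itertools import product

# Fold several concurrent opponents into one compound opponent whose type
# set is the Cartesian product of the individual type sets.
types_opp1 = ["bully", "fictitious_play"]
types_opp2 = ["tit_for_tat", "random"]

joint_types = list(product(types_opp1, types_opp2))
print(joint_types)
# [('bully', 'tit_for_tat'), ('bully', 'random'),
#  ('fictitious_play', 'tit_for_tat'), ('fictitious_play', 'random')]
# The belief is then maintained over these joint types exactly as in the
# single-opponent case; the price is a type space that grows exponentially
# with the number of opponents.
```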
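And a minimal sketch of the optimistic initialization Note 2 refers to, in the spirit of R-MAX (Brafman and Tennenholtz, 2003); the threshold M and the table sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

R_MAX = 1.0
N_STATES, N_ACTIONS = 4, 2
M = 5  # samples required before a state-action pair counts as "known"

R = np.full((N_STATES, N_ACTIONS), R_MAX)   # optimistic reward estimates
sums = np.zeros((N_STATES, N_ACTIONS))
counts = np.zeros((N_STATES, N_ACTIONS))

def observe(s, a, r):
    """Record one reward sample; once (s, a) is known, the empirical mean
    replaces R_MAX. Estimates therefore tend to decrease in early episodes
    and then settle, i.e. become pseudo-stationary."""
    counts[s, a] += 1
    sums[s, a] += r
    if counts[s, a] >= M:
        R[s, a] = sums[s, a] / counts[s, a]
```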
References
Albrecht, S.V., Crandall, J.W., Ramamoorthy, S.: Belief and truth in hypothesised behaviours. Artif. Intell. 235, 63–94 (2016)
Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002)
Banerjee, B., Stone, P.: General game learning using knowledge transfer. In: International Joint Conference on Artificial Intelligence, pp. 672–677 (2007)
Barrett, S., Stone, P.: Cooperating with unknown teammates in complex domains: a robot soccer case study of ad hoc teamwork. In: Proceedings of the 29th AAAI Conference on Artificial Intelligence, pp. 2010–2016. Austin, Texas, USA (2015)
Bednar, J., Chen, Y., Liu, T.X., Page, S.: Behavioral spillovers and cognitive load in multiple games: an experimental study. Games Econ. Behav. 74(1), 12–31 (2012)
Bellman, R.: A Markovian decision process. J. Math. Mech. 6(5), 679–684 (1957)
Bloembergen, D., Tuyls, K., Hennes, D., Kaisers, M.: Evolutionary dynamics of multi-agent learning: a survey. J. Artif. Intell. Res. 53, 659–697 (2015)
Boutsioukis, G., Partalas, I., Vlahavas, I.: Transfer learning in multi-agent reinforcement learning domains. In: Sanner, S., Hutter, M. (eds.) EWRL 2011. LNCS (LNAI), vol. 7188, pp. 249–260. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29946-9_25
Bowling, M., Veloso, M.: Multiagent learning using a variable learning rate. Artif. Intell. 136(2), 215–250 (2002)
Brafman, R.I., Tennenholtz, M.: R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res. 3, 213–231 (2003)
Brunskill, E., Li, L.: PAC-inspired option discovery in lifelong reinforcement learning. In: Proceedings of the 31st International Conference on Machine Learning, pp. 316–324 (2014)
Busoniu, L., Babuska, R., De Schutter, B.: A comprehensive survey of multiagent reinforcement learning. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 38(2), 156–172 (2008)
Chakraborty, D., Stone, P.: Multiagent learning in the presence of memory-bounded agents. Auton. Agents Multi-Agent Syst. 28(2), 182–213 (2013)
Conitzer, V., Sandholm, T.: AWESOME: a general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Mach. Learn. 67(1–2), 23–43 (2006)
Crandall, J.W.: Just add pepper: extending learning algorithms for repeated matrix games to repeated Markov games. In: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pp. 399–406. Valencia, Spain (2012)
Crandall, J.W.: Towards minimizing disappointment in repeated games. J. Artif. Intell. Res. 49(1), 111–142 (2014)
Crandall, J.W.: Robust learning for repeated stochastic games via meta-gaming. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pp. 3416–3422. Buenos Aires, Argentina (2015)
Da Silva, B.C., Basso, E.W., Bazzan, A.L., Engel, P.M.: Dealing with non-stationary environments using context detection. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 217–224. Pittsburgh, Pennsylvania (2006)
De Hauwere, Y.M., Vrancx, P., Nowe, A.: Learning multi-agent state space representations. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pp. 715–722. Toronto, Canada (2010)
Elidrisi, M., Johnson, N., Gini, M., Crandall, J.W.: Fast adaptive learning in repeated stochastic games by game abstraction. In: Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, pp. 1141–1148. Paris, France (2014)
Fernández, F., Veloso, M.: Probabilistic policy reuse in a reinforcement learning agent. In: Proceedings of the 5th International Conference on Autonomous Agents and Multiagent Systems, pp. 720–727. ACM, Hakodate, Hokkaido, Japan (2006)
Fudenberg, D., Tirole, J.: Game Theory. The MIT Press, Cambridge (1991)
Hernandez-Leal, P., Munoz de Cote, E., Sucar, L.E.: A framework for learning and planning against switching strategies in repeated games. Connect. Sci. 26(2), 103–122 (2014)
Hernandez-Leal, P., Kaisers, M.: Learning against sequential opponents in repeated stochastic games. In: The 3rd Multi-disciplinary Conference on Reinforcement Learning and Decision Making, Ann Arbor (2017)
Hernandez-Leal, P., Taylor, M.E., Rosman, B., Sucar, L.E., Munoz de Cote, E.: Identifying and tracking switching, non-stationary opponents: a Bayesian approach. In: Multiagent Interaction without Prior Coordination Workshop at AAAI, Phoenix, AZ, USA (2016)
Hernandez-Leal, P., Zhan, Y., Taylor, M.E., Sucar, L.E., Munoz de Cote, E.: Efficiently detecting switches against non-stationary opponents. Auton. Agents Multi-Agent Syst. 31(4), 767–789 (2017)
Langford, J., Zhang, T.: The epoch-greedy algorithm for multi-armed bandits with side information. In: Advances in Neural Information Processing Systems, pp. 817–824 (2008)
Lazaric, A., Ghavamzadeh, M.: Bayesian multi-task reinforcement learning. In: Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel (2010)
Lazaric, A., Restelli, M., Bonarini, A.: Transfer of samples in batch reinforcement learning. In: International Conference on Machine Learning, pp. 544–551. ACM, Helsinki, Finland (2008)
Rosman, B., Hawasly, M., Ramamoorthy, S.: Bayesian policy reuse. Mach. Learn. 104(1), 99–127 (2016)
Taylor, M.E., Stone, P., Liu, Y.: Transfer learning via inter-task mappings for temporal difference learning. J. Mach. Learn. Res. 8, 2125–2167 (2007)
Taylor, M.E., Stone, P.: Transfer learning for reinforcement learning domains: a survey. J. Mach. Learn. Res. 10, 1633–1685 (2009)
Watkins, C.J.C.H.: Learning from delayed rewards. Ph.D. thesis, King’s College, Cambridge, UK (1989)
Acknowledgments
This research has received funding through the ERA-Net Smart Grids Plus project Grid-Friends, with support from the European Union’s Horizon 2020 research and innovation programme.
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Hernandez-Leal, P., Kaisers, M. (2017). Towards a Fast Detection of Opponents in Repeated Stochastic Games. In: Sukthankar, G., Rodriguez-Aguilar, J. (eds) Autonomous Agents and Multiagent Systems. AAMAS 2017. Lecture Notes in Computer Science, vol 10642. Springer, Cham. https://doi.org/10.1007/978-3-319-71682-4_15
DOI: https://doi.org/10.1007/978-3-319-71682-4_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-71681-7
Online ISBN: 978-3-319-71682-4
eBook Packages: Computer Science (R0)