Autonomous Agents and Multi-Agent Systems

Volume 31, Issue 5, pp 971–1002

An exploration strategy for non-stationary opponents

  • Pablo Hernandez-Leal
  • Yusen Zhan
  • Matthew E. Taylor
  • L. Enrique Sucar
  • Enrique Munoz de Cote

Abstract

The success or failure of any learning algorithm is partially due to the exploration strategy it employs. However, most exploration strategies assume that the environment is stationary and non-strategic. In this work we shed light on how to design exploration strategies in non-stationary and adversarial environments. Our proposed adversarial drift exploration (DE) is able to efficiently explore the state space while keeping track of regions of the environment that have changed. This exploration is general enough to be applied in single-agent non-stationary environments as well as in multiagent settings where the opponent changes its strategy over time. We use a two-agent strategic interaction setting to test this new type of exploration, where the opponent switches between different behavioral patterns to emulate a non-deterministic, stochastic, and adversarial environment. The agent’s objective is to learn a model of the opponent’s strategy in order to act optimally. Our contribution is twofold. First, we present DE as a strategy for switch detection. Second, we propose a new algorithm called R-max# for learning and planning against non-stationary opponents. To handle such opponents, R-max# reasons and acts in terms of two objectives: (1) to maximize utilities in the short term while learning, and (2) to eventually explore for changes in the opponent’s behavior. We provide theoretical results showing that R-max# is guaranteed to detect the opponent’s switch and learn a new model with finite sample complexity. R-max# makes efficient use of exploration experiences, resulting in rapid adaptation and efficient drift exploration, to deal with the non-stationary nature of the opponent. We show experimentally that using DE outperforms state-of-the-art algorithms that were explicitly designed for modeling opponents (in terms of average rewards) in two complementary domains.
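To illustrate the core idea of drift exploration, the sketch below shows an R-max-style tabular learner that is periodically forced to treat previously "known" state-action pairs as unknown again, so that it keeps revisiting regions where the opponent may have switched strategies. This is only a minimal illustration under assumed parameters (the class name RmaxDriftAgent, the known-threshold m, the reset interval drift_interval, and the crude global count reset are all illustrative choices, not the paper's definitions); the actual R-max# algorithm described in the article detects switches and reuses exploration experience more selectively than this sketch.

```python
from collections import defaultdict

class RmaxDriftAgent:
    """R-max-style model-based learner with a naive drift-exploration reset (illustrative only)."""

    def __init__(self, states, actions, m=5, rmax=1.0, gamma=0.95, drift_interval=200):
        self.states, self.actions = list(states), list(actions)
        self.m = m                            # visits before a (s, a) pair counts as "known"
        self.rmax = rmax                      # optimistic reward assumed for unknown pairs
        self.gamma = gamma                    # discount factor
        self.drift_interval = drift_interval  # steps between forced re-exploration phases
        self.t = 0
        self._reset_model()

    def _reset_model(self):
        # Empirical model: visit counts, reward sums, and transition counts.
        self.counts = defaultdict(int)
        self.reward_sum = defaultdict(float)
        self.trans = defaultdict(lambda: defaultdict(int))

    def update(self, s, a, r, s_next):
        self.t += 1
        self.counts[(s, a)] += 1
        self.reward_sum[(s, a)] += r
        self.trans[(s, a)][s_next] += 1
        # Drift exploration (crude version): periodically mark everything as
        # unknown again so the agent re-explores and can notice an opponent switch.
        if self.t % self.drift_interval == 0:
            self._reset_model()

    def q_values(self, n_iter=60):
        # Optimistic value iteration: unknown pairs keep the R-max value.
        v_max = self.rmax / (1.0 - self.gamma)
        q = {(s, a): v_max for s in self.states for a in self.actions}
        for _ in range(n_iter):
            for s in self.states:
                for a in self.actions:
                    n = self.counts[(s, a)]
                    if n < self.m:
                        continue  # still "unknown": keep the optimistic value
                    r_hat = self.reward_sum[(s, a)] / n
                    exp_next = sum(
                        (cnt / n) * max(q[(s2, a2)] for a2 in self.actions)
                        for s2, cnt in self.trans[(s, a)].items()
                    )
                    q[(s, a)] = r_hat + self.gamma * exp_next
        return q

    def act(self, s):
        q = self.q_values()
        return max(self.actions, key=lambda a: q[(s, a)])
```

A full global reset is the bluntest possible form of drift exploration; resetting only those state-action pairs whose observed outcomes diverge from the learned model would correspond more closely to the selective switch detection and efficient reuse of exploration experience described in the abstract.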

Keywords

Learning · Exploration · Non-stationary environments · Switching strategies · Repeated games

Notes

Acknowledgments

The first author was supported by a scholarship grant 329007 from the National Council of Science and Technology of Mexico (CONACYT). This research has taken place in part at the Intelligent Robot Learning (IRL) Lab, Washington State University. IRL research is supported in part by grants AFRL FA8750-14-1-0069, AFRL FA8750-14-1-0070, NSF IIS-1149917, NSF IIS-1319412, USDA 2014-67021-22174, and a Google Research Award.


Copyright information

© The Author(s) 2016

Authors and Affiliations

  • Pablo Hernandez-Leal (1)
  • Yusen Zhan (3)
  • Matthew E. Taylor (3)
  • L. Enrique Sucar (2)
  • Enrique Munoz de Cote (2)

  1. Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands
  2. Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla, México
  3. Washington State University (WSU), Pullman, USA
