
Navigating in a space of game views

Abstract

Game-theoretic modeling entails selecting the particular elements of a complex strategic situation deemed most salient for strategic analysis. Recognizing that any game model is one of many possible views of the situation, we term this a game view, and propose that sophisticated game reasoning would naturally consider multiple views. We introduce a conceptual framework, game view navigation, for game-theoretic reasoning through a process of constructing and analyzing a series of game views. The approach is illustrated using a variety of existing methods, which can be cast in terms of navigation patterns within this framework. By formally defining these as well as recently introduced ideas as navigating in a space of game views, we recognize common themes and opportunities for generalization. Game view navigation thus provides a unifying perspective that sheds light on connections between disparate reasoning methods, and defines a design space for creation of new techniques. We further apply the framework by defining and exploring new techniques based on modulating player aggregation in equilibrium search.

Data availability

Not applicable.

Notes

  1. Even when a PSNE exists, IBR is not guaranteed to find it. An exception is the class of potential games, for which IBR is a complete algorithm [22] (see the IBR sketch following these notes).

  2. This claim assumes perfect BR computation. If BRs are guaranteed only to a degree of approximation, the guarantee translates to a correspondingly approximate equilibrium result.

  3. Results are sensitive to the choice of \(\sigma^{*k}\) in cases where there are multiple equilibria in \(\text{NE}(\Gamma^k)\). See discussion of support enumeration below.

  4. Fearnley et al. [27] study the question from a complexity-theoretic perspective, providing insights on the number of payoff queries required to guarantee identification of approximate solutions in a variety of game contexts.

  5. The definitions can be straightforwardly generalized to role symmetry. We further assume for simplicity that \(p-1\) divides \(n-1\) (see the reduction sketch following these notes).

  6. This plot is too cluttered to tease apart all the methods, but provides a helpful overview. We zoom in on particular methods in subsequent figures.
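
A minimal sketch of iterated best response (IBR, per note 1) on a two-player normal-form game, written with NumPy for concreteness. The function name and game representation are our own illustration, not code from the paper. A fixed point of the dynamic is a PSNE; in a potential game, every strict improvement raises the potential, so the dynamic cannot cycle and must terminate [22]:

    import numpy as np

    def iterated_best_response(payoffs, start, max_iters=100):
        # payoffs = (A, B): payoff matrices for the row and column player.
        # Returns a pure profile if IBR converges (a PSNE), else None.
        profile = list(start)
        for _ in range(max_iters):
            updated = False
            # Row player best-responds to the current column action.
            br_row = int(np.argmax(payoffs[0][:, profile[1]]))
            if br_row != profile[0]:
                profile[0], updated = br_row, True
            # Column player best-responds to the current row action.
            br_col = int(np.argmax(payoffs[1][profile[0], :]))
            if br_col != profile[1]:
                profile[1], updated = br_col, True
            if not updated:
                return tuple(profile)  # fixed point: a pure-strategy NE
        return None  # cycled (e.g., Matching Pennies) or hit the iteration cap

    # A 2x2 coordination game is a potential game, so IBR must converge:
    A = np.array([[2, 0], [0, 1]])
    print(iterated_best_response((A, A), start=(0, 1)))  # -> (1, 1)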
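
Note 5's divisibility condition is the standard requirement for deviation-preserving reduction (DPR) [18], a form of player aggregation in which each of the \(p-1\) aggregated opponents stands in for \((n-1)/(p-1)\) full-game players. The sketch below illustrates that payoff mapping under the stated assumption; full_payoff is a hypothetical oracle (e.g., a game simulator), and the code is our illustration, not the authors' implementation:

    from collections import Counter

    def dpr_payoff(full_payoff, n, p, strategy, reduced_opponents):
        # Payoff to one player in a p-player DPR view of a symmetric n-player game.
        # full_payoff(strategy, opp_counts) gives the full-game payoff to a player
        # using `strategy` when the other n-1 players are distributed over
        # strategies according to opp_counts (a mapping strategy -> count).
        # reduced_opponents lists the strategies of the p-1 other reduced players.
        assert (n - 1) % (p - 1) == 0, "DPR assumes p-1 divides n-1"
        scale = (n - 1) // (p - 1)  # full-game players per reduced opponent
        counts = Counter(reduced_opponents)
        opp_counts = {s: c * scale for s, c in counts.items()}
        return full_payoff(strategy, opp_counts)

For example, with n = 13 and p = 4, each reduced opponent stands in for (13 - 1)/(4 - 1) = 4 full-game players, so dpr_payoff(sim, 13, 4, "s0", ["s0", "s1", "s1"]) queries the hypothetical simulator as sim("s0", {"s0": 4, "s1": 8}).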

References

  1. Jiang, A. X., Leyton-Brown, K., & Bhat, N. A. R. (2011). Action-graph games. Games and Economic Behavior, 71, 141–173.

  2. Jiang, A.X., Chan, H., & Leyton-Brown, K. (2017). Resource-graph games: A compact representation for games with structured strategy spaces. In Thirty-first AAAI conference on artificial intelligence (pp. 572–578).

  3. Kearns, M. (2007). Graphical games. In N. Nisan, T. Roughgarden, E. Tardos, & V. V. Vazirani (Eds.), Algorithmic Game Theory (pp. 159–180). Cambridge: Cambridge University Press.

  4. Li, Z., Jia, F., Mate, A., Jabbari, S., Chakraborty, M., Tambe, M., & Vorobeychik, Y. (2022). Solving structured hierarchical games using differential backward induction. In Thirty-eighth conference on uncertainty in artificial intelligence (pp. 1107–1117).

  5. Sandholm, T. (2015). Solving imperfect-information games. Science, 347(6218), 122–123.

  6. Brown, N., & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science, 365(6456), 885–890.

  7. Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., & Bowling, M. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337), 508–513.

  8. Wellman, M. P. (2016). Putting the agent in agent-based modeling. Autonomous Agents and Multi-Agent Systems, 30, 1175–1189.

  9. Sokota, S., Ho, C., & Wiedenbeck, B. (2019). Learning deviation payoffs in simulation-based games. In Thirty-third AAAI conference on artificial intelligence (pp. 2173–2180).

  10. Perolat, J., De Vylder, B., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., et al. (2022). Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623), 990–996.

  11. Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., … Silver, D. (2019). Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575, 350–354.

  12. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144.

  13. Osborne, M. J., & Rubinstein, A. (1994). A course in game theory. Cambridge, MA: MIT Press.

  14. Sandholm, T., & Singh, S. (2012). Lossy stochastic game abstraction with bounds. In Thirteenth ACM conference on electronic commerce (pp. 880–897).

  15. Gilpin, A., & Sandholm, T. (2007). Better automated abstraction techniques for imperfect information games, with application to Texas Hold’em poker. In Sixth international joint conference on autonomous agents and multi-agent systems (pp. 1168–1175).

  16. Page, S. E. (2018). The model thinker. New York: Basic Books.

  17. Sandholm, T. (2015). Abstraction for solving large incomplete-information games. In Twenty-ninth AAAI conference on artificial intelligence, Austin (pp. 4127–4131).

  18. Wiedenbeck, B., & Wellman, M.P. (2012). Scaling simulation-based game analysis through deviation-preserving reduction. In Eleventh international conference on autonomous agents and multi-agent systems (pp. 931–938).

  19. Hawkin, J., Holte, R.C., & Szafron, D. (2012). Using sliding windows to generate action abstractions in extensive-form games. In Twenty-sixth AAAI conference on artificial intelligence (pp. 1924–1930).

  20. Cheng, S.-F., Reeves, D.M., Vorobeychik, Y., & Wellman, M.P. (2004). Notes on equilibria in symmetric games. In AAMAS-04 workshop on game-theoretic and decision-theoretic agents, New York.

  21. Kreps, D. M. (1990). Game theory and economic modelling. Oxford: Oxford University Press.

  22. Tardos, E., & Wexler, T. (2007). Network formation games and the potential function method. In N. Nisan, T. Roughgarden, E. Tardos, & V. V. Vazirani (Eds.), Algorithmic game theory (pp. 487–516). Cambridge: Cambridge University Press.

  23. McMahan, H.B., Gordon, G.J., & Blum, A. (2003). Planning in the presence of cost functions controlled by an adversary. In Twentieth international conference on machine learning (pp. 536–543).

  24. Porter, R., Nudelman, E., & Shoham, Y. (2008). Simple search methods for finding a Nash equilibrium. Games and Economic Behavior, 63, 642–662.

  25. Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Pérolat, J., Silver, D., & Graepel, T. (2017). A unified game-theoretic approach to multiagent reinforcement learning. In Thirty-first annual conference on neural information processing systems (pp. 4190–4203).

  26. Wellman, M.P., Tuyls, K., & Greenwald, A. (2024). Empirical game-theoretic analysis: A survey. Journal of Artificial Intelligence Research (to appear).

  27. Fearnley, J., Gairing, M., Goldberg, P., & Savani, R. (2015). Learning equilibria of games via payoff queries. Journal of Machine Learning Research, 16, 1305–1344.

  28. Wellman, M.P., Kim, T.H., & Duong, Q. (2013). Analyzing incentives for protocol compliance in complex domains: A case study of introduction-based routing. In Twelfth workshop on the economics of information security.

  29. Brinkman, E. (2018). Understanding financial market behavior through empirical game-theoretic analysis. PhD thesis, University of Michigan.

  30. Wiedenbeck, B., & Brinkman, E. (2023). Data structures for deviation payoffs. In Twenty-second international conference on autonomous agents and multi-agent systems (pp. 670–678).

  31. Vorobeychik, Y., Wellman, M. P., & Singh, S. (2007). Learning payoff functions in infinite games. Machine Learning, 67, 145–168.

  32. Duong, Q., Vorobeychik, Y., Singh, S., & Wellman, M.P. (2009). Learning graphical game models. In Twenty-first international joint conference on artificial intelligence, Pasadena (pp. 116–121).

  33. Li, Z., & Wellman, M.P. (2020). Structure learning for approximate solution of many-player games. In Thirty-fourth AAAI conference on artificial intelligence (pp. 2119–2127).

  34. Bighashdel, A., Wang, Y., McAleer, S., Savani, R., & Oliehoek, F.A. (2024). Policy space response oracles: A survey. In Thirty-third international joint conference on artificial intelligence.

  35. Brown, G. W. (1951). Iterative solution of games by fictitious play. In T. C. Koopmans (Ed.), Activity analysis of production and allocation (pp. 374–376). New York: Wiley.

  36. Balduzzi, D., Garnelo, M., Bachrach, Y., Czarnecki, W.M., Perolat, J., Jaderberg, M., & Graepel, T. (2019). Open-ended learning in symmetric zero-sum games. In Thirty-sixth international conference on machine learning (pp. 434–443).

  37. Marris, L., Muller, P., Lanctot, M., Tuyls, K., & Graepel, T. (2021). Multi-agent training beyond zero-sum with correlated equilibrium meta-solvers. In Thirty-eighth international conference on machine learning (pp. 7480–7491).

  38. Muller, P., Omidshafiei, S., Rowland, M., Tuyls, K., Perolat, J., Liu, S., Hennes, D., Marris, L., Lanctot, M., Hughes, E., et al. (2020). A generalized training approach for multiagent learning. In Eighth international conference on learning representations.

  39. Wang, Y., Shi, Z.R., Yu, L., Wu, Y., Singh, R., Joppa, L., & Fang, F. (2019). Deep reinforcement learning for green security games with real-time information. In Thirty-third AAAI conference on artificial intelligence (pp. 1401–1408).

  40. Wang, Y., Ma, Q., & Wellman, M.P. (2022). Evaluating strategy exploration in empirical game-theoretic analysis. In Twenty-first international conference on autonomous agents and multi-agent systems (pp. 1346–1354).

  41. Jin, K., Vorobeychik, Y., & Liu, M. (2021). Multi-scale games: Representing and solving games on networks with group structure. In Thirty-fifth AAAI conference on artificial intelligence (pp. 5497–5505).

  42. Wellman, M.P., Reeves, D.M., Lochner, K.M., Cheng, S.-F., & Suri, R. (2005). Approximate strategic reasoning through hierarchical reduction of large symmetric games. In Twentieth national conference on artificial intelligence (pp. 502–508).

  43. Ficici, S.G., Parkes, D.C., & Pfeffer, A. (2008). Learning and solving many-player games through a cluster-based representation. In Twenty-fourth conference on uncertainty in artificial intelligence (pp. 187–195).

  44. Jordan, P.R., Schvartzman, L.J., & Wellman, M.P. (2010). Strategy exploration in empirical games. In Ninth international conference on autonomous agents and multi-agent systems (pp. 1131–1138).

  45. McAleer, S., Wang, K.A., Lanier, J., Lanctot, M., Baldi, P., Sandholm, T., & Fox, R. (2022). Anytime PSRO for two-player zero-sum games. In AAAI-22 workshop on reinforcement learning in games.

  46. Jordan, P.R., Kiekintveld, C., & Wellman, M.P. (2007). Empirical game-theoretic analysis of the TAC supply chain game. In Sixth international joint conference on autonomous agents and multi-agent systems (pp. 1188–1195).

  47. Jordan, P.R. (2010). Practical strategic reasoning with applications in market games. PhD thesis, University of Michigan.

  48. Balduzzi, D., Tuyls, K., Perolat, J., & Graepel, T. (2018). Re-evaluating evaluation. In Thirty-second annual conference on neural information processing systems (pp. 3272–3283).

  49. Smith, M., Anthony, T., & Wellman, M.P. (2021). Iterative empirical game solving via single policy best response. In Ninth international conference on learning representations.

  50. Gatchel, M., & Wiedenbeck, B. (2023). Learning parameterized families of games. In Twenty-second international conference on autonomous agents and multi-agent systems (pp. 1044–1052).

  51. Nisan, N., Roughgarden, T., Tardos, E., & Vazirani, V. V. (Eds.). (2007). Algorithmic game theory. Cambridge: Cambridge University Press.

Funding

This work was supported in part by the US Army Research Office under MURI Grant # W911NF-18-1-0208.

Author information

Contributions

MPW conceived and developed the framework and wrote the manuscript. MPW and KM designed the new methods for modulating player aggregation. KM implemented these methods and performed the experiments. Both authors reviewed the manuscript.

Corresponding author

Correspondence to Michael P. Wellman.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wellman, M.P., Mayo, K. Navigating in a space of game views. Auton Agent Multi-Agent Syst 38, 31 (2024). https://doi.org/10.1007/s10458-024-09660-x

Keywords

Navigation