Abstract
Game-theoretic modeling entails selecting the elements of a complex strategic situation deemed most salient for analysis. Recognizing that any game model is one of many possible views of the situation, we term this a game view, and propose that sophisticated game reasoning would naturally consider multiple views. We introduce a conceptual framework, game view navigation, for game-theoretic reasoning through a process of constructing and analyzing a series of game views. The approach is illustrated using a variety of existing methods, which can be cast as navigation patterns within this framework. By formally defining these methods, as well as recently introduced ideas, as navigation in a space of game views, we recognize common themes and opportunities for generalization. Game view navigation thus provides a unifying perspective that sheds light on connections between disparate reasoning methods, and defines a design space for the creation of new techniques. We further apply the framework by defining and exploring new techniques based on modulating player aggregation in equilibrium search.
Data availability
Not applicable.
Notes
Even when a PSNE exists, IBR is not guaranteed to find it. An exception is the class of potential games, for which IBR is a complete algorithm [22].
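To make the completeness claim concrete, here is a minimal sketch (our illustration, not drawn from the paper) of IBR on a two-player common-payoff game. Common-payoff games are potential games, with the shared payoff itself serving as the potential, so sequential best responses must terminate at a pure-strategy Nash equilibrium. The payoff matrix below is hypothetical.

```python
import numpy as np

# Hypothetical shared payoff: both players receive U[a0, a1].
U = np.array([[3.0, 0.0],
              [0.0, 2.0]])

def iterated_best_response(U, start=(0, 1), max_iters=100):
    """Sequential best response on a two-player common-payoff game."""
    a0, a1 = start
    for _ in range(max_iters):
        b0 = int(np.argmax(U[:, a1]))  # player 0 best-responds to a1
        b1 = int(np.argmax(U[b0, :]))  # player 1 best-responds to b0
        if (b0, b1) == (a0, a1):       # neither player can improve: PSNE
            return a0, a1
        a0, a1 = b0, b1
    return None  # unreachable for exact best responses in a potential game

print(iterated_best_response(U))  # -> (1, 1)
```

From the starting profile (0, 1), IBR settles on the payoff-2 equilibrium (1, 1) rather than the payoff-3 equilibrium (0, 0), which also illustrates that the equilibrium reached depends on the starting profile.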
This claim assumes perfect BR computation. If guaranteed only to a degree of approximation, this would translate to a corresponding approximate equilibrium result.
Results are sensitive to the choice of \(\sigma ^{*k}\) in cases where there are multiple equilibria in \(\text {NE}(\Gamma ^k)\). See discussion of support enumeration below.
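As a pointer to what support enumeration involves, the following is a rough sketch (our illustration, in the spirit of Porter et al. [24], not their exact algorithm) for two-player games: enumerate candidate support pairs, solve the indifference conditions on each, and keep solutions that are valid mixed strategies with no profitable deviation outside the support. Restricting to equal-size supports is justified for nondegenerate games.

```python
import itertools
import numpy as np

def support_enumeration(A, B, tol=1e-9):
    """Enumerate all equal-size support pairs of the bimatrix game (A, B)."""
    m, n = A.shape
    equilibria = []
    for k in range(1, min(m, n) + 1):
        for S0 in itertools.combinations(range(m), k):
            for S1 in itertools.combinations(range(n), k):
                eq = solve_supports(A, B, S0, S1, tol)
                if eq is not None:
                    equilibria.append(eq)
    return equilibria

def solve_supports(A, B, S0, S1, tol):
    k = len(S0)
    rhs = np.zeros(k + 1)
    rhs[k] = 1.0  # normalization: probabilities sum to one
    # Column player's mix y over S1 makes the row player indifferent
    # across S0: (A y)_i = v0 for all i in S0.
    M = np.zeros((k + 1, k + 1))
    M[:k, :k] = A[np.ix_(S0, S1)]
    M[:k, k] = -1.0  # coefficient of the unknown value v0
    M[k, :k] = 1.0
    # Row player's mix x over S0 makes the column player indifferent
    # across S1: (x B)_j = v1 for all j in S1.
    N = np.zeros((k + 1, k + 1))
    N[:k, :k] = B[np.ix_(S0, S1)].T
    N[:k, k] = -1.0
    N[k, :k] = 1.0
    try:
        y_v = np.linalg.solve(M, rhs)
        x_v = np.linalg.solve(N, rhs)
    except np.linalg.LinAlgError:
        return None
    y_s, v0 = y_v[:k], y_v[k]
    x_s, v1 = x_v[:k], x_v[k]
    if (y_s < -tol).any() or (x_s < -tol).any():
        return None  # not valid probability distributions
    x = np.zeros(A.shape[0]); x[list(S0)] = x_s
    y = np.zeros(A.shape[1]); y[list(S1)] = y_s
    # Reject if any action outside the support beats the support value.
    if (A @ y > v0 + tol).any() or (x @ B > v1 + tol).any():
        return None
    return x, y
```

For matching pennies, A = [[1, -1], [-1, 1]] with B = -A, the single-action support pairs are all rejected by the outside-deviation check, and the full-support pair yields the unique uniform mixed equilibrium.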
Fearnley et al. [27] study the question from a complexity-theoretic perspective, providing insights on the number of payoff queries required to guarantee identification of approximate solutions, in a variety of game contexts.
The definitions can be straightforwardly generalized to role symmetry. We further assume for simplicity that \(p-1\) divides \(n-1\); for example, a 13-player game reduces to a 4-player game in which, from each player's perspective, every other reduced player represents \((13-1)/(4-1)=4\) players of the full game.
This plot is too cluttered to tease apart all the methods, but provides a helpful overview. We zoom in on particular methods in subsequent figures.
References
1. Jiang, A. X., Leyton-Brown, K., & Bhat, N. A. R. (2011). Action-graph games. Games and Economic Behavior, 71, 141–173.
2. Jiang, A. X., Chan, H., & Leyton-Brown, K. (2017). Resource-graph games: A compact representation for games with structured strategy spaces. In Thirty-first AAAI conference on artificial intelligence (pp. 572–578).
3. Kearns, M. (2007). Graphical games. In N. Nisan, T. Roughgarden, E. Tardos, & V. V. Vazirani (Eds.), Algorithmic game theory (pp. 159–180). Cambridge: Cambridge University Press.
4. Li, Z., Jia, F., Mate, A., Jabbari, S., Chakraborty, M., Tambe, M., & Vorobeychik, Y. (2022). Solving structured hierarchical games using differential backward induction. In Thirty-eighth conference on uncertainty in artificial intelligence (pp. 1107–1117).
5. Sandholm, T. (2015). Solving imperfect-information games. Science, 347(6218), 122–123.
6. Brown, N., & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science, 365(6456), 885–890.
7. Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., & Bowling, M. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337), 508–513.
8. Wellman, M. P. (2016). Putting the agent in agent-based modeling. Autonomous Agents and Multi-Agent Systems, 30, 1175–1189.
9. Sokota, S., Ho, C., & Wiedenbeck, B. (2019). Learning deviation payoffs in simulation-based games. In Thirty-third AAAI conference on artificial intelligence (pp. 2173–2180).
10. Perolat, J., De Vylder, B., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., et al. (2022). Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623), 990–996.
11. Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., … Silver, D. (2019). Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575, 350–354.
12. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144.
13. Osborne, M. J., & Rubinstein, A. (1994). A course in game theory. Cambridge, MA: MIT Press.
14. Sandholm, T., & Singh, S. (2012). Lossy stochastic game abstraction with bounds. In Thirteenth ACM conference on electronic commerce (pp. 880–897).
15. Gilpin, A., & Sandholm, T. (2007). Better automated abstraction techniques for imperfect information games, with application to Texas Hold'em poker. In Sixth international joint conference on autonomous agents and multi-agent systems (pp. 1168–1175).
16. Page, S. E. (2018). The model thinker. New York: Basic Books.
17. Sandholm, T. (2015). Abstraction for solving large incomplete-information games. In Twenty-ninth AAAI conference on artificial intelligence (pp. 4127–4131).
18. Wiedenbeck, B., & Wellman, M. P. (2012). Scaling simulation-based game analysis through deviation-preserving reduction. In Eleventh international conference on autonomous agents and multi-agent systems (pp. 931–938).
19. Hawkin, J., Holte, R. C., & Szafron, D. (2012). Using sliding windows to generate action abstractions in extensive-form games. In Twenty-sixth AAAI conference on artificial intelligence (pp. 1924–1930).
20. Cheng, S.-F., Reeves, D. M., Vorobeychik, Y., & Wellman, M. P. (2004). Notes on equilibria in symmetric games. In AAMAS-04 workshop on game-theoretic and decision-theoretic agents.
21. Kreps, D. M. (1990). Game theory and economic modelling. Oxford: Oxford University Press.
22. Tardos, E., & Wexler, T. (2007). Network formation games and the potential function method. In N. Nisan, T. Roughgarden, E. Tardos, & V. V. Vazirani (Eds.), Algorithmic game theory (pp. 487–516). Cambridge: Cambridge University Press.
23. McMahan, H. B., Gordon, G. J., & Blum, A. (2003). Planning in the presence of cost functions controlled by an adversary. In Twentieth international conference on machine learning (pp. 536–543).
24. Porter, R., Nudelman, E., & Shoham, Y. (2008). Simple search methods for finding a Nash equilibrium. Games and Economic Behavior, 63, 642–662.
25. Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Pérolat, J., Silver, D., & Graepel, T. (2017). A unified game-theoretic approach to multiagent reinforcement learning. In Thirty-first annual conference on neural information processing systems (pp. 4190–4203).
26. Wellman, M. P., Tuyls, K., & Greenwald, A. (2024). Empirical game-theoretic analysis: A survey. Journal of Artificial Intelligence Research (to appear).
27. Fearnley, J., Gairing, M., Goldberg, P., & Savani, R. (2015). Learning equilibria of games via payoff queries. Journal of Machine Learning Research, 16, 1305–1344.
28. Wellman, M. P., Kim, T. H., & Duong, Q. (2013). Analyzing incentives for protocol compliance in complex domains: A case study of introduction-based routing. In Twelfth workshop on the economics of information security.
29. Brinkman, E. (2018). Understanding financial market behavior through empirical game-theoretic analysis. PhD thesis, University of Michigan.
30. Wiedenbeck, B., & Brinkman, E. (2023). Data structures for deviation payoffs. In Twenty-second international conference on autonomous agents and multi-agent systems (pp. 670–678).
31. Vorobeychik, Y., Wellman, M. P., & Singh, S. (2007). Learning payoff functions in infinite games. Machine Learning, 67, 145–168.
32. Duong, Q., Vorobeychik, Y., Singh, S., & Wellman, M. P. (2009). Learning graphical game models. In Twenty-first international joint conference on artificial intelligence (pp. 116–121).
33. Li, Z., & Wellman, M. P. (2020). Structure learning for approximate solution of many-player games. In Thirty-fourth AAAI conference on artificial intelligence (pp. 2119–2127).
34. Bighashdel, A., Wang, Y., McAleer, S., Savani, R., & Oliehoek, F. A. (2024). Policy space response oracles: A survey. In Thirty-third international joint conference on artificial intelligence.
35. Brown, G. W. (1951). Iterative solution of games by fictitious play. In T. C. Koopmans (Ed.), Activity analysis of production and allocation (pp. 374–376). New York: Wiley.
36. Balduzzi, D., Garnelo, M., Bachrach, Y., Czarnecki, W. M., Perolat, J., Jaderberg, M., & Graepel, T. (2019). Open-ended learning in symmetric zero-sum games. In Thirty-sixth international conference on machine learning (pp. 434–443).
37. Marris, L., Muller, P., Lanctot, M., Tuyls, K., & Graepel, T. (2021). Multi-agent training beyond zero-sum with correlated equilibrium meta-solvers. In Thirty-eighth international conference on machine learning (pp. 7480–7491).
38. Muller, P., Omidshafiei, S., Rowland, M., Tuyls, K., Perolat, J., Liu, S., Hennes, D., Marris, L., Lanctot, M., Hughes, E., et al. (2020). A generalized training approach for multiagent learning. In Eighth international conference on learning representations.
39. Wang, Y., Shi, Z. R., Yu, L., Wu, Y., Singh, R., Joppa, L., & Fang, F. (2019). Deep reinforcement learning for green security games with real-time information. In Thirty-third AAAI conference on artificial intelligence (pp. 1401–1408).
40. Wang, Y., Ma, Q., & Wellman, M. P. (2022). Evaluating strategy exploration in empirical game-theoretic analysis. In Twenty-first international conference on autonomous agents and multi-agent systems (pp. 1346–1354).
41. Jin, K., Vorobeychik, Y., & Liu, M. (2021). Multi-scale games: Representing and solving games on networks with group structure. In Thirty-fifth AAAI conference on artificial intelligence (pp. 5497–5505).
42. Wellman, M. P., Reeves, D. M., Lochner, K. M., Cheng, S.-F., & Suri, R. (2005). Approximate strategic reasoning through hierarchical reduction of large symmetric games. In Twentieth national conference on artificial intelligence (pp. 502–508).
43. Ficici, S. G., Parkes, D. C., & Pfeffer, A. (2008). Learning and solving many-player games through a cluster-based representation. In Twenty-fourth conference on uncertainty in artificial intelligence (pp. 187–195).
44. Jordan, P. R., Schvartzman, L. J., & Wellman, M. P. (2010). Strategy exploration in empirical games. In Ninth international conference on autonomous agents and multi-agent systems (pp. 1131–1138).
45. McAleer, S., Wang, K. A., Lanier, J., Lanctot, M., Baldi, P., Sandholm, T., & Fox, R. (2022). Anytime PSRO for two-player zero-sum games. In AAAI-22 workshop on reinforcement learning in games.
46. Jordan, P. R., Kiekintveld, C., & Wellman, M. P. (2007). Empirical game-theoretic analysis of the TAC supply chain game. In Sixth international joint conference on autonomous agents and multi-agent systems (pp. 1188–1195).
47. Jordan, P. R. (2010). Practical strategic reasoning with applications in market games. PhD thesis, University of Michigan.
48. Balduzzi, D., Tuyls, K., Perolat, J., & Graepel, T. (2018). Re-evaluating evaluation. In Thirty-second annual conference on neural information processing systems (pp. 3272–3283).
49. Smith, M., Anthony, T., & Wellman, M. P. (2021). Iterative empirical game solving via single policy best response. In Ninth international conference on learning representations.
50. Gatchel, M., & Wiedenbeck, B. (2023). Learning parameterized families of games. In Twenty-second international conference on autonomous agents and multi-agent systems (pp. 1044–1052).
51. Nisan, N., Roughgarden, T., Tardos, E., & Vazirani, V. V. (Eds.). (2007). Algorithmic game theory. Cambridge: Cambridge University Press.
Funding
This work was supported in part by the US Army Research Office under MURI Grant # W911NF-18-1-0208.
Author information
Contributions
MPW conceived and developed the framework and wrote the manuscript. MPW and KM designed the new methods for modulating player aggregation. KM implemented these methods and performed the experiments. Both authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Ethical approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wellman, M.P., Mayo, K. Navigating in a space of game views. Auton Agent Multi-Agent Syst 38, 31 (2024). https://doi.org/10.1007/s10458-024-09660-x