
On Stackelberg mixed strategies

  • S.I.: Logic and the Foundations of Decision and Game Theory (LOFT)
  • Published in Synthese

Abstract

It is sometimes the case that one solution concept in game theory is equivalent to applying another solution concept to a modified version of the game. In such cases, does it make sense to study the former separately (as it applies to the original representation of the game), or should we entirely subordinate it to the latter? The answer probably depends on the particular circumstances, and indeed the literature takes different approaches in different cases. In this article, I consider the specific example of Stackelberg mixed strategies. I argue that, even though a Stackelberg mixed strategy can also be seen as a subgame perfect Nash equilibrium of a corresponding extensive-form game, there remains significant value in studying it separately. The analysis of this special case may have implications for other solution concepts.

Figures 1–7 appear in the published article.

Notes

  1. One line of work concerns settings where there are many selfish followers and a single benevolent leader, for example a party that “owns” the system and controls part of the activity in it, who acts to optimize some system-wide objective. See, e.g., Roughgarden (2004). In this article I will not assume that the leader is benevolent.

  2. Note that player 1 merely stating that she will play D, without any commitment, will not work: she would always have an incentive to back out and play U after all, to collect an additional 1 unit of utility, regardless of what player 2 plays. Player 2 will anticipate this and play L anyway.
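The incentive structure in note 2 can be checked mechanically. The payoff numbers below are purely illustrative (the article's actual figures are not reproduced here); they are chosen so that, as the note describes, U gives player 1 exactly one more unit than D against either column, yet commitment to D still pays off:

```python
# Illustrative 2x2 game with the structure described in note 2.
# Rows: player 1's strategies U, D; columns: player 2's strategies L, R.
# These payoffs are hypothetical, not taken from the article's figure.
u1 = {('U', 'L'): 1, ('U', 'R'): 3,
      ('D', 'L'): 0, ('D', 'R'): 2}
u2 = {('U', 'L'): 1, ('U', 'R'): 0,
      ('D', 'L'): 0, ('D', 'R'): 1}

def best_response(s1):
    """Player 2's best response to a fixed pure strategy of player 1."""
    return max(['L', 'R'], key=lambda s2: u2[(s1, s2)])

# Without commitment: U gives player 1 exactly 1 more than D against
# either column, so player 2 anticipates U and best-responds with L.
assert all(u1[('U', s2)] == u1[('D', s2)] + 1 for s2 in ['L', 'R'])
no_commitment_payoff = u1[('U', best_response('U'))]

# With commitment to D: player 2's best response changes,
# and player 1 ends up strictly better off.
commitment_payoff = u1[('D', best_response('D'))]
assert commitment_payoff > no_commitment_payoff
```

Committing to D flips player 2's best response from L to R, leaving player 1 with 2 rather than 1, which is why mere cheap talk (announcing D without commitment) fails: the one-unit temptation to deviate to U is always there.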

  3. Schelling (1960) similarly suggests that, by the time we have incorporated aspects such as commitment moves into a standard game-theoretic representation of the game at hand, we have abstracted away these issues and are at some level not really studying them anymore.

  4. It is easy to be misled by Fig. 3 into thinking that it does make this fairly obvious, due to the natural ordering of the mixed strategies from left to right. However, this is an artifact of the fact that there are only two pure strategies for player 1 in the original game. If there were three pure strategies, it would not be possible to order the mixed strategies so naturally from left to right. We could in principle visualize the resulting tree in three dimensions instead of two. For more pure strategies, this of course becomes problematic. More importantly, such visualizations are technically not part of the extensive form. The extensive form only specifies a set of actions for each node, and no ordering, distance function, or topology on them. Such are only added when we draw the tree on a piece of paper (or in three dimensions, etc.).

  5. Constant-sum games, in which \(u_1(s_1,s_2)+u_2(s_1,s_2)=c\) for some constant c, are effectively equivalent.
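As a quick sanity check of note 5's equivalence, one can shift every payoff of a constant-sum game by \(-c/2\) and verify that the result is zero-sum while neither player's payoff comparisons change. The 2×2 game below is a made-up example with c = 4:

```python
# Minimal sketch of note 5: a constant-sum game (here c = 4) becomes
# zero-sum after subtracting c/2 from every payoff, and this uniform
# shift cannot change any player's best responses.
c = 4
# (u1, u2) payoff pairs; every cell sums to c. Numbers are hypothetical.
game = [[(3, 1), (0, 4)],
        [(2, 2), (4, 0)]]

zero_sum = [[(a - c / 2, b - c / 2) for (a, b) in row] for row in game]

# Every cell of the transformed game sums to zero...
assert all(a + b == 0 for row in zero_sum for (a, b) in row)
# ...and each player's payoffs differ from the originals by the same
# constant everywhere, so all best-response comparisons are preserved.
assert all(a - a2 == c / 2 and b - b2 == c / 2
           for row, row2 in zip(game, zero_sum)
           for (a, b), (a2, b2) in zip(row, row2))
```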

  6. In fact, he pointed out that there is a case in which his reduction from linear programs to zero-sum games does not work; this gap was later filled by Adler (2013).

  7. Papadimitriou (1994) introduced the class PPAD. Daskalakis and Papadimitriou (2005) showed that the problem is PPAD-hard for three players; Chen and Deng (2005) then obtained the stronger result that it is PPAD-hard even for two players. Etessami and Yannakakis (2010) proved that with three or more players, the problem of computing an exact Nash equilibrium, rather than an \(\epsilon \)-equilibrium, is FIXP-complete.

  8. An exception is, of course, if we play against an exploitable non-game-theoretic player, such as one who always plays Scissors.

  9. For a study of how much it can help, see Letchford et al. (2014).

  10. All of this is easily generalized to n players, but for simplicity I will stick to two players here.

  11. The notation here is a bit nonstandard: in isolation, it would be more natural to use \(A_i\) to denote the set of actions and \(S_i\) to denote the set of pure strategies, i.e., mappings from signals to actions. However, in order to make the comparison to other concepts easier, it will help to stick to using \(s_i\) for the rows and columns of the game.
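Note 11's convention, in which pure strategies are mappings from signals to actions, can be made concrete by enumerating them. The signal and action names below are hypothetical; the point is only that the pure-strategy set has size (number of actions) raised to the (number of signals):

```python
from itertools import product

# Hypothetical signal and action sets, just to make note 11's
# counting concrete: a pure strategy is one mapping from signals
# to actions, so there are |actions| ** |signals| of them.
signals = ['theta1', 'theta2']
actions = ['a', 'b', 'c']

pure_strategies = [dict(zip(signals, choice))
                   for choice in product(actions, repeat=len(signals))]

assert len(pure_strategies) == len(actions) ** len(signals)  # 3 ** 2 = 9
```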

  12. Nash equilibria of a game with private information are often referred to as Bayes-Nash equilibria.

  13. It could be argued that the analogy is imperfect because in the Stackelberg version of the argument, the game is modified to a single (two-stage) game, whereas in the correlated equilibrium version of the argument, two different correlated equilibria potentially require different ways of modifying the game, extending them with different signaling schemes. It is not entirely clear to me how significant this distinction is. In any case, if two correlated equilibria require different signaling schemes, then consider a new, joint signaling scheme where each player receives the signals from both signaling schemes, with the signals drawn independently across the two schemes. Then, both correlated equilibria are (Bayes-)Nash equilibria of the game with the joint signaling scheme (with the players simply ignoring the part of the signal that corresponds to the other equilibrium). Taking this to the limit, we may imagine a single, universal signaling scheme such that all correlated equilibria of interest are Nash equilibria of the game with this universal signaling scheme.
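The joint-signaling construction in note 13 amounts to taking a product distribution over signal pairs and marginalizing. A minimal sketch, with two made-up correlated equilibria of a 2×2 game given as distributions over action profiles (only the independence and marginalization step is illustrated; incentive constraints are not checked here):

```python
from itertools import product

# Two hypothetical correlated equilibria, each a distribution over
# action profiles (player 1's action, player 2's action).
ce1 = {('U', 'L'): 0.5, ('D', 'R'): 0.5}
ce2 = {('U', 'R'): 1.0}

# Joint scheme: draw one profile from each scheme independently;
# each player is told both recommendations meant for her.
joint = {(p1, p2): ce1[p1] * ce2[p2]
         for p1, p2 in product(ce1, ce2)}
assert abs(sum(joint.values()) - 1.0) < 1e-12

# If the players follow the first scheme's recommendations and ignore
# the second's, the induced play is exactly ce1, as marginalizing shows.
marginal1 = {}
for (p1, _), pr in joint.items():
    marginal1[p1] = marginal1.get(p1, 0.0) + pr
assert marginal1 == ce1
```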

References

  • Adler, I. (2013). The equivalence of linear programs and zero-sum games. International Journal of Game Theory, 42(1), 165–177.

  • An, B., Shieh, E., Tambe, M., Yang, R., Baldwin, C., DiRenzo, J., et al. (2012). PROTECT—a deployed game theoretic system for strategic security allocation for the United States Coast Guard. AI Magazine, 33(4), 96–110.

  • Aumann, R. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 67–96.

  • Chen, X., & Deng, X. (2005). Settling the complexity of 2-player Nash equilibrium. Electronic Colloquium on Computational Complexity. Report No. 150.

  • Chen, X., Deng, X., & Teng, S.-H. (2009). Settling the complexity of computing two-player Nash equilibria. Journal of the ACM, 56(3), 14.

  • Conitzer, V., & Korzhyk, D. (2011). Commitment to correlated strategies. In Proceedings of the National Conference on Artificial Intelligence (AAAI) (pp. 632–637). AAAI: San Francisco.

  • Conitzer, V., & Sandholm, T. (2006). Computing the optimal strategy to commit to. In Proceedings of the ACM Conference on Electronic Commerce (EC) (pp. 82–90). ACM: Ann Arbor.

  • Conitzer, V., & Sandholm, T. (2008). New complexity results about Nash equilibria. Games and Economic Behavior, 63(2), 621–641.

  • Dantzig, G. (1951). A proof of the equivalence of the programming problem and the game problem. In T. Koopmans (Ed.), Activity analysis of production and allocation (pp. 330–335). New York: John Wiley & Sons.

  • Daskalakis, C., & Papadimitriou, C. H. (2005). Three-player games are hard. Electronic Colloquium on Computational Complexity. Report No. 139.

  • Daskalakis, C., Goldberg, P., & Papadimitriou, C. H. (2009). The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1), 195–259.

  • Dickhaut, J., & Kaplan, T. (1991). A program for finding Nash equilibria. The Mathematica Journal, 1(4), 87–93.

  • Etessami, K., & Yannakakis, M. (2010). On the complexity of Nash equilibria and other fixed points. SIAM Journal on Computing, 39(6), 2531–2597.

  • Fudenberg, D., & Tirole, J. (1991). Game theory. Cambridge: MIT Press.

  • Gilboa, I., & Zemel, E. (1989). Nash and correlated equilibria: Some complexity considerations. Games and Economic Behavior, 1, 80–93.

  • Khachiyan, L. (1979). A polynomial algorithm in linear programming. Soviet Mathematics Doklady, 20, 191–194.

  • Kiekintveld, C., Jain, M., Tsai, J., Pita, J., Ordóñez, F., & Tambe, M. (2009). Computing optimal randomized resource allocations for massive security games. In Proceedings of the Eighth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 689–696). AAMAS: Budapest.

  • Korzhyk, D., Yin, Z., Kiekintveld, C., Conitzer, V., & Tambe, M. (2011). Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 41(2), 297–327.

  • Lemke, C., & Howson, J. (1964). Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12, 413–423.

  • Letchford, J., Korzhyk, D., & Conitzer, V. (2014). On the value of commitment. Autonomous Agents and Multi-Agent Systems, 28(6), 986–1016.

  • Papadimitriou, C. H. (1994). On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3), 498–532.

  • Pita, J., Jain, M., Ordóñez, F., Portway, C., Tambe, M., & Western, C. (2009). Using game theory for Los Angeles airport security. AI Magazine, 30(1), 43–57.

  • Porter, R., Nudelman, E., & Shoham, Y. (2008). Simple search methods for finding a Nash equilibrium. Games and Economic Behavior, 63(2), 642–662.

  • Roughgarden, T. (2004). Stackelberg scheduling strategies. SIAM Journal on Computing, 33(2), 332–350.

  • Sandholm, T., Gilpin, A., & Conitzer, V. (2005). Mixed-integer programming methods for finding Nash equilibria. In Proceedings of the National Conference on Artificial Intelligence (AAAI) (pp. 495–501). AAAI: Pittsburgh.

  • Savani, R., & von Stengel, B. (2006). Hard-to-solve bimatrix games. Econometrica, 74, 397–429.

  • Schelling, T. C. (1960). The strategy of conflict. Cambridge: Harvard University Press.

  • Tsai, J., Rathi, S., Kiekintveld, C., Ordóñez, F., & Tambe, M. (2009). IRIS—a tool for strategic security allocation in transportation networks. In Proceedings of the Eighth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 37–44). AAMAS: Budapest.

  • von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100, 295–320.

  • von Stengel, B., & Zamir, S. (2010). Leadership games with convex strategy sets. Games and Economic Behavior, 69, 446–457.

  • Yin, Z., Jiang, A. X., Tambe, M., Kiekintveld, C., Leyton-Brown, K., Sandholm, T., et al. (2012). TRUSTS: Scheduling randomized patrols for fare inspection in transit systems using game theory. AI Magazine, 33(4), 59–72.

Acknowledgments

I thank the LOFT and Synthese reviewers for helpful feedback, and ARO and NSF for support under Grants W911NF-12-1-0550, W911NF-11-1-0332, IIS-0953756, CCF-1101659, CCF-1337215, and IIS-1527434.

Author information

Corresponding author

Correspondence to Vincent Conitzer.

About this article

Cite this article

Conitzer, V. On Stackelberg mixed strategies. Synthese 193, 689–703 (2016). https://doi.org/10.1007/s11229-015-0927-6
