Knowledge, Belief and Counterfactual Reasoning in Games

  • Robert Stalnaker
Part of the Springer Graduate Texts in Philosophy book series (SGTP, volume 1)


Deliberation about what to do in any context requires reasoning about what will or would happen in various alternative situations, including situations that the agent knows will never in fact be realized. In contexts that involve two or more agents who have to take account of each other's deliberation, the counterfactual reasoning may become quite complex. When I deliberate, I have to consider not only what the causal effects would be of alternative choices that I might make, but also what other agents might believe about the potential effects of my choices, and how their alternative possible actions might affect my beliefs. Counterfactual possibilities are implicit in the models that game theorists and decision theorists have developed – in the alternative branches of the trees that model extensive form games and the different cells of the matrices of strategic form representations – but much of the reasoning about those possibilities remains in the informal commentary on, and motivation for, the models developed. Puzzlement is sometimes expressed by game theorists about the relevance of what happens in a game 'off the equilibrium path': of what would happen if what is (according to the theory) both true and known by the players to be true were instead false. My aim in this paper is to make some suggestions for clarifying some of the concepts involved in counterfactual reasoning in strategic contexts – both the reasoning of the rational agents being modeled, and the reasoning of the theorist who is doing the modeling – and to bring together some ideas and technical tools developed by philosophers and logicians that I think might be relevant to the analysis of strategic reasoning, and more generally to the conceptual foundations of game theory.





Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Linguistics and Philosophy, MIT, Cambridge, USA
