
Salience reasoning in coordination games

  • Published in: Synthese

A Correction to this article was published on 15 April 2021


Abstract

Salience reasoning, many have argued, can help solve coordination problems, but only if such reasoning is supplemented by higher-order predictions, e.g. beliefs about what others believe yet others will choose. In this paper, I will argue that this line of reasoning is self-undermining. Higher-order behavioral predictions defeat salience-based behavioral predictions. To anchor my argument in the philosophical literature, I will develop it in response and opposition to the popular Lewisian model of salience reasoning in coordination games. This model imports the problematic higher-order beliefs by way of a ‘symmetric reasoning’ constraint. In the second part of this paper, I will argue that a player may employ salience reasoning only if she suspends judgment about what others believe yet others will do.



Notes

  1. E.g. Bicchieri (2005, p. 36ff), Cubitt & Sugden (2003), Gauthier (1975), Gilbert (1989), Postema (2008), Hédoin (2014), Lewis (1969) and Schelling (1960).

  2. The games that I will be talking about are two-player, conflict-free, pure coordination games, i.e. games with multiple strict Nash equilibria in which one player’s gain does not require the other player’s sacrifice.

    In this game, players have to solve the equilibrium selection problem. There are two relevant pure equilibria, and the players have to figure out a way to settle on one of them. Ultimately, each player is trying to match what she takes the other player to choose, which is why each player’s choice depends only on estimates (beliefs, credences, or knowledge) about the other player’s choice.
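The equilibrium selection problem described in this note can be made concrete with a minimal sketch. The following example is illustrative only and not from the paper: the payoffs and the Fast-Food-style action labels are assumed for the purpose of the illustration.

```python
# A two-player, conflict-free, pure coordination game: both players
# get 1 iff they choose the same option, 0 otherwise.
# Hypothetical illustration; labels borrowed from the Fast Food example.

ACTIONS = ["McDonald's", "Wendy's"]

def payoff(a1, a2):
    """Conflict-free payoffs: the players win together or not at all."""
    return (1, 1) if a1 == a2 else (0, 0)

def pure_nash_equilibria():
    """Profiles where neither player gains by unilaterally deviating."""
    eqs = []
    for a1 in ACTIONS:
        for a2 in ACTIONS:
            u1, u2 = payoff(a1, a2)
            best1 = all(payoff(d, a2)[0] <= u1 for d in ACTIONS)
            best2 = all(payoff(a1, d)[1] <= u2 for d in ACTIONS)
            if best1 and best2:
                eqs.append((a1, a2))
    return eqs

# Two strict equilibria remain, so the game itself does not settle
# which one to play -- that is the equilibrium selection problem.
print(pure_nash_equilibria())  # [("McDonald's", "McDonald's"), ("Wendy's", "Wendy's")]
```

Because the matching profiles are both strict equilibria, rational best-response reasoning alone cannot break the tie; this is the gap that salience reasoning is supposed to fill.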

  3. In some cases, salience might impede cooperation (see Gilbert, 1989, p. 66f). Suppose, for instance, you and I stand to win a prize if we both push the same color button at roughly the same time. Unfortunately, we cannot communicate to coordinate our actions. The buttons available are: red, green, blue, and yellow. Suppose a public announcement is made saying that we have a red phobia and would never press red buttons for any reason. Surely, this announcement would make the red button salient, but it wouldn’t help us coordinate on pressing it.

  4. The salience of a particular outcome is a correlation device, which is why the salient outcome has sometimes been called the “correlated equilibrium”. This notion was first introduced by Aumann (1974) and then used by Vanderschraaf (1995) in a discussion of Lewis’s Convention.

  5. Some alternative suggestions are the following. Gauthier (1975) argues that salience can change the structure of the game; salience reasoning, thus, is not reasoning about pure coordination. Sugden (2003) argues that players can coordinate their actions by conceiving of themselves as a “team”. Postema (2008) thinks that comparing coordination to jazz improvisation can help us understand it.

  6. I should add a note on the use of the term “salience”. We shall say that public events, such as a visual cue or a precedent, make a coordination equilibrium salient. If a particular coordination equilibrium has precedent, then it makes sense to say that this equilibrium is salient because it has precedent. Of course, this does not amount to a definition of salience, which I do not intend to provide.

  7. In the present context, all we need is an informal concept of publicity, meaning the relevant fact is “out in the open” between both agents. I don’t wish to commit to a formal account of publicity involving common knowledge (see Paternotte 2011 for such a definition).

  8. These labels—‘Symmetric Reasoning Condition’ and ‘Public Fact Condition’—were suggested by an anonymous reviewer.

  9. I will address the codicil “if these conditions are applied simultaneously” at the very end of this paper.

  10. For simplicity, we’ll confine ourselves to two player games throughout this paper.

  11. The use of the term “guide” instead of “predict” is preferable for the following reason. Many philosophers hold that an agent cannot predict her own behavior based on previous actions while deliberating what to do. For a nice summary of this debate see Hájek (2016). Furthermore, Gilbert’s claims, which I will focus on below, are not couched in terms of prediction. On her view, rational agents may sometimes decide to give in to an “urge” to act in accordance with the salient option.

  12. Weatherson (2016, section 1) presents a different argument for the seemingly self-undermining nature of higher-order reasoning in the context of coordination games.

  13. Vanderschraaf and Sillari (2014) present the following alternative formulation: “Given a set of agents N and a proposition A′ ⊆ Ω, the agents of N are symmetric reasoners with respect to A′ (or A′-symmetric reasoners) iff, for each i, j ∈ N and for any proposition E ⊆ Ω, if Ki(A′) ⊆ Ki(E) and Ki(A′) ⊆ KjKj(A′), then Ki(A′) ⊆ KjKj(E).”
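The quoted condition can be checked mechanically in a small partitional model of knowledge. The sketch below is illustrative only: the state space, the two information partitions, and the proposition A′ are hypothetical choices, and K is the standard partitional knowledge operator (an agent knows E at a state iff her information cell there is contained in E).

```python
# Toy partitional model for the quoted symmetric-reasoning condition.
# All model ingredients (states, partitions, A') are assumed examples.
from itertools import combinations

W = {1, 2, 3, 4}  # toy state space Omega

def K(partition, E):
    """K(E): the states where the agent's information cell lies inside E."""
    return {w for cell in partition for w in cell if cell <= E}

def powerset(s):
    xs = list(s)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def symmetric_reasoners(A, Pi, Pj):
    """Check, for the ordered pair (i, j) and every E subset of W:
    if Ki(A) <= Ki(E) and Ki(A) <= KjKj(A), then Ki(A) <= KjKj(E)."""
    KiA = K(Pi, A)
    for E in powerset(W):
        if KiA <= K(Pi, E) and KiA <= K(Pj, K(Pj, A)):
            if not KiA <= K(Pj, K(Pj, E)):
                return False
    return True

P1 = [{1, 2}, {3, 4}]    # agent 1's information partition
P2 = [{1}, {2}, {3, 4}]  # agent 2's (finer) partition

print(symmetric_reasoners({1, 2}, P1, P2))  # True
```

In this toy model the condition holds: whenever agent 1 can infer E from A′, agent 2 knows that she knows E, which is the intended reading of the definition.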

  14. Consult Sillari (2005, p. 383) for a brief discussion and disagreement with Gilbert.

  15. Others are Fagin et al. (1995, Ch. 6, Ch. 11), Fagin et al. (1999) and Halpern & Moses (1990).

  16. By “rational” I mean that (a) the players seek to maximize individual expected utility, and (b) they know all propositions that can be derived in the context of the game.

  17. Defeaters of this kind have been called “exclusionary defeaters” (Horty 2012).

  18. Horty’s (2012) book Reasons as Defaults contains a user-friendly set-theoretic formal characterization of exclusionary defeat. The main idea is to supplement ordinary propositional logic with a special arrow ‘→’ that represents defeasible generalizations. For instance, given two arbitrary propositions, X and Y, X → Y stands for the defeasible generalization that lets a reasoner conclude Y from X, by default. Each such rule is denoted by a subscripted Greek letter δn (e.g. δ1: X → Y). The conclusions of these default rules are picked out using the function Con(δn), e.g. Con(δ1) = Y. Exclusionary defeaters can be thought of as rules that take other rules out of consideration; they exclude them. Such rules are constructed using the special function Out(δn), which means that a rule is taken out of consideration and no conclusions may be derived from it. Reconsider the example of the illuminated chair. There are three relevant propositions: R = ‘This chair looks red’; P = ‘This chair is illuminated by red light’; F = ‘This chair is red’. Furthermore, there are two relevant default rules: δ1: R → F (this chair’s seeming redness is a reason for concluding that it is red), and δ2: P → Out(δ1) (this chair’s being illuminated by red light excludes the reason provided by this chair’s seeming redness). For any given reasoning problem, we can collect these rules in a set D; for the chair illumination problem, D = {δ1, δ2}. To see which rules are excluded in our reasoning problem, we create a new set ‘excluded(D)’ that contains all rules that are taken out of consideration by some rule in D: excluded(D) = {δ ∈ D : Con(D) ⊢ Out(δ)}. Now we have all the rules that we are not allowed to reason with.
To obtain the rules we are allowed to reason with, we finally create a new set S that contains all and only those rules that are not excluded: S = {δ ∈ D : δ ∉ excluded(D)}. This rendition is a vast simplification of Horty’s model, but it should suffice to convey the basic idea.
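The two sets excluded(D) and S described in this note can be computed mechanically. The following is a minimal sketch in the spirit of Horty’s characterization, not his full fixed-point construction: rules are encoded as simple records, premises and conclusions as strings, and exclusion is computed in a single pass (all assumptions of this illustration).

```python
# Simplified one-pass sketch of exclusionary defeat (not Horty's full model).
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    name: str
    pre: str  # premise
    con: str  # conclusion; "Out(<name>)" marks an exclusionary defeater

def excluded(D, facts):
    """excluded(D): rules that some triggered rule takes out of consideration."""
    triggered = {d.con for d in D if d.pre in facts}
    return {d for d in D if f"Out({d.name})" in triggered}

def active(D, facts):
    """The set S: all and only the rules that are not excluded."""
    return {d for d in D if d not in excluded(D, facts)}

# Chair example: R = 'looks red', P = 'illuminated by red light', F = 'is red'
d1 = Default("d1", "R", "F")        # seeming redness supports being red
d2 = Default("d2", "P", "Out(d1)")  # red illumination excludes d1
D = {d1, d2}

assert active(D, {"R"}) == {d1, d2}   # no defeater triggered: F follows via d1
assert active(D, {"R", "P"}) == {d2}  # d1 excluded: F may no longer be inferred
```

Once the red-light fact P is in play, δ1 is excluded rather than outweighed: it is removed from the set of rules one may reason with at all, which is exactly the behavior the paper attributes to higher-order predictions vis-à-vis precedent.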

  19. The notion of defeat as employed in the present context is normative. If a reason is defeated, then it ought not to be used in drawing inferences. A reasoner may, of course, irrationally draw these inferences regardless. But if she does, her inference is normatively impermissible. Coordination among irrational players, as was indicated above, shall not be discussed in this paper.

  20. There is a glaring similarity between the framework presented here and Muñoz’s (2019) notion of “disqualification.” One consideration, C1, disqualifies another consideration, C2, if both considerations support the same conclusion, but the evidential support “comes from [the disqualifying consideration] alone” (Muñoz, 2019, p. 888). Muñoz argues that disqualification is a sui generis relation that cannot be defined in terms of more mundane forms of defeat. Of course, here is not the place to decide the case. For our purposes, the more traditional notions of defeat (i.e. exclusionary, and rebutting defeat) have sufficient expressive power.

  21. Defeat relations among reasons can obtain in various ways. In standard cases, more specific information defeats less specific information (e.g. Horty, 2012, p. 216). We wouldn’t, for instance, want to conclude that Tweety can fly on the ground that he’s a bird, knowing that he’s a penguin. One might, thus, wonder whether the defeat relation between precedent and higher-order beliefs can be explained in similar ways. I think this is not so. Rather, the defeat relation in our case is simply grounded in basic assumptions about the structure of the game. In pure coordination games, each rational player is trying to match the other’s choice, i.e. each player will act on what she believes the other player is going to choose. If both the structure of the game as well as the players’ rationality are common knowledge, then each player knows that the same holds true for the other player. Each player knows that the other will act on her belief about what she thinks the other is going to do, which is why precedent has, at this point, been defeated.

  22. Precedent-based considerations may, as one helpful reviewer put it, seem to be “smuggled into” the higher-order predictions.

  23. Now, a critic may accept the idea that higher-order beliefs defeat precedent-based inferences, but she may, however, wish to add that these higher-order beliefs can themselves be defeated, in which case the original precedent-based inference would be reinstated. More particularly, the critic may wish to propose the following rule that governs cases of conflict between higher-order and lower-order predictions: in such conflict cases, the lower-order reason or rule takes priority. This rule functions as an arbiter, as it were. Formally, this is a possibility, of course. Horty’s (2012, p. 129) framework, which I have relied on throughout this paper, explicitly allows for the possibility that defeaters might themselves be defeated. In the case at hand, however, such an extra rule is not plausible. Reconsider a case akin to ‘Fast Food 3’. Suppose I know two things. I know that we’ve always gone to McDonald’s in the past. I also happen to know that you think that I will go to Wendy’s this time around. If my arguments are on track, then my higher-order belief defeats my lower-order belief. According to the suggestion that there is a new ‘arbiter’ rule, my precedent-based prediction would win out. But this suggestion clearly delivers the wrong result. In this case, I ought to go to Wendy’s. If I rely on precedent and, thus, go to McDonald’s, I’m simply irrational.

  24. Furthermore, Lederman (2018a) argues that, given standard assumptions, rational players fail to coordinate their efforts in the following situation: to win a large prize, two players, who cannot communicate, must each hit a buzzer if a mast they both clearly see right in front of them is larger than 100 cm. The mast is 300 cm high.

  25. Friedman has reservations (see Friedman, 2017). The connection between suspension and not believing, she contends, is normative, not descriptive. An agent who is suspended about P ought neither believe nor disbelieve it; but since we’re operating under the assumption of perfect rationality, we can sidestep these subtleties.

  26. E.g. Friedman (2013), van Fraassen (1998) and Hájek (1998).

  27. Thus, salience-based coordination must, at its core, be vindicated without relying on higher-order reasoning requirements. This is not a bad thing, because higher-order belief requirements—paradigmatically, common knowledge requirements—are, as Lederman (2018c) points out, not usually meant to capture pre-theoretical desiderata. Rather, common knowledge requirements represent “simplifying technical assumptions” (Lederman 2018b). In fact, to many, common knowledge assumptions seem to be an implausible departure from common sense. In this sense, analyses that can vindicate rational cooperation without relying on higher-order reasoning models are, if anything, closer to common sense.

  28. Or their own behavior, according to Gilbert’s (1989) line of reasoning that was addressed at the end of section 2.

  29. This expression was suggested by a reviewer.

  30. I am immensely grateful to Peter Carruthers, Aleks Knoks, Harvey Lederman, Eric Pacuit, Javiera Perez Gomez, Arthur Schipper, and Aiden Woodcock for their comments on drafts of this paper. I am also grateful to Dominik Klein, and Olivier Roy for giving me the opportunity to present this work at the University of Bayreuth.

References

  • Aumann, R. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 67–96.

  • Bergmann, M. (2005). Defeaters and higher-level requirements. The Philosophical Quarterly, 55, 419–436.

  • Bicchieri, C. (2005). The grammar of society: The nature and dynamics of social norms. Cambridge University Press.

  • Burge, T. (1975). On knowledge and convention. Philosophical Review, 84, 249–255.

  • Clark, H. H. (1996). Using language. Cambridge University Press.

  • Cubitt, R. P., & Sugden, R. (2003). Common knowledge, salience and convention: A reconstruction of David Lewis’ game theory. Economics and Philosophy, 19(2), 175–210.

  • Davies, M. (1987). Relevance and mutual knowledge. Behavioral and Brain Sciences, 10(4), 716–717.

  • Fagin, R., Halpern, J. Y., Moses, Y., & Vardi, M. Y. (1995). Reasoning about knowledge. MIT Press.

  • Fagin, R., Halpern, J. Y., Moses, Y., & Vardi, M. Y. (1999). Common knowledge revisited. Annals of Pure and Applied Logic, 96(1–3), 89–105.

  • Friedman, J. (2013). Suspended judgment. Philosophical Studies, 162(2), 165–181.

  • Friedman, J. (2017). Why suspend judging? Noûs, 51(2), 302–326.

  • Gauthier, D. (1975). Coordination. Dialogue, 14, 195–221.

  • Gilbert, M. (1989). Rationality and salience. Philosophical Studies, 57, 61–77.

  • Grice, P. (1969). Utterer’s meaning and intentions. The Philosophical Review, 78, 147–177.

  • Hájek, A. (2016). Deliberation welcomes prediction. Episteme, 13(4), 507–528.

  • Hájek, A. (1998). Agnosticism meets Bayesianism. Analysis, 58(3), 199–206.

  • Halpern, J. Y., & Moses, Y. (1990). Knowledge and common knowledge in a distributed environment. Journal of the ACM (JACM), 37(3), 549–587.

  • Hédoin, C. (2014). A framework for community-based salience: Common knowledge, common understanding and community membership. Economics & Philosophy, 30(3), 365–395.

  • Horty, J. F. (2012). Reasons as defaults. Oxford University Press.

  • Kneeland, T. (2012). Coordination under limited depth of reasoning. University of British Columbia Working Paper.

  • Lederman, H. (2018a). Uncommon knowledge. Mind, 127(508), 1069–1105.

  • Lederman, H. (2018b). Common knowledge. In M. Jankovic & K. Ludwig (Eds.), Handbook of social intentionality. London: Routledge.

  • Lederman, H. (2018c). Two paradoxes of common knowledge: Coordinated attack and electronic mail. Noûs, 52, 921–945.

  • Lewis, D. (1969). Convention: A philosophical study. Harvard University Press.

  • Marmor, A. (2009). Social conventions. Princeton University Press.

  • Moore, R. (2013). Imitation and conventional communication. Biology and Philosophy, 28(3), 481–500.

  • Moore, R. E. (1979). Refraining. Philosophical Studies, 36, 407–424.

  • Muñoz, D. (2019). Defeaters and disqualifiers. Mind, 128(511), 887–906.

  • Paternotte, C. (2011). Being realistic about common knowledge: A Lewisian approach. Synthese, 183, 249–276.

  • Pollock, J. (1970). The structure of epistemic justification. In Studies in the theory of knowledge (American Philosophical Quarterly monograph series, Vol. 4, pp. 62–78). Basil Blackwell.

  • Postema, G. J. (2008). Salience reasoning. Topoi, 27(1–2), 41–55.

  • Raz, J. (1975). Practical reason and norms. Hutchinson and Company. Second edition with new postscript published by Princeton University Press in 1990; reprinted by Oxford University Press in 2002. Pagination refers to the Oxford edition.

  • Rubinstein, A. (1989). The electronic mail game: Strategic behavior under “almost common knowledge.” The American Economic Review, 79, 385–391.

  • Schelling, T. (1960). The strategy of conflict. Harvard University Press.

  • Schönherr, J. (2019). Lucky joint action. Philosophical Psychology, 32(1), 123–142.

  • Sillari, G. (2005). A logical framework for convention. Synthese, 147(2), 379–400.

  • Sillari, G. (2008). Common knowledge and convention. Topoi, 27(1–2), 29–39.

  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.

  • Sturgeon, S. (2010). Confidence and coarse-grained attitudes. In Oxford studies in epistemology (Vol. 3, p. 21). Oxford: Oxford University Press.

  • Sugden, R. (2003). The logic of team reasoning. Philosophical Explorations, 6(3), 165–181.

  • Vanderschraaf, P. (1995). Convention as correlated equilibrium. Erkenntnis, 42(1), 65–87.

  • Vanderschraaf, P., & Sillari, G. (2014). Common knowledge. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 ed.). Metaphysics Research Lab, Stanford University.

  • van Fraassen, B. C. (1998). The agnostic subtly probabilified. Analysis, 58(3), 212–220.

  • Weatherson, B. (2016). Games, beliefs and credences. Philosophy and Phenomenological Research, 92, 209–236.

  • Wedgwood, R. (2002). The aim of belief. Philosophical Perspectives, 16, 267–297.

Author information

Corresponding author

Correspondence to Julius Schönherr.


The original online version of this article was revised: all figures had the figure legend belonging to figure 1.


About this article

Cite this article

Schönherr, J. Salience reasoning in coordination games. Synthese 199, 6601–6620 (2021). https://doi.org/10.1007/s11229-021-03083-x
