Message Exchange Games in Strategic Contexts

Journal of Philosophical Logic

A Correction to this article was published on 10 January 2018


Abstract

When two people engage in a conversation, knowingly or unknowingly, they are playing a game. Players of such games have diverse objectives, or winning conditions: an applicant trying to convince her potential employer of her eligibility over that of a competitor, a prosecutor trying to convict a defendant, a politician trying to convince an electorate in a political debate, and so on. We argue that infinitary games offer a natural model for many structural characteristics of such conversations. We call such games message exchange games, and we compare them to existing game theoretic frameworks used in linguistics—for example, signaling games—and show that message exchange games are needed to handle non-cooperative conversation. In this paper, we concentrate on conversational games where players’ interests are opposed. We provide a taxonomy of conversations based on their winning conditions, and we investigate some essential features of winning conditions like consistency and what we call rhetorical cooperativity. We show that these features make our games decomposition sensitive, a property we define formally in the paper. We show that this property has far-reaching implications for the existence of winning strategies and their complexity. There is a class of winning conditions (decomposition invariant winning conditions) for which message exchange games are equivalent to Banach-Mazur games, which have been extensively studied and enjoy nice topological results. But decomposition sensitive goals are much more the norm and much more interesting linguistically and philosophically.
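For orientation only: the Banach-Mazur games mentioned above are, in one standard presentation (see, e.g., [20, 33, 39]), games of the following form. This sketch is not the authors' definition of message exchange games; the alphabet \(\Sigma\) and the winning set \(\mathit{Win}\) are generic symbols introduced here purely for illustration.

$$\begin{array}{l} \text{Players 0 and 1 alternately choose nonempty finite words } u_{1}, v_{1}, u_{2}, v_{2}, \ldots \in \Sigma^{+};\\ \text{the play is the infinite word } \pi = u_{1}v_{1}u_{2}v_{2}\cdots \in \Sigma^{\omega};\\ \text{the designated player wins iff } \pi \in \mathit{Win} \subseteq \Sigma^{\omega}, \text{ and loses otherwise.} \end{array}$$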


Change history

  • 10 January 2018

    Our paper, ‘Message Exchange Games in Strategic Contexts’, lost the funding information and acknowledgments we had put in it on its way to publication. We include them in this erratum.

Notes

  1. Thanks to Chris Potts and Matthew Stone for this example.

  2. Assuming bounded rationality of conversational agents may restore an effect to messages: for instance the Iterative Best-Response model in [16] allows a level 2 sender to misdirect a less sophisticated level 1 receiver. However, we are convinced that the conversational examples presented in this article are compatible with a common belief in rationality and require an analysis making such an assumption.

  3. We assume here that the prosecutor has an interest in charging Bronston with perjury only if he believes that Bronston actually committed perjury. One can relax this assumption, but that would mean that the prosecutor’s beliefs are irrelevant to his subsequent moves and that the commitments-related interpretation of actions should be considered here.

  4. And depending on the logical model of commitment that one adopts, it can even make him inconsistent. See [46] for a discussion.

  5. Notice also that, interestingly, even if B explicitly lies about his availability with such an answer, he remains only implicitly committed that it is the meeting that makes him unavailable. So he can still drop this commitment at the cost of admitting that he was incoherent or not responsive. Hence, even if A, for some reason, is willing to confront B for lying to him, and has formal evidence that the meeting is indeed in the morning, doing so still requires a lot of effort on his part.

  6. Unless, of course, A is ready to accuse B of lying to him.

  7. Examples of such moves are Answering a question, Explaining why a previous commitment is true, Elaborating on a previous commitment, Correcting a previous commitment, and so on—in fact, these correspond to the discourse relations of a discourse theory [1].

  8. The current formulation of the evaluation function \(|\cdot|\) is a preliminary attempt and is designed to meet the requirements in the present exposition. We treat this issue in more detail in [4].

  9. See e.g. [10].

  10. Or a maximal consistent and saturated set.

  11. See, for instance, [39] for a nice survey on infinitary games.

  12. For an introduction to LTL, see e.g. [25]; an illustrative LTL-style condition is sketched after these notes.

  13. This holds assuming 0 has a countable number of moves in her first turn. Otherwise, only U 10 is finite and the arguments continue to hold.

  14. In principle, participants could continue acknowledging each other’s acknowledgments ad infinitum. But such acknowledgments wouldn’t serve any purpose. For a discussion see [46].
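Note 12 points to LTL as a language for expressing conditions on infinite plays. Purely as an illustration (the formula and the atomic propositions attack and response are hypothetical placeholders, not taken from the paper), a responsiveness-style condition on a play could be written as

$$\varphi \;=\; \square\,(\mathit{attack} \rightarrow \lozenge\,\mathit{response}),$$

read: whenever an attack on a commitment occurs, a response to it eventually follows; a play satisfying \(\varphi\) would then count as winning for the responsive player.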

References

  1. Asher, N., & Lascarides, A. (2003). Logics of Conversation. Cambridge University Press.

  2. Asher, N., & Lascarides, A. (2013). Strategic conversation. Semantics and Pragmatics, 6.2, 1–62.

  3. Asher, N., Paul, S., & Venant, A. (2015). Conversational goals and achieving them in no-win situations. Submitted to Journal of Logic, Language and Information.

  4. Asher, N., & Paul, S. (2016). Evaluating conversational success: Weighted message exchange games. In J. Hunter, M. Stone, & M. Simons (Eds.), SemDial. New Brunswick, New Jersey, USA.

  5. Aumann, R.J., & Hart, S. (2003). Long cheap talk. Econometrica, 71(6), 1619–1660.

  6. Aumann, R.J., & Maschler, M. (1995). Repeated Games with Incomplete Information. MIT Press.

  7. Axelrod, R.M. (2006). The Evolution of Cooperation. Basic Books.

  8. Benz, A., Jäger, G., & Van Rooij, R. (Eds.) (2005). Game Theory and Pragmatics. Palgrave Macmillan.

  9. Büchi, J.R., & Landweber, L.H. (1969). Solving sequential conditions by finite-state strategies. Transactions of the American Mathematical Society, 138, 367–378.

  10. Chang, C.C., & Keisler, H.J. (1973). Model Theory. North-Holland Publishing.

  11. Chatterjee, K. (2007). Concurrent games with tail objectives. Theoretical Computer Science, 388, 181–198.

  12. Crawford, V., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431–1451.

  13. Dung, P.M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2), 321–357.

  14. Farrell, J. (1993). Meaning and credibility in cheap-talk games. Games and Economic Behavior, 5, 514–531.

  15. Franke, M. (2008). Meaning and inference in case of conflict. In K. Balogh (Ed.), Proceedings of the 13th ESSLLI Student Session (pp. 65–74).

  16. Franke, M. (2009). Signal to Act: Game Theory in Pragmatics. ILLC Dissertation Series. Institute for Logic, Language and Computation.

  17. Franke, M., De Jager, T., & Van Rooij, R. (2009). Relevance in cooperation and conflict. Journal of Logic and Language.

  18. Glazer, J., & Rubinstein, A. (2001). Debates and decisions: On a rationale of argumentation rules. Games and Economic Behavior, 36(2), 158–173.

  19. Glazer, J., & Rubinstein, A. (2004). On optimal rules of persuasion. Econometrica, 72(6), 119–123.

  20. Grädel, E. (2008). Banach-Mazur games on graphs. In R. Hariharan, M. Mukund, & V. Vinay (Eds.), Foundations of Software Technology and Theoretical Computer Science (FSTTCS) (pp. 364–382).

  21. Grice, H.P. (1975). Logic and conversation. In P. Cole & J.L. Morgan (Eds.), Syntax and Semantics, Volume 3: Speech Acts (pp. 41–58). Academic Press.

  22. Grosz, B., & Sidner, C. (1986). Attention, intentions and the structure of discourse. Computational Linguistics, 12, 175–204.

  23. Grosz, B.J., & Kraus, S. (1993). Collaborative plans for group activities. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (pp. 367–373). Los Altos, California: Morgan Kaufmann.

  24. Kechris, A. (1995). Classical Descriptive Set Theory. New York: Springer-Verlag.

  25. Lamport, L. (1980). "Sometime" is sometimes "not never": On the temporal logic of programs. In Proceedings of the 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (pp. 174–185). ACM.

  26. Lewis, D. (1969). Convention: A Philosophical Study. Harvard University Press.

  27. Libkin, L. (2004). Elements of Finite Model Theory. Springer.

  28. Malone, P. (2009). The Life You Save: Nine Steps to Finding the Best Medical Care and Avoiding the Worst. Da Capo Lifelong.

  29. Mann, W.C., & Thompson, S.A. (1987). Rhetorical structure theory: A framework for the analysis of texts. International Pragmatics Association Papers in Pragmatics, 1, 79–105.

  30. Martin, D.A. (1975). Borel determinacy. Annals of Mathematics, 102(2), 363–371.

  31. Mauldin, R. (Ed.) (1981). The Scottish Book: Mathematics from the Scottish Café. Birkhäuser.

  32. McNaughton, R., & Papert, S. (1971). Counter-Free Automata. Research Monograph No. 65. MIT Press.

  33. Oxtoby, J. (1957). The Banach-Mazur game and Banach category theorem. Contributions to the Theory of Games, 3, 159–163.

  34. Parikh, P. (1991). Communication and strategic inference. Linguistics and Philosophy, 14(5), 473–514.

  35. Parikh, P. (2000). Communication, meaning and interpretation. Linguistics and Philosophy, 25, 185–212.

  36. Parikh, P. (2001). The Use of Language. Stanford, California: CSLI Publications.

  37. Perrin, D., & Pin, J.E. (2004). Infinite Words: Automata, Semigroups, Logic and Games. Elsevier.

  38. Rabin, M. (1990). Communication between rational agents. Journal of Economic Theory, 51, 144–170.

  39. Revalski, J.P. (2003–2004). The Banach-Mazur game: History and recent developments. Technical report, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences.

  40. Sacks, H. (1992). Lectures on Conversation. Edited by Gail Jefferson. Oxford: Blackwell Publishers. (Published version of lecture notes from 1967–1972.)

  41. Solan, L.M., & Tiersma, P.M. (2005). Speaking of Crime: The Language of Criminal Justice. Chicago, IL: University of Chicago Press.

  42. Spence, A.M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.

  43. Traum, D., & Allen, J. (1994). Discourse obligations in dialogue processing. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL 94) (pp. 1–8). Las Cruces, New Mexico.

  44. Van Rooij, R. (2003). Being polite is a handicap: Towards a game theoretical analysis of polite linguistic behavior. In TARK (pp. 45–58).

  45. Van Rooij, R. (2004). Signalling games select Horn strategies. Linguistics and Philosophy, 27, 493–527.

  46. Venant, A., & Asher, N. (2015). OK or not OK? Commitments, acknowledgments and corrections. In Proceedings of Semantics and Linguistic Theory (SALT 25). Stanford.

  47. Venant, A., Asher, N., & Dégremont, C. (2014). Credibility and its attacks. In Proceedings of SemDial 2014.

  48. Walton, D.N. (1984). Logical Dialogue-Games. Lanham, Maryland: University Press of America.

  49. Zwick, U., & Paterson, M.S. (1995). The complexity of mean payoff games. In Computing and Combinatorics (pp. 1–10). Springer.


Author information


Corresponding author

Correspondence to Nicholas Asher.

Additional information

A correction to this article is available online at https://doi.org/10.1007/s10992-017-9455-9.

Appendix: Proof of Proposition 1

Proof

Let s(⋅|m) be a receiver strategy. Let μ(⋅|m) be a probability distribution over sender types such that s is rational given belief in μ and such that \(s(a^{*}_{t_{\textsc{good}},m}|m) > 0\). By definition, if s is rational given μ, \(a^{*}_{t_{\textsc{good}},m}\) must be a best response to m. In particular, \(a^{*}_{t_{\textsc{good}},m}\) must yield a better (or equal) expected utility than \(a^{*}_{t_{\textsc{bad}},m}\), which can be written as:

$${\sum}_{t \in T} \mu(t|m)u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) - {\sum}_{t \in T} \mu(t|m)u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m}) \ge 0 $$

that is to say

$$\left(\begin{array}{l} \sum\limits_{t \in T_{\textsc{good}}} \mu(t|m)\left(u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) - u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m}) \right) \\ -\sum\limits_{t \in T_{\textsc{bad}}} \mu(t|m) \left(u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m}) - u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) \right) \end{array} \right) \ge 0 $$

Notice that both terms of the above difference are positive (the first term on the left is strictly positive since \(t_{\textsc{good}} \in T_{\textsc{good}}\)). Let

$$\begin{array}{l}\delta_{\textsc{good}} = \max_{t \in T_{\textsc{good}}} (u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) - u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m})) \textnormal{ and}\\ \delta_{t_{\textsc{bad}}} = u_{R}(t_{\textsc{bad}},m,a^{*}_{t_{\textsc{bad}},m}) - u_{R}(t_{\textsc{bad}},m,a^{*}_{t_{\textsc{good}},m}). \end{array}$$

Since \(t_{\textsc{bad}} \in T_{\textsc{bad}}\) we have:

$${\sum}_{t \in T_{\textsc{good}}} \mu(t|m)\delta_{\textsc{good}} - \mu(t_{\textsc{bad}}|m)\delta_{t_{\textsc{bad}}} \ge \left(\begin{array}{l} \sum\limits_{t \in T_{\textsc{good}}} \mu(t|m)\left(u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) - u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m}) \right) \\ -\sum\limits_{t \in T_{\textsc{bad}}} \mu(t|m) \left(u_{R}(t,m,a^{*}_{t_{\textsc{bad}},m}) - u_{R}(t,m,a^{*}_{t_{\textsc{good}},m}) \right) \end{array} \right) $$

Hence we must have

$${\sum}_{t \in T_{\textsc{good}}} \mu(t|m)\delta_{\textsc{good}} - \mu(t_{\textsc{bad}}|m)\delta_{t_{\textsc{bad}}} = \mu(T_{\textsc{good}}|m)\delta_{\textsc{good}} - \mu(t_{\textsc{bad}}|m)\delta_{t_{\textsc{bad}}} \ge 0 $$

and since \(\delta_{t_{\textsc{bad}}} > 0\) by hypothesis we have \(\mu(T_{\textsc{good}}|m)\frac{\delta_{\textsc{good}}}{\delta_{t_{\textsc{bad}}}} \ge \mu(t_{\textsc{bad}}|m)\), which concludes the proof. □
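To give a concrete sense of the bound (the numbers are purely illustrative and not from the paper): suppose \(T = \{t_{\textsc{good}}, t_{\textsc{bad}}\}\), \(\delta_{\textsc{good}} = 1\) and \(\delta_{t_{\textsc{bad}}} = 4\). The inequality then requires

$$\mu(T_{\textsc{good}}|m)\cdot \tfrac{1}{4} \;\ge\; \mu(t_{\textsc{bad}}|m), \qquad \text{i.e.} \qquad \mu(t_{\textsc{bad}}|m) \;\le\; \tfrac{1}{5},$$

so a receiver whose belief after \(m\) puts more than one fifth of the posterior mass on \(t_{\textsc{bad}}\) cannot rationally play \(a^{*}_{t_{\textsc{good}},m}\) with positive probability.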


Cite this article

Asher, N., Paul, S. & Venant, A. Message Exchange Games in Strategic Contexts. J Philos Logic 46, 355–404 (2017). https://doi.org/10.1007/s10992-016-9402-1

