
Rational Cooperation and Collective Reasons

Part of the book series: Philosophical Studies Series (PSSP, volume 82)

Abstract

The purpose of this and the next chapter is to discuss the notion of rational cooperation (and, more generally, the reasons for cooperating) within a game-theoretical setting. The emphasis will be on conceptual rather than technical issues. The focus in this chapter is on the "single-shot" situation. Sections I and II are introductory. Section III presents a parametric account of the various factors ("reasons") on which agents can more or less rationally base their i-cooperative behavior in collective action dilemma situations. Section IV also brings group reasons into play. Section V argues that in a Centipede situation (a Prisoner's Dilemma with ordered choices) short-term individual rationality leads to non-cooperative behavior, while long-term rationality that takes collective reasons into account can lead to full cooperation. There are also two appendices, one (Appendix 1) on joint equilibria and the other (Appendix 2) on the rationality of strategic intentions.


Notes

  1. As explained earlier, in this book preferences are want-based and utilities express want-intensities. Thus, my preferring x to y can be taken to amount, in this context, to my wanting x rather than y to be realized. From a formal point of view, the preferences may be represented by weak linear orderings, as explained in note 6 of Chapter 3. In addition, this chapter will need subjective probabilities (degrees of belief) specifying the probability of an action in a choice situation (cf. note 3). In general, intention-contents are in this book regarded as goals. Recall from Chapter 2 that there are both action-intentions (concerning what the agent has committed himself to do) and aim-intentions (concerning commitments the agent believes require collective action for their satisfaction). As indicated, a rational agent tends to select goals and form intentions that respect his preferences, whenever his preferences are not themselves in conflict, as merely personal and group preferences (preference rankings) may be.

  2. Recall the comments in note 2 of Chapter 10 relating collective goals to public goods, which claim that public goods functioning as goals need not be collective goals (nor vice versa).

  3. a) More generally, we can take the participants' preferences to be about the entities in the situation that they "care about". These entities may be a selected set of joint outcomes or goals, or conglomerations of actions together with their outcomes, or something else relevant (see Tuomela, 1984, Chapter 12, for a discussion). Thus, in a simple two-person, two-choice PD situation (on which I concentrate), I will simply speak of the four joint outcomes CC, CD, DC, and DD as the entities valued by the agents. This kind of talk may mean several things, in particular the whole mutual action-process with a certain intended and expected joint outcome. Thus agent A's action of choosing C and carrying it out, accompanied by B's action of choosing D and carrying it out, together with the resulting joint outcome CD, is the entity assumed to be valued by the agents from their own perspective (in the I-mode). The joint outcome may, but need not, involve their shared goal. Generally speaking, the agent prefers one outcome to another if and only if he wants the former to be realized more strongly than he wants the latter to be realized.
    b) I will assume in this context that we can use well-grounded conditional probabilities. However, in general there seems to be some reason to employ causal conditionals in the arguments (cf. Gibbard and Harper, 1978, and Tuomela, 1984, esp. Chapter 12). Speaking in non-formal terms, what we should thus use are probability sentences of roughly the following kind: p(outcome CC would be "purposively generated", viz., generated in the intended way, should A do X intentionally for the rational intention-belief reason). This is the probability of a causal conditional statement being true.
    c) The probabilities here have been regarded as independent of utilities. This is a simplification, since wants and beliefs may in fact interact; e.g., we seem to have a tendency to "believe what we want".
    d) Note that the discussion in this section assumes that goal G is strictly (i.e., with probability one) generated by the CC outcome. In a more refined treatment we will need to take into account probabilities of the kind p(G/CC), p(G/CD), etc. (a schematic version of the resulting expected-utility formulas is sketched after these notes).
    Let us next consider what happens if we remove or change the special assumptions concerning goal G and its achievement. First, assume that G is a private good, viz., a divisible good. Next we assume that if a group member does not participate (contribute), he will be excluded from the consumption of the good: he does not get a slice of the cake. Here the person who does not participate is socially sanctioned and receives the negative utility m, assuming that G can be produced alone or without this person. In the case of mutual defection both are socially sanctioned, but, as earlier, the sanction can have a different value (m*). (In cases where the free rider still succeeds in getting his slice of the cake, our first schema, meant for a public good, applies.) The defector gets nothing but a social sanction. In the special cases discussed here, the expected utilities of the actions can of course be computed as in the case of public goods producible alone.

  4. It can be shown for two-person games that if the participants have shared knowledge of the structure of the game, of the rationality of the players, and of the above kind of probabilistic beliefs, then a mixed-strategy equilibrium results; full-blown mutual knowledge ("common knowledge" in game theorists' terminology) is not needed (see Brandenburger, 1992). In some cases mutual cooperation can accordingly be such an equilibrium. The notion of joint equilibrium that I will define in Appendix 1 is not a mixed-strategy equilibrium.

  5. Let us consider the general finite Centipede. How does the agents' practical reasoning go in the case of some arbitrary node j versus j+1? Suppose there are k nodes and j+1 < k. First, how do we define a Centipede in the relevant sense? Here l_j means the down (or left) choice and r_j the right choice at node j by the player whose choice point node j is. The end node of the game is k. Assumption b) is an analog of the mutual cooperation or CC-outcome of a PD, assumed to be Pareto-preferred to the DD outcome. At node k-1 (the last choice point) a rational player need not assume anything about the other player, as the choice is not a strategic one, but chooses the dominant down choice l_{k-1} without further ado.
    Let me note that game-theoretical textbooks typically claim that in a finitely repeated PD, and also in a Centipede, the backward induction argument works. This argument is supposed to prove that it is rational to defect at all stages, including the first (e.g., see Bicchieri, 1993, and Sobel, 1994). (A schematic backward-induction computation for a Centipede is sketched after these notes.) Let me briefly consider the backward induction argument, following largely Sobel's (1994) discussion. Sobel argues, against some other authors, that the backward induction argument works with the subjunctive interpretation of conditionals but not with the material interpretation. Applied to a Centipede, the argument says this (cf. Sobel, 1994, p. 349): Ideally rational and well-informed players will, and would whatever they had done in previous rounds, defect at the last stage k of a Centipede (or, respectively, in round k of a sequence of Prisoner's Dilemmas). Next, for every j such that 1 < j < k, if ideally rational and well-informed players will (and would whatever they had done at previous stages) defect at stages j+1 through k, then they will (and would whatever they had done at previous stages) defect at stage j. Further, for every j such that 1 < j < k, if ideally rational and well-informed players will (and would whatever they had done at previous stages) believe that they will, and so would (whatever they had done at previous stages), defect at stages j+1 through k, then they will (and would whatever they had done at previous stages) defect at stage j. Hence, ideally rational and well-informed players will defect at every stage.
    While I will not comment further on the matter here, let me say that one basic point is that an ideal case can be defined by using subjunctive conditions along with indicative ones, and the antecedents of subjunctive conditions can be incompatible with implications of the conditions for the case (e.g., the assumption that a certain stage requiring the straight choice has been reached). The present backward induction argument is meant to hold for the ideal case. The situation with real human beings, including well-informed and "normally humanly rational" persons, is different, and the present argument says nothing about them. I wish to emphasize that my argument for "defection" (moving down) at all stages is not a backward induction argument but is a consequence of the "theory of the game", given that a feasible weaker interpretation of the connection between D-rationality and rational choice is used than that used above in premises 3) and 4) of the earlier inconsistency proof.

  6. Scanlon (1990) discusses promises and principles weaker than promising which nevertheless lead to a kind of moral obligation, e.g., the obligation of the first farmer to pay back his debt to the second one in Hume's example discussed earlier.

  7. McClennen’s (1998) resolute choice model is in many respects similar to mine, at least in spirit. His distinction between “myopic” and “sophisticated” decision-makers resembles my distinction between short-term and long-term rational agents and my account of long-term rationality is also incompatible with the game-theoretic principle of “separability”. However, he does not find any need for collective reasons in my present wide sense for solving the Centipede and seems to be concerned only with solutions relying on a commonly accepted social norm (“practice”) to cooperate. My approach does not require such a norm. Hollis (1998) in his long and historically well-informed discussion advocates yet a different kind of approach, which criticizes the utility-maximization conception of rationality.

  8. See the discussion and evidence presented in Sugden (1993).

  9. After finishing the present chapter, I came across the book by Danielson (1992), which discusses similar questions, but from the quite different standpoint of "artificial moral agents". He presents computer simulations in which robots designed to implement different kinds of decision strategies play an extended PD (a two-move Centipede) and a sequential, two-move Chicken. He shows that there are strategies which can produce different and opposite results (in the sense of the present section). No probabilistic considerations are present in his treatment. He argues, for example, that in the case of the Centipede the strategies of "universal cooperation", "conditional cooperation", "reciprocal cooperation", and "empirical straightforward maximization", when playing pairwise against each other, will yield mutual cooperation (see Danielson, 1992, e.g., p. 152). In the case of the extended Chicken the strategies "narrow cooperation", "narrower cooperation", "broad cooperation", and "liberal broad cooperation" analogously achieve mutual cooperation (see Danielson, 1992, Chapter 9). (A toy illustration of conditional cooperation in a two-move extended PD is sketched after these notes.) Relating Danielson's results to mine, the cooperation-yielding strategies just listed belong to a CT-rational agent's arsenal, but that arsenal is of course not exhausted by them.

  10. Let me give yet another similar example. Skyrms (1994) discusses the case of Max and Moritz in a PD situation. He takes the result to be a problem for Jeffrey, as it is based on "voodoo decision theory". However, if the arguments presented in this chapter are tenable, Skyrms's analysis of the situation is not warranted. The present Max and Moritz type of PD structure can well be (and instances of it often are) a meaningful situation that can be handled in terms of joint equilibria in the defined sense.

  11. My discussion below has benefited considerably from comments by Kaarlo Miller and from reading Robins’s (1997) paper. Bratman (1998) complements and supports my present points.

  12. Letting I and B stand for intention and belief, respectively, and p for drinking toxin and m for receiving a million, we can simply formulate a)-c).
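
The following is a schematic, illustrative way of writing out the expected utilities alluded to in note 3, for the simple two-person, two-choice case and for the "more refined treatment" that weights outcomes by goal probabilities such as p(G/CC). The symbols EU and u and the additive form are my own shorthand for the sketch, not the book's official formulation.

```latex
% Illustrative sketch only (cf. note 3): agent A's expected utilities of choosing C or D.
% p(. | C_A) is A's subjective probability of a joint outcome given that A chooses C;
% u(.) is A's want-based utility of a joint outcome.
\begin{align*}
  EU_A(C) &= p(CC \mid C_A)\,u(CC) + p(CD \mid C_A)\,u(CD)\\
  EU_A(D) &= p(DC \mid D_A)\,u(DC) + p(DD \mid D_A)\,u(DD)
\end{align*}
% In the refined treatment of note 3 d), the utility of an outcome is weighted by the
% probability that it generates the goal G, e.g. u(CC) is replaced by
% p(G \mid CC)\,u(G) + (1 - p(G \mid CC))\,u(\neg G).
```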
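
Note 5 appeals to the textbook backward-induction argument for the Centipede. Below is a minimal sketch, in Python, of that computation under hypothetical Rosenthal-style payoffs chosen purely for illustration; the payoff scheme, function names, and tie-breaking rule are my assumptions, not the book's. Under these assumptions the individually rational plan is the down choice at every node, i.e., "defection" already at node 1.

```python
# Illustrative backward-induction sketch for a finite Centipede (cf. note 5).
# Hypothetical payoff scheme chosen only for this example: choosing "down" at node j
# gives the mover j + 1 and the other player j - 1; if every node is passed (the
# "right" choice throughout), both players receive k.  Thus mutual "right" play
# Pareto-dominates an immediate "down" at node 1, as in the note's assumption b),
# yet backward induction still recommends "down" everywhere.

def mover(j: int) -> int:
    """Player whose choice point node j is: player 1 at odd nodes, player 2 at even nodes."""
    return 1 if j % 2 == 1 else 2

def payoffs_down(j: int) -> tuple[int, int]:
    """Payoff pair (player 1, player 2) if the mover chooses down at node j."""
    return (j + 1, j - 1) if mover(j) == 1 else (j - 1, j + 1)

def backward_induction(k: int) -> dict[int, str]:
    """Individually rational choice ("down" or "right") at each of the k nodes."""
    plan: dict[int, str] = {}
    continuation = (k, k)              # payoffs if play passes through every node
    for j in range(k, 0, -1):          # work backwards from the last choice point
        down = payoffs_down(j)
        m = mover(j)
        # The mover compares his own payoff from going down now with his payoff
        # under the already-computed rational continuation of the game.
        if down[m - 1] >= continuation[m - 1]:
            plan[j] = "down"
            continuation = down        # rational play from node j onwards ends here
        else:
            plan[j] = "right"          # continuation stays: play passes through node j
    return plan

if __name__ == "__main__":
    print(backward_induction(6))       # every node gets 'down', including node 1
```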
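
Finally, a toy illustration in the spirit of note 9. It is not Danielson's code or his exact strategy definitions; the two-move extended PD, the payoff numbers, and the policy functions below are my own simplifications, assuming (as Danielson does for his artificial agents) that dispositions are transparent to the other player. The point illustrated is merely that a conditionally cooperative second mover elicits mutual cooperation where a straightforward maximizer elicits mutual defection.

```python
# Toy illustration in the spirit of note 9; NOT Danielson's definitions or code.
# A two-move extended PD: player A moves first, player B observes A's move and replies.
# B's disposition is assumed to be transparent, so A can anticipate B's policy.

PAYOFFS = {                       # (A's payoff, B's payoff), a standard PD ranking
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def conditional_cooperator(first_move: str) -> str:
    """B cooperates exactly when A has cooperated."""
    return "C" if first_move == "C" else "D"

def straightforward_maximizer(first_move: str) -> str:
    """B plays the dominant move D no matter what A did."""
    return "D"

def best_first_move(b_policy) -> str:
    """A, knowing B's policy, picks the first move that maximizes A's own payoff."""
    return max(("C", "D"), key=lambda a: PAYOFFS[(a, b_policy(a))][0])

def play(b_policy) -> tuple[str, str]:
    """Play one extended PD against the given policy for B."""
    a = best_first_move(b_policy)
    return (a, b_policy(a))

if __name__ == "__main__":
    print(play(conditional_cooperator))     # ('C', 'C'): mutual cooperation
    print(play(straightforward_maximizer))  # ('D', 'D'): mutual defection
```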

Copyright information

© 2000 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Tuomela, R. (2000). Rational Cooperation and Collective Reasons. In: Cooperation. Philosophical Studies Series, vol 82. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-9594-0_11

  • DOI: https://doi.org/10.1007/978-94-015-9594-0_11

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5411-1

  • Online ISBN: 978-94-015-9594-0
