
Long-Term Cooperation

Chapter in: Cooperation

Part of the book series: Philosophical Studies Series (PSSP, volume 82)


Abstract

In this chapter our discussion of the possibilities of obtaining solutions to collective action dilemmas will be continued, primarily by reference to long-term situations. First, the possibilities offered by recent game-theoretical results concerning cooperative solutions in the case of rational agents will be considered and evaluated (Section I). Different types of solutions, such as (a) “external” versus “internal” and (b) “education-based” versus “control-based” ones, will be discussed in Section II. The results offered by recent experimental research concerning how real human beings behave in dilemma situations will be considered in Section III. Section IV looks for help from evolutionary game theory, which does not impose strong rationality requirements on actors. This section also discusses the relationship between correlated evolutionary game theory and the account of cooperation developed in this book. The chapter will mostly be concerned with i-cooperation.


Notes

  1. Mor and Rosenschein (1995) start with a PD played for a fixed amount of time (intervals between clock ticks). In other words, mutual cooperation even in the case of (rational) egoists is shown to be rational in the stability (equilibrium) sense when the possibility of opting out is added to a PD and a time limit is imposed on how long choices may be contemplated. No matter how idealized the conditions are, this is of course an interesting result, in view of how difficult it is under any circumstances to cooperate rationally (i-rationally) in a single-shot PD within a standard game-theoretical setting.


  2. As to the underlying psychological motivation, I have elsewhere advocated the so-called achievement theory of motivation (Tuomela, 1984; also cf. Chapter 3 above). According to it, the basic underlying psychological motivational factors are the motives to achieve, to gain (social and other) power, and to be socially affiliated (and, e.g., to be socially accepted and to receive justice). These may combine, in a context of suitable education and environmental influence, to result in persons and collectives to which the we-ness approach is basically applicable. Surely there are selfish limits to we-ness, but it is hard to specify what indeed they are. One may try to speculate, much as Pettit (1996) does, that although people in general behave in largely non-selfish ways, self-interested considerations are “virtually” (although not actually) present. According to him, people normally are cooperative until the cooperation becomes too costly, viz., goes too strongly against the satisfaction of central selfish desires. One could say that the selfish desires (at least those related to primitive needs) set limits of sorts to cooperativeness: exceed those limits and the “alarms of selfishness start ringing”, requiring a change of behavior. More precisely, the virtual presence of selfish considerations is characterized by Pettit in terms of the following conditions (p. 68): “1. The agent does what he or she does for certain nonegocentric reasons, so that self-interest has no actual presence, explicit or implicit, in his or her deliberations. 2. But what the agent does is more or less satisfactory — the criterion of satisfactoriness may be variable — in self-interested terms; it serves self-interest reasonably well. 3. Moreover, if what the agent did as a result of nonegocentric considerations were not satisfactory in this way, then this would cause the agent to begin thinking in self-interested terms and, in all likelihood, to adjust his or her behavior accordingly.” While I find this largely acceptable, I would like to add the requirement that self-interest be an underlying sustaining cause satisfying Pettit’s three conditions. His model fits well what I have said about acting for collective reasons, except that he does not operate with my distinction between other-regarding and “group-regarding” reasons (recall Chapter 11). Pettit’s paper, which came to my attention after the present book had already been written, contains an interesting discussion of “complier-centered” and “deviant-centered” ways of institutional design (also cf. Pettit, 1997). They bear a resemblance to my notions of the “soft line” and the “rough line”.


  3. I argue in Tuomela (1995, Chapters 4 and 10), in relation to what has just been said, that a central necessary condition for the existence of social institutions (and hence society) is precisely the existence of groups which function well (and of the related decision-making systems, “authority-systems” as I have called them), because they will help to eliminate collective action dilemmas and free riders. Agreement-based joint intentions tend to bind people to their agreed-upon collective tasks and projects and, because of their binding force, help combat free riding. Alternatively, or in addition, the s-norm approach can similarly be used, and it can lead to “cheap” solutions, as seen. Thus suitable agreements and mutual beliefs, resulting in relevant trust, can get a group going — viz., obeying the joint decisions made by the group members or their representatives. Free riding will still normally remain a threat to social stability. It is worth noting that while authority-systems can be used for the intentional construction and maintenance of social institutions, their use can also generate unintended joint effects which may or may not be beneficial for the existence of social institutions. This is a phenomenon which may relate both to intentional and to unintentional free riding.


  4. In a review of experimental results concerning “prosocial” behavior, Pruitt and Kimmel (1977, p. 384) summarize the research findings relevant to altruistic behavior in Prisoner’s Dilemma situations. These findings are relevant both to the creation of r-institutions and to that of s-institutions. According to them, the goal of achieving mutual cooperation becomes “more likely to the extent that: (a) one has had experience with the situation over time, especially if this experience has involved mutual noncooperation; (b) one has had time to think or has otherwise been stimulated into examining his experience; (c) the PD (prisoner’s dilemma game) is decomposed so that one must look to the other for his best outcomes; (d) high outcomes are associated with mutual cooperation; (e) low outcomes are associated with exploitation and mutual noncooperation; (f) mutual cooperation yields equitable outcomes; (g) decisions can be reversed as long as either party is dissatisfied with his outcomes; (h) the other party employs a tit-for-tat strategy, especially if it involves slow reciprocation of newly cooperative behavior; (i) the parties communicate with each other; (j) one sees oneself as weaker than the other party; (k) one anticipates continued interaction with the other; and (l) one’s aspirations are so high that the other’s cooperation is apparently needed to achieve them.” The expectation of (future) cooperation from the other is “stronger to the extent that (a) the other has recently cooperated with oneself or another party; (b) the other has consistently cooperated; (c) one has sent a message requesting cooperation or received one assuring cooperation; (d) one knows that the other’s incentives or instructions favor cooperation; (e) the other is seen as dependent on oneself; (f) the other employs a tit-for-tat strategy involving slow retaliation when one fails to cooperate; and, assuming that one has adopted a goal of mutual cooperation, (g) the other is seen as similar to oneself or as a friend.” The maximal rate of cooperative action is to be found when both the goal and the expectation of cooperation are present; it is minimal when neither is present.


  5. In this note I will give a brief technical exposition of evolutionary game theory. This review is largely based on the discussion by Skyrms (1994, 1996). In evolutionary game theory payoffs are given in terms of evolutionary fitness (expected number of offspring). Furthermore, evolutionary game theory concerns populations of agents rather than single agents, and it concerns evolution (the dynamic case) rather than a single-shot case. The payoff for an individual playing strategy Ai against one playing strategy Aj is written as u(Ai/Aj). The population is assumed to be very large (effectively infinite). Individuals are paired at random in standard evolutionary game theory but are regarded as correlated in Skyrms’s account. Let us write p(Ai) for the proportion of the population playing strategy Ai. This is also the probability that an individual playing Ai is randomly paired with a partner selected from the population. The expected fitness for an individual playing Ai is determined by averaging over all the strategies that Ai may be played against: u(Ai) = Σj p(Aj) u(Ai/Aj).
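
     To fix ideas, the following minimal Python sketch computes expected fitness under random pairing exactly as in the formula above. The strategy names and payoff numbers are illustrative assumptions (a PD-style payoff table), not values taken from the text.

         # Expected fitness under random pairing:
         # u(Ai) = sum over j of p(Aj) * u(Ai/Aj).
         # Payoff table and population proportions are illustrative assumptions.

         payoff = {                      # payoff[(ai, aj)] = u(ai/aj)
             ("C", "C"): 3, ("C", "D"): 0,
             ("D", "C"): 5, ("D", "D"): 1,
         }
         p = {"C": 0.6, "D": 0.4}        # population proportions, summing to 1

         def expected_fitness(ai, p, payoff):
             """Average the payoff of playing ai against a randomly drawn opponent."""
             return sum(p[aj] * payoff[(ai, aj)] for aj in p)

         for ai in p:
             print(ai, expected_fitness(ai, p, payoff))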


  6. The average fitness of the population, U, is calculated by averaging over the strategies: U = Σi p(Ai) u(Ai). If the population is large enough, then the expected number of offspring to individuals playing strategy Ai, viz., u(Ai), is with high probability close to the actual number of offspring. Often the expected number of offspring to individuals playing a strategy is simply identified with the actual number of offspring. The evolutionary model now assumes that the proportion of the population playing a strategy in the next generation, p’(Ai), is equal to: p’(Ai) = p(Ai) u(Ai)/U. Considered as a dynamical system with discrete time, the population evolves according to the difference equation: p’(Ai) − p(Ai) = p(Ai)[u(Ai) − U]/U. This is an empirical assumption, which says that the proportion of the population which plays a given strategy changes in direct proportion to its relative payoff (positive or negative difference from the average payoff). If the time between generations is small, the above equation may be approximated by a continuous dynamical system governed by the differential equation: dp(Ai)/dt = p(Ai)[u(Ai) − U]/U. Provided the average fitness of the population is positive, the orbits of this differential equation on the simplex of population proportions for the various strategies are the same as those of the simpler differential equation dp(Ai)/dt = p(Ai)[u(Ai) − U], although the velocity along the orbits may differ. This equation represents the replicator dynamics. A dynamic equilibrium is a fixed point of the dynamics under consideration. An equilibrium is stable if points near to it remain near, and it is strongly stable if nearby points tend toward it. It can be shown that every evolutionarily stable state is a strongly stable equilibrium point in the replicator dynamics, but not conversely. An evolutionarily stable strategy is an attractor in the replicator dynamics.
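
     The discrete dynamics p’(Ai) = p(Ai) u(Ai)/U can be iterated directly. A minimal sketch, again with the assumed PD-style payoffs of the previous note (not values from the text):

         # Discrete replicator dynamics: p'(Ai) = p(Ai) * u(Ai) / U,
         # where U is the average fitness of the population.
         # Payoffs are illustrative PD-style assumptions.

         payoff = {
             ("C", "C"): 3, ("C", "D"): 0,
             ("D", "C"): 5, ("D", "D"): 1,
         }

         def step(p, payoff):
             # Expected fitness of each strategy under random pairing.
             u = {ai: sum(p[aj] * payoff[(ai, aj)] for aj in p) for ai in p}
             U = sum(p[ai] * u[ai] for ai in p)          # average fitness
             return {ai: p[ai] * u[ai] / U for ai in p}  # next generation

         p = {"C": 0.6, "D": 0.4}
         for generation in range(20):
             p = step(p, payoff)
         print(p)  # with these PD payoffs, defection heads toward fixation

     With PD payoffs and random pairing, defection is the unique attractor; this is the background against which Skyrms's correlated pairing (next note) matters.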


  7. Using Jeffrey’s approach, Skyrms takes a pure strategy to be ratifiable if it maximizes expected fitness when it is on the “brink” of fixation. Consider thus the probability measure that an individual would have “on the brink” of performing action Ai, and let U(Ai) = Σj p(Aj/Ai) u(Ai/Aj) be the Jeffrey expected utility calculated according to this probability; this is taken to be the expected fitness for an individual playing Ai. Act Ai is said to be ratifiable just in case U(Ai) > U(Aj) for all j different from i. A strategy is adaptively ratifiable if throughout some neighborhood of its point of fixation it has higher fitness than the average fitness of the population. If a strategy is adaptively ratifiable, then it is a strongly stable (attracting) equilibrium in the replicator dynamics (see note 6). The replicator dynamics is basically as in the original case, except that utility is defined and computed as Jeffrey expected utility in terms of pairing proportions.
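
     As a rough illustration (not Skyrms’s own calculation), the sketch below computes Jeffrey expected utility with correlated pairing, replacing the unconditional proportions p(Aj) with assumed conditional pairing probabilities p(Aj/Ai); the correlation numbers are pure assumptions chosen for illustration:

         # Jeffrey-style expected utility with correlated pairing:
         # U(Ai) = sum over j of p(Aj/Ai) * u(Ai/Aj),
         # where p(Aj/Ai) is the chance of meeting Aj given that one plays Ai.
         # Conditional probabilities below are illustrative assumptions.

         payoff = {
             ("C", "C"): 3, ("C", "D"): 0,
             ("D", "C"): 5, ("D", "D"): 1,
         }
         pair = {                        # positive correlation: like meets like
             "C": {"C": 0.8, "D": 0.2},  # p(./C)
             "D": {"C": 0.2, "D": 0.8},  # p(./D)
         }

         def jeffrey_utility(ai, pair, payoff):
             return sum(pair[ai][aj] * payoff[(ai, aj)] for aj in pair)

         print(jeffrey_utility("C", pair, payoff))  # 0.8*3 + 0.2*0 = 2.4
         print(jeffrey_utility("D", pair, payoff))  # 0.2*5 + 0.8*1 = 1.8

     With this much correlation cooperation outscores defection, which is the intuitive point of the correlated account: sufficient correlation can make cooperation the fitter strategy even in a PD.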


  8. It can be noted here that Gauthier’s (1986, p. 167) idea of constrained maximization (CM) bears a resemblance to (SIM) and to our earlier discussion in Chapter 10 concerning the transformation of given utilities into final ones. Concentrating on the “transparent” case (viz., the case with full knowledge about the participants’ choices) and speaking about choices in a PD, Gauthier’s transformation principle defining CM is: choose C if your partner chooses C, but choose D if your partner chooses D (viz., is a “straightforward maximizer” (SM) in Gauthier’s account). ((SIM) analogously accepts cooperation-cooperation and defection-defection pairs.) As has been repeatedly noted in the literature, his account faces the kind of deconditionalization problem discussed in the appendix to Chapter 4, which my account, especially in view of the Bulletin Board view of the adoption of collective goals, has been argued to avoid. The conflict between constrained and straightforward maximization is often in the literature taken to be equivalent to a metagame of the following kind (cf. Bicchieri, 1993; Franssen, 1994). In this metagame there are two equilibria, (CM, CM) and (SM, SM), of which (CM, CM) is dominant. This fact corresponds to (and replaces) premise 2) in the practical reasoning discussed in the text. In conjunction with premise 1) we get the entailment that it is rational for both players to cooperate (choose C) even in a single-shot case. Although this is a formally flawless argument, one may question the idea of excluding the possibility of free riding “by definition”, so to speak. The interaction situation here is defined in a way which makes free riding impossible (cf. (SIM)). Thus, a constrained maximizer is a person who never attempts to free ride, although he is disposed to be involved in mutual defection (if the other one is a straightforward maximizer). The dichotomy generated by the distinction CM/SM is not a genuine one. The principle of constrained maximization applies to some (“transparent”) situations but conflicts with “objective” rationality in that it a priori excludes free riding. This concurs with what was said about (SIM). (As to the “translucent” cases, in which the other’s choices are known only with some probability, see, e.g., Franssen (1994). Those cases are “messy” — under some conditions cooperation is rational and under others it is not. I will not discuss the matter here.)
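
     To see concretely why both (CM, CM) and (SM, SM) come out as equilibria of such a metagame, here is a small sketch. The metagame payoffs are derived from illustrative PD values (3 for mutual cooperation, 1 for mutual defection); they are assumptions for exposition, not Gauthier’s or Bicchieri’s numbers.

         # The CM/SM metagame in the transparent case.
         # CM meets CM -> both cooperate        -> (3, 3)
         # CM meets SM -> CM matches the defector -> (1, 1)
         # SM meets SM -> both defect            -> (1, 1)
         # Payoff values are illustrative assumptions.

         meta = {
             ("CM", "CM"): (3, 3),
             ("CM", "SM"): (1, 1),
             ("SM", "CM"): (1, 1),
             ("SM", "SM"): (1, 1),
         }
         strategies = ["CM", "SM"]

         def is_equilibrium(row, col):
             """No player gains by unilaterally switching strategy."""
             r, c = meta[(row, col)]
             return (all(meta[(d, col)][0] <= r for d in strategies) and
                     all(meta[(row, d)][1] <= c for d in strategies))

         for s in strategies:
             for t in strategies:
                 if is_equilibrium(s, t):
                     print((s, t), meta[(s, t)])
         # Prints (CM, CM) and (SM, SM); (CM, CM) dominates with (3, 3).

     Note how the table itself encodes the exclusion of free riding complained about in the text: there is no profile on which a defector exploits a cooperator.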



Copyright information

© 2000 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Tuomela, R. (2000). Long-Term Cooperation. In: Cooperation. Philosophical Studies Series, vol 82. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-9594-0_12


  • DOI: https://doi.org/10.1007/978-94-015-9594-0_12

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5411-1

  • Online ISBN: 978-94-015-9594-0

