Our Pavlov learns, by conditioned response through rewards and punishments, to cooperate or defect. We analyze the behavior of extended-play Prisoner's Dilemma games between Pavlov and various opponents and compute the time and cost required to train Pavlov to cooperate. Among our results: Pavlov and his clone learn to cooperate more rapidly than Pavlov playing against the Tit for Tat strategy. This fact has implications for the evolution of cooperation.
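The qualitative result above can be illustrated with a minimal sketch. Note the assumptions: the paper's Pavlov is a stochastic learner trained by rewards and punishments, whereas this sketch uses the simpler deterministic "win-stay, lose-shift" variant with standard payoffs T > R > P > S; the function names and starting conditions are illustrative, not taken from the paper.

```python
# Deterministic win-stay/lose-shift Pavlov vs. a clone and vs. Tit for Tat,
# starting from mutual defection. This is a simplified stand-in for the
# paper's stochastically learning Pavlov.

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def payoff(me, other):
    """Payoff to `me` when each move is 'C' (cooperate) or 'D' (defect)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(me, other)]

def pavlov(my_last, opp_last):
    """Win-stay, lose-shift: repeat the last move iff it earned R or T."""
    if payoff(my_last, opp_last) >= R:
        return my_last
    return 'C' if my_last == 'D' else 'D'

def tit_for_tat(my_last, opp_last):
    """Copy the opponent's previous move."""
    return opp_last

def rounds_until_cooperation(strat_a, strat_b, start=('D', 'D'), limit=100):
    """Rounds played before the first mutually cooperative round, or None."""
    a, b = start
    for rnd in range(limit):
        if (a, b) == ('C', 'C'):
            return rnd
        a, b = strat_a(a, b), strat_b(b, a)
    return None

print(rounds_until_cooperation(pavlov, pavlov))       # -> 1 (clone: one bad round, then C forever)
print(rounds_until_cooperation(pavlov, tit_for_tat))  # -> None (locked in a D-C cycle)
```

Against its clone, one round of mutual punishment flips both players to cooperation, which then persists; against Tit for Tat, the pair falls into a three-round cycle of alternating defection and never reaches sustained cooperation, matching the abstract's comparison.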
Keywords: game theory, prisoner's dilemma, Markov chain, evolution of cooperation