The Evolution of Stable Strategies
The theory of games has been applied for nearly four decades to models of economic, social, and biological behaviour, originally through the notion of optimal strategy and more recently through the notions of Nash equilibrium and evolutionarily stable strategy, the latter introduced by Maynard Smith (1974). In these models the choice of strategy by the two agents is governed by the assumption that they are in competition for the advantages at stake, and by the further assumption that their choices are made with full knowledge and understanding of the mathematical analysis behind the models. Yet the role of models in which strategies are arrived at through cooperation rather than competition between the two agents is greatly underestimated in some quarters, as the recent book by Boorman and Levitt (1980) attests. Moreover, the assumption that both agents will adopt the most desirable strategies immediately, as soon as the mathematical analysis of the model is made manifest to them, sits uneasily alongside the assumptions found in other branches of modelling, where optimal solutions are reached gradually through trial and error. The original formulation of evolutionarily stable strategies introduced a dynamical point of view into the static theory of games by explaining why deviations from stable strategies should be selected out of the population through evolution. A truly dynamic convergence to the desired strategies, however, has only recently begun to emerge in the work of authors such as Hines (1980a,b).
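The dynamical point of view sketched above can be made concrete with a standard illustration not taken from this text: discrete-time replicator dynamics for the Hawk-Dove game, where a population starting away from the mixed evolutionarily stable strategy is driven back toward it as deviant strategies are selected out. The payoff values V and C below are assumed for illustration only.

```python
# Replicator dynamics for the Hawk-Dove game: an illustrative sketch,
# not the author's model. Assumed parameters: V = 2 (resource value),
# C = 4 (cost of fighting), giving a mixed ESS at p(Hawk) = V/C = 0.5.

V, C = 2.0, 4.0
# Payoff matrix: rows = focal strategy (Hawk, Dove), columns = opponent.
A = [[(V - C) / 2, V],
     [0.0,         V / 2]]

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    fitness = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    mean_fit = sum(x[i] * fitness[i] for i in range(2))
    return [x[i] + dt * x[i] * (fitness[i] - mean_fit) for i in range(2)]

# Start well away from the ESS; the deviation decays through selection.
x = [0.9, 0.1]           # 90% Hawks initially
for _ in range(10000):
    x = replicator_step(x)

print(round(x[0], 3))    # Hawk frequency converges toward V/C = 0.5
```

The selection pressure here is frequency-dependent: when Hawks are too common their fitness falls below the population mean and their share shrinks, which is exactly the sense in which deviations from the stable strategy are "selected out" rather than ruled out by static analysis.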
Keywords: Nash equilibrium, probability density function, pure strategy, relative advantage, option matrix
- Maynard Smith, J. (1976): Evolution and the theory of games, Amer. Scientist 64: 41-45.