Internet and Network Economics

Volume 6484 of the series Lecture Notes in Computer Science pp 306-318

Exploiting Myopic Learning

  • Mohamed Mostagir, Social and Information Sciences Laboratory, California Institute of Technology



We show how a principal can exploit myopic social learning in a population of agents in order to implement social or selfish outcomes that would not be possible under the traditional fully-rational agent model. Learning in our model takes a simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. We show that, for a large class of games, the principal can always obtain strictly better outcomes than the corresponding Nash solution, and we explicitly specify how such outcomes can be implemented. The methods applied are general enough to accommodate many scenarios, and powerful enough to generate predictions that allude to some empirically observed behavior.
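To illustrate the kind of learning dynamic the abstract refers to, the sketch below simulates discrete-time replicator (imitation) dynamics in a symmetric 2x2 Prisoner's Dilemma. This is a generic textbook example of replicator dynamics converging to a Nash equilibrium, not the paper's specific model or payoffs; the payoff matrix and initial shares are assumptions for illustration only.

```python
import numpy as np

# Illustrative payoff matrix for a Prisoner's Dilemma (assumed, not
# from the paper). Row strategy's payoff against column strategy:
A = np.array([[3.0, 0.0],   # Cooperate vs (Cooperate, Defect)
              [5.0, 1.0]])  # Defect    vs (Cooperate, Defect)

x = np.array([0.9, 0.1])    # initial population shares (C, D); assumed
for _ in range(200):
    f = A @ x               # expected payoff of each strategy
    x = x * f / (x @ f)     # imitate in proportion to relative payoff

# The population converges to all-Defect, the unique Nash equilibrium.
print(x.round(3))
```

Because Defect strictly dominates, its payoff always exceeds the population average, so its share grows every round; the dynamic drives the population to the Nash equilibrium even though no agent reasons strategically, which is the behavior a principal could anticipate and exploit.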