Exploiting Myopic Learning

  • Mohamed Mostagir
Conference paper

DOI: 10.1007/978-3-642-17572-5_25

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6484)
Cite this paper as:
Mostagir M. (2010) Exploiting Myopic Learning. In: Saberi A. (eds) Internet and Network Economics. WINE 2010. Lecture Notes in Computer Science, vol 6484. Springer, Berlin, Heidelberg

Abstract

We show how a principal can exploit myopic social learning in a population of agents in order to implement social or selfish outcomes that would not be possible under the traditional fully rational agent model. Learning in our model takes a simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. We show that, for a large class of games, the principal can always obtain strictly better outcomes than the corresponding Nash solution, and we explicitly specify how such outcomes can be implemented. The methods are general enough to accommodate many scenarios, and powerful enough to generate predictions that accord with some empirically observed behavior.
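
The abstract names imitation, or replicator dynamics, as the agents' learning rule. As a rough illustration only, and not the paper's own model, the sketch below simulates the textbook discrete-time replicator dynamic on a hypothetical 2x2 Prisoner's Dilemma payoff matrix; the dominated strategy's population share dies out, so play converges to the stage game's Nash equilibrium, the benchmark that the principal is shown to improve upon.

    import numpy as np

    # Illustrative sketch only: textbook discrete-time replicator (imitation) dynamics.
    # The payoff matrix and step size are hypothetical and do not come from the paper.
    A = np.array([[3.0, 0.0],   # row 0: Cooperate vs (Cooperate, Defect)
                  [5.0, 1.0]])  # row 1: Defect    vs (Cooperate, Defect)

    def replicator_step(x, A, dt=0.1):
        """One Euler step of x_i' = x_i * (f_i - f_bar)."""
        f = A @ x          # expected payoff of each pure strategy
        f_bar = x @ f      # population-average payoff
        x_next = x + dt * x * (f - f_bar)
        return x_next / x_next.sum()  # renormalize against numerical drift

    x = np.array([0.9, 0.1])  # initial population shares
    for _ in range(500):
        x = replicator_step(x, A)

    print(x)  # approximately [0, 1]: all-Defect, the Nash equilibrium of the stage game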

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Mohamed Mostagir
  1. Social and Information Sciences Laboratory, California Institute of Technology
