Exploiting Myopic Learning
We show how a principal can exploit myopic social learning in a population of agents to implement social or selfish outcomes that would not be possible under the traditional fully rational agent model. Learning in our model takes a simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. We show that, for a large class of games, the principal can always obtain strictly better outcomes than the corresponding Nash solution, and we explicitly specify how such outcomes can be implemented. The methods are general enough to accommodate many scenarios, and powerful enough to generate predictions consistent with some empirically observed behavior.
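The replicator dynamics mentioned above can be sketched in a few lines. The following is a minimal illustration, not the paper's model: the payoff matrix (a Prisoner's Dilemma, shifted so all payoffs are positive), the initial population mix, and the step count are all assumptions chosen to show convergence to a Nash equilibrium.

```python
# Discrete-time replicator dynamics for a symmetric 2x2 game.
# Illustrative sketch only; the game and parameters are assumptions,
# not taken from the paper.

def replicator(A, x, steps=200):
    """Iterate x_i <- x_i * f_i / f_bar, where f = A x gives each
    strategy's expected payoff against the current population and
    f_bar is the population-average payoff. Strategies that earn
    above average grow; payoffs must be positive for the update
    to be well defined."""
    n = len(x)
    for _ in range(steps):
        f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        f_bar = sum(x[i] * f[i] for i in range(n))
        x = [x[i] * f[i] / f_bar for i in range(n)]
    return x

# Prisoner's Dilemma (strategies: Cooperate, Defect), payoffs shifted positive.
A = [[3, 1],
     [5, 2]]
x = replicator(A, [0.9, 0.1])  # even a mostly-cooperative start
# converges to the all-Defect Nash equilibrium, x close to [0, 1]
```

Since Defect strictly dominates Cooperate here, its population share grows monotonically under the replicator update, matching the abstract's point that such dynamics often converge to a Nash equilibrium of the underlying game.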