Spread of Cooperation in Complex Agent Networks Based on Expectation of Cooperation

  • Ryosuke Shibusawa
  • Tomoaki Otsuka
  • Toshiharu Sugawara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9862)

Abstract

This paper proposes a behavioral strategy called expectation of cooperation, which, combined with Q-learning, enables cooperation in the prisoner’s dilemma game to spread over agent networks. Recent advances in computer and communication technologies enable intelligent agents to operate on small, handy computers such as mobile PCs, tablet computers, and smartphones as delegates of their owners. Because the interactions of these agents are associated with social links in the real world, a degree of social behavior is required to avoid the conflicts, competition, and unfairness that may lead to further inefficiency in the agent society. The proposed strategy is simple and easy to implement, yet it can spread and maintain cooperation in agent networks under certain conditions. We conducted a number of experiments to clarify these conditions, and the results indicate that cooperation spread and was maintained with the proposed strategy in a variety of networks.
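
The full definition of the strategy is given in the paper itself; as a rough illustration of the setting the abstract describes (Q-learning agents repeatedly playing the pairwise prisoner’s dilemma with their neighbours on a network), a minimal sketch follows. The payoff matrix, learning parameters, network topology, and the simple expectation bonus used here are assumptions made for illustration, not the authors’ formulation.

    # Minimal illustrative sketch (not the authors' implementation): Q-learning
    # agents play the prisoner's dilemma with their neighbours on a ring-lattice
    # network. Payoff values, learning parameters, and the simple "expectation of
    # cooperation" bonus that biases agents toward cooperating are assumed here.
    import random
    from collections import defaultdict

    C, D = 0, 1  # actions: cooperate, defect
    # Standard prisoner's dilemma payoffs (T=5 > R=3 > P=1 > S=0), assumed values.
    PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

    def ring_lattice(n, k):
        """Each node links to its k nearest neighbours on a ring (k even)."""
        return [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]

    class Agent:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05, expectation=1.0):
            self.q = defaultdict(float)     # one Q-value per action (stateless setting)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.expectation = expectation  # hypothetical bias toward expected cooperation

        def act(self):
            if random.random() < self.epsilon:   # epsilon-greedy exploration
                return random.choice((C, D))
            # Cooperate whenever the cooperation Q-value plus the assumed
            # expectation bonus is at least the defection Q-value.
            return C if self.q[C] + self.expectation >= self.q[D] else D

        def learn(self, action, reward):
            best = max(self.q[C], self.q[D])
            self.q[action] += self.alpha * (reward + self.gamma * best - self.q[action])

    def run(num_agents=100, degree=4, rounds=200, seed=0):
        random.seed(seed)
        edges = ring_lattice(num_agents, degree)
        agents = [Agent() for _ in range(num_agents)]
        for _ in range(rounds):
            for u, v in edges:
                au, av = agents[u].act(), agents[v].act()
                ru, rv = PAYOFF[(au, av)]
                agents[u].learn(au, ru)
                agents[v].learn(av, rv)
        coop = sum(a.act() == C for a in agents) / num_agents
        print(f"final cooperation rate: {coop:.2f}")

    if __name__ == "__main__":
        run()

The paper studies other topologies as well (e.g., scale-free and small-world networks); swapping the network generator changes only the edge list in this sketch.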

Keywords

Nash Equilibrium · Complete Graph · Cooperative Behavior · Cooperation Strategy · Public Goods Game

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Ryosuke Shibusawa¹
  • Tomoaki Otsuka¹
  • Toshiharu Sugawara¹
  1. Department of Computer Science and Communications Engineering, Waseda University, Tokyo, Japan
