Spread of Cooperation in Complex Agent Networks Based on Expectation of Cooperation

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9862)

Abstract

This paper proposes a behavioral strategy, called expectation of cooperation, that incorporates Q-learning and with which cooperation in the prisoner’s dilemma game spreads over agent networks. Recent advances in computer and communication technologies enable intelligent agents to operate on small, portable devices such as mobile PCs, tablet computers, and smartphones as delegates of their owners. Because the interactions of these agents are associated with social links in the real world, a degree of social behavior is required to avoid conflicts, competition, and unfairness that may lead to further inefficiency in the agent society. The proposed strategy is simple and easy to implement, yet under certain conditions it can spread and maintain cooperation throughout agent networks. We conducted a number of experiments to clarify these conditions, and the results indicate that, with the proposed strategy, cooperation spread and was maintained in a variety of networks.
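The abstract does not give implementation details, but as a rough illustration of the kind of setup it describes, the following Python sketch runs repeated prisoner's dilemma games on an agent network where each agent learns with tabular Q-learning, approximating its expectation of cooperation by the fraction of neighbors that cooperated in the previous round. The payoff values, learning parameters, state discretization, and all names (Agent, simulate, PAYOFF) are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

COOPERATE, DEFECT = 0, 1

# Standard prisoner's dilemma payoffs for the row player (T > R > P > S);
# the specific values are illustrative assumptions.
PAYOFF = {
    (COOPERATE, COOPERATE): 3,  # R: reward for mutual cooperation
    (COOPERATE, DEFECT): 0,     # S: sucker's payoff
    (DEFECT, COOPERATE): 5,     # T: temptation to defect
    (DEFECT, DEFECT): 1,        # P: punishment for mutual defection
}

class Agent:
    """Tabular Q-learner whose state is a bucketed estimate of how many
    neighbors cooperated in the previous round (a stand-in for an
    'expectation of cooperation')."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.q = defaultdict(float)          # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, neighbor_actions):
        if not neighbor_actions:
            return 0
        coop_rate = neighbor_actions.count(COOPERATE) / len(neighbor_actions)
        return round(coop_rate * 4)          # five buckets: 0, 1, 2, 3, 4

    def act(self, state):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice((COOPERATE, DEFECT))
        return max((COOPERATE, DEFECT), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in (COOPERATE, DEFECT))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def simulate(edges, n_agents, rounds=200):
    """Repeated PD on an undirected network given as a list of (i, j) edges;
    returns the final fraction of cooperating agents."""
    agents = [Agent() for _ in range(n_agents)]
    neighbors = defaultdict(list)
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)

    last = [random.choice((COOPERATE, DEFECT)) for _ in range(n_agents)]
    for _ in range(rounds):
        states = [agents[i].state([last[j] for j in neighbors[i]])
                  for i in range(n_agents)]
        actions = [agents[i].act(states[i]) for i in range(n_agents)]
        for i in range(n_agents):
            nbrs = neighbors[i]
            # Average payoff against all neighbors this round.
            reward = (sum(PAYOFF[(actions[i], actions[j])] for j in nbrs) / len(nbrs)
                      if nbrs else 0.0)
            next_state = agents[i].state([actions[j] for j in nbrs])
            agents[i].learn(states[i], actions[i], reward, next_state)
        last = actions
    return sum(a == COOPERATE for a in last) / n_agents

if __name__ == "__main__":
    ring = [(i, (i + 1) % 10) for i in range(10)]   # tiny ring network
    print("final cooperation rate:", simulate(ring, 10))
```

Swapping the edge list for a small-world or scale-free generator makes it easy to vary the topology and observe whether cooperation spreads, which is the kind of condition the paper's experiments investigate.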

T. Sugawara—This work is supported by KAKENHI (No. 25280087).


Notes

  1. When \(L=2.5\), the ratios increased, but the emergence speed was extremely low.


Author information

Corresponding author

Correspondence to Toshiharu Sugawara.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Shibusawa, R., Otsuka, T., Sugawara, T. (2016). Spread of Cooperation in Complex Agent Networks Based on Expectation of Cooperation. In: Baldoni, M., Chopra, A., Son, T., Hirayama, K., Torroni, P. (eds) PRIMA 2016: Principles and Practice of Multi-Agent Systems. PRIMA 2016. Lecture Notes in Computer Science (LNAI), vol 9862. Springer, Cham. https://doi.org/10.1007/978-3-319-44832-9_5

  • DOI: https://doi.org/10.1007/978-3-319-44832-9_5

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44831-2

  • Online ISBN: 978-3-319-44832-9

  • eBook Packages: Computer Science, Computer Science (R0)
