Learning in Multi Agent Social Environments with Opponent Models

  • Conference paper
  • In: Multi-Agent Systems and Agreement Technologies (EUMAS 2015, AT 2015)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9571)

Abstract

We examine how synthetic agents interact in social environments when a variety of training strategies is employed against diverse opponents. These training and playing methods indicate that playing quality depends more on the correct set-up of the learning mechanism than on accumulated experience. The experiments provide insight into an agent's potential to compete against the other agents in its environment while still co-operating with them, so that the environment allows a competitive champion agent to emerge and represent its group in further contests. Additionally, by investigating performance under a constrained number of moves, we gain insight into competitive learning and playing under resource constraints.
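
To make the experimental setting concrete, the following minimal sketch (in Python, not taken from the paper; the agent names, parameter values, the toy game, and the use of an Elo-style rating are all illustrative assumptions) shows how agents with different learning set-ups might play a round-robin tournament under a fixed move budget, with the highest-rated agent emerging as the group's champion.

import random
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    epsilon: float                # exploration rate of the learning set-up
    alpha: float                  # learning rate of the learning set-up
    q: dict = field(default_factory=dict)   # state -> {action: value estimate}
    elo: float = 1500.0           # rating used only to rank agents in the group

    def choose(self, state, actions):
        # Epsilon-greedy selection: explore at random, otherwise exploit the
        # current value estimates for this state.
        if random.random() < self.epsilon:
            return random.choice(actions)
        values = self.q.get(state, {})
        return max(actions, key=lambda a: values.get(a, 0.0))

    def update(self, state, action, reward):
        # Simple incremental value update toward the observed reward.
        values = self.q.setdefault(state, {})
        old = values.get(action, 0.0)
        values[action] = old + self.alpha * (reward - old)


def play_match(a, b, max_moves=10):
    # Toy stand-in for a board game: each agent repeatedly picks a move worth
    # 0, 1 or 2 points and the higher total wins. max_moves models the
    # constrained move budget mentioned in the abstract.
    # Returns the score of agent `a` (1.0 win, 0.5 draw, 0.0 loss).
    totals = {a.name: 0, b.name: 0}
    for move in range(max_moves):
        for agent in (a, b):
            action = agent.choose(state=move, actions=[0, 1, 2])
            agent.update(state=move, action=action, reward=action)
            totals[agent.name] += action
    if totals[a.name] == totals[b.name]:
        return 0.5
    return 1.0 if totals[a.name] > totals[b.name] else 0.0


def update_elo(a, b, score_a, k=32.0):
    # Standard Elo update applied to both players after a match.
    expected_a = 1.0 / (1.0 + 10 ** ((b.elo - a.elo) / 400.0))
    a.elo += k * (score_a - expected_a)
    b.elo += k * ((1.0 - score_a) - (1.0 - expected_a))


if __name__ == "__main__":
    # Three learning set-ups; names and parameter values are arbitrary.
    agents = [Agent("greedy", epsilon=0.05, alpha=0.3),
              Agent("explorer", epsilon=0.4, alpha=0.3),
              Agent("slow-learner", epsilon=0.1, alpha=0.01)]

    for _ in range(200):                      # repeated round-robin tournament
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                update_elo(a, b, play_match(a, b, max_moves=10))

    champion = max(agents, key=lambda ag: ag.elo)
    print({ag.name: round(ag.elo) for ag in agents}, "champion:", champion.name)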

Acknowledgment

The first author has been partially supported by the Hellenic Artificial Intelligence Society (EETN).

Author information

Correspondence to Chairi Kiourt.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Kiourt, C., Kalles, D. (2016). Learning in Multi Agent Social Environments with Opponent Models. In: Rovatsos, M., Vouros, G., Julian, V. (eds) Multi-Agent Systems and Agreement Technologies. EUMAS 2015, AT 2015. Lecture Notes in Computer Science, vol 9571. Springer, Cham. https://doi.org/10.1007/978-3-319-33509-4_12

  • DOI: https://doi.org/10.1007/978-3-319-33509-4_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33508-7

  • Online ISBN: 978-3-319-33509-4

  • eBook Packages: Computer Science, Computer Science (R0)
