Field-Based Coordination of Mobile Intelligent Agents: An Evolutionary Game Theoretic Analysis

  • Krunoslav Trzec
  • Ignac Lovrek
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4692)


The paper deals with field-based coordination of an agent team in which the continental divide game is applied as the coordination mechanism. The team consists of self-interested mobile intelligent agents whose behaviour is modelled by coordination policies based on adaptive learning algorithms. Three learning algorithms are used: the three-parameter Roth–Erev algorithm, a stateless Q-learning algorithm, and the experience-weighted attraction algorithm. The coordination policies are analyzed using replicator dynamics from evolutionary game theory, and a case study evaluating the performance of the coordination policies according to this analysis is presented.
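The replicator dynamics used in the analysis evolve the population share of each strategy in proportion to how much its payoff exceeds the population average. The following minimal sketch illustrates the dynamic on a simplified two-strategy coordination game with an illustrative payoff matrix (an assumption for brevity; the continental divide game studied in the paper has many effort levels with two separated payoff basins, not two):

```python
import numpy as np

# Illustrative 2-strategy coordination game (hypothetical payoffs,
# NOT the continental divide game's payoff table).
A = np.array([[4.0, 0.0],
              [3.0, 2.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamic:
    x_i' = x_i * ((A x)_i - x . A x)."""
    f = A @ x          # expected payoff of each pure strategy
    avg = x @ f        # population-average payoff
    return x + dt * x * (f - avg)

# Start from a uniform population mix and iterate.
x = np.array([0.5, 0.5])
for _ in range(10_000):
    x = replicator_step(x, A)
# The mix lies below the interior rest point x_1 = 2/3, so the
# population is driven into the basin of the second strategy.
```

In this sketch the interior equilibrium at x₁ = 2/3 separates the two basins of attraction, which is the same kind of basin analysis the replicator dynamic provides for the continental divide game's low- and high-effort equilibria.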


Keywords: Nash Equilibrium · Multiagent System · Evolutionary Game · Coordination Mechanism · Replicator Dynamics
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



References

  1. Mamei, M., Zambonelli, F.: Field-Based Coordination for Pervasive Multiagent Systems. Springer, Berlin (2006)
  2. Van Huyck, J.B., Cook, J.P., Battalio, R.C.: Adaptive Behavior and Coordination Failure. Journal of Economic Behavior and Organization 32, 483–503 (1997)
  3. Cooper, R.W.: Coordination Games. Cambridge University Press, Cambridge (1999)
  4. Salmon, T.C.: An Evaluation of Econometric Models of Adaptive Learning. Econometrica 6, 1597–1628 (2001)
  5. Erev, I., Roth, A.E.: Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria. American Economic Review 4, 848–881 (1998)
  6. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  7. Camerer, C., Ho, T.-H.: Experience-Weighted Attraction Learning in Normal Form Games. Econometrica 4, 827–874 (1999)
  8. Weibull, J.W.: Evolutionary Game Theory. MIT Press, Cambridge (1997)
  9. Walsh, W.E., Das, R., Tesauro, G., Kephart, J.O.: Analyzing Complex Strategic Interactions in Multi-Agent Systems. In: Proceedings of the AAAI 2002 Workshop on Game Theoretic and Decision Theoretic Agents, Edmonton, Canada, pp. 109–118 (2002)
  10. Trzec, K., Lovrek, I., Mikac, B.: Agent Behaviour in Double Auction Electronic Market for Communication Resources. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2006. LNCS (LNAI), vol. 4251, pp. 318–325. Springer, Heidelberg (2006)
  11. Lovrek, I., Sinkovic, V.: Mobility Management for Personal Agents in the All-Mobile Network. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2004. LNCS (LNAI), vol. 3213, pp. 1143–1149. Springer, Heidelberg (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Krunoslav Trzec (1)
  • Ignac Lovrek (2)
  1. Ericsson Nikola Tesla, R&D Centre, Krapinska 45, HR-10000 Zagreb, Croatia
  2. University of Zagreb, Faculty of Electrical Engineering and Computing, Department of Telecommunications, Unska 3, HR-10000 Zagreb, Croatia
