
FIT BUT: Rational Agents in the Multi-Agent Programming Contest

Part of the Lecture Notes in Computer Science book series (LNAI, volume 12947)

Abstract

The 2020 Multi-Agent Programming Contest introduced a modified scenario from last year: Agents Assemble II. Teams of agents compete against each other in completing tasks that consist of assembling blocks into desired structures. In this paper, we describe our strategy, system design, and the improvements we made over last year. The paper also contains a description of the tournament matches from our point of view.



Notes

  1. https://multiagentcontest.org/.


Acknowledgment

This work was supported by the project IT4IXS: IT4Innovations Excellence in Science project (LQ1602).

Author information


Corresponding author

Correspondence to Vaclav Uhlir.


A Team Overview: Short Answers


A.1 Participants and Their Background

  • What was your motivation to participate in the contest?

    Our group's research concerns artificial agents and multi-agent systems, and we wanted to compete in an international contest to test our skills. Last year we took second place; this year we wanted to achieve the same or a better result.

  • What is the history of your group? (course project, thesis, ...)

    Members of our research group have been teaching artificial intelligence at our faculty for nearly 20 years. Most of the projects and theses in our group concern artificial intelligence, multi-agent systems, soft computing, and machine learning.

  • What is your field of research? Which work therein is related?

    Vaclav Uhlir: Ecosystems involving autonomous units (mainly autonomous cars).

    František Zboril: Artificial agents, BDI agents and prototyping of wireless sensor networks using mobile agents.

    František Vidensky: BDI agents (mainly intention/action selection problems).

A.2 Statistics

  • Did you start your agent team from scratch or did you build on your own or someone else’s agents (e.g. from last year)?

    Our system is built on the system we developed last year.

  • How much time did you invest in the contest (for programming, organizing your group, other)?

    About 10 h of planning and organizing, 20 h of implementing new features and roughly 100 h of bug hunting.

  • How was the time (roughly) distributed over the months before the contest?

    Due to other, unrelated time constraints, our work was mainly done in the weeks just before the contest.

  • How many lines of code did you produce for your final agent team?

    7461 lines of code (about 5000 of them were from last year)

    1273 comment lines (797 from last year)

    59 still-active “TODOs” (last year we had 42 left)

  • How many people were involved?

    3

  • When did you start working on your agents?

    We used our system from the 2019 contest and mainly adapted it to the new scenario changes, along with improvements to the agents' performance.

A.3 Technology and Techniques

Did you make use of agent technology/AOSE methods or tools? What were your experiences?

  • Agent programming languages and/or frameworks?

    No

  • Methodologies (e.g. Prometheus)?

    No

  • Notation (e.g. Agent UML)?

    No

  • Coordination mechanisms (e.g. protocols, games, ...)?

    No

  • Other (methods/concepts/tools)?

    Our agents are in some ways similar to BDI agents, but they also use the idea of hierarchical models of behavior. They have their own plans, but at the same time they have to fulfill plans that were created centrally.

A.4 Agent System Details

  • How do your agents decide what to do?

    Agents are divided into synchronized groups, and those groups create plans that are assigned to individual agents. Simultaneously, agents can create some simple plans themselves (for example, clear actions). The resulting plan is then selected for fulfillment by the agent according to priorities and (non-)conflicts within the group.
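    As an illustration of this selection process, the following sketch (not our actual code; the names and the cell-conflict criterion are hypothetical) keeps, per agent, the highest-priority candidate plan whose target cell has not already been claimed by another agent's plan:

```java
import java.util.*;

// Hypothetical sketch of per-step plan selection: agents propose
// candidate plans with priorities; the group keeps, for each agent,
// the highest-priority plan whose target cell is still free.
public class PlanSelection {

    record Plan(String agent, int priority, int targetX, int targetY) {}

    // Returns the plan chosen for each agent.
    static Map<String, Plan> selectPlans(List<Plan> candidates) {
        List<Plan> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingInt((Plan p) -> p.priority()).reversed());

        Map<String, Plan> chosen = new LinkedHashMap<>();
        Set<Long> reservedCells = new HashSet<>();
        for (Plan p : sorted) {
            // Encode the target cell as a single key.
            long cell = ((long) p.targetX() << 32) | (p.targetY() & 0xffffffffL);
            // Skip agents that already have a plan and cells already reserved.
            if (chosen.containsKey(p.agent()) || reservedCells.contains(cell)) continue;
            chosen.put(p.agent(), p);
            reservedCells.add(cell);
        }
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, Plan> chosen = selectPlans(List.of(
                new Plan("a1", 10, 2, 3),  // highest priority, wins cell (2,3)
                new Plan("a2", 8, 2, 3),   // conflicts with a1's cell, skipped
                new Plan("a2", 5, 4, 4),   // fallback plan for a2
                new Plan("a3", 7, 0, 0)));
        System.out.println(chosen.get("a2").targetX()); // 4
        System.out.println(chosen.size());              // 3
    }
}
```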

  • How do your agents decide how to do it?

    Agents decide what to do based on their ability to do it: they generate all reasonable combinations of actions that result in the achievement of some goal.

  • How does the team work together? (i.e. coordination, information sharing, ...) How decentralised is your approach?

    Our approach to higher functions is strongly centralized: agents wait for a group decision, which is triggered by the slowest agent in the group. Individual agents are capable of communicating with the system and, on their own, performing only simple tasks (like digging, exploring, or attacking enemies).
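    The "slowest agent triggers the decision" mechanism can be sketched as a simple barrier (illustrative names only, not our implementation): the centralized decision fires exactly when the last group member has reported its percepts for the current step.

```java
import java.util.*;

// Minimal sketch of slowest-agent-triggered group decisions: the
// group decision should run only once every member has reported.
public class GroupBarrier {

    private final Set<String> members;
    private final Set<String> reported = new HashSet<>();

    public GroupBarrier(Collection<String> members) {
        this.members = new HashSet<>(members);
    }

    // Called when an agent's percepts arrive. Returns true exactly once
    // per step, for the last (slowest) member: the group decision trigger.
    public boolean report(String agent) {
        if (!members.contains(agent)) return false; // desynchronized agents act alone
        reported.add(agent);
        if (reported.containsAll(members)) {
            reported.clear(); // reset for the next simulation step
            return true;
        }
        return false;
    }
}
```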

  • Do your agents make use of the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly.

    Our agents create plans, but those plans are mainly used for avoiding future conflicts; only one step of a plan is actually used before a new plan is generated.

  • Can your agents change their general behavior during runtime? If so, what triggers the changes?

    New plans for agents are generated in each simulation cycle, so every action depends only on the current state of the environment. From a macro perspective, behavioral changes can be observed upon system changes (like the sudden availability of new tasks) or when an agent experiences desynchronization and falls back to performing only simple tasks (see the answer to the de/centralization question).

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest?

    Yes. Due to connection issues we had to modify our run cycle, and we discovered a bug in one of our routines, which allowed us to increase performance between the first and second day of the contest.

  • How did you go about debugging your system? What kinds of measures could improve your debugging experience?

    We have a custom logging system that allows us to review most of the performed actions and decisions. Unsolved issues remained with identifying the actual step in the simulation (packaging or marking percepts and actions with the step number would have solved most of the issues we had in debugging).
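    A minimal sketch of that suggested improvement (illustrative only, not part of our system): stamping every percept with the simulation step it belongs to, so log entries can later be matched to steps unambiguously.

```java
import java.util.*;

// Step-stamped percept log: each percept is recorded together with
// the simulation step it arrived in, for post-mortem debugging.
public class SteppedPercept {

    record Stamped(int step, String percept) {}

    private final List<Stamped> log = new ArrayList<>();

    public void record(int step, String percept) {
        log.add(new Stamped(step, percept));
    }

    // All percepts that arrived for a given step.
    public List<String> perceptsAt(int step) {
        List<String> out = new ArrayList<>();
        for (Stamped s : log)
            if (s.step() == step) out.add(s.percept());
        return out;
    }
}
```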

  • During the contest you were not allowed to watch the matches. How did you understand what your team of agents was doing?

    We implemented a status line showing, for every step, the step number, our score, and a counter for each type of plan the agents were performing. So we could see, for example, that 6 agents were working on connecting blocks, and then anticipate the next step, where only 4 agents would be connecting and 1 would be carrying the objective to the submit area.
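    Such a status line can be produced by counting the active plan types per step; this is a hedged sketch with made-up names, not our actual formatting code:

```java
import java.util.*;
import java.util.stream.*;

// Sketch of a per-step status line: step number, score, and a counter
// for each plan type the agents are currently performing.
public class StatusLine {

    static String format(int step, int score, List<String> activePlanTypes) {
        // TreeMap sorts the plan types so the line layout is stable across steps.
        Map<String, Long> counts = new TreeMap<>(activePlanTypes.stream()
                .collect(Collectors.groupingBy(t -> t, Collectors.counting())));
        String plans = counts.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(" "));
        return "step=" + step + " score=" + score + " " + plans;
    }
}
```

    For example, with 2 agents connecting and 1 delivering at step 42, the line reads "step=42 score=370 connect=2 deliver=1".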

  • Did you invest time in making your agents more robust/fault-tolerant? How?

    Yes. In case of desynchronization with the other agents in a group, or for agents with conflicting information, those agents were banned from group decisions and were effectively reassigned to clearing out terrain and harassing the enemy.

A.5 Scenario and Strategy

  • What is the main strategy of your agent team?

    Aiming for the closest achievable and, where possible, highest-valued tasks.

  • Please explain whether you think you came up with a good strategy or you rather enabled your agents to find the best strategy.

    We think we came up with a good strategy. Our strategy was already tested last year and has been improved this year.

  • Did you implement any strategy that tries to interfere with your opponents?

    Yes, we did. Desynchronized agents or “bored” agents (with low-priority plans) could sabotage the opponent’s agents by attacking them or their blocks.

  • How do your agents decide which tasks to complete?

    Agents selected tasks based on the match with the blocks they already held.
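    One way to read this selection is as a scoring function over tasks; the sketch below (hypothetical names and scoring rule, not our implementation) counts how many required blocks the agent already holds and picks the best-matching task:

```java
import java.util.*;

// Illustrative task selection by "match with currently held blocks":
// each task requires some blocks per type, and the agent picks the
// task whose requirements overlap most with what it already holds.
public class TaskChoice {

    // How many required blocks the agent already holds (per type, capped).
    static int matchScore(Map<String, Integer> held, Map<String, Integer> required) {
        int score = 0;
        for (Map.Entry<String, Integer> e : required.entrySet())
            score += Math.min(held.getOrDefault(e.getKey(), 0), e.getValue());
        return score;
    }

    // Picks the task name with the best match; empty if there are no tasks.
    static Optional<String> pickTask(Map<String, Integer> held,
                                     Map<String, Map<String, Integer>> tasks) {
        return tasks.entrySet().stream()
                .max(Comparator.comparingInt(
                        (Map.Entry<String, Map<String, Integer>> e) ->
                                matchScore(held, e.getValue())))
                .map(Map.Entry::getKey);
    }
}
```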

  • How do your agents coordinate assembling and delivering a structure for a task?

    In the group decision process, agents are assigned meeting coordinates. Mainly, though, it can be viewed as a “master” agent acquiring a task and using “slave” agents to connect the other blocks before the master goes to submit the finished task.

  • Which aspect(s) of the scenario did you find particularly challenging?

    Adapting the existing engine to the cyclic map proved to be far more challenging than we expected (due to the large number and variety of accesses to the internal grid coordinate system).

A.6 And the Moral of it is ...

  • What did you learn from participating in the contest?

    A relatively simple-looking scenario can present a far greater challenge than expected.

  • What advice would you give to yourself before the contest/another team wanting to participate in the next?

    Start implementing as soon as possible. Create small functioning iterations and let them play against each other to see what improves the overall behaviour, and by how much.

  • What are the strong and weak points of your team?

    Our team has expertise in multiple programming languages and coding approaches, and above all in rapid development using whatever means necessary. Unfortunately, this has become known, and members of our team are drafted for other tasks, which limits their availability.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms?

    A good ratio between development speed and performance.

  • Which problems did you encounter because of your chosen technologies?

    Java stack-trace bug hunting, and a git server fault that unexpectedly disabled team synchronization.

  • Did you encounter previously unseen problems/bugs during the contest?

    Yes, percept decoding became far more unpredictable and harder to manage. (Sometimes step percepts would be incomplete or missing entirely for some agents.)

  • Did playing against other agent teams bring about new insights on your own agents?

    Not especially.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)?

    The internal grid system, desynchronization management (with recovery options), and an agent mobility update.

  • Which aspect of your team cost you the most time?

    Hunting bugs inherited from last year.

  • What can be improved regarding the contest/scenario for next year?

    Running matches in a virtual environment on a server, and in repeated rounds over a span of weeks.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did?

    We think our agents are more versatile in their ability to immediately adapt to changing conditions, but this will be better assessed after the release of all the papers from the contest participants.

  • If you participated in the “free-for-all” event after the contest, did you learn anything new about your agents from that?

    Yes. In that scenario our agents were effectively outnumbered by enemy agents, so the strategy of investing one agent to harass one enemy was no longer effective.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Uhlir, V., Zboril, F., Vidensky, F. (2021). FIT BUT: Rational Agents in the Multi-Agent Programming Contest. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds.) The Multi-Agent Programming Contest 2021. MAPC 2021. Lecture Notes in Computer Science, vol. 12947. Springer, Cham. https://doi.org/10.1007/978-3-030-88549-6_2


  • DOI: https://doi.org/10.1007/978-3-030-88549-6_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88548-9

  • Online ISBN: 978-3-030-88549-6
