
The 15th Edition of the Multi-Agent Programming Contest - The GOAL-DTU Team

  • Conference paper
The Multi-Agent Programming Contest 2021 (MAPC 2021)

Abstract

We provide an overview of the GOAL-DTU system for the Multi-Agent Programming Contest, including the overall strategy and how the system is designed to apply this strategy. Our agents are implemented using the GOAL programming language. We evaluate the performance of our agents in the contest and, finally, we discuss how to improve the system based on an analysis of its strengths and weaknesses.



Acknowledgments

We thank Tobias Ahlbrecht, Asta Halkjær From and Benjamin Simon Stenbjerg Jepsen for discussions.

Author information

Correspondence to Jørgen Villadsen.


A Team Overview: Short Answers

1.1 A.1 Participants and Their Background

  • What was your motivation to participate in the contest? To work on implementing a multi-agent system capable of competing in a realistic, albeit simulated, scenario.

  • What is the history of your group? (course project, thesis, ...) The name of our team is GOAL-DTU. We participated in the contest in 2009 and 2010 as the Jason-DTU team, in 2011 and 2012 as the Python-DTU team, in 2013 and 2014 as the GOAL-DTU team, in 2015/2016 as the Python-DTU team, in 2017 and 2018 as the Jason-DTU team and in 2019 as the GOAL-DTU team. We are affiliated with the Algorithms, Logic and Graphs section at DTU Compute, Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU). DTU Compute is located in the greater Copenhagen area. The main contact is associate professor Jørgen Villadsen, email: jovi@dtu.dk.

  • What is your field of research? Which work therein is related? We are responsible for the Artificial Intelligence and Algorithms study line of the MSc in Computer Science and Engineering programme.

1.2 A.2 Statistics

  • Did you start your agent team from scratch or did you build on your own or someone else’s agents (e.g. from last year)? We used our code from MAPC 2019 as a starting point.

  • How much time did you invest in the contest (for programming, organizing your group, other)? We used approximately 160 h to qualify. From January until the contest we used approximately 300 h.

  • How was the time (roughly) distributed over the months before the contest? To qualify we used approximately 80 h in August and 80 h in September. In January we updated GOAL—the new version of GOAL was not compatible with most of the old code, thus a lot had to be rewritten. We also had to spend some time debugging GOAL itself. In February, the actual programming of the agents started.

  • How many lines of code did you produce for your final agent team? Approximately 2000 lines of code.

  • How many people were involved? 5 people: Jørgen Villadsen, Alexander Birch Jensen, Benjamin Simon Stenbjerg Jepsen, Erik Kristian Gylling and Jonas Weile.

  • When did you start working on your agents? We started working on our code from MAPC 2019 in August. However, as mentioned above, large parts of the existing code had to be rewritten; this began in January.

1.3 A.3 Technology and Techniques

Did you make use of agent technology/AOSE methods or tools? What were your experiences?

  • Agent programming languages and/or frameworks? We used GOAL. We find that it is very intuitive and relatively easy for newcomers to learn, which is an advantage as the programming team changes.

  • Methodologies (e.g. Prometheus)? No.

  • Notation (e.g. Agent UML)? No.

  • Coordination mechanisms (e.g. protocols, games, ...)? No.

  • Other (methods/concepts/tools)? We used the Eclipse IDE for programming (it has a GOAL add-on).

1.4 A.4 Agent System Details

  • How do your agents decide what to do? The agents reactively decide on their actions based on the current percepts, their beliefs and their goals.

  • How do your agents decide how to do it? By predetermined rules and actions.

  • How does the team work together? (i.e. coordination, information sharing, ...) How decentralised is your approach? The team communicates via messages and channels to share information and agree on plans. The approach is mostly decentralized, but certain planning tasks are currently delegated to a single agent at a time.

  • Do your agents make use of the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly. The agents use planning to choose the tasks to pursue. A single agent is chosen to do the planning, but this agent relies on input from all other agents, and the planning agent is chosen dynamically at run time. The planning agent will search through assignment combinations and choose the most promising.

  • Can your agents change their general behavior during run time? If so, what triggers the changes? An agent will change its behaviour when it is chosen to take part in solving a task.

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest? We chose not to make changes during the contest.

  • How did you go about debugging your system? What kinds of measures could improve your debugging experience? We used log files to record the agents’ belief bases and percepts. We experimented with linear temporal logic, but ultimately it did not make it into the final version.

  • During the contest you were not allowed to watch the matches. How did you understand what your team of agents was doing? By logging to the console. Admittedly, we could have done much more to improve this aspect.

  • Did you invest time in making your agents more robust/fault-tolerant? How? We spent some time on this, but not enough. This was one of our problems at the competition.
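The delegated planning described in this section can be sketched as follows: one dynamically chosen agent collects the others' reported positions, searches through assignment combinations and picks the most promising feasible one. The cost model, field names (`reward`, `deadline`, `blocks`, `location`) and the score formula are illustrative assumptions, not the team's actual GOAL code.

```python
from itertools import combinations

def manhattan(a, b):
    # Grid distance between two (x, y) positions.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_assignment(tasks, agent_positions):
    """Search task/agent combinations and return the most promising
    feasible plan as (score, task, team), or None if nothing fits.

    tasks: list of dicts with 'reward', 'deadline', 'blocks', 'location'.
    agent_positions: dict mapping agent name -> (x, y).
    """
    best = None
    for task in tasks:
        need = task["blocks"]  # how many agents the pattern requires
        for team in combinations(agent_positions, need):
            # Completion time is bounded by the slowest team member.
            eta = max(manhattan(agent_positions[a], task["location"])
                      for a in team)
            if eta > task["deadline"]:
                continue  # infeasible: cannot deliver in time
            score = task["reward"] - eta
            if best is None or score > best[0]:
                best = (score, task, team)
    return best

plan = best_assignment(
    [{"reward": 40, "deadline": 30, "blocks": 2, "location": (5, 5)}],
    {"a1": (0, 0), "a2": (3, 4), "a3": (9, 9)},
)
# → picks the two agents closest to the task location
```

An exhaustive search like this is exponential in the team size, which is one reason to cap the number of agents considered per task.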

1.5 A.5 Scenario and Strategy

  • What is the main strategy of your agent team? First, our agents explore, find each other, deduce the map dimensions and agree on a task planning agent. Once this agent has been chosen, it continuously queries the other agents about their available resources and tries to create task plans. Each task plan is sent to all agents involved in it, and these will try to solve it as efficiently as possible.

  • Please explain whether you think you came up with a good strategy or you rather enabled your agents to find the best strategy. We defined the strategy for our agents. Obviously, the agents have to find strategies for solving tasks and some aspects are only loosely defined.

  • Did you implement any strategy that tries to interfere with your opponents? We worked on some clearing strategies to defend goal cells, but they seemingly did more harm than good at the competition.

  • How do your agents decide which tasks to complete? Each task is ranked by a simple heuristic combining the reward and the delivery time. The tasks are then checked in decreasing order of rank, and the agents will try to complete any tasks they deem solvable.

  • How do your agents coordinate assembling and delivering a structure for a task? The agents create structured plans on how to assemble the structure. The plans are continuously checked to see if they remain feasible.

  • Which aspect(s) of the scenario did you find particularly challenging? It was a challenge that the map was a torus and also that the environment was dynamic.
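The reward/delivery-time ranking described above can be sketched as a simple scoring function; the exact weighting (`alpha`) and the field names are assumptions for illustration, not the team's actual heuristic.

```python
def rank_tasks(tasks, current_step, alpha=0.5):
    """Sort tasks best-first by a simple heuristic: higher reward and
    more remaining delivery time both raise the score; expired tasks
    sort last. alpha trades reward against time pressure (assumed)."""
    def score(task):
        time_left = task["deadline"] - current_step
        if time_left <= 0:
            return float("-inf")  # already expired, never pick
        return task["reward"] + alpha * time_left
    return sorted(tasks, key=score, reverse=True)

ranked = rank_tasks(
    [{"name": "t1", "reward": 10, "deadline": 100},
     {"name": "t2", "reward": 40, "deadline": 60},
     {"name": "t3", "reward": 5, "deadline": 20}],
    current_step=50,
)
# → t2 first (high reward), then t1, then the expired t3 last
```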

1.6 A.6 And the Moral of it is ...

  • What did you learn from participating in the contest? We learned a lot about using GOAL to write multi-agent programs. We were reminded of the care it takes to develop and test in multi-agent environments.

  • What advice would you give to yourself before the contest/another team wanting to participate in the next? Start early, because unexpected problems will occur. Have a clear testing strategy.

  • What are the strong and weak points of your team? The coordination between agents is working quite well and the A* path finding helps agents to move directly. Agents could be more flexible in helping each other and prioritizing other agents’ tasks over their own when it is better for the team.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms? GOAL has built-in functionality that allows agents to communicate with one another and it has a predefined agent-cycle that is suitable for the belief-desire-intention model. A* was used by the agents to determine movement actions for short distances.

  • Which problems did you encounter because of your chosen technologies? We had problems with the EIS interface. These were most obvious during transitions between simulations. We also had some problems with GOAL and backwards compatibility.

  • Did you encounter previously unseen problems/bugs during the contest? We had a problem with our agents receiving false information and then not being able to do anything meaningful. This problem was not experienced beforehand—probably due to insufficient testing.

  • Did playing against other agent teams bring about new insights on your own agents? Yes, our agents are vulnerable to clear actions when they are waiting in the goal zones.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)? If the contest was a week from now, we would mainly focus on bug fixing and thorough testing. If we had more time we would make better use of agents when they are not partaking in solving tasks. Also, we might look into some better defensive strategies and continuously revising plans to check if they could be optimized.

  • Which aspect of your team cost you the most time? We had major problems with a lot of the code not being compatible with the newest version of GOAL. Due to missing unit-tests, the problems were almost impossible to locate, and a lot of code had to be rewritten. This was a major setback. Furthermore, the A* algorithm used more CPU time than expected.

  • What can be improved regarding the contest/scenario for next year? As has already been suggested, running the agent programs on the server itself. If this were implemented, it would be interesting to decrease the time available for the agents to decide on their actions.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did? The A* path finding and the coordination between our agents made us fast at completing patterns. However, we had a large setback during January, which meant we had to rewrite most of the additions to the 2019 version, as well as spend some time on GOAL itself. This left little time for debugging, and we thus found a lot of bugs during the competition.

  • If you participated in the “free-for-all” event after the contest, did you learn anything new about your agents from that? We had our suspicions confirmed—that the current strategy will be a lot less effective if there are many agents cluttering the goal zone. For such scenarios, we need a more dynamic task-solving approach.
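The torus topology mentioned above changes the distance metric that A* relies on: each axis may wrap around. A minimal sketch of torus-aware helpers, assuming the map dimensions have been deduced as described earlier (function names are illustrative):

```python
def torus_manhattan(a, b, width, height):
    """Admissible A* heuristic on a torus: on each axis, take the
    shorter of the direct span and the wrapped-around span."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, width - dx) + min(dy, height - dy)

def neighbors(pos, width, height):
    """4-neighbourhood with wrap-around, for A* node expansion."""
    x, y = pos
    return [((x + dx) % width, (y + dy) % height)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

# On a 10x10 torus, opposite corners are only 2 steps apart.
d = torus_manhattan((0, 0), (9, 9), 10, 10)
```

Using the plain (non-wrapped) Manhattan distance here would still be admissible but much weaker near the map edges, making A* expand more nodes, which matters given the CPU-time issues noted above.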


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jensen, A.B., Villadsen, J., Weile, J., Gylling, E.K. (2021). The 15th Edition of the Multi-Agent Programming Contest - The GOAL-DTU Team. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds) The Multi-Agent Programming Contest 2021. MAPC 2021. Lecture Notes in Computer Science(), vol 12947. Springer, Cham. https://doi.org/10.1007/978-3-030-88549-6_3

  • Print ISBN: 978-3-030-88548-9

  • Online ISBN: 978-3-030-88549-6
