
JaCaMo Builders: Team Description for the Multi-agent Programming Contest 2020/21

Part of the Lecture Notes in Computer Science book series (LNAI, volume 12947)


This paper describes the JaCaMo Builders team and its participation in the Multi-Agent Programming Contest 2020/21 based on the Agents Assemble II scenario. The paper presents the analysis of the scenario and design of the solution; the software architecture, including the tools used during the development of the team; the main strategies; and the results achieved by the team, with challenges and directions for future editions of the contest.



Notes

  2. For simplicity, we use the term “match” to refer to each single round our team plays in the contest, including rounds against the same opponent.

  4. We refer to the world as the full scenario, including the parts already discovered by the agents and the parts not yet discovered. We refer to the map as the part of the world the agents have already discovered.

  5. Some technique must be applied to identify agents, since the MAPC server merely reports the presence of an agent, the team it belongs to (not its identifier), and its coordinates relative to the observing agent’s limited view.

  7. We did not remove comments from the code before counting the number of lines.

  8. Besides the STC strategy, we implemented a spiral strategy. In the beginning of the exploration, the two perform very similarly in terms of mapped area; however, the STC discovery rate increases from around step 100, and in later steps STC discovers about 70% more than the spiral strategy.

  9. Our agents can walk carrying a maximum of four blocks (one block attached to each side).

  10. The algorithm only routes paths for the agent and its attached blocks in their current rotation, i.e., it does not try other possible rotations.

  11. Three situations are considered failures and result in a complete reset of the agent: the agent is lost (its map does not match its view), the agent failed to send an action a number of times, or the agent is performing a task that has just expired.

  12. In our strategies, agents only collect blocks from dispensers, ignoring dropped blocks.

  13. We use a t-test for sample sizes smaller than 30.

  14. The code and a complete example of the statistical tests are available online.

  15. We have also developed a Multi-Armed Bandit approach, although we did not use it in the contest.
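As the notes mention, we compared team versions with a t-test for small sample sizes. A minimal sketch of the idea, using Welch's t statistic (the data and function name are hypothetical, and this Python sketch is only illustrative — our tooling is not reproduced here):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1).
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical scores of two team versions over repeated simulation runs.
version_a = [120, 135, 128, 140, 131]
version_b = [110, 118, 122, 115, 119]
print(round(welch_t(version_a, version_b), 2))  # → 3.56
```

A statistic this large for samples of five runs suggests the difference between the two versions is unlikely to be noise, which is the kind of decision the tests supported.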



Corresponding author

Correspondence to Maicon R. Zatelli.



A Team Overview: Short Answers

A.1 Participants and Their Background

  • What was your motivation to participate in the contest?

    • There were three main aims in our participation this year: (i) to improve our MAS development skills, (ii) to evaluate new features developed in the JaCaMo platform, and (iii) to evaluate some proposals developed in final works of team members, such as their theses.

  • What is the history of your group? (course project, thesis, \(\ldots \))

    • Our agent team, named JaCaMo Builders, was developed by a group of five PhDs, two PhD students, and two undergraduate students from different institutions: Federal University of Santa Catarina (UFSC), Federal Institute of Santa Catarina (IFSC), Santa Catarina State University (UDESC), and Umeå University. Maicon, Tiago, Cleber, and Maiquel were PhD students of prof. Jomi and worked on MAS in their theses. Jomi introduced them to the MAPC, and since the “Agents on Mars” scenario at least some of them have attended every edition. They have since become professors at different universities and institutes and still do research in the MAS field. Vitor is a student of Maiquel and Robson is a student of Maicon, both working with MAS for the first time. Finally, Timotheus and Mauri are, respectively, a PhD student and a professor participating in the MAPC for the first time.

  • What is your field of research? Which work therein is related?

    • Most of our group members are researchers in the MAS field, working on the development of languages, platforms, and other tools that advance the field.

    • For example, one of the PhD theses studies an automated model for generating MAS organisations. In the context of the contest, a possible application would be deciding an initial setup for the agents of a match. We created different strategies that can be seen as organisational roles; at design time, the model could produce organisational structures in which the agents are arranged, testing which one performs better. Another possible application would be generating coalitions at runtime for achieving particular tasks. Unfortunately, we had no time to explore this in this contest, since we had other, more pressing issues to solve.

A.2 Statistics

  • Did you start your agent team from scratch or did you build on your own or someone else’s agents (e.g. from last year)?

    • Our team of agents was developed from scratch.

  • How much time did you invest in the contest (for programming, organizing your group, other)?

    • We started working on the team development in April/2020 and estimate we spent about 1000 h in total until the contest.

  • How was the time (roughly) distributed over the months before the contest?

    • We tried to work a similar amount of time in each of the months before the contest; however, after the first qualification attempt we had to invest extra time in the building of structures. The qualification was a little harder than we had imagined, and it made our group change the priorities of which strategies to implement first.

  • How many lines of code did you produce for your final agent team?

    • The agents’ code has about 5914 lines written in the Jason language. The CArtAgO environment has about 947 lines while other Java files have about 1258 lines.

  • How many people were involved?

    • Our group is formed by nine people: five PhDs, two PhD students, and two undergraduate students from different institutions: Federal University of Santa Catarina (UFSC), Federal Institute of Santa Catarina (IFSC), Santa Catarina State University (UDESC), and Umeå University.

  • When did you start working on your agents?

    • We started to work on the team development in April/2020.

A.3 Technology and Techniques

  • Did you make use of agent technology/AOSE methods or tools? What were your experiences?

    • We used agent technology to develop a team for this year’s scenario; however, we did not use any software engineering method or tool during the development. Part of our group already had previous experience with the MAPC as well as with agent-oriented programming.

  • Agent programming languages and/or frameworks?

    • We adopted the JaCaMo platform to develop our team, in particular the Jason language for the agents and CArtAgO for the environment.

  • Methodologies (e.g. Prometheus)?

    • We did not use any methodology.

  • Notation (e.g. Agent UML)?

    • We did not use any notation.

  • Coordination mechanisms (e.g. protocols, games,...)?

    • We use auctions to decide which agent accepts which task, as well as to decide which agent becomes the helper of another agent. In addition, a synchronization mechanism was developed to synchronize all agents in the same world.
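The auction idea can be sketched as follows: each candidate agent submits a bid (for example, its distance to the task board) and the lowest bid wins. This is only an illustrative Python sketch under assumed names — our actual implementation is written in Jason:

```python
def auction(task, bids):
    """Award a task to the agent with the lowest bid (e.g. its distance
    to the task board); ties are broken by agent name for determinism."""
    return min(bids, key=lambda agent: (bids[agent], agent))

# Hypothetical bids: each agent reports its distance to the task board.
bids = {"agent1": 12, "agent2": 7, "agent3": 7}
print(auction("task42", bids))  # → agent2
```

The same mechanism decides who helps whom: helping is just another “task” the nearest available agent wins.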

  • Other (methods/concepts/tools)?

    • We did not use any other special tools, methods, or concepts during the development of our team. However, we implemented an algorithm to schedule the internal tasks that agents have to perform at certain moments.

A.4 Agent System Details

  • How do your agents decide what to do?

    • The agents have an internal scheduler to decide which tasks to perform at each moment. These are not the tasks of the scenario but the internal tasks of the agent, such as accomplishing some internal goal. Which action to take in the MAPC scenario depends on what the agent is doing at the moment. For example, an agent that planned to move from one cell to another follows a route calculated in the current or previous steps. Another example is when the agents do not yet share the same world map: in that case, they try to meet the other agents of the team as soon as possible.
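The internal scheduler can be pictured as a priority queue over internal tasks; the agent pops the most urgent one at every step. A minimal Python sketch (the class and task names are hypothetical; the real scheduler is implemented in our Jason agents):

```python
import heapq

class Scheduler:
    """Minimal internal-task scheduler: each internal task has a
    priority (lower value = more urgent); the agent pops the most
    urgent task at each step."""
    def __init__(self):
        self._queue = []
        self._counter = 0  # preserves insertion order for equal priorities

    def add(self, priority, task):
        heapq.heappush(self._queue, (priority, self._counter, task))
        self._counter += 1

    def next_task(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

s = Scheduler()
s.add(2, "explore")
s.add(1, "follow_route")
s.add(3, "report_position")
print(s.next_task())  # → follow_route
```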

  • How do your agents decide how to do it?

    • It all depends on what the agents want to do. If the agents are still in the beginning of the match, they use a strategy to explore the map and synchronize all the information. Information is shared by merging the maps of every two agents that meet each other; all information is stored in an artifact, and each agent has access only to its own information until it meets another agent. When working on tasks, a single agent works individually if the task requires one block; if the task requires two blocks, the agent asks another agent to bring a block and finish the structure. Agents whose goal is to defend the goal zones and task boards first try to find these places and then get four blocks to carry to the goal zone or task board they found. The defenders watch for opponents approaching the task boards or goal zones they are protecting and try to move so as to block the opponents’ way.
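The map-merging step above can be sketched simply: when two agents meet, the observer knows the other agent's position relative to itself, so it can translate the other's map into its own coordinate frame and take the union. An illustrative Python sketch (dict-based map representation and names are assumptions; our maps live in a CArtAgO artifact):

```python
def merge_maps(own_map, other_map, offset):
    """Translate the other agent's map into our frame using the relative
    offset observed when the agents meet, then union the known cells.
    Maps are dicts {(x, y): content}; offset is our coordinates of the
    other agent's origin. On conflict, our own observation is kept."""
    merged = dict(own_map)
    dx, dy = offset
    for (x, y), content in other_map.items():
        merged.setdefault((x + dx, y + dy), content)
    return merged

own = {(0, 0): "empty", (1, 0): "obstacle"}
other = {(0, 0): "empty", (0, 1): "dispenser"}
print(merge_maps(own, other, (2, 3)))
```

After the merge, both agents can adopt the shared frame, which is how the team eventually converges on a single “common map”.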

  • How does the team work together? (i.e. coordination, information sharing, ...) How decentralised is your approach?

    • Our team of agents is completely decentralised in the beginning of the match. During this phase, agents try to build a “common map”, which is stored in a CArtAgO artifact shared among all agents. After the “common map” is built, it becomes the single source of information about the map. All decisions are taken in a non-centralised way; however, different agents may participate, for example, in building some structure together.

  • Do your agents make use of the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly.

    • We did not use planning, learning, an explicit organisation, or norms in our team this year, although we plan to integrate machine learning into our team at some point in the future.

  • Can your agents change their behavior during runtime? If so, what triggers the changes?

    • We organized a match in two main phases. The first phase is exploration and synchronization, and it aims to place all agents in the same world. The second phase focuses on accepting, building, and submitting tasks and on defending goal zones, so the agents no longer try to explore the map as much. The change of behavior is triggered when the agents have completed enough exploration and synchronization of the map.
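The phase switch can be sketched as a simple predicate over exploration progress. The 0.6 threshold below is purely illustrative (our actual trigger combines coverage with synchronization, as described above):

```python
def current_phase(explored_cells, world_size, synchronized, threshold=0.6):
    """Return the match phase: agents stay in the exploration phase
    until enough of the world is mapped AND all agents share the
    common map. The threshold value is a hypothetical parameter."""
    coverage = explored_cells / world_size
    if synchronized and coverage >= threshold:
        return "task_phase"
    return "exploration_phase"

print(current_phase(explored_cells=450, world_size=700, synchronized=True))
# → task_phase
```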

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest?

    • We had a critical issue during the transitions between matches; to fix it, we simply restarted the agents.

  • How did you go about debugging your system? What kinds of measures could improve your debugging experience?

    • We used assertions (a new feature of the Jason language and also available for JaCaMo) as well as print/log messages.

  • During the contest you were not allowed to watch the matches. How did you understand what your team of agents was doing? Did this understanding help you to improve your team’s performance?

    • We had a log system in our MAS, where agents frequently print what they are doing during the execution, such as which action they are performing, where they are, and when they accept and submit a task. It was not precise enough to fully understand what was going on during the contest, but it gave us some idea of how the agents were behaving.

  • Did you invest time in making your agents more robust? How?

    • Yes. We invested some time in making agents reset themselves when they detect that they are performing bad actions (or no actions at all) or are lost in the world.

A.5 Scenario and Strategy

  • What is the main strategy of your agent team?

    • We considered four main problems in the proposed scenario: world exploration and data synchronization; routing and walking in the world; accepting, building, and submitting tasks; and defending the goal zones. All these “sub-problems” are important in the MAPC 2020 scenario. Synchronization in the exploration phase lets all agents see the “same world”, which allows them to make better and faster decisions. Routing plays an important role because the scenario is quite dynamic, and investing time in making the agents walk well in such a scenario may pay off during the competition. We considered accepting, building, and submitting tasks the core part of this MAPC scenario, since the agents need to decide which tasks they can accept and also how to actually build a structure, which depends not on a single agent but on a small group of agents. Finally, to further maximize our chances of outperforming other teams, agents need to defend the goal zones, with the main aim of preventing the adversary from delivering their tasks.
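The routing sub-problem has a twist worth illustrating: an agent with attached blocks occupies several cells at once, and our routing considers only the current rotation of the attachments. A minimal BFS sketch in Python (grid representation and names are assumptions; the real algorithm runs inside our Jason/CArtAgO code):

```python
from collections import deque

def route(start, goal, obstacles, attached=((0, 0),), limit=10000):
    """BFS route on an unbounded grid for an agent with blocks attached
    in a fixed rotation: a move is legal only if the agent cell and
    every attached block land on free cells. Attachment offsets never
    change, mirroring the fact that no alternative rotations are tried."""
    def fits(pos):
        x, y = pos
        return all((x + dx, y + dy) not in obstacles for dx, dy in attached)

    frontier = deque([(start, [])])
    visited = {start}
    while frontier and len(visited) < limit:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        x, y = pos
        for move, nxt in (("n", (x, y - 1)), ("s", (x, y + 1)),
                          ("w", (x - 1, y)), ("e", (x + 1, y))):
            if nxt not in visited and fits(nxt):
                visited.add(nxt)
                frontier.append((nxt, path + [move]))
    return None  # unreachable in this rotation, or budget exceeded

# Agent with one block attached to its east side; an obstacle at (2, 1)
# blocks the straight corridor, forcing a detour around it.
path = route((0, 1), (4, 1), {(2, 1)}, attached=((0, 0), (1, 0)))
print(len(path))  # → 6 moves instead of the unobstructed 4
```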

  • Please explain whether you think you came up with a good strategy or you rather enabled your agents to find the best strategy.

    • We spent some time, before starting to write the agents’ code, discussing the strategies we would adopt. In addition, we made multiple versions of our team and had them play against each other to decide which strategies to keep for the final version.

  • Did you implement any strategy that tries to interfere with your opponents?

    • To maximize the chances of winning the contest, we saw two main aspects to consider: scoring as many points as possible and minimizing the number of points the opponent could score. For the latter, we adopted a strategy in which some agents had the goal of preventing opponent agents from approaching the goal zones or task boards, so that the opponents could not score as many points as they wanted.

  • How do your agents decide which tasks to complete?

    • Once the agents know at least one task board and one goal zone where a task can be submitted, they can start accepting tasks. They usually accepted tasks with single blocks, but in some situations they could also accept tasks of at most two blocks. We did not try to accept tasks requiring more than two blocks, since the organization of the agents would be much more complex, and we focused on other aspects of the scenario instead.
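The acceptance criterion above amounts to a simple filter over the announced tasks. A Python sketch (the task dict fields are assumptions about the scenario's task announcements, not the exact MAPC percept format):

```python
def acceptable_tasks(tasks, current_step, knows_taskboard, knows_goal_zone):
    """Filter announced tasks down to the ones our agents commit to:
    at most two blocks, not yet expired, and only once a task board
    and a goal zone are known."""
    if not (knows_taskboard and knows_goal_zone):
        return []
    return [t for t in tasks
            if len(t["blocks"]) <= 2 and t["deadline"] > current_step]

tasks = [
    {"name": "task1", "blocks": ["b0"], "deadline": 120},
    {"name": "task2", "blocks": ["b0", "b1", "b2"], "deadline": 150},
    {"name": "task3", "blocks": ["b0", "b1"], "deadline": 90},
]
print([t["name"] for t in acceptable_tasks(tasks, 80, True, True)])
# → ['task1', 'task3']
```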

  • How do your agents coordinate assembling and delivering a structure for a task?

    • We do not define a priori which agents form each group to complete a task. The decision of which agent will help accomplish each task is made through a kind of auction among the agents available at the moment. In addition, some agents may accomplish tasks individually when the task requires a single block.

  • Which aspect(s) of the scenario did you find particularly challenging?

    • The characteristics of the map were the most challenging aspects for our group, such as the “infinite” map and the fact that agents do not know which agent they are seeing on the map.

A.6 And the Moral of It Is ...

  • What did you learn from participating in the contest?

    • Participating in the contest is always a great challenge, and even though several members of our team have participated in more than two editions, we always have a lot of new things to learn. This year’s scenario was a lot more challenging and brought problems that demanded precise coordination of the agents, such as building a structure and exploring and synchronising the map. These new challenges allowed us to elaborate and test different strategies to make agents collaborate in building structures and to minimize the time agents take to be situated in a common world.

  • What advice would you give to yourself before the contest/another team wanting to participate in the next?

    • Our main advice is to first build a team that is as simple as possible, in order to minimize bugs and the complexity of the agents. After having a first simple version of the team fully working, extend it with more complex strategies, and test each one before moving on to new strategies.

  • What are the strong and weak points of your team?

    • The main strength of our team is that agents do not only focus on accepting, building, and submitting tasks to get rewards, but also defend the goal zones and task boards to prevent the adversary from getting rewards. Our expectation was thereby to maximize our chances of achieving a higher score than our adversaries.

    • The main weakness of our agents is that they have some difficulty moving when carrying more blocks, which can make it hard for an agent holding a more complex structure to escape the clearing area when a clear action happens, or to move around in areas with too many obstacles. In addition, our agents do not commit to tasks that demand more than two blocks, which can be a clear disadvantage in scenarios with few tasks demanding no more than two blocks. We did not spend much time on improving this, since we tried to make our agents build the structures near the goal zones.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms?

    • The choice of the JaCaMo platform to develop our team of agents was an important point for our group. The separation of the MAS implementation according to different first-class abstractions, such as organization, agents, and environment, made it easier for us to organize and maintain the code of the MAS. In addition, several members of our group already had experience with the JaCaMo platform.

  • Which problems did you encounter because of your chosen technologies?

    • The main problem we found with our chosen technologies was the difficulty of debugging the execution, so we implemented a feature based on “asserts” to help us identify wrong behaviors in the system, as well as print messages.

  • Did you encounter new problems during the contest?

    • We found a problem during the transitions between matches that did not happen during our local tests. In addition, we noticed some issues when agents needed to submit tasks of two blocks and to defend task boards and goal zones to prevent the opponent agents from completing their tasks.

  • Did playing against other agent teams bring about new insights on your own agents?

    • Yes. We noticed during the contest that most teams seemed unprepared for the situation in which a task board or goal zone was occupied by an opponent agent that did not allow them to freely submit tasks or get blocks. Unfortunately, however, our own strategy to defend the task boards and goal zones did not work well during the contest.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)?

    • We would fix a few issues to make our team perform better during that week: (1) improve the transition between matches, (2) fix some problems when agents commit to tasks of two blocks, and (3) fix some problems in our defender agents (the ones that try to block the opponent agents’ access to the goal zones and task boards).

  • Which aspect of your team cost you the most time?

    • We feel the most challenging aspect of implementing our team of agents was map exploration and map synchronization, which played an important role in everything else the agents could do. Making the agents discover the dimensions of the map and placing all agents in the “same map” took us plenty of time. In addition, we spent a great deal of time debugging the system in order to fix problems during the agents’ execution.

  • What can be improved regarding the contest/scenario for next year?

    • We suggest the inclusion of new agent types (as existed in scenarios of previous years), which may bring new challenges and stress the organization of agents. For example, some blocks could be fetched only by a certain type of agent.

    • In addition, we suggest keeping this year’s scenario for the next edition of the contest, without big changes. Since this was our first time attending the contest with this scenario, we devoted a huge amount of time to implementing the team and still see a lot of room for improvement. During the contest, we noticed that other teams also exhibited issues that could be fixed for the next edition. Thus, our suggestion is to keep the next edition’s scenario as close as possible to this one.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did?

    • We believe some of our strategies were a little buggy during the contest, such as the submission of two-block tasks and the defense of task boards and goal zones. These two strategies play a very important role in our team, and when they do not work properly the team becomes a lot weaker than it really is. In addition, we had a serious problem during transitions between matches, which forced us to restart the full MAS during the contest: our agents lost all their beliefs, goals, and knowledge about the world where they were situated.

  • If you participated in the “free-for-all” event after the contest, did you learn anything new about your agents from that?

    • In the “free-for-all”, our team seemed to perform better, placing third among four participants. Our team was able to submit three single-block tasks, while another team did not submit any task. We also observed that several agents (of different teams) occupied the goal zones most of the time, which made it harder for the teams to submit tasks.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Amaral, C.J. et al. (2021). JaCaMo Builders: Team Description for the Multi-agent Programming Contest 2020/21. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds.) The Multi-Agent Programming Contest 2021. MAPC 2021. Lecture Notes in Computer Science, vol. 12947. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88548-9

  • Online ISBN: 978-3-030-88549-6

