
MMD: The Block Building Agent Team with Explainable Intentions

  • Conference paper
The Multi-Agent Programming Contest 2022 (MAPC 2022)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 13997))


Abstract

The Multi-Agent Programming Contest (MAPC) is an excellent test ground to stimulate research on the development and programming of multi-agent systems. The current Agents Assemble III scenario is a nice example of cooperative distributed problem solving in a highly dynamic environment, and it requires the agents to be normative agents. For MAPC 2022, we implemented the MMD multi-agent system from scratch in the Python programming language to find out whether a multi-agent system can be developed efficiently in a general-purpose programming language using multi-agent concepts. We describe the implementation details, including the coordination and optimisation algorithms of the MMD multi-agent system for solving the complex and dynamic tasks, as well as the testing aspects, which also use explainable intentions. The performance indicators of the implementation are the development time, the development effort, and the quality of the job done by the implemented multi-agent system. The development time of the MMD system is not more than that of any other system at MAPC 2022, including those implemented with agent-oriented programming. Comparing the development efforts of the contest participants is difficult because the performance of the systems also differs, but the development effort is more likely to be independent of the implementation language used. The first place of the MMD system at MAPC 2022 seems to indicate that the implemented MMD multi-agent system is competitive with systems developed with agent-oriented software engineering methods.

The work of L.Z. Varga was supported by the “Application Domain Specific Highly Reliable IT Solutions” project which has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.


Notes

  1. https://multiagentcontest.org/2022/.

  2. https://github.com/agentcontest/python-mapc2020.

  3. The implementation used constant priorities for the main agent intentions; however, they could have been made dependent on the current environment for better performance.

  4. Despite not prioritising complete map exploration, our experience shows that usually all maps are merged into a single one by the halfway point of the simulation, while the dimension detection occurs some time later. The full map discovery happens late in the simulation, although it is exceptional.

  5. Restarting the A* algorithm in every simulation step may not be efficient in terms of computation time. The D* Lite algorithm [11] would have been more efficient; on the other hand, the D* Lite algorithm uses more memory, especially if there are many agents.

  6. It was a very simple estimation, which was not improved due to lack of time.

  7. An agent is free if it is not involved in task achievement.

  8. Although we had this complex task evaluation function, we are not sure that it really played an important role in the MAPC 2022 contest, because there were always only two active tasks, and there were not many options to choose from.

  9. There could have been an optimisation: if the right block is already included in the attachments, then keep only that one. It was not implemented due to lack of time.

  10. Due to lack of time, a wrong-block selection algorithm was not implemented; therefore, if there is at least one wrong block, all the attached blocks are detached.

References

  1. Ahlbrecht, T., Dix, J.: Multi-agent programming contest - Lecture 2 at 15th Workshop-School on Agents, Environments, and Applications. https://www.youtube.com/watch?v=HgNlfKm7YdQ&t=1417s. Accessed Nov 2022

  2. Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T.: The multi-agent programming contest: a Résumé. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds.) MAPC 2019. LNCS (LNAI), vol. 12381, pp. 3–27. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59299-8_1


  3. Bratman, M.: Intention, Plans, and Practical Reason. Harvard University Press, Cambridge (1987)


  4. Cardoso, R.C., Ferrando, A., Papacchini, F.: LFC: combining autonomous agents and automated planning in the multi-agent programming contest. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds.) MAPC 2019. LNCS (LNAI), vol. 12381, pp. 31–58. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59299-8_2


  5. Durfee, E.H.: Cooperative distributed problem solving between (and within) intelligent agents. In: Rudomin, P., Arbib, M.A., Cervantes-Pérez, F., Romo, R. (eds.) Neuroscience: From Neural Networks to Artificial Intelligence. NEURALCOMPUTING, vol. 4, pp. 84–98. Springer, Heidelberg (1993). https://doi.org/10.1007/978-3-642-78102-5_5


  6. Edelman, B., Ostrovsky, M., Schwarz, M.: Internet advertising and the generalized second-price auction: selling billions of dollars worth of keywords. Am. Econ. Rev. 97(1), 242–259 (2007). https://doi.org/10.1257/aer.97.1.242


  7. Engelmore, R., Morgan, T. (eds.): Blackboard Systems. The Insight Series in Artificial Intelligence, 1st edn. Addison-Wesley Longman Publishing Co., Inc., Boston (1988)


  8. Hansen, E.A., Zhou, R.: Anytime heuristic search. J. Artif. Intell. Res. 28, 267–297 (2007). https://doi.org/10.1613/jair.2096


  9. Hart, P., Nilsson, N., Raphael, B.: A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 4(2), 100–107 (1968). https://doi.org/10.1109/tssc.1968.300136


  10. Jennings, N.R.: Coordination through joint intentions in industrial multiagent systems. AI Mag. 14(4), 79 (1993). https://doi.org/10.1609/aimag.v14i4.1071. https://ojs.aaai.org/index.php/aimagazine/article/view/1071

  11. Koenig, S., Likhachev, M.: D* Lite. In: Proceedings of the Eighteenth National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence, Edmonton, Alberta, Canada, 28 July–1 August 2002, pp. 476–483 (2002). http://www.aaai.org/Library/AAAI/2002/aaai02-072.php

  12. Sandholm, T., Lesser, V.R.: Issues in automated negotiation and electronic commerce: extending the contract net framework. In: Proceedings of the First International Conference on Multiagent Systems, San Francisco, California, USA, 12–14 June 1995, pp. 328–335 (1995)


  13. Smith, R.G.: The contract net protocol: high-level communication and control in a distributed problem solver. IEEE Trans. Comput. C-29(12), 1104–1113 (1980). https://doi.org/10.1109/tc.1980.1675516

  14. Stern, R., et al.: Multi-agent pathfinding: definitions, variants, and benchmarks. In: Proceedings of the Twelfth International Symposium on Combinatorial Search, SOCS 2019, Napa, California, 16–17 July 2019, pp. 151–159. AAAI Press (2019)


  15. Uhlir, V., Zboril, F., Vidensky, F.: Multi-agent programming contest 2019 FIT BUT team solution. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds.) MAPC 2019. LNCS (LNAI), vol. 12381, pp. 59–78. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59299-8_3


  16. Vázquez-Salceda, J.: The Role of Norms and Electronic Institutions in Multi-agent Systems. Birkhäuser Basel (2004). https://doi.org/10.1007/978-3-0348-7955-2

  17. Wooldridge, M.: Understanding equilibria in multi-agent systems. In: Keynote presentation at FTC 2021 - Future Technologies Conference 2021 (2021). https://youtu.be/Iqm8UTXUG24?t=411. Accessed Nov 2022

  18. Wurman, P.R., D’Andrea, R., Mountz, M.: Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Mag. 29(1), 9 (2008)



Author information

Correspondence to László Z. Varga.

Appendices

16th Multi-agent Programming Contest: All Questions Answered

A Team Overview: Short Answers

A.1 Participants and Their Background

  • Who is part of your team?

    Miklós Miskolczi, László Z. Varga

  • What was your motivation to participate in the contest?

    We wanted to gain experience with multi-agent systems, and of course we wanted to win.

  • What is the history of your group? (course project, thesis, ...)

    The MSc diploma work of Miklós Miskolczi.

  • What is your field of research? Which work therein is related?

    Multi-agent systems, online routing game model, multi-agent path finding.

A.2 Statistics

  • Did you start your agent team from scratch, or did you build on existing agents (from yourself or another previous participant)?

    The agent team was started from scratch, but we used a modified version of the experimental Python client for the 2020/21 edition of the Multi-Agent Programming Contest to communicate with the contest server.

  • How much time did you invest in the contest (for programming, organising your group, other)?

    We started in February 2022 and worked on the program for 28 hours per week, 896 hours in total.

  • How was the time (roughly) distributed over the months before the contest?

    Continuous development. The last two weeks mainly testing.

  • How many lines of code did you produce for your final agent team?

    github.com/AlDanial/cloc v 1.94 T=0.13 s (649.6 files/s, 72314.5 lines/s)

    Language    files   blank   comment   code
    Python         83    1882      2050   4842
    Text            1     121         0    553
    Markdown        1       3         0     12
    SUM:           85    2006      2050   5407

    The above data include the modified experimental Python client, which is:

    github.com/AlDanial/cloc v 1.94 T=0.05 s (20.9 files/s, 15498.5 lines/s)

    Language    files   blank   comment   code
    Python          1     115       102    526

A.3 Technology and Techniques

Did you use any of these agent technology/AOSE methods or tools? What were your experiences?

  • Agent programming languages and/or frameworks?

    No.

  • Methodologies (e.g. Prometheus)?

    No.

  • Notation (e.g. Agent UML)?

    No.

  • Coordination mechanisms (e.g. protocols, games, ...)?

    We used a simple (one-level) Contract Net protocol, which includes a simplified auction mechanism; a minimal illustrative sketch appears at the end of this subsection.

  • Other (methods/concepts/tools)?

    We used our own version of the practical reasoning agent architecture, and our own version of the blackboard architecture for the coordination of the agents.

  • What hardware did you use during the contest?

    Hardware     Specification
    Processor    AMD Ryzen 5 3600 6-Core Processor
    RAM          16 GB
    OS           Windows 11 Pro

    Only about 20% of the processing power of the computer was used by the agent team.
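
To make the coordination mechanism mentioned above more concrete, the following is a minimal Python sketch of a one-level Contract Net round run over a shared blackboard. The names (Blackboard, Agent, Task, estimate_cost) and the cost-based award rule are illustrative assumptions only and do not reproduce the actual MMD code.

    # Hypothetical sketch of a one-level Contract Net round over a shared blackboard.
    # Class and method names are illustrative assumptions, not the actual MMD code.
    from dataclasses import dataclass, field


    @dataclass
    class Task:
        name: str
        reward: int


    @dataclass
    class Blackboard:
        """Central coordination object shared by all agents."""
        announcements: dict = field(default_factory=dict)  # task name -> list of (agent, bid)

        def announce(self, task: Task) -> None:
            self.announcements[task.name] = []

        def submit_bid(self, task: Task, agent: "Agent", cost: float) -> None:
            self.announcements[task.name].append((agent, cost))

        def award(self, task: Task) -> "Agent | None":
            bids = self.announcements.get(task.name, [])
            if not bids:
                return None
            winner, _ = min(bids, key=lambda b: b[1])  # lowest estimated cost wins
            return winner


    @dataclass
    class Agent:
        name: str
        position: tuple
        free: bool = True  # a free agent is not involved in task achievement

        def estimate_cost(self, task: Task) -> float:
            # Placeholder estimate; a real bid could use the distance to a
            # dispenser or goal zone on the shared map.
            return abs(self.position[0]) + abs(self.position[1])


    if __name__ == "__main__":
        bb = Blackboard()
        agents = [Agent("A1", (3, 4)), Agent("A2", (1, 1)), Agent("A3", (7, 0))]
        task = Task("task17", reward=40)

        bb.announce(task)
        for a in agents:
            if a.free:
                bb.submit_bid(task, a, a.estimate_cost(task))
        winner = bb.award(task)
        print(f"{task.name} awarded to {winner.name if winner else 'nobody'}")

A complete round would also notify the losing bidders and handle the case where the awarded agent later drops its intention.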

A.4 Agent System Details

  • Would you say your system is decentralised? Why?

    Although the coordination of the agents is done by a central blackboard, and the implementation of the whole agent team is a single Python program, the system can be seen as a decentralised one in the sense that the planning and the activities of the agents are done individually. The agents were meant to be separate threads, but threading in Python is not fast enough, and we had to refactor the code to speed up the system. Python multiprocessing might be the solution for the next competition.

  • Do your agents use the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly.

    The actions needed to achieve a goal are basically hardcoded in the implementation. Simple learning is used to discover, for example, the cost of a clear action. In order to solve a multiple-block task, the agents are organised into a sub-team. The sub-team is connected through the intentions of its members. The intentions are assigned by the blackboard.

  • How do your agents cooperate?

    Cooperation is done through the intentions of the agents (which amounts to simplified, direct communication between the agents) and through the shared blackboard, which includes the shared maps as well. The planning for the cooperation is hardcoded in the intentions.

  • Can your agents change their general behaviour during run time? If so, what triggers the changes?

    Intentions are reconsidered in each simulation step. If there are changes in the environment, then the agents may change their intention. The behaviour of the intentions may be different depending on the match configuration. The capabilities of the agents depend on the match configuration as well. The behaviour of the blackboard depends on the current state of the environment and the agents.

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest?

    The code and the settings were not changed, but there was a problem with the connection to the server in the second and third simulations of each match, and the agent team had to be restarted manually after the first simulation steps. Interestingly, this problem did not occur during the warm-up match played on the real competition server before the contest, nor did it occur on localhost.

  • How did you go about debugging your system? What kinds of measures could improve your debugging experience?

    The basic “debugging tool” was the printout on the console, but we also implemented an “explanation function”. If the system is run with the explanation function, the agents give information on what they are doing. The given information might be their beliefs, or their current intention and its details; a minimal sketch of such an explanation function appears at the end of this subsection.

    The server logs and replays were also used to trace back various complex cases.

  • During the contest, you were not allowed to watch the matches. How did you track what was going on? Was it helpful?

    The agents printed on the console the same information as during the testing period before the contest. It was helpful in the sense that we could see that everything was going well.

  • Did you invest time in making your agents more robust/fault-tolerant? How?

    Robustness and fault-tolerance were part of the development process.
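
As an illustration of the per-step intention reconsideration and of the explanation function used for debugging, here is a hypothetical sketch. The Intention, Explore and DeliverBlock classes, the priority scheme and the explain() interface are assumptions made for this example; the actual MMD code is structured differently.

    # Hypothetical sketch: intentions with an explanation function, reconsidered
    # in every simulation step. Names and priorities are illustrative only.
    from abc import ABC, abstractmethod


    class Intention(ABC):
        priority = 0

        @abstractmethod
        def next_action(self, beliefs: dict) -> str:
            """Return the next low-level action towards this intention."""

        @abstractmethod
        def explain(self, beliefs: dict) -> str:
            """Human-readable account of what the agent is doing and why."""


    class Explore(Intention):
        priority = 1

        def next_action(self, beliefs):
            return "move n"  # placeholder: would follow an exploration route

        def explain(self, beliefs):
            return f"exploring: {len(beliefs.get('map', {}))} cells seen so far"


    class DeliverBlock(Intention):
        priority = 2

        def __init__(self, block, goal_zone):
            self.block, self.goal_zone = block, goal_zone

        def next_action(self, beliefs):
            return "move s"  # placeholder: would follow the planned route

        def explain(self, beliefs):
            return f"delivering block {self.block} to goal zone {self.goal_zone}"


    def step(agent_name: str, beliefs: dict, intentions: list, explain: bool) -> str:
        # Reconsider in every simulation step: pick the intention with the
        # highest priority (applicability checks are omitted in this sketch).
        current = max(intentions, key=lambda i: i.priority)
        if explain:
            print(f"[{agent_name}] {current.explain(beliefs)}")
        return current.next_action(beliefs)


    if __name__ == "__main__":
        beliefs = {"map": {(0, 0): "empty", (0, 1): "obstacle"}}
        action = step("agentA1", beliefs, [Explore(), DeliverBlock("b0", (10, 4))], explain=True)
        print("chosen action:", action)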

A.5 Scenario and Strategy

  • How would you describe your intended agent behaviour? Did the actual behaviour deviate from that?

    The agents mainly do what they are intended to do. Sometimes they produce strange behaviour, but we know that this may be due to the incompleteness of the solution. For example, individual route planning may produce deadlock-like situations.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did?

    The results at the competition were similar to those in the testing period, excluding the cases when the other teams were heavily aggressive towards opponent agents.

    We do not know much about the other teams.

  • Did you implement any strategy that tries to interfere with your opponents?

    Yes.

    The tolerant way: if our agents notice that the other team stays in a goal zone for a long time at the place needed by our team, then our team goes to another place.

    The aggressive way: when our agent is at the goal zone, it tries to keep the agents of the other team away from the goal zone by shooting at opponent agents approaching the goal zone, assuming that the role capabilities of our agent allow this. This goal zone defence behaviour was not possible with the match configuration of the competition, so our agents did not shoot at the other team during the competition.

  • How do your agents coordinate assembling and delivering a structure for a task?

    Multi-block tasks are delivered by a single coordinator agent and one block provider agent for each block. The coordinator goes to the selected goal zone and clears the surroundings of the goal zone until the first block provider arrives. Block providers fetch their block from a dispenser and take it to the surroundings of the coordinator. A block provider waits for the call from the coordinator; when the call arrives, the block provider takes the block to the place requested by the coordinator, and then the two agents connect the blocks. A state-machine sketch of the provider's side appears at the end of this subsection.

  • Which aspect(s) of the scenario did you find particularly challenging?

    Map building, map merging, map updates, map size determination, and path finding on the looping map. In short: dynamic map management.

    The limited (and, in our opinion, not realistic) perception of the agents, which means, among other things, the following: when the agent moves and there is a failure, the agent does not know which step failed; the agent does not know which blocks are attached to which agent.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)?

    We have ideas, but we keep them for the next competition. Surely we have to prepare to defend our agents from the potential saboteur agents of the other team.

  • What can be improved regarding the scenario for next year? What would you remove? What would you add?

    Perception capabilities of the agents (see above).

    There were only two active tasks in the current scenario, and often there was no big difference between them. Therefore a good task selection strategy was not so critical in the current scenario. A bigger choice of tasks would be more challenging.
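
The coordinator/block-provider hand-off described in the assembling question above can be pictured as a small state machine on the provider's side. The following is a hypothetical sketch; the state names, the call flag and the action strings are assumptions for this example, not the actual MMD implementation.

    # Hypothetical sketch of the block provider's side of the assembly hand-off.
    # States, flags and action strings are illustrative assumptions only.
    from enum import Enum, auto


    class ProviderState(Enum):
        FETCH_BLOCK = auto()     # go to a dispenser and attach the required block
        WAIT_NEAR_GOAL = auto()  # wait near the coordinator for its call
        PLACE_BLOCK = auto()     # move the block to the requested cell and connect


    def provider_step(state, called_by_coordinator, at_dispenser, near_goal_zone):
        """One decision step of a block provider; returns (new state, action)."""
        if state is ProviderState.FETCH_BLOCK:
            if not at_dispenser:
                return state, "move towards dispenser"
            return ProviderState.WAIT_NEAR_GOAL, "request and attach block"
        if state is ProviderState.WAIT_NEAR_GOAL:
            if not near_goal_zone:
                return state, "move towards coordinator"
            if called_by_coordinator:
                return ProviderState.PLACE_BLOCK, "move to the cell requested by the coordinator"
            return state, "skip (wait for the coordinator's call)"
        # PLACE_BLOCK: the provider and the coordinator connect the blocks
        return state, "connect block with coordinator"


    if __name__ == "__main__":
        state = ProviderState.FETCH_BLOCK
        observations = [  # (called by coordinator, at dispenser, near goal zone)
            (False, False, False),
            (False, True, False),
            (False, True, True),
            (True, True, True),
        ]
        for called, at_disp, near in observations:
            state, action = provider_step(state, called, at_disp, near)
            print(state.name, "->", action)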

A.6 And the Moral of it is ...

  • What did you learn from participating in the contest?

    A good programming and debugging exercise in a non-deterministic and hard-to-reproduce environment.

    Building an agent architecture from scratch in a general programming language.

  • What advice would you give to yourself before the contest/another team wanting to participate in the next?

    Now we have more knowledge to build a cleaner agent architecture.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms?

    The main benefits were the development speed and the simplicity. We followed the “keep it simple” principle to ensure fault-tolerance and to keep components open for extensions and optimisations.

  • Which problems did you encounter because of your chosen technologies?

    Performance issues. Full parallel operation would need another implementation approach.

    Programming errors are signalled in Python only when the actual line of code is executed. This makes it easy to introduce errors.

  • Which aspect of your team cost you the most time?

    Architecture building, safe map management, path finding and ensuring fault-tolerance.

A.7 Looking into the Future

  • Did the warm-up match help improve your team of agents? How useful do you think it is?

    We did not change anything after the warm-up match, but it was good to know that the connection to the server works.

  • What are your thoughts on changing how the contest is run, so that the participants’ agents are executed on the same infrastructure by the organisers? What do you see as positive or negative about this approach?

    The positive aspect would be that all teams have the same conditions (for example network speed).

    The negative aspect would be that we cannot correct any problem during the competition. For example, we had to restart the team manually, because the connection to the server did not work the same way as at the warm-up match.

  • Do you think a match containing more than two teams should be mandatory?

    This might be a possibility, but probably with not too large team sizes.

  • What else can be improved regarding the MAPC for next year?

    Nothing more than those already mentioned above.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Miskolczi, M., Varga, L.Z. (2023). MMD: The Block Building Agent Team with Explainable Intentions. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds) The Multi-Agent Programming Contest 2022. MAPC 2022. Lecture Notes in Computer Science, vol. 13997. Springer, Cham. https://doi.org/10.1007/978-3-031-38712-8_3

  • DOI: https://doi.org/10.1007/978-3-031-38712-8_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-38711-1

  • Online ISBN: 978-3-031-38712-8
