MARL-Ped+Hitmap: Towards Improving Agent-Based Simulations with Distributed Arrays
Multi-agent systems allow modelling complex, heterogeneous, and distributed systems in a realistic way. MARL-Ped is a multi-agent system tool, based on the MPI standard, for simulating different pedestrian scenarios in which agents autonomously learn the best behavior through Reinforcement Learning. By design, MARL-Ped uses one MPI process per agent, with a fixed fine granularity. This requirement limits simulation performance whenever the number of available processors is smaller than the number of agents. Hitmap, on the other hand, is a library that eases the programming of parallel applications based on distributed arrays. It provides abstractions for the automatic partition and mapping of arrays at runtime with arbitrary granularity, as well as functionalities to build flexible communication patterns that transparently adapt to the data partitions.
In this work, we present the granularity-selection methodology and techniques of Hitmap, applied to the simulation of agent systems. As a first approximation, we use the MARL-Ped multi-agent pedestrian simulation software as a case study for intra-node scenarios. Hitmap transparently maps agents to processes, reducing oversubscription and intra-node communication overheads. The evaluation results show significant advantages of using Hitmap: it increases flexibility, performance, and agent-number scalability for a fixed number of processing elements, allowing better exploitation of isolated nodes.
Keywords: Agents · Crowd simulation · Message-passing · Programming tools · Distributed arrays