Abstract
How does the size of a swarm affect its collective action? Despite being arguably a key parameter, no systematic and satisfactory guiding principles exist to select the number of units required for a given task and environment. Even when limited by practical considerations, system designers should endeavor to identify what a reasonable swarm size should be. Here, we show that this fundamental question is closely linked to that of selecting an appropriate swarm density. Our analysis of the influence of density on the collective performance of a target tracking task reveals different ‘phases’ corresponding to markedly distinct group dynamics. We identify a ‘transition’ phase, in which a complex emergent collective response arises. Interestingly, the collective dynamics within this transition phase exhibit a clear trade-off between exploratory actions and exploitative ones. We show that at any density, the exploration–exploitation balance can be adjusted to maximize the system’s performance through various means, such as by changing the level of connectivity between agents. While the density is the primary factor to be considered, it should not be the sole one to be accounted for when sizing the system. Due to the inherent finite-size effects present in physical systems, we establish that the number of constituents primarily affects system-level properties such as exploitation in the transition phase. These results illustrate that instead of learning and optimizing a swarm’s behavior for a specific set of task parameters, further work should instead concentrate on learning to be adaptive, thereby endowing the swarm with the highly desirable feature of being able to operate effectively over a wide range of circumstances.
Data availability
The data used in this study can be found in the following GitHub repository https://github.com/hianlee/swarm-density-tracking.
References
Açikmeşe, B., & Bayard, D.S. (2012). A markov chain approach to probabilistic swarm guidance. In 2012 American Control Conference (ACC). IEEE, (pp 6300–6307). Montreal, https://doi.org/10.1109/ACC.2012.6314729
Biswal, S., Elamvazhuthi, K., & Berman, S. (2021). Decentralized control of multi-agent systems using local density feedback. IEEE Transactions on Automatic Control, 67(8), 3920–3932. https://doi.org/10.1109/TAC.2021.3109520
Bouffanais, R. (2016). Design and control of swarm dynamics. Singapore: Springer. https://doi.org/10.1007/978-981-287-751-2
Cates, M. E., & Tailleur, J. (2015). Motility-induced phase separation. Annual Review of Condensed Matter Physics, 6(1), 219–244. https://doi.org/10.1146/annurev-conmatphys-031214-014710
Coquet, C., Aubry, C., & Arnold, A., et al. (2019). A local charged particle swarm optimization to track an underwater mobile source. In OCEANS 2019 - Marseille. IEEE, Marseille. https://doi.org/10.1109/OCEANSE.2019.8867527
Coquet, C., Arnold, A., & Bouvet, P. J. (2021). Control of a robotic swarm formation to track a dynamic target with communication constraints: Analysis and simulation. Applied Sciences, 11(7). https://doi.org/10.3390/app11073179
Crosscombe, M., & Lawry, J. (2021). The impact of network connectivity on collective learning. In Proceedings of the 15th International Symposium on Distributed Autonomous Robotics Systems (DARS21). https://doi.org/10.1007/978-3-030-92790-5_7
Dadgar, M., Couceiro, M. S., & Hamzeh, A. (2017). RDPSO diversity enhancement based on repulsion between similar ions for robotic target searching. In 2017 Artificial Intelligence and Signal Processing Conference (AISP). (pp 275–280). Shiraz https://doi.org/10.1109/AISP.2017.8324096
Dadgar, M., Couceiro, M. S., & Hamzeh, A. (2020). RbRDPSO: Repulsion-based RDPSO for robotic target searching. Iranian Journal of Science and Technology - Transactions of Electrical Engineering, 44(1), 551–563. https://doi.org/10.1007/s40998-019-00245-z
de Souza, C., Castillo, P., & Vidolov, B. (2022). Local interaction and navigation guidance for hunters drones: A chase behavior approach with real-time tests. Robotica, 40(8), 1–19. https://doi.org/10.1017/S0263574721001910
Dorigo, M., Theraulaz, G., & Trianni, V. (2021). Swarm robotics: Past, present, and future. Proceedings of the IEEE, 109(7), 1152–1165. https://doi.org/10.1109/jproc.2021.3072740
Ebert, J. T., Gauci, M., & Nagpal, R. (2018). Multi-feature collective decision making in robot swarms. In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. (pp. 1711–1719). Stockholm. https://doi.org/10.5555/3237383.3237953
Ebert, J. T., Gauci, M., & Mallmann-Trenn, F., et al. (2020). Bayes bots: Collective Bayesian decision-making in decentralized robot swarms. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). (pp. 7186–7192). Paris. https://doi.org/10.1109/ICRA40945.2020.9196584
Elamvazhuthi, K., & Berman, S. (2019). Mean-field models in swarm robotics: A survey. Bioinspiration & Biomimetics. https://doi.org/10.1088/1748-3190/ab49a4
Engelbrecht, A. P. (2010). Heterogeneous particle swarm optimization. In: Dorigo, M., et al. (Eds.), 7th Int. Conf. ANTS 2010 (pp. 191–202). Berlin: Springer.
Esterle, L., & Lewis, P. R. (2020). Distributed autonomy and trade-offs in online multiobject k-coverage. Computational Intelligence, 36(2), 720–742. https://doi.org/10.1111/coin.12264
Francesca, G., & Birattari, M. (2016). Automatic design of robot swarms: achievements and challenges. Frontiers in Robotics and AI, 3, 29.
Hamann, H. (2012). Towards swarm calculus: Universal properties of swarm performance and collective decisions. In: Swarm Intelligence: 8th International Conference, ANTS 2012. Volume 7461 of LNCS. (pp 168–179). Springer: Berlin
Hamann, H. (2018a). Superlinear scalability in parallel computing and multi-robot systems: Shared resources, collaboration, and network topology. Lecture Notes in Computer Science, 10793, 31–42. https://doi.org/10.1007/978-3-319-77610-1_3
Hamann, H. (2018). Swarm robotics: A formal approach. London: Springer International Publishing. https://doi.org/10.1007/978-3-319-74528-2
Hecker, J. P., & Moses, M. E. (2015). Beyond pheromones: evolving error-tolerant, flexible, and scalable ant-inspired robot swarms. Swarm Intelligence, 9(1), 43–70. https://doi.org/10.1007/s11721-015-0104-z
Hönig, W., & Ayanian, N. (2016). Dynamic multi-target coverage with robotic cameras. In IEEE International Conference on Intelligent Robots and Systems (pp. 1871–1878). Daejeon, https://doi.org/10.1109/IROS.2016.7759297
Hornischer, H., Varughese, J. C., Thenius, R., et al. (2020). CIMAX: Collective information maximization in robotic swarms using local communication. Adaptive Behavior, 29(3). https://doi.org/10.1177/1059712320912021
Horsevad, N., Kwa, H. L., & Bouffanais, R. (2022a). Beyond bio-inspired robotics: How multi-robot systems can support research on collective animal behavior. Frontiers in Robotics and AI. https://doi.org/10.3389/frobt.2022.865414
Horsevad, N., Mateo, D., Kooij, R. E., et al. (2022b). Transition from simple to complex contagion in collective decision-making. Nature Communications, 13, 1442. https://doi.org/10.1038/s41467-022-28958-6
Hüttenrauch, M., Adrian, S., Neumann, G., et al. (2019). Deep reinforcement learning for swarm systems. Journal of Machine Learning Research, 20(54), 1–31.
Jensen, E. A., Lowmanstone, L., & Gini, M. (2018). Communication-restricted exploration for search teams. Distributed Autonomous Robotic Systems Springer Proceedings in Advanced Robotics, 6, 17–30. https://doi.org/10.1007/978-3-319-73008-0
Jurt, M., Milner, E., Sooriyabandara, M., et al. (2022). Collective transport of arbitrarily shaped objects using robot swarms. Artificial Life and Robotics. https://doi.org/10.1007/s10015-022-00730-5
Khaluf, Y., Birattari, M., & Rammig, F. (2013). Probabilistic analysis of long-term swarm performance under spatial interferences. In International Conference on Theory and Practice of Natural Computing (pp. 121–132). Caceres. https://doi.org/10.1007/978-3-642-45008-2_10
Khaluf, Y., Pinciroli, C., Valentini, G., et al. (2017). The impact of agent density on scalability in collective systems: Noise-induced versus majority-based bistability. Swarm Intelligence, 11(2), 155–179. https://doi.org/10.1007/s11721-017-0137-6
Kit, J. L., Dharmawan, A. G., & Mateo, D., (2019). Decentralized multi-floor exploration by a swarm of miniature robots teaming with wall-climbing units. In International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEE, New Brunswick. https://doi.org/10.1109/MRS.2019.8901058
Kouzehgar, M., Meghjani, M., & Bouffanais, R. (2020). Multi-agent reinforcement learning for dynamic ocean monitoring by a swarm of buoys. In: IEEE-MTS Global Oceans 2020: Singapore–US Gulf Coast, IEEE, pp 1–8, https://doi.org/10.1109/IEEECONF38699.2020.9389128
Kwa, H. L., & Bouffanais, R. (2022). The effect of network connectivity on exploration and exploitation during decentralized collective learning. In 2022 International Workshop on Agent-Based Modelling of Human Behaviour (ABMHuB), Online, http://abmhub.cs.ucl.ac.uk/2022/camera_ready/Kwa_Bouffanais.pdf
Kwa, H. L., Kit, J. L., & Bouffanais, R. (2020a). Optimal swarm strategy for dynamic target search and tracking. In: Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), Auckland, New Zealand, pp 672–680, https://doi.org/10.5555/3398761.3398842
Kwa, H.L., Tokić, G., & Bouffanais, R., et al (2020b). Heterogeneous swarms for maritime dynamic target search and tracking. In Global OCEANS 2020: Singapore-U.S Gulf Coast. IEEE, Singapore, https://doi.org/10.1109/IEEECONF38699.2020.9389145
Kwa, H. L., Kit, J. L., & Bouffanais, R. (2021). Tracking multiple fast targets with swarms: Interplay between social interaction and agent memory. In: ALIFE 2021: The 2021 Conference on Artificial Life, Prague, Czech Republic, https://doi.org/10.1162/isal_a_00376
Kwa, H. L., Babineau, V., Philippot, J., et al. (2022). Adapting the exploration-exploitation balance in heterogeneous swarms: Tracking evasive targets. Artificial Life, 29, 1–16. https://doi.org/10.1162/artl_a_00390
Kwa, H. L., Kit, J. L., & Bouffanais, R. (2022). Balancing collective exploration and exploitation in multi-agent and multi-robot systems: A review. Frontiers in Robotics and AI. https://doi.org/10.3389/frobt.2021.771520
Lerman, K., & Galstyan, A. (2001). Mathematical model of foraging in a group of robots: Effect of interference. Autonomous Robots, 13(2), 127–141. https://doi.org/10.1023/A:1019633424543
Li, H., Feng, C., & Ehrhard, H., et al. (2017). Decentralized stochastic control of robotic swarm density: Theory, simulation, and experiment. In: IEEE International Conference on Intelligent Robots and Systems, (pp. 4341–4347). Vancouver. https://doi.org/10.1109/IROS.2017.8206299
Ligot, A., Cotorruelo, A., Garone, E., et al. (2022). Towards an empirical practice in off-line fully-automatic design of robot swarms. IEEE Transactions on Evolutionary Computation. https://doi.org/10.1109/TEVC.2022.3144848
Liu, Z., Crosscombe, M., & Lawry, J. (2021). Imprecise fusion operators for collective learning. In: ALIFE 2021: The 2021 Conference on Artificial Life, https://doi.org/10.1162/isal_a_00407
Mateo, D., Kuan, Y. K., & Bouffanais, R. (2017). Effect of correlations in swarms on collective response. Scientific Reports. https://doi.org/10.1038/s41598-017-09830-w
Mateo, D., Horsevad, N., Hassani, V., et al. (2019). Optimal network topology for responsive collective behavior. Science Advances, 5(4), eaau0999. https://doi.org/10.1126/sciadv.aau0999
Oliveira, M., Pinheiro, D., & Macedo, M., et al. (2017). Better exploration-exploitation pace, better swarm: Examining the social interactions. In 2017 IEEE Latin American Conference on Computational Intelligence (LA-CCI). IEEE, Arequipa, Peru, https://doi.org/10.1109/LA-CCI.2017.8285712
Pang, B., Song, Y., Zhang, C., et al. (2019). A swarm robotic exploration strategy based on an improved random walk method. Journal of Robotics. https://doi.org/10.1155/2019/6914212
Piotrowski, A. P., Napiorkowski, J. J., & Piotrowska, A. E. (2020). Population size in particle swarm optimization. Swarm and Evolutionary Computation, 58, 100718. https://doi.org/10.1016/j.swevo.2020.100718
Prasetyo, J., De Masi, G., & Ferrante, E. (2019). Collective decision making in dynamic environments. Swarm Intelligence, 13, 217–243. https://doi.org/10.1007/s11721-019-00169-8
Rausch, I., Reina, A., Simoens, P., et al. (2019). Coherent collective behaviour emerging from decentralised balancing of social feedback and noise. Swarm Intelligence, 13, 321–345. https://doi.org/10.1007/s11721-019-00173-y
Roeva, O., Fidanova, S., & Paprzycki, M. (2015). Population size influence on the genetic and ant algorithms performance in case of cultivation process modeling. Recent Advances in Computational Optimization. https://doi.org/10.1007/978-3-319-12631-9_7
Rosenfeld, A., Kaminka, G. A., & Kraus, S. (2006). A study of scalability properties in robotic teams. In P. Scerri, R. Vincent, & R. Mailler (Eds.), Coordination of Large-Scale Multiagent Systems (pp. 27–51). Boston: Springer.
Rossides, G., Metcalfe, B., & Hunter, A. (2021). Particle swarm optimization—An adaptation for the control of robotic swarms. Robotics, 10(2), 58. https://doi.org/10.3390/robotics10020058
Rubenstein, M., Ahler, C., Hoff, N., et al. (2014). Kilobot: A low cost robot with scalable operations designed for collective behaviors. Robotics and Autonomous Systems, 62(7), 966–975. https://doi.org/10.1016/j.robot.2013.08.006
Schaerf, T. M., Makinson, J. C., Myerscough, M. R., et al. (2013). Do small swarms have an advantage when house hunting? The effect of swarm size on nest-site selection by apis mellifera. Journal of the Royal Society Interface. https://doi.org/10.1098/rsif.2013.0533
Schranz, M., Di Caro, G. A., Schmickl, T., et al. (2021). Swarm intelligence and cyber-physical systems: Concepts, challenges and future trends. Swarm and Evolutionary Computation, 60, 100762. https://doi.org/10.1016/j.swevo.2020.100762
Schroeder, A., Trease, B., & Arsie, A. (2019). Balancing robot swarm cost and interference effects by varying robot quantity and size. Swarm Intelligence, 13(1), 1–19. https://doi.org/10.1007/s11721-018-0161-1
Sekunda, A., Komareji, M., & Bouffanais, R. (2016). Interplay between signaling network design and swarm dynamics. Network Science, 4(2), 244–265. https://doi.org/10.1017/nws.2016.5
Shishika, D., & Paley, D. A. (2019). Mosquito-inspired distributed swarming and pursuit for cooperative defense against fast intruders. Autonomous Robots, 43(7), 1781–1799. https://doi.org/10.1007/s10514-018-09827-y
Strickland, L., Baudier, K., & Bowers, K., et al. (2018). Bio-inspired role allocation of heterogeneous teams in a site defense task. In: Distributed Autonomous Robotic Systems 2018. Springer International Publishing, Boulder, CO, USA, https://doi.org/10.1007/978-3-030-05816-6_10
Sun, Z., Sun, H., Li, P., et al. (2022). Self-organizing cooperative pursuit strategy for multi-usv with dynamic obstacle ships. Journal of Marine Science and Engineering. https://doi.org/10.3390/jmse10050562
Sung, Y., Budhiraja, A.K., & Williams, R.K., et al. (2018). Distributed simultaneous action and target assignment for multi-robot multi-target tracking. In: Proceedings - IEEE International Conference on Robotics and Automation IEEE, (pp. 3724–3729). Brisbane. https://doi.org/10.1109/ICRA.2018.8460974
Sung, Y., Budhiraja, A. K., Williams, R. K., et al. (2020). Distributed assignment with limited communication for multi-robot multi-target tracking. Autonomous Robots, 44, 57–73. https://doi.org/10.1007/s10514-019-09856-1
Talamali, M. S., Saha, A., Marshall, J. A. R., & Reina, A. (2021). When less is more: Robot swarms adapt better to changes with constrained communication. Science Robotics, 6(56). https://doi.org/10.1126/scirobotics.abf1416
Thenius, R., Moser, D., Varughese, J. C., et al. (2016). subCULTron - cultural development as a tool in underwater robotics. Artificial Life and Intelligent Agents, 732, 27–41. https://doi.org/10.1007/978-3-319-90418-4_3
Vallegra, F., Mateo, D., & Tokić, G., et al. (2018). Gradual Collective Upgrade of a Swarm of Autonomous Buoys for Dynamic Ocean Monitoring. In IEEE-MTS OCEANS 2018, Charleston, SC, USA, https://doi.org/10.1109/OCEANS.2018.8604642
Van den Bergh, F., & Engelbrecht, A. P. (2001). Effects of swarm size on cooperative particle swarm optimisers. In GECCO'01: Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (pp. 892–899). San Francisco. https://doi.org/10.5555/2955239.2955400
Vicsek, T., Czirók, A., Ben-Jacob, E., et al. (1995). Novel type of phase transition in a system of self-driven particles. Physical Review Letters, 75(6), 1226–1229. https://doi.org/10.1103/PhysRevLett.75.1226
Wahby, M., Petzold, J., & Eschke, C., et al. (2019). Collective change detection: Adaptivity to dynamic swarm densities and light conditions in robot swarms. In: Artificial Life Conference Proceedings, (pp. 642–649). MIT Press: Newcastle. https://doi.org/10.1162/isal_a_00233
Zhang, K., Yang, Z., & Başar, T. (2021). Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control (pp. 321–384). https://doi.org/10.1007/978-3-030-60990-0_12
Zhang, S., Liu, M. Y., Lei, X. K., et al. (2019). Dynamics and motion patterns of a k-capture game with attraction-repulsion interaction. EPL (Europhysics Letters). https://doi.org/10.1209/0295-5075/128/10003
Zhong, V.J., Umamaheshwarappa, R.R., & Dornberger, R., et al. (2018). Comparison of a real kilobot robot implementation with its computer simulation focussing on target-searching algorithms. In 2018 International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, (pp. 160–164). Singapore. https://doi.org/10.1109/ICoIAS.2018.8494196
Zoss, B. M., Mateo, D., Kuan, Y. K., et al. (2018). Distributed system of autonomous buoys for scalable deployment and monitoring of large waterbodies. Autonomous Robots, 42, 1669–1689. https://doi.org/10.1007/s10514-018-9702-0
Acknowledgements
Not applicable
Funding
This work was supported by the Thales Solutions Asia under the Singapore Economic Development Board Industrial Postgraduate Programme (EDB IPP) and the Natural Sciences and Engineering Research Council of Canada (NSERC), under the grant # RGPIN-2022-04064.
Author information
Authors and Affiliations
Contributions
Conceptualization: RB; Methodology: HLK & RB; Development of Simulation and Data Processing Tools: HLK; Conduct of Experiments: HLK; Data Analysis: HLK, JP & RB; Manuscript Preparation and Review: HLK, JP & RB
Corresponding authors
Ethics declarations
Conflict of interest
H. L. Kwa is employed as a Research Engineer and receives a salary from Thales Solutions Asia. All other authors have no relevant financial or non-financial interests to disclose.
Consent for publication
Not applicable
Ethical approval
Not applicable
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1 Strategy velocity components
The search and track strategy given in Sect. 3.1 produces a velocity vector comprised of two parts: (1) the attraction velocity component, \(\textbf{v}_{i,\text {att}}[t]\), and (2) the repulsion velocity component, \(\textbf{v}_{i,\text {rep}}[t]\). These two components are then combined to give a final agent velocity using Eq. 1, which is restated here:

\(\textbf{v}_{i}[t] = \textbf{v}_{i,\text {att}}[t] + \textbf{v}_{i,\text {rep}}[t]\)
In this section, we state how the values for \(\textbf{v}_{i,\text {att}}[t]\) and \(\textbf{v}_{i,\text {rep}}[t]\) are obtained. This strategy was first presented in Kwa et al. (2022).
1.1 Attraction velocity component
The attractive component is used to encourage agents to aggregate at a point of interest, \(\textbf{p}[t]\), determined using Algorithm 2. At every time-step, each agent senses its local environment to search for a target. Should an agent detect a target, it will transition from an exploratory state into a tracking state, set \(\textbf{p}[t]\) as the target's current location, and broadcast this location. Should no target be detected, the agent will communicate with its k-nearest neighbors and attempt to track targets detected by those neighbors. In addition, each agent is endowed with a memory, M, of duration \(t_\text {mem}\), with which it keeps track of the position and time at which a target was last found. Each agent also receives a set of target positions and encounter times from its k-nearest neighbors. These received values are compared with the agent's own, and the most recent target position is used as the point of attraction, \(\textbf{p}[t]\). Should the agent still not have any knowledge of the target's location at this point, \(\textbf{p}[t]\) is set to the agent's own location, \(x_i[t]\), essentially disabling the attractive component. Through this update algorithm, agents are able to compare information sensed directly from the environment with information received from their neighbors and choose which set of information to exploit.
At this point, it is important to reemphasize that in this framework, the neighborhood of an agent is to be understood in the network sense. As such, an agent i has as many neighbors as its degree, k. Also, since time-varying network topologies are considered, it should be noted that the neighborhood of each agent evolves over the course of the task. Given this dynamic network topology, all agents independently set \(\textbf{p}[t]\) using Algorithm 2.
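As an illustration, the point-of-attraction update described above can be sketched as follows. All names and data structures here are paraphrased from the description of Algorithm 2, not taken from the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class TargetInfo:
    position: tuple   # last known target position (x, y)
    timestamp: int    # time-step at which the target was seen

def update_attraction_point(own_pos, own_obs, neighbor_obs, t_now, t_mem):
    """Select the point of attraction p[t] for one agent.

    own_obs is the agent's own (possibly empty) memory entry; neighbor_obs
    holds the entries received from its k-nearest neighbors. Entries older
    than t_mem are discarded. If no valid information remains, the agent
    falls back to its own position, disabling the attractive component.
    Returns (p[t], exploratory_state).
    """
    candidates = [obs for obs in [own_obs, *neighbor_obs]
                  if obs is not None and t_now - obs.timestamp <= t_mem]
    if not candidates:
        return own_pos, True          # exploratory state: S_exp = 1
    # Exploit the most recently acquired target position
    best = max(candidates, key=lambda obs: obs.timestamp)
    return best.position, False       # tracking state: S_exp = 0
```

Note that the comparison by recency applies uniformly to directly sensed and received information, which is how an agent chooses which set of information to exploit.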
Using an agent's velocity in the previous time-step and its location in relation to \(\textbf{p}[t]\), the attraction component can be calculated according to:

\(\textbf{v}_{i,\text {att}}[t] = \omega \textbf{v}_{i}[t-1] + c\,r\,(\textbf{p}[t] - \textbf{x}_{i}[t])\)
This equation is similar to that used in the social-only PSO model proposed by Engelbrecht (2010), where \(\omega\) is the velocity inertial weight, c is the social weight, and r is a number randomly drawn from the unit interval. In computational optimization, this term is the main driver of the system's exploitative behavior; here, it is used to drive the MRS towards the target. It should be noted that, unlike the social-only PSO model, which uses an infinite memory length, agents in the proposed strategy use a limited memory length. This prevents agents from exploiting outdated target positional information.
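For concreteness, the social-only attraction component can be sketched as below. The default values for \(\omega\) and c are illustrative placeholders, not parameters reported in the paper:

```python
import random

def attraction_velocity(v_prev, x_i, p, omega=0.7, c=1.5):
    """Social-only PSO attraction component (sketch):
        v_att = omega * v_prev + c * r * (p - x_i)
    where r is redrawn uniformly from [0, 1) at every time-step.
    v_prev, x_i, and p are equal-length coordinate tuples."""
    r = random.random()
    return tuple(omega * v + c * r * (p_k - x_k)
                 for v, x_k, p_k in zip(v_prev, x_i, p))
```

When the agent already sits at the point of attraction, only the inertial term \(\omega \textbf{v}\) remains, so the agent coasts rather than stopping abruptly.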
1.2 Adaptive repulsion
The adaptive repulsion component is used to promote agent exploration of the search space and to stop agents from aggregating within a small area, thereby preventing over-exploitation of target information. As a direct byproduct, this behavior also provides an anti-collision measure.
The inter-agent repulsion scheme adopted is based on the one used in the BoB swarm developed by Vallegra et al. (2018); Zoss et al. (2018). Using this behavior, an Agent i with topological neighbors j calculates its individual repulsion velocity as follows:

\(\textbf{v}_{i,\text {rep}}[t] = -\sum _{j} \left( \frac{a_R}{r_{ij}}\right) ^{d} \frac{\textbf{r}_{ij}}{r_{ij}}\)
where \(\textbf{r}_{ij}\) is the vector from Agent i to Agent j and \(r_{ij} = \Vert \textbf{r}_{ij}\Vert\). This inter-agent repulsion is controlled by two parameters: the repulsion strength \(a_R\), which affects the agents' distance from each other at equilibrium, and the exponent d in the pre-factor term \((a_R/r_{ij})\). In this work, d is fixed at 6, since this value has only a moderate effect on the performance of the EED strategy. For large values of d, the repulsion strength \(a_R\) is approximately equal to the nearest-neighbor distance in the equilibrium configuration (Vallegra et al., 2018; Coquet et al., 2021).
The key aspect of this inter-agent repulsion is an agent's ability to adjust its own repulsion strength, \(a_R[t]\), based on its local environment and neighborhood. To this end, the agent's exploratory state, \(S_{i, \text {exp}}[t]\), is used. When an agent has no target information, it enters an exploratory state, i.e., \(S_{i, \text {exp}}[t] = 1\), and increases its \(a_R\) value until a maximum value is attained. Conversely, if the agent is in a tracking state, i.e., \(S_{i, \text {exp}}[t] = 0\), it gradually reduces its \(a_R\) value until a minimum value is reached. The adaptive repulsion behavior used to obtain the repulsion component is summarized in Algorithm 3.
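A minimal sketch of this adaptive behavior is given below. The step size and the bounds on \(a_R\) are illustrative assumptions; Algorithm 3 in the paper specifies the actual schedule:

```python
import math

def repulsion_velocity(x_i, neighbor_positions, a_R, d=6):
    """Inter-agent repulsion component (BoB-style, sketched): each
    topological neighbor j contributes a push directed away from j,
    scaled by the pre-factor (a_R / r_ij)**d."""
    vx, vy = 0.0, 0.0
    for xj, yj in neighbor_positions:
        rx, ry = xj - x_i[0], yj - x_i[1]
        r = math.hypot(rx, ry)          # r_ij = ||r_ij||
        w = (a_R / r) ** d
        vx -= w * rx / r                # push along -r_ij
        vy -= w * ry / r
    return vx, vy

def update_repulsion_strength(a_R, exploring, step=0.05,
                              a_min=0.5, a_max=2.0):
    """Adaptive repulsion strength: grow a_R while exploring
    (S_exp = 1), shrink it while tracking (S_exp = 0), clamped to
    [a_min, a_max]. Step and bounds are hypothetical values."""
    if exploring:
        return min(a_R + step, a_max)
    return max(a_R - step, a_min)
```

Because the pre-factor decays as the sixth power of distance, agents farther than \(a_R\) away contribute almost nothing, so the repulsion is effectively short-ranged.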
Appendix 2 Local density
This section explains the reasoning behind Eq. 6, which was used to calculate the system's local swarm density.
Due to the implemented inter-agent repulsion behavior, agents tend to fall into a hexagonal packing pattern, as seen in Fig. 10a. As such, an individual agent will usually be surrounded by six other neighboring agents unless it is located at the edge of the system. By defining \(L_i\) as the average distance between an agent i and its 6 nearest neighbors, it can be assumed that all 6 neighboring agents are located a distance of \(L_i\) away for the purpose of calculating an individual agent's local density.
While a different number of agents can be used for this calculation, the same trends in the local agent density are observed when varying the global average swarm density, as seen in Fig. 10b. However, if fewer agents are used in this calculation, the initial divergence between the local agent density and the global average swarm density is accentuated: an agent that finds itself in close proximity (relative to the size of the environment) to another agent while moving around the domain will return an artificially high local density. Conversely, if too many agents are used, the presence of such coincidental agent 'clusters' is not reflected. An intermediate number, six in this case, was therefore chosen for the local density calculations.
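The computation of \(L_i\) can be sketched as follows. The conversion from \(L_i\) to a density shown here assumes ideal hexagonal packing (number density \(2/(\sqrt{3} L_i^2)\)) and is an illustrative stand-in, not necessarily the paper's Eq. 6:

```python
import math

def local_density(x_i, others, k=6):
    """Estimate an agent's local density from its k nearest neighbors.

    L_i is the mean distance to the k nearest neighbors (fewer if the
    swarm is smaller). The density returned is that of an ideal
    hexagonal lattice with nearest-neighbor spacing L_i -- an
    illustrative assumption, not the paper's exact formula."""
    dists = sorted(math.dist(x_i, xj) for xj in others)
    L_i = sum(dists[:k]) / min(k, len(dists))
    return 2.0 / (math.sqrt(3) * L_i ** 2)
```

With six neighbors, a single nearby agent is averaged against five others, which is why coincidental close encounters inflate the estimate less than they would with a smaller k.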
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kwa, H.L., Philippot, J. & Bouffanais, R. Effect of swarm density on collective tracking performance. Swarm Intell 17, 253–281 (2023). https://doi.org/10.1007/s11721-023-00225-4