
Effect of swarm density on collective tracking performance

Swarm Intelligence

Abstract

How does the size of a swarm affect its collective action? Despite being arguably a key parameter, no systematic and satisfactory guiding principles exist to select the number of units required for a given task and environment. Even when limited by practical considerations, system designers should endeavor to identify what a reasonable swarm size should be. Here, we show that this fundamental question is closely linked to that of selecting an appropriate swarm density. Our analysis of the influence of density on the collective performance of a target tracking task reveals different ‘phases’ corresponding to markedly distinct group dynamics. We identify a ‘transition’ phase, in which a complex emergent collective response arises. Interestingly, the collective dynamics within this transition phase exhibit a clear trade-off between exploratory actions and exploitative ones. We show that at any density, the exploration–exploitation balance can be adjusted to maximize the system’s performance through various means, such as by changing the level of connectivity between agents. While the density is the primary factor to be considered, it should not be the sole one to be accounted for when sizing the system. Due to the inherent finite-size effects present in physical systems, we establish that the number of constituents primarily affects system-level properties such as exploitation in the transition phase. These results illustrate that instead of learning and optimizing a swarm’s behavior for a specific set of task parameters, further work should instead concentrate on learning to be adaptive, thereby endowing the swarm with the highly desirable feature of being able to operate effectively over a wide range of circumstances.



Data availability

The data used in this study can be found in the following GitHub repository https://github.com/hianlee/swarm-density-tracking.


Acknowledgements

Not applicable

Funding

This work was supported by Thales Solutions Asia under the Singapore Economic Development Board Industrial Postgraduate Programme (EDB IPP) and the Natural Sciences and Engineering Research Council of Canada (NSERC), under grant # RGPIN-2022-04064.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: RB; Methodology: HLK & RB; Development of Simulation and Data Processing Tools: HLK; Conduct of Experiments: HLK; Data Analysis: HLK, JP & RB; Manuscript Preparation and Review: HLK, JP & RB

Corresponding authors

Correspondence to Hian Lee Kwa or Roland Bouffanais.

Ethics declarations

Conflict of interest

H. L. Kwa is employed as a Research Engineer and receives a salary from Thales Solutions Asia. All other authors have no relevant financial or non-financial interests to disclose.

Consent for publication

Not applicable

Ethical approval

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1 Strategy velocity components

The search and track strategy given in Sect. 3.1 produces a velocity vector composed of two parts: (1) the attraction velocity component, \(\textbf{v}_{i,\text {att}}[t]\), and (2) the repulsion velocity component, \(\textbf{v}_{i,\text {rep}}[t]\). These two components are combined to give the final agent velocity using Eq. 1, which is restated here:

$$\begin{aligned} \textbf{v}_i[t] = \textbf{v}_{i,\text {att}}[t] + \textbf{v}_{i,\text {rep}}[t]. \end{aligned}$$
(7)

In this section, we state how the values for \(\textbf{v}_{i,\text {att}}[t]\) and \(\textbf{v}_{i,\text {rep}}[t]\) are obtained. This strategy was first presented in Kwa et al. (2022).

1.1 Attraction velocity component

The attractive component encourages agents to aggregate at a point of interest, \(\textbf{p}[t]\), determined using Algorithm 2. At every time-step, each agent measures its local environment to look for a target. Should an agent detect a target, it transitions from an exploratory state to a tracking state, sets \(\textbf{p}[t]\) to the target’s current location, and broadcasts that location. Should a target not be detected, the agent communicates with its k-nearest neighbors and attempts to track targets detected by its neighbors. In addition, each agent is endowed with a memory, M, of duration \(t_\text {mem}\), with which it keeps track of the position and time at which a target was found. Each agent also receives a set of target positions and encounter times from its k-nearest neighbors. These received values are compared with the agent’s own, and the most recent target position is used as the point of attraction, \(\textbf{p}[t]\). If, at this point, the agent still has no knowledge of the target’s location, \(\textbf{p}[t]\) is set to the agent’s own location, \(\textbf{x}_i[t]\), essentially disabling the attractive component. Through this update algorithm, agents are able to compare information directly sensed from the environment with information received from their neighbors and choose which set of information to exploit.

At this point, it is important to reemphasize that in this framework, the neighborhood of an agent is to be understood in the network sense. As such, an agent i has as many neighbors as its degree, k. Also, since time-varying network topologies are considered, it should be noted that the neighborhood of each agent evolves over the course of the task. Given this dynamic network topology, all agents independently set \(\textbf{p}[t]\) using Algorithm 2.
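As an illustration, the point-of-attraction update performed by each agent can be sketched as follows. This is a reconstruction from the description above, not the authors' code; the function name, the `(position, time)` tuple layout, and the handling of missing observations are assumptions.

```python
import numpy as np

def update_attraction_point(x_i, own_obs, neighbor_obs, t, t_mem):
    """Sketch of the point-of-attraction update (Algorithm 2).

    own_obs is the agent's own remembered target observation as a
    (position, time) tuple, or None; neighbor_obs is a list of such
    tuples received from the k nearest neighbors.
    """
    # Pool the agent's own memory with reports from its neighbors.
    candidates = [obs for obs in [own_obs, *neighbor_obs] if obs is not None]
    # Discard entries older than the memory duration t_mem.
    candidates = [(p, tau) for (p, tau) in candidates if t - tau <= t_mem]
    if not candidates:
        # No usable target information: attract to the agent's own
        # position, which effectively disables the attraction component.
        return np.asarray(x_i, dtype=float)
    # Exploit the most recently observed target position.
    p, _ = max(candidates, key=lambda obs: obs[1])
    return np.asarray(p, dtype=float)
```

The comparison by encounter time is what lets an agent arbitrate between directly sensed information and information relayed by its neighbors.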


Using an agent’s velocity in the previous time-step and its location in relation to \(\textbf{p}[t]\), the attraction component can be calculated according to:

$$\begin{aligned} \textbf{v}_{i, \text {att}}[t] = \omega \textbf{v}_i[t-1] + c r \big (\textbf{p}[t] - \textbf{x}_i[t]\big ). \end{aligned}$$
(8)

This equation is similar to that used in the social-only PSO model proposed by Engelbrecht (2010), where \(\omega\) is the velocity inertial weight, c is the social weight, and r is a number randomly drawn from the unit interval. In computational optimization, this term is the main driver of the system’s exploitative behavior. Here, it is used to drive the MRS towards the target. It should be noted that, unlike the social-only PSO model, which uses an infinite memory length, agents in the proposed strategy use a limited memory length. This prevents agents from exploiting outdated target positional information.
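Eq. 8 can be written directly as a short function. The default values of \(\omega\) and c below are illustrative placeholders, not parameter values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded here only for reproducibility

def attraction_velocity(v_prev, x_i, p, omega=0.7, c=1.0):
    """Eq. (8): social-only PSO-style attraction component.

    v_prev : agent velocity at the previous time-step, v_i[t-1]
    x_i    : current agent position, x_i[t]
    p      : point of attraction, p[t]
    """
    r = rng.random()  # random scalar drawn from the unit interval
    return omega * np.asarray(v_prev) + c * r * (np.asarray(p) - np.asarray(x_i))
```

When \(\textbf{p}[t]\) is set to the agent's own position (no target information), the second term vanishes and only the inertial term remains.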

1.2 Adaptive repulsion

The adaptive repulsion component promotes agent exploration of the search space and stops agents from aggregating within a small area, thereby preventing over-exploitation of target information. This behavior also provides an anti-collision measure as a direct byproduct.

The inter-agent repulsion scheme adopted is based on the one used in the BoB swarm developed by Vallegra et al. (2018); Zoss et al. (2018). Under this behavior, an agent i with topological neighbors \(j \in \mathcal {N}_i\) calculates its individual repulsion velocity as follows:

$$\begin{aligned} \textbf{v}_{i, \text {rep}}[t] = - \sum _{j\in \mathcal {N}_i}\left( \frac{a_R[t]}{r_{ij}[t]}\right) ^d \frac{{\textbf{r}_{ij}[t]}}{r_{ij}[t]}, \end{aligned}$$
(9)

where \(\textbf{r}_{ij}\) is the vector from Agent i to Agent j and \(r_{ij} = \Vert \textbf{r}_{ij}\Vert\). This inter-agent repulsion is controlled by two parameters: the repulsion strength \(a_R\), which sets the agents’ distance from each other at equilibrium, and the exponent d in the pre-factor term \((a_R/r_{ij})\). In this work, d is fixed at 6, since this value has very moderate effects on the performance of the EED strategy. At large \((a_R/r_{ij})\) and d values, the repulsion strength is approximately equal to the nearest-neighbor distance in the equilibrium configuration (Vallegra et al., 2018; Coquet et al., 2021).
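A direct transcription of Eq. 9, summing the inverse-power repulsion over an agent's topological neighbors (a minimal sketch; the function name and argument layout are assumptions):

```python
import numpy as np

def repulsion_velocity(x_i, neighbor_positions, a_R, d=6):
    """Eq. (9): summed inverse-power repulsion from topological neighbors.

    x_i                : position of agent i
    neighbor_positions : positions of agents j in the neighborhood N_i
    a_R                : repulsion strength (sets equilibrium spacing)
    d                  : pre-factor exponent, fixed at 6 in the paper
    """
    x_i = np.asarray(x_i, dtype=float)
    v = np.zeros_like(x_i)
    for x_j in neighbor_positions:
        r_ij = np.asarray(x_j, dtype=float) - x_i  # vector from i to j
        dist = np.linalg.norm(r_ij)
        # Each neighbor pushes agent i away along the unit vector -r_ij/dist,
        # with magnitude (a_R / dist)^d.
        v -= (a_R / dist) ** d * (r_ij / dist)
    return v
```

With d = 6 the magnitude falls off steeply past \(r_{ij} \approx a_R\), which is why neighbors closer than \(a_R\) dominate the sum.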

The key aspect of this inter-agent repulsion is an agent’s ability to adjust its own repulsion strength, \(a_R[t]\), based on its local environment and neighborhood. To this end, the agent’s exploratory state, \(S_{i, \text {exp}}[t]\), is used. When an agent has no target information, it enters an exploratory state, i.e., \(S_{i, \text {exp}}[t] = 1\), and increases its \(a_R\) value until a maximum value is attained. Conversely, if the agent is in a tracking state, i.e., \(S_{i, \text {exp}}[t] = 0\), the agent gradually reduces its \(a_R\) value until a minimum value is reached. The adaptive repulsion behavior used to obtain the repulsion component is summarized in Algorithm 3.
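The ramp-up/ramp-down rule can be sketched in a few lines. The bounds `a_min`, `a_max` and the increment `step` are illustrative values, not parameters taken from the paper:

```python
def update_repulsion_strength(a_R, exploring, a_min=0.5, a_max=3.0, step=0.1):
    """Sketch of the adaptive repulsion-strength update (Algorithm 3).

    exploring : True when S_exp[t] = 1 (no target information),
                False when S_exp[t] = 0 (tracking state).
    """
    if exploring:
        # Exploratory state: grow a_R towards its maximum, pushing
        # agents apart and spreading the swarm over the search space.
        return min(a_R + step, a_max)
    # Tracking state: shrink a_R towards its minimum, letting agents
    # contract around the detected target.
    return max(a_R - step, a_min)
```

Because each agent applies this rule independently from its own state, the swarm's effective density adapts locally without any central coordination.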


Appendix 2 Local density

This section explains why Eq. 6 was used to calculate the system’s local swarm density. The equation is restated here for completeness:

$$\begin{aligned} \rho _L = \frac{1}{NT}\sum ^T_{t=1}\sum ^N_{i=1}\frac{7}{\pi L_{i,t}^2}. \end{aligned}$$
(10)

Due to the implemented inter-agent repulsion behavior, agents tend to fall into a hexagonal packing pattern, as seen in Fig. 10a. As such, an individual agent will usually be surrounded by six other neighboring agents unless it is located at the edge of the system. By defining \(L_i\) as the average distance between an agent i and its 6 nearest neighbors, it can be assumed that all 6 neighboring agents are located a distance of \(L_i\) away for the purpose of calculating an individual agent’s local agent density.

While a different number of agents can be used for this calculation, the same trends in the local agent density are observed when varying the global average swarm density, as seen in Fig. 10b. However, if fewer agents are used in this calculation, the initial divergence between the local agent density and the global average swarm density is accentuated. As such, an agent that finds itself in close proximity (relative to the size of the environment) to another agent while moving around the domain will return an artificially high local density. Conversely, if too many agents are used, the presence of such coincidental agent ‘clusters’ is not reflected. As such, an intermediate number, six in this case, was chosen for the local density calculations.
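The per-configuration term of Eq. 10 can be computed as below: for each agent, average the distances to its m nearest neighbors and count the agent plus those m neighbors inside a disc of radius \(L_i\), giving the factor \(m+1 = 7\) for \(m = 6\). This is a sketch consistent with the reasoning above, not the authors' implementation:

```python
import numpy as np

def local_density(positions, m=6):
    """Local swarm density of one configuration (the inner sum of Eq. 10).

    positions : (N, 2) array of agent positions
    m         : number of nearest neighbors in L_i (6, per the
                hexagonal packing argument)
    """
    X = np.asarray(positions, dtype=float)
    N = len(X)
    # Full pairwise distance matrix.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = 0.0
    for i in range(N):
        # Drop the zero self-distance, keep the m smallest distances.
        L_i = np.mean(np.sort(dists[i])[1:m + 1])
        # m + 1 agents (the agent and its m neighbors) in a disc of radius L_i.
        rho += (m + 1) / (np.pi * L_i ** 2)
    return rho / N
```

Averaging this quantity over the T time-steps of a run then yields \(\rho_L\) as defined in Eq. 10.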

Fig. 10

a: Positioning of agents (black dots) relative to each other and the repulsion fields generated by the individual agents in their equilibrium positions. Yellow areas represent high repulsion potential; blue areas represent low repulsion potential. Given these repulsion potential fields, agents tend to fall into a hexagonal packing pattern around each other. b: Local agent density calculated with different numbers of neighboring agents for a swarm comprised of 50 agents, connected using a \(k=20\) communications network. The system is tracking a non-evasive target traveling at a maximum speed of \(\textbf{v}_{o, \text {max}}=0.15\). The local agent density is compared against the global average swarm density (dashed line) (Color figure online)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kwa, H.L., Philippot, J. & Bouffanais, R. Effect of swarm density on collective tracking performance. Swarm Intell 17, 253–281 (2023). https://doi.org/10.1007/s11721-023-00225-4

