1 Introduction

Many real-world optimisation problems are multi-objective in nature, which means that optimal decisions need to be taken in the presence of multiple conflicting objectives (Deb 2014). Most of the existing research has focused on continuous multi-objective optimisation (CMOO) problems (Zopounidis and Pardalos 2010). However, discrete multi-objective optimisation (DMOO) problems, with discrete variables and black-box objective functions, are frequently encountered in a diverse set of disciplines such as engineering (Han et al. 2020), network models (Bergman et al. 2018), transportation (Ghannadpour and Zarrabi 2019), supply chains (Dickersbach 2005), medicine (Wang and Ma 2018) and many combinatorial problems (Zouache et al. 2018). For more details, we refer the reader to the survey papers (Liu et al. 2020; Deb 2014; Ehrgott 2006; Ehrgott et al. 2016). Despite the finite number of objective functions and the finite search space in these DMOO problems, efficient algorithms are still lacking. The challenge of developing efficient algorithms arises from the discontinuity of the objective functions and the discreteness of the optimal solution set.

The inspiration from biology and nature has been one of the most important and inexhaustible sources for developing novel algorithms and innovative techniques to solve decision-making problems over the past decades (Afek et al. 2011; Kroeker 2011; Yang 2010). Computer scientists have taken inspiration from Physarum's intelligent foraging behaviour to develop novel bio-inspired models capable of solving many NP-hard problems (Adamatzky 2010).

Swarm intelligence is one of the most interesting approaches for solving multi-objective optimisation (MOO) problems (Mostaghim and Teich 2013; Nebro et al. 2009; Alaya et al. 2007; Zou et al. 2011). These approaches deal with the collective behaviour of decentralised and self-organised systems. Just like social insects and animals, Physarum exhibits swarm intelligence; it shares many features of collective behaviour such as synchronisation, communication, positive feedback, leadership and response thresholds (Reid and Latty 2016). In many optimisation problems, it is crucial to achieve an optimal balance between social collaboration and population diversity (Wong et al. 2010). A major issue that may arise in some swarm intelligence algorithms is premature convergence, as they focus on social collaboration within the population and seldom pay attention to the competition among individuals (Beni and Wang 1993). We believe that the individual skills of competitors are more effective in achieving a good balance between exploration and exploitation and are an important means of maintaining population diversity and search effectiveness.

We have proposed a novel model to imitate the complex patterns that Physarum generates in competition settings (Awad et al. 2019a). This new model is based on hexagonal cellular automata (CA) and reaction–diffusion (RD) systems. Physarum’s decision-making power and the discrete properties of CA make our algorithm well suited to solving DMOO problems.

Physarum certainly meets the criteria of an ideal model decision-maker, as it makes multi-objective foraging decisions (Beekman and Latty 2015). Similar to organisms that have a central information processing unit such as a brain, Physarum balances its nutrient intake based on optimality theory. Physarum is capable of choosing the source that is high in concentration and away from dangerous environments, and it trades off risk against food quality (Latty and Beekman 2010). For searching a discrete space, CA are a natural choice: they represent physical systems in which space and time are discrete, the grid quantities take only a finite set of values, and the grid can have any finite number of dimensions. This has motivated us to develop a novel Physarum-inspired optimisation algorithm for DMOO problems. This algorithm is based on the possible heuristics that Physarum uses in complex foraging decisions in competition settings. Instead of using Physarum to optimise swarm algorithms, we explore the potential of competition among individual Physarum acting as a swarm to accelerate the algorithm’s convergence, guarantee the diversity of solutions and increase the ability to escape from local optima.

1.1 Work novelty and contributions

In this research, we propose a novel autonomous Physarum-inspired competition algorithm (PCA). The competition among the different Physarum individuals extends the search (exploration) from a single Physarum, as in previous Physarum-inspired algorithms, to a population-based search, as in swarm algorithms, while keeping individual skills. This facilitates global exploration and maintains population diversity. We have considered the n-dimensional generalisation of the hexagonal grids to extend the solving capabilities of our PCA from two-dimensional to multi-dimensional optimisation problems. PCA achieved the best performance for the Spread and inverted generational distance plus (IGD+) indicators, the second-best values for the hypervolume (HV) and Epsilon indicators after SPEA2, and it outperformed other well-established genetic algorithms. We have implemented a novel restart procedure which improved the overall performance, as it drives the exploitation towards the best solutions based on both personal experience and shared information. One of the interesting points of this research is the large number of benchmark functions and the diverse set of performance metrics used.

In this paper, a short overview of background concepts and related technologies, for a better understanding of DMOO problems, is presented in Sect. 2. We then show how we developed a novel Physarum competition algorithm (PCA) capable of solving DMOO problems (Sect. 3). We have conducted extensive experimental and statistical work on several benchmark functions to assess the performance of our PCA against state-of-the-art evolutionary algorithms (Sect. 4). First, we provide the experimental results on the deep valley problem, which is a discrete 2-D optimisation problem. Second, we consider the n-D generalisation of the hexagonal grids to extend the solving capabilities of our PCA from 2-D to n-D optimisation problems. Finally, we conclude this research in Sect. 5.

2 Background concepts and related technologies

2.1 Background concepts

The concepts of MOO problems have been defined by many researchers (Deb et al. 2000). However, in this research, we focus on DMOO problems (Fig. 1). DMOO problems are concerned with a class of mathematical optimisation problems with discrete variables and multiple objectives to be optimised simultaneously. These objectives are hard to optimise simultaneously because they conflict with one another. Such problems can be mathematically formalised as follows:

$$\begin{aligned}&\text {minimise} \quad F(X) = (f_1(X), \ldots , f_m(X)), \quad F : X \rightarrow R^m,\ m \ge 2 \\&\text {subject to} \quad G(X) = (g_1(X), \ldots , g_k(X)), \quad G : X \rightarrow R^k,\ g_j(X) \ge 0,\ j = 1, \ldots , k,\ k \ge 0 \\&\qquad \qquad \quad H(X) = (h_1(X), \ldots , h_l(X)), \quad H : X \rightarrow R^l,\ h_j(X) = 0,\ j = 1, \ldots , l,\ l \ge 0 \\&\text {where} \quad X = (x_1, \ldots , x_n),\ X \in Z^n \end{aligned}$$
(1)

In the above equation, X is an n-dimensional decision variable (vector) from a discrete (Integer) space. F(X) is an objective vector composed of m real-valued objective functions which are either continuous or discrete. G(X) and H(X) are the constraint vectors, which consist of real-valued functions representing inequality and equality constraints, respectively.
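
To make the notation concrete, the following toy instance (our own illustrative example, not one of the benchmark problems used later) shows how F(X), G(X) and an integer decision vector fit together in code:

```java
/** Toy instance of the DMOO formulation in Eq. (1): two objectives, integer variables, one constraint. */
public final class ToyDmooProblem {

    /** F(X): both objectives are to be minimised. */
    static double[] objectives(int[] x) {
        double f1 = x[0] * x[0] + x[1] * x[1];
        double f2 = (x[0] - 2.0) * (x[0] - 2.0) + x[1] * x[1];
        return new double[]{f1, f2};
    }

    /** G(X): a single inequality constraint g_1(X) >= 0. */
    static double[] inequalities(int[] x) {
        return new double[]{10.0 - x[0] - x[1]};
    }

    /** A decision vector is feasible if every inequality constraint is non-negative. */
    static boolean isFeasible(int[] x) {
        for (double g : inequalities(x)) {
            if (g < 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] x = {1, 0};
        System.out.println(java.util.Arrays.toString(objectives(x))); // [1.0, 1.0]
        System.out.println(isFeasible(x));                            // true
    }
}
```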

For more illustration, we have to be familiar with the following concepts:

Fig. 1

In this DMOO problem, we are trying to maximise the performance and at the same time minimise the cost. The red dots are the non-dominated solutions (Pareto front), while the yellow dots are examples of solutions dominated by the red ones (this figure is based on Brockhoff (2018) with some modifications)

Definition 1

(Decision Variables). Decision variables are controllable options that need to be determined in order to solve a problem. The solutions to the optimisation problem are generated from the search space of each decision variable. The problem is solved when the best values of the decision variables have been identified.

Definition 2

(Objective Functions). The objective function of an optimisation problem indicates how much each decision variable contributes to the value to be optimised in the problem.

Definition 3

(Constraints). Constraints define the conditions that the decision variables of an optimisation problem must fulfil while optimising (minimising or maximising) F(X). They typically represent resource limits, or the minimum or maximum levels of an activity.

Definition 4

(Pareto Dominance). From the problem formulation in Eq. (1), let \(a = (a_1, \ldots , a_m)\), \(b = (b_1, \ldots , b_m)\) be two objective vectors obtained from two decision vectors. For minimisation, a is said to dominate b if \(a_i \le b_i \ \forall i = 1, \ldots , m\) and \(a \ne b\).
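
For illustration, Definition 4 translates directly into a dominance test for minimisation; the following is a straightforward Java sketch of it (the class and method names are ours):

```java
/** Pareto dominance test for minimisation (Definition 4): does a dominate b? */
public final class ParetoDominance {

    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetterSomewhere = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;            // a must be no worse in every objective
            if (a[i] < b[i]) strictlyBetterSomewhere = true;
        }
        return strictlyBetterSomewhere;               // and strictly better in at least one
    }

    public static void main(String[] args) {
        System.out.println(dominates(new double[]{1.0, 2.0}, new double[]{2.0, 2.0})); // true
        System.out.println(dominates(new double[]{1.0, 3.0}, new double[]{2.0, 2.0})); // false (incomparable)
    }
}
```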

2.2 Related work

The optimal solution of a MOO problem is often not a single one, but rather a set of solutions known as the Pareto optimal set (or Pareto frontier). In this set, no element is superior to the others for all the objectives. Most multi-objective optimisation algorithms are based on Pareto sorting and non-dominated solutions (dominance-based approaches) (Hillermeier 2001). There are two main sets of approaches for generating the Pareto frontier of a MOO problem. The first set of approaches are the criterion-space search methods, which rely on a combination of weighted sums and constraint scalarisations (Kirlik and Sayin 2014; Bektaş 2018) or the augmented weighted Tchebycheff scalarisation (Holzmann and Smith 2018). The second set are the decision-space search methods, which are typically based on branch-and-bound search. These methods are developed for mixed-integer linear programmes and operate over the space defined by the original decision variables (Mavrotas and Diakoulaki 1998; Adelgren and Gupte 2017).

Algorithms for solving MOO problems include the weighted sum method (Audet et al. 2008), continuation methods (Schütze et al. 2008) and evolutionary algorithms (Deb 1999). Multi-objective evolutionary algorithms (MOEAs) are efficient approaches for solving MOO problems (Li et al. 2015). Two of the most well-known MOEAs are the non-dominated sorting genetic algorithm (NSGA-II) (Deb et al. 2000) and the strength Pareto evolutionary algorithm (SPEA2) (Zitzler et al. 2001). The genetic algorithm (GA) is based on natural selection and evolution and is a very powerful optimisation algorithm in many scientific fields. A continuous genetic algorithm proved to be an efficient solver for systems of second-order boundary value problems (Arqub and Abo-Hammour 2014). It involves initialisation, evaluation, selection, crossover, mutation, replacement and termination. An elegant explanation can be found in Abo-Hammour et al. (2013). SPEA2 contains important operations such as archiving of individuals with good fitness, density estimation and fitness assignment, and is able to obtain a population with both “precision” and “diversity”. SPEA2+ attempts to improve the problem space exploration abilities of SPEA2 by adding a more effective crossover mechanism and an algorithm to maintain diversity in both the objective and variable spaces (Zitzler et al. 2001).

These two classical algorithms perform relatively well on MOO problems when the number of objective functions is not more than three. However, their performance significantly degrades on MOO problems when the number of objective functions is greater than four, in which case they become many-objective optimisation problems (Ishibuchi et al. 2008). More recent studies have proposed improved variants of dominance-based MOEAs that use reference vector-based niching selection rather than crowding distance-based selection; NSGA-III (Deb and Jain 2014) is an example of such algorithms. MOEAs based on relaxed dominance relations, used for mating and/or environmental selection instead of the exact Pareto-dominance relationship, have also been proposed. MOEAs based on \(\epsilon \) dominance (\(\epsilon \)-MOEAs) (Deb et al. 2005) and differential evolution (Arunachalam 2008) are examples of such algorithms.

Most of the above-mentioned techniques were designed for solving CMOO problems (Zopounidis and Pardalos 2010); however, DMOO problems are frequently encountered in a diverse set of engineering and scientific domains (Ehrgott 2006; Zhang et al. 2008). Some existing state-of-the-art methodologies rely on parametric single-objective reformulations of DMOO problems, employing commercial integer programming (IP) solvers as black-box tools within their algorithms (Audet et al. 2008). However, the Pareto frontier cannot, in general, be fully recovered by such linear parametric methods. Recent studies have tried to address DMOO problems without tailoring a CMOO algorithm. However, the majority of them are specialised algorithms targeting specific types of DMOO problems (Dickersbach 2005; Gao et al. 2018; Wang 2015; Zouache et al. 2018; Peng et al. 2016; Bergman et al. 2018) or can only solve discrete bi-objective optimisation problems (Boland et al. 2015; Parragh 2018), while only a few of them deal with more than two objective functions (Cacchiani and D’Ambrosio 2017).

Swarm intelligence algorithms are among the most popular and important approaches for solving multi-objective problems. Examples of swarm-inspired algorithms include particle swarm optimisation (PSO) (Mostaghim and Teich 2013; Nebro et al. 2009), ant colony optimisation (ACO) (Alaya et al. 2007) and artificial bee colony (ABC) (Zou et al. 2011). Recently, Physarum-inspired algorithms have been used to optimise the pheromone matrix initialisation in multi-objective ACO for solving bi-objective travelling salesman problems (Zhang et al. 2016b). The work of Masi and Vasile (2014) was the first attempt to introduce a Physarum-inspired algorithm for DMOO problems. Their algorithm starts by transforming a given DMOO problem into a graph problem, which can then be solved as in multi-objective travelling salesman and vehicle routing problems. However, this approach does not generalise well to generic search problems beyond graph optimisation.

3 Physarum-inspired competition algorithm for solving DMOO problems

In this section, we present a novel Physarum-inspired competition algorithm (PCA) for DMOO problems. The algorithm is inspired by our Physarum competition mathematical model (Awad et al. 2019a). This model simulates the foraging behaviour of multiple Physarum competing for multiple food resources, where each Physarum is represented on a cellular automata (CA) grid and Physarum motility is governed by a reaction–diffusion (RD) system applied to the CA. The Physarum motility over a hexagonal cellular automaton allows a discrete and effective search. Each cell in the grid is decoded as a solution (objective function values) and is regarded as a food resource to attract Physarum (Fig. 2).

Fig. 2

a Physarum search space is mapped to the b objective space, in which each cell in the search space is decoded as a food resource to attract Physarum (this figure is based on Brockhoff (2018) with modifications)

The algorithm works by identifying the non-dominated solutions for an optimisation problem using the possible heuristics that Physarum uses in complex foraging decisions. The competing Physarum will explore the search space according to the diffusion equations based on the chemo-attraction forces towards food resources, and the repulsion negative forces that competing Physarum exert on each other. Physarum will diffuse maximally to cells that provide the best possible trade-offs between the objective functions.

From biology, we understand that Physarum foraging behaviour consists of two simultaneous self-organised processes: expansion (exploration) and contraction (exploitation) (Liu et al. 2017). From the algorithmic perspective, the balance between the exploration and exploitation processes is fundamental for solving DMOO problems (Sun et al. 2018). In the Physarum exploration phase, the competing Physarum explore the search space according to diffusion equations. In the Physarum exploitation phase, the protoplasmic flow in the body of Physarum plays a great role in developing its intelligence (Tero et al. 2005; Alim et al. 2013). The protoplasmic flux connected to high-quality food resources (non-dominated solutions) tends to increase, while the protoplasmic flux to poor-quality food resources (dominated solutions) tends to decrease, as a feedback mechanism (Reid and Latty 2016). This behaviour can be interpreted as a natural attitude of optimising the energy required to feed the organism. The reader can refer to Awad et al. (2021) for more details on Physarum intelligent foraging behaviour in competition settings.

3.1 Rules of Physarum-inspired competition algorithm

In order to build our algorithm for solving DMOO problems, we have proposed the following rules:

  1. (i)

    The discrete search space of the optimisation problem will be mapped to an n-dimensional hexagonal CA to solve multi-dimensional problems (Sect. 3.2).

  2. (ii)

    Multiple Physarum will be randomly deployed over the search space using either the Monte Carlo or the Latin Hypercube sampling technique (Sect. 3.4).

  3. (iii)

    In the exploration phase, Physarum will explore the search space according to the diffusion mechanism (Eq. 4). We also introduced a feature that maps to real biological behaviour, where the Physarum will stop diffusing if its mass is less than a critical value. This will prevent the Physarum from searching the entire search space in an overly greedy manner.

  4. (iv)

    Each cell in the grid will be decoded as a solution and will be regarded as a food resource to attract Physarum. The chemo-attraction forces exerted on Physarum will be a function of the quality of the food resource, which in this case is the degree of optimality of the solution to fulfil the objectives of the problem (Eq. 8).

  5. (v)

    Competing Physarum will exert repulsion forces on each other, which will be calculated as a negative force. When two Physarum are competing for the same cell, the one with higher mass will occupy this cell (Eq. 9).

  6. (vi)

    The cells that represent food resources engulfed by a Physarum (decoded solutions) will be excluded from further search using Physarum spatial memory (Reid et al. 2012). This memory will increase the searching efficiency and avoid visiting previously explored areas (Reid et al. 2012, 2013; Boussard et al. 2019).

  7. (vii)

    The cells that represent infeasible solutions, i.e. solutions that do not satisfy the constraint set, will be regarded as walls (obstacles), and the Physarum will stop searching these cells.

  8. (viii)

    The procedure of exploring the search space by the competing Physarum will stop when one of two conditions is satisfied: first, when the competing strands of Physarum are unable to find new non-dominated solutions due to Physarum mass exhaustion (i.e. the mass of Physarum falls below a critical value); and second, when a predefined number of iterations is reached.

3.2 Hexagonal CA grids

We have considered the Physarum motility over 2-D, 3-D and n-D hexagonal cellular automata (Fig. 3), where a set of Physarum \((P = p_1, p_2, \dots , p_m)\) compete for a set of chemicals (food resources) \((\text {CHM} = \text {chm}_1, \text {chm}_2, \dots , \text {chm}_n)\). The state of a cell \(c^t_{(i, j)}\) at iteration t is described by its type (Eq. 2).

$$\begin{aligned} \text {CT}_{(i, j)} = \{``\text {FREE}'', ``\text {OBSTACLE}'', ``\text {PHYSARUM}'', ``\text {CHEMICAL}''\} \end{aligned}$$
(2)

where \(\text {CT}_{(i,j)}\) represents the cell type.

Fig. 3

Hexagonal neighbourhood of different dimensions

In the two-dimensional hexagonal CA, the grid is divided into a matrix \((X \times Y)\) of identical hexagonal cells, and each cell in the grid has six neighbours. Hexagonal/face-centred cubic (fcc) grids and hexagonal/body-centred cubic (bcc) grids are the three-dimensional “equivalents” of the two-dimensional hexagonal grids. For the bcc grid, the Voronoi neighbourhoods are truncated octahedra having eight hexagonal and six square faces, so the space volume is dynamically tessellated by regular truncated-octahedron voxels. The hexagonal grid has the densest packing. The truncated-octahedron voxels are more “sphere-like”: they have the highest volumetric quotient (the ratio of the polyhedron volume to the volume of its circumsphere), which is much higher than that of other possible space-filling polyhedra.

For the n-dimensional generalisation of the hexagonal and bcc grids (Nagy and Strand 2009), the hexagonal faces neighbourhood is defined as follows:

Definition 5

Let \({\textbf {p}} = (p(1), p(2), \dots , p(n))\) and \({\textbf {q}} = (q(1), q(2),\ldots , q(n))\) be two points in \(Z^n\).

\({\textbf {q}}\) is a 1-neighbour of \({\textbf {p}}\) if \(q(1) \equiv q(2) \equiv \dots \equiv q(n) \pmod 2\), \(|p(i) - q(i)| \le 1\) for \(1 \le i \le n\), and \(\sum _{i=1}^{n} (p(i) - q(i))^2 \le n\).
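
For illustration, Definition 5 can be checked coordinate-wise; the following Java sketch (our own, with illustrative names) tests the three conditions directly:

```java
/** 1-neighbourhood test on the n-dimensional hexagonal/bcc grid (Definition 5). */
public final class HexNeighbourhood {

    public static boolean isOneNeighbour(int[] p, int[] q) {
        int n = p.length;
        // all coordinates of q must share the same parity
        int parity = Math.floorMod(q[0], 2);
        for (int i = 1; i < n; i++) {
            if (Math.floorMod(q[i], 2) != parity) return false;
        }
        int squaredDistance = 0;
        for (int i = 0; i < n; i++) {
            int d = p[i] - q[i];
            if (Math.abs(d) > 1) return false;   // |p(i) - q(i)| <= 1
            squaredDistance += d * d;
        }
        return squaredDistance <= n;             // sum of squared differences <= n
    }

    public static void main(String[] args) {
        System.out.println(isOneNeighbour(new int[]{0, 0, 0}, new int[]{1, 1, 1})); // true
        System.out.println(isOneNeighbour(new int[]{0, 0, 0}, new int[]{1, 0, 0})); // false (mixed parity)
    }
}
```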

The position vector of a Physarum in the high-dimensional grid will be defined as follows:

$$\begin{aligned} P^{t}_{i} = [p^{t}_{i1}, p^{t}_{i2}, \dots , p^{t}_{in}], \quad i=1,2, \dots , n_p, \end{aligned}$$
(3)

where i represents the \(i^{th}\) Physarum individual, t is the current time step, n is the dimensionality of the problem space, and \(n_p\) is the population size.

3.3 Reaction–Diffusion equations (RD)

The RD equations are applied to the Physarum's mass. They define the exploration of the available area in the search space by the cytoplasmic material formed by the Physarum plasmodium. We combine both the chemo-attraction force towards chemicals and the repulsive forces that competing Physarum exert on each other. Every cell occupied by Physarum at time step (t) uses the values of its neighbouring cells to calculate the value of its mass at time step \((t+1)\). These rules are executed until a certain number of iterations is reached. The Physarum diffusion is affected by the presence of food resources (objective functions) and other competitors, as expressed in the following equation:

$$\begin{aligned} \text {PM}^{t+1}_{i} = \text {PM}^{t}_{i} + \sum _{P_{k} \in \text {NB}_{P_{i}}} {\left\{ \begin{array}{ll} (\text {PF} \cdot \text {PD}) \, \big (\text {PM}^{t}_{k} - \text {PM}^{t}_{i}\big ), &{} \text {if}\ \text {AA}_{P_{i} \rightarrow P_{k}} = 1 \ \text {and}\ \text {PM}^{t}_{i} > \text {Diff}\_\text {Thr}\\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(4)

The variables in Eq. (4) are explained as follows:

  • \(\text {PM}^{t+1}_{i}\) defines the diffusion of Physarum \(P_{i}\) mass for the next iteration \((t+1)\);

  • \(\text {PM}^{t}_{i}\) is the current mass of Physarum \(P_{i}\) for iteration (t);

  • \(\text {PM}^{t}_{k}\) is the current mass of neighbouring Physarum \(P_{k}\);

  • \(\text {NB}_{P_{i}}\) is the neighbourhood of Physarum \(P_{i}\) as in Definition 5;

  • PF is the sum of forces affecting Physarum diffusion (defined in Eq. 6);

  • PD is the Physarum diffusion coefficient (defined in Eq. 5);

  • \(\text {AA}_{P_{i} \rightarrow P_{k}}\) indicates whether Physarum \(P_{i}\) is available to diffuse towards a neighbouring Physarum \(P_{k}\) (defined in Eq. 7);

  • \(\text {Diff}\_\text {Thr}\) is the threshold that the Physarum mass must exceed in order to diffuse.

Now, we define Eqs. (5)–(10) as follows:

$$\begin{aligned} \text {PD} = \dfrac{\text {nbSize}}{10^n} \end{aligned}$$
(5)

In Eq. (5), PD is the Physarum diffusion coefficient, nbSize is the neighbourhood size of Physarum \(P_{i}\) (Definition 5) and n is the dimensionality of the problem space.

$$\begin{aligned} \text {PF} = 1 + \text {AttForce}_{P_{i} \rightarrow P_{k}} + \text {RepForce}_{P_{i} \leftarrow P_{k}} \end{aligned}$$
(6)

In Eq. (6), \(\text {AttForce}_{P_{i} \rightarrow P_{k}}\) defines the value of attraction forces applied on Physarum \(P_{i}\) coming from its neighbouring Physarum \(P_{k}\) (defined in Eq. 8) and \(\text {RepForce}_{P_{i} \leftarrow P_{k}}\) defines the value of repulsion forces applied on Physarum \(P_{i}\) exerted by its neighbouring Physarum \(P_{k}\) (defined in Eq. 9).

$$\begin{aligned} \text {AA}_{P_{i} \rightarrow P_{k}} = {\left\{ \begin{array}{ll} 1, &{} \text {if}\ P_{k}~\text {is Empty} \\ 1, &{} \text {if}\ ~PID_{i} = PID_{k} \\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(7)

In Eq. (7), \(\text {PID}_{i}\) is the ID of Physarum \(P_{i}\).

$$\begin{aligned} \text {AttForce}_{P_{i} \rightarrow P_{k}} = \sum _{f_i \in F(X)} \dfrac{f_i.\text {Max} - f_i(P_{k})}{f_i.\text {Max} - f_i.\text {Min}} \end{aligned}$$
(8)

In Eq. (8), F(X) is the vector of objective functions, and \(f_i.\text {Min}\) and \(f_i.\text {Max}\) are the local minimal and maximal values obtained by the Physarum search for objective function \(f_i\), respectively.

$$\begin{aligned} \text {RepForce}_{P_{i} \leftarrow P_{k}} = {\left\{ \begin{array}{ll} \dfrac{\text {PM}_{\text {opp}_{(k \Rightarrow i)}}}{\text {Total}\_\text {Physarum}\_\text {Mass}}, &{} \text {if}\ \text {PID}_{i} \ne \text {PID}_{\text {opp}_{(k \Rightarrow i)}} \ \text {and}\ \text {PM}_{\text {opp}_{(k \Rightarrow i)}} > \text {Rep}\_\text {Thr}\\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(9)

In Eq. (9), \(\text {PM}_{\text {opp}_{(k \Rightarrow i)}}\) is the mass of the neighbouring Physarum at the opposite direction of Physarum \(P_{k}\) with reference to the centre point \(c_{i}\), as defined in Eq. (10), and \(\text {Rep}\_\text {Thr}\) is a threshold that this mass must exceed in order to repel the neighbouring Physarum.

$$\begin{aligned} \text {opp}_{(k \Rightarrow i)} = c_{2i-k} \end{aligned}$$
(10)

In Eq. (10), \(\text {opp}_{(k \Rightarrow i)}\) is the opposite direction of cell \(c_k\) with reference to the centre point \(c_i\).
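
To show how Eqs. (4)–(10) interact, the following simplified sketch updates the mass of a single Physarum-occupied cell from its neighbours. It is a schematic, single-cell view under our own data layout (the class and field names are illustrative), not the full CA implementation:

```java
import java.util.List;

/** Simplified single-cell mass update following Eqs. (4)-(10). */
public final class DiffusionStep {

    /** Minimal view of a neighbouring cell, as needed by Eq. (4). */
    static final class Neighbour {
        double mass;          // PM_k
        int physarumId;       // PID_k (negative if the cell is empty)
        double oppositeMass;  // PM_opp(k=>i), mass at the cell opposite to k w.r.t. i (Eq. 10)
        int oppositeId;       // PID of that opposite cell (negative if empty)
        double attraction;    // AttForce contribution, assumed precomputed from Eq. (8)
    }

    static double updateMass(double ownMass, int ownId, List<Neighbour> neighbours,
                             double diffThr, double repThr, double totalMass, int dimension) {
        double pd = neighbours.size() / Math.pow(10, dimension);      // Eq. (5)
        double next = ownMass;
        for (Neighbour nb : neighbours) {
            boolean available = nb.physarumId < 0 || nb.physarumId == ownId; // Eq. (7)
            if (!available || ownMass <= diffThr) continue;            // Eq. (4), otherwise-branch
            double rep = (nb.oppositeId >= 0 && nb.oppositeId != ownId && nb.oppositeMass > repThr)
                    ? nb.oppositeMass / totalMass : 0.0;               // Eq. (9)
            double pf = 1.0 + nb.attraction + rep;                     // Eq. (6)
            next += pf * pd * (nb.mass - ownMass);                     // Eq. (4)
        }
        return next;
    }

    public static void main(String[] args) {
        Neighbour empty = new Neighbour();
        empty.mass = 0.0; empty.physarumId = -1; empty.oppositeId = -1; empty.attraction = 0.5;
        double next = updateMass(10.0, 1, List.of(empty), 1.0, 2.0, 100.0, 2);
        System.out.println(next); // the occupied cell loses mass as it diffuses towards the attractive empty cell
    }
}
```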

The main steps of the PCA are shown in Algorithm 1.

Algorithm 1 Physarum-Inspired Competition Algorithm

3.4 Physarum deployment strategy

In the Physarum exploration phase, the initial deployment of Physarum over the CA grids may directly affect the search performance, especially when the search space of the optimisation problem is huge (Liu et al. 2015). Most swarm-inspired algorithms use a random deployment strategy [Monte Carlo sampling (McKay et al. 1979)]. In our algorithm design, we used both a random deployment strategy (Monte Carlo sampling) and a distributed regional deployment strategy based on the Latin Hypercube sampling technique (LHS) (Lee et al. 2006). LHS is a statistical method used to generate parameter values from a multi-dimensional distribution. In previous work, LHS has proven to be an effective approach for initialising populations (Tian et al. 2019; Usman et al. 2020). A square grid containing sample positions is a Latin square if (and only if) there is only one sample in each row and each column. By using LHS, we ensure that the multiple Physarum are evenly distributed over the search space, thereby maximising coverage.
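
As an illustration of the stratified deployment, the following minimal sketch (our own, not the implementation used in the experiments) draws a Latin Hypercube sample of Physarum positions over a discrete grid, using one permutation of the strata per dimension:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/** Minimal Latin Hypercube deployment of nP Physarum over a discrete grid. */
public final class LhsDeployment {

    /** Returns nP grid positions; each of the nP strata per dimension is used exactly once. */
    static int[][] sample(int nP, int dimension, int gridSize, Random rng) {
        int[][] positions = new int[nP][dimension];
        for (int d = 0; d < dimension; d++) {
            // one permutation of the strata per dimension
            List<Integer> strata = new ArrayList<>();
            for (int s = 0; s < nP; s++) strata.add(s);
            Collections.shuffle(strata, rng);
            for (int i = 0; i < nP; i++) {
                double lower = strata.get(i) * (gridSize / (double) nP);
                double upper = (strata.get(i) + 1) * (gridSize / (double) nP);
                // pick a random cell index inside the stratum
                positions[i][d] = (int) (lower + rng.nextDouble() * (upper - lower));
            }
        }
        return positions;
    }

    public static void main(String[] args) {
        int[][] population = sample(5, 2, 100, new Random(42));
        for (int[] p : population) System.out.println(java.util.Arrays.toString(p));
    }
}
```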

3.5 External archive of the non-dominated solutions set

In this research, each Physarum has its own external archive to maintain its local best non-dominated solutions, i.e. its local Pareto front (LPF). We use the Pareto dominance comparison mechanism (Reyes-Sierra and Coello 2006) to prune/update this archive. This mechanism works by comparing the newly decoded solution at iteration \(t_{n+1}\) with the archive at iteration \(t_n\) using the Pareto domination relationship in order to select all non-dominated solutions. These non-dominated solutions are saved into the new archive at iteration \(t_{n+1}\) by concatenating/overwriting the previous archive at iteration \(t_n\). In our algorithm design, the archive initially contains the Physarum's initial position as a non-dominated solution. Afterwards, during the Physarum exploration phase, each time a new solution is created (corresponding to a new position of the Physarum), it is either added to the archive or discarded using the Pareto dominance comparison mechanism.
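
The archive pruning/update step described above can be sketched as follows, assuming minimisation and reusing the dominance test of Definition 4; this is a schematic version rather than the exact implementation:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/** Unbounded external archive of non-dominated objective vectors (minimisation). */
public final class NonDominatedArchive {
    private final List<double[]> members = new ArrayList<>();

    /** Tries to insert a newly decoded solution; keeps the archive mutually non-dominated. */
    public boolean add(double[] candidate) {
        Iterator<double[]> it = members.iterator();
        while (it.hasNext()) {
            double[] member = it.next();
            if (dominates(member, candidate)) return false; // candidate is dominated: discard it
            if (dominates(candidate, member)) it.remove();  // candidate dominates an old member: drop it
        }
        members.add(candidate);
        return true;
    }

    public List<double[]> front() { return members; }

    private static boolean dominates(double[] a, double[] b) {
        boolean better = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) better = true;
        }
        return better;
    }

    public static void main(String[] args) {
        NonDominatedArchive archive = new NonDominatedArchive();
        archive.add(new double[]{2.0, 2.0});
        archive.add(new double[]{1.0, 3.0});   // incomparable: kept
        archive.add(new double[]{1.0, 1.0});   // dominates both previous members
        System.out.println(archive.front().size()); // 1
    }
}
```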

In most of the previous studies, the performance of MOEAs was evaluated using a bounded external archive (BEA) based on the non-dominated solutions in the final population at the end of the search (Luong and Bosman 2012). In this research, we have used an unbounded external archive (UEA), which stores all non-dominated solutions found during the search process. This method has been shown to be more practical in real-world applications than the final-population method (Tanabe et al. 2017).

3.6 Physarum-inspired competition algorithm with a restart mechanism

We introduced a new approach with a restart mechanism (Algorithm 1, line 22). In our algorithm design, after a predetermined number of iterations (restart-iter), the multiple Physarum share their personal experiences (i.e. personal bests), which can be interpreted as a communication mechanism among them (Reid and Latty 2016). The local non-dominated solutions from each Physarum archive are compared together (using the Pareto dominance comparison mechanism) to select the global Pareto frontier (GPF). The cells identified as part of the global Pareto frontier in the external archive are selected as the new Physarum population. These new Physarum retain their original Physarum mass, which will grow as they resume exploration, while the others vanish. This restart mechanism avoids stagnation in local optima and drives exploration towards the best solution positions based on both personal experience and shared information.

4 Experiments

We have implemented our PCA in Java, and we have used jMetal (see footnote 1) as the multi-objective optimisation framework for our experiments (Durillo and Nebro 2011). All experiments were carried out on the Maxwell high-performance computing cluster at the University of Aberdeen.

4.1 Benchmark functions

A small number of DMOO benchmark functions have been proposed in recent years (multiple-criteria decision-making (MCDM) society, http://www.mcdmsociety.org/content/digital-library; Mittelmann 2018), and they are only for multi-objective combinatorial problems. To overcome this problem, we attempted to benefit from the rich pool of CMOO benchmark functions by transforming them into DMOO problems.

One solution for this transformation is the direct discretisation of a CMOO problem's continuous search space. This is achieved by partitioning the numeric continuous variables into several sub-ranges, as in Elomaa and Rousu (2004); Riquelme et al. (2015); Abo-Hammour et al. (2014). The goal is to find a set of cut points that partitions the range into a small number of intervals, thus transforming the continuous-valued attributes into a finite number of intervals. As an example of this method, assume we have a continuous search space of a certain objective function, and the values of the variables are within the range [0, 1]. We partition (discretise) this search space into equal discrete intervals (DI) of 0.01 (0, 0.01, ..., 0.99, 1). By using this method, the problem becomes compatible with the discretised motility of Physarum over the CA grids, as described in the Physarum competition mathematical model (Awad et al. 2019a) and in the biological experiments of Shirakawa et al. (2015).
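
A minimal sketch of this direct discretisation (our own illustration): each continuous variable in [lower, upper] is replaced by a grid index, which is mapped back to a value whenever an objective function is evaluated.

```java
/** Direct discretisation of a continuous variable range into equal intervals. */
public final class Discretisation {

    /** Number of admissible values for a variable in [lower, upper] with interval di. */
    static int numberOfLevels(double lower, double upper, double di) {
        return (int) Math.round((upper - lower) / di) + 1;
    }

    /** Maps a discrete cell index back to the continuous value fed into the objective function. */
    static double decode(int index, double lower, double di) {
        return lower + index * di;
    }

    public static void main(String[] args) {
        // Example from the text: range [0, 1] with DI = 0.01 gives values 0, 0.01, ..., 0.99, 1
        int levels = numberOfLevels(0.0, 1.0, 0.01);
        System.out.println(levels);                 // 101
        System.out.println(decode(0, 0.0, 0.01));   // 0.0
        System.out.println(decode(100, 0.0, 0.01)); // 1.0 (up to floating-point rounding)
    }
}
```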

To generate the true Pareto frontier of our DMOO problems, we used the decision-space search method (Mavrotas and Diakoulaki 1998; Adelgren and Gupte 2017). This method is typically based on the branch-and-bound search strategy developed for multi-objective integer linear programming problems.

In our study, we have selected a wide range of 38 benchmark functions with different characteristics and parameter settings such as DTLZ (Deb et al. 2002), GLT (Zhang et al. 2016a), MOP (Deb 1999) and ZDT (Zitzler et al. 2000). The list of benchmark functions describing the parameters, upper and lower bounds and the discretisation interval is shown in Tables 1 and 2, where d, m and c indicate the number of decision variables, objectives and constraints, respectively.

Table 1 The characteristics of the selected benchmark functions
Table 2 The characteristics of the selected benchmark functions

4.2 Performance assessment

As we are proposing a new optimisation algorithm, it is necessary to compare the performance of our PCA against other MOO algorithms. Assessing the performance of different MOO algorithms usually involves two properties: convergence and diversity. A number of quality indicators for performance assessment have been proposed in the literature (Riquelme et al. 2015).

In our study, we have selected Epsilon (\(\epsilon \)) (Zitzler et al. 2003), which is an indicator for measuring convergence, Spread (\(\Delta \)) (Li and Zheng 2009), which is an indicator for measuring diversity, and hypervolume (HV) (Zitzler and Thiele 1999) and inverted generational distance plus (IGD+) (Veldhuizen and Lamont 2000; Ishibuchi et al. 2016), which are indicators for measuring both convergence and diversity.

Table 3 A summary of the parameters of the algorithms used in the experiments

For Epsilon and IGD+ indicators, the lower the value, the better the computed Pareto fronts, whereas for HV and Spread indicators, the higher the value, the better the computed Pareto fronts.
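
As an example of how such an indicator is computed, the following sketch implements IGD+ for a minimisation problem, following the definition of Ishibuchi et al. (2016): for each point of the reference (true) Pareto front, only the amount by which the closest obtained solution is worse contributes to the distance, and the distances are averaged over the reference front. The code and names are our own illustration.

```java
/** Minimal IGD+ computation for minimisation problems (sketch). */
public final class IgdPlus {

    static double igdPlus(double[][] referenceFront, double[][] obtainedFront) {
        double sum = 0.0;
        for (double[] z : referenceFront) {
            double best = Double.POSITIVE_INFINITY;
            for (double[] a : obtainedFront) {
                double d2 = 0.0;
                for (int i = 0; i < z.length; i++) {
                    double diff = Math.max(a[i] - z[i], 0.0); // only count being worse than the reference point
                    d2 += diff * diff;
                }
                best = Math.min(best, Math.sqrt(d2));
            }
            sum += best;
        }
        return sum / referenceFront.length;
    }

    public static void main(String[] args) {
        double[][] truePf = {{0.0, 1.0}, {0.5, 0.5}, {1.0, 0.0}};
        double[][] obtained = {{0.1, 1.0}, {1.0, 0.1}};
        System.out.println(igdPlus(truePf, obtained)); // lower is better
    }
}
```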

4.3 Experimental design

We have designed four scenarios for our Physarum competition algorithm, in which a set of Physarum is deployed over the search space (cellular automaton) using either a random sampling technique (Monte Carlo) (McKay et al. 1979) or a stratified sampling technique (Latin Hypercube) (Lee et al. 2006), and with the restart procedure either applied or not. These four scenarios are as follows:

  1. (i)

    PCAwMC: A number of Physarum were deployed over the cellular automaton using the Monte Carlo sampling technique, and the restart procedure was not applied.

  2. (ii)

    PCAwLHS: A number of Physarum were deployed over the cellular automaton using the Latin Hypercube sampling technique, and the restart procedure was not applied.

  3. (iii)

    PCASwMC: A number of Physarum were deployed over the cellular automaton using the Monte Carlo sampling technique, and the restart procedure was applied.

  4. (iv)

    PCASwLHS: A number of Physarum were deployed over the cellular automaton using the Latin Hypercube sampling technique, and the restart procedure was applied.

Other MOEAs used for comparison were NSGA-II (Deb et al. 2000), NSGA-II-45 (see footnote 2), NSGA-III (Deb and Jain 2014) and SPEA2 (Zitzler et al. 2001). These four algorithms are implemented in jMetal. The parameter settings of these algorithms are shown in Table 3, where N, max-iter and max-ev indicate the population size, the maximum number of iterations and the maximum number of function evaluations, respectively. \(P_{\text {Mass}}\), \(\text {Diff}\_\text {Thr}\) and \(\text {Rep}\_\text {Thr}\) indicate the Physarum initial mass, the diffusion threshold used in Eq. (4) and the repulsion threshold used in Eq. (9), respectively. OPERATOR indicates the evolutionary operators used in the MOEAs, where SBX, PM and BTS stand for simulated binary crossover using two parent solutions (integer encoding) (Agrawal et al. 1995), polynomial-based mutation (Coello et al. 2007) and binary tournament selection (Miller and Goldberg 1995), respectively. \(P_c\) and \(P_m\) are the crossover and mutation probabilities. d is the number of decision variables. \(\eta _c\) and \(\eta _m\) are the distribution indexes of SBX and PM, respectively.

4.4 Experimental results and statistical analysis

Given the entire computed true Pareto optimal front for a benchmark function, the obtained results were analysed on the basis of the performance indicators mentioned in Sect. 4.2. In this research, exhaustive experimental studies and extensive statistical analyses were performed to compare the performance of our proposed Physarum competition algorithms with four evolutionary algorithms. A minimum of 30 independent runs is usually recommended; in our research, 40 independent runs were executed for each algorithm.

As our experimental results did not follow a normal distribution, the differences between the results were assessed using the median and interquartile range (IQR). More detailed information was obtained by presenting these results as box plots. Afterwards, we applied a nonparametric (distribution-free) statistical test, the Kruskal–Wallis test for independent samples (Kruskal et al. 1952), to test the null hypothesis that all algorithms have the same performance in each indicator for a given benchmark function. Following the rejection of the null hypothesis by the Kruskal–Wallis test, and in order to find the statistically significant differences between individual algorithms, post hoc pair-wise analysis was further performed between the results of each algorithm, using the Dunn approach with the Bonferroni correction (Dunn 1964) at a 0.05 significance level.

4.5 Experimental results on two-dimensional optimisation problems

For illustrating our results on a discrete 2-D optimisation problem, the deep valley bi-objective optimisation problem was selected, as its continuous counterpart MOO problem is considered to be difficult for evolutionary algorithms (Deb 1999). The feasible set of this problem is the integer points within the square \([0, 100] \times [0, 100]\), and the Pareto front is located at the bottom of a deep valley, away from the dominated regions. Statistical analysis was performed to compare the performance of our proposed Physarum competition algorithm (PCAwMC) with the three state-of-the-art evolutionary algorithms (NSGA-II, NSGA-III and SPEA2). The box plots (Fig. 4) of all indicators show that our algorithm achieved the best performance for the Spread indicator, about 1.9 compared to 0.8, 0.4 and 0.3 for NSGA-II, NSGA-III and SPEA2, respectively. This analysis clearly demonstrates the ability of our algorithm to provide a better spread of solutions with good convergence behaviour. It also confirms our assumption that the individual skills of competing Physarum are more efficient in exploration, which increases the diversity of the solutions. As regards the other performance indicators, PCA achieved the second-best values for the IGD+, HV and Epsilon indicators, exceeded only by SPEA2. The pair-wise analysis (Table 4) showed that the Spread performance of PCA is significantly better (P < 0.001) than that of all the other algorithms. PCA outperformed NSGA-II and NSGA-III for the other indicators. PCA achieved IGD+ performance similar to that of SPEA2 with no statistically significant difference, and it was exceeded only by SPEA2 for the HV and Epsilon indicators.

Fig. 4

Box plot graph of performance indicators for our PCAs and the four state-of-the-art MOEAs with respect to the deep valley problem

Table 4 The statistical pair-wise analysis of PCAwMC VS MOEA algorithms with respect to performance indicators (HV, Epsilon, IGD+ and Spread) in deep valley problem

4.6 Experimental results on n-dimensional optimisation problems

In this subsection, we provide an extended version of our earlier conference poster paper (Awad et al. 2019b). We have extended the discrete search space of the Physarum from 2-D problems, such as the deep valley problem, to n-D generalisations of the hexagonal CA grids in order to solve multi-dimensional, multi-objective optimisation problems.

An exhaustive experimental study and extensive statistical analysis were performed to compare the performance of our four proposed Physarum competition algorithms (i.e. PCAwMC, PCAwLHS, PCASwMC and PCASwLHS) with the four evolutionary algorithms (NSGA-II, NSGA-II-45, NSGA-III and SPEA2) on a benchmark suite of 38 problems (as indicated in Sect. 4.1) ranging from two to seven dimensions with up to 10 objectives and 5 constraints.

There are some previous benchmarking studies for MOEAs (Li et al. 2013; Bezerra et al. 2018); however, to the best of our knowledge, an exhaustive benchmarking study of this kind has never been performed. As far as we know, this study is the first attempt to assess algorithms that solve DMOO problems with such a large number of benchmark functions and performance indicators.

The complete and detailed results of the performance indicators ranking with respect to the 38 benchmark functions for each algorithm are provided in “Appendix” (Table 7).

4.7 The overall performance of PCA versus MOEAs

We exhaustively investigated the anytime (overall) performance of our four PCAs versus the four MOEAs across all benchmark functions collectively, comparing the algorithms using the performance indicator values (IGD+, HV, Epsilon and Spread) obtained over 40 runs for each algorithm. For this comparison, we calculated (from the statistical results in Table 7) the mean rank of the performance indicators for each algorithm, as shown in Table 5.

Table 5 The mean rank of performance indicators as sorted from the best performance to the worst in an up–down arrangement with respect to all benchmark functions collectively

In addition, we performed a nonparametric statistical test, the Friedman test for dependent samples (Friedman 1937), to test the null hypothesis that all algorithms have the same performance for each indicator across all the benchmark functions. Following the rejection of the null hypothesis by the Friedman test, and in order to find the statistically significant differences between individual algorithms with respect to each performance indicator, post hoc pair-wise analysis was further performed between the results, using the Dunn approach with the Bonferroni correction (Dunn 1964) at a 0.05 significance level. Table 6 summarises the pair-wise comparison between the algorithms' performance in terms of the HV, IGD+, Epsilon and Spread indicators, respectively, for all benchmark functions.

Table 6 The statistical pair-wise performance analysis shows a comparison of our PCAs with four MOEA algorithms for all benchmark problems

Looking at the results presented in Tables 5 and 6, it can be seen that PCASwLHS and PCASwMC achieved the best performance for the Spread and IGD+ indicators and the second-best values for the HV and Epsilon indicators after SPEA2. Furthermore, the PCA variants with the restart procedure have been shown to outperform NSGA-II, NSGA-II-45 and NSGA-III in terms of all performance indicators. This good performance of the PCAs in terms of diversity (Spread indicator) confirms our hypothesis that the individual skills of the competing individuals are more effective at avoiding premature convergence and increasing the ability to escape from local optima. The combination of the restart procedure and the LHS deployment strategy (PCASwLHS) improved the overall performance. This is because LHS enables a well-distributed exploration covering the whole search space, and the restart procedure drives the exploitation towards the best solutions, based on both personal experience and shared information, by eliminating unfavourable paths (i.e. dominated solutions).

One of the interesting points of this research is the large number of benchmark functions and the diverse set of performance metrics used. This makes it very challenging to select appropriate statistical analysis methods for comparing the performance of different algorithms. For this reason, the vast majority of published articles use only one or two performance metrics and a limited number of benchmark functions (Engelbrecht 2014). Due to the lack of research that covers in detail how to conduct statistical analysis to assess the performance of MOO algorithms, we intend in the future to develop a statistical framework for performing such analysis, to make it easier for researchers to evaluate their newly proposed algorithms.

5 Conclusion and future work

Many real-world problems can be naturally formulated as discrete multi-objective optimisation (DMOO) problems. To the best of our knowledge, we are the first to present a Physarum competition algorithm for solving discrete multi-objective optimisation problems. This algorithm is based on Physarum decision-making capabilities, and its discrete motility over a hexagonal cellular automaton allows effective search in a discrete search space. The modelling of the Physarum motility is based on the chemo-attraction forces towards food resources (objective functions) and the negative repulsion forces that competing Physarum exert on each other. We have considered the n-dimensional generalisation of the hexagonal grids to extend the solving capabilities of our PCA from two-dimensional to multi-dimensional optimisation problems. We have implemented a novel restart procedure which improved the overall performance, as it drives the exploitation towards the best solutions based on both personal experience and shared information. Extensive experiments were conducted on several benchmark functions to assess the performance of our PCA against state-of-the-art evolutionary algorithms. The statistical analysis clearly demonstrated that our algorithm achieved the best performance for the Spread and IGD+ indicators and the second-best values for the HV and Epsilon indicators after SPEA2. Furthermore, the PCA variants with the restart procedure have been shown to outperform NSGA-II and NSGA-III in terms of all performance indicators. These results confirm our assumption that the individual skills of competing Physarum are more efficient in exploration, which increases the diversity of the solutions, and they demonstrate that PCA is a promising algorithm for solving DMOO problems. In addition, since our algorithm is implemented in Java, it can be easily integrated with the jMetal framework to provide a flexible and configurable PCA.

Table 7 Detailed performance results of the used algorithms across the 38 benchmark problems

For future work, we intend to explore the potential of our algorithm for solving continuous MOO problems. This can be achieved by partitioning the numeric continuous variables in the search space into several sub-ranges, thus transforming the continuous-valued attributes into a finite number of discrete intervals. Accordingly, using the CA can avoid the huge amount of computation caused by the enormous number of states that would otherwise arise from continuous values. In our PCA, we have considered the n-dimensional generalisation of the hexagonal grids to extend the solving capabilities of the algorithm from two-dimensional to multi-dimensional optimisation problems without the need to use multiple automata. We therefore believe that generalising our algorithm to solve both discrete and continuous MOO problems is possible, with good results expected.