1 Introduction

Swarm intelligence has been a popular topic in artificial intelligence and bio-inspired computation since its emergence in the 1980s. Bonabeau et al. (1999) defined swarm intelligence as “any attempt to design algorithms or distributed problem-solving strategies inspired by the collective behavior of social insect colonies and other animal societies”. The main properties of swarm intelligence include decentralization, stigmergy, self-organization, positive feedback, negative feedback, fluctuation, and differences (Garnier et al., 2007). Many algorithms have been designed based on these definitions and properties. They can be divided into two categories: bionic process algorithms, represented mainly by the genetic algorithm, which imitate the process of population evolution; and bionic behavior algorithms, which simulate how different species search for prey, represented mainly by particle swarm optimization, the artificial bee colony algorithm, and ant colony optimization. These algorithms have been widely applied to the reconstruction of grids (Kennedy, 2010), telecom network routing (Ducatelle et al., 2010), wireless sensor networks (Saleem et al., 2011), cluster analysis (Cura, 2012), pattern recognition (Eberhart and Shi, 2001), and other applications.

Currently, particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization are well implemented and have shown favorable performances. They use the behavior rules of individuals and the interactions between individuals to produce changes at the group level and achieve certain goals. The genetic algorithm simulates the mutations and exchanges of genes and finds the gene with the best environmental adaptation by the rule of ‘survival of the fittest’ (Whitley, 1994; Mitchell, 1998). Particle swarm optimization simulates the predatory behavior of bird swarms, which involves finding the prey by exchanging information and picking out the best among the search results, in addition to considering the information from other birds (Eberhart and Kennedy, 1995; Poli et al., 2007; Kennedy, 2010). The artificial bee colony algorithm simulates the bees’ honey-gathering behavior and finds better nectar sources through cooperation and division of labor (Karaboga, 2005; Karaboga and Basturk, 2007; Karaboga et al., 2014). Ant colony optimization simulates the foraging behavior of ants, which involves finding the shortest path from the nest to the food using pheromones for information exchanges (Dorigo et al., 1996; Dorigo and Birattari, 2010; Mohan and Baskaran, 2012).

Humans have obtained favorable outcomes by imitating nature, and the existing swarm intelligence algorithms can solve many practical problems. However, as optimization problems become increasingly complex and data volumes grow ever larger, traditional swarm intelligence algorithms can no longer meet practical demands in terms of time and accuracy. Because improvements to these traditional algorithms yield diminishing returns, designing a new algorithm for such optimization problems may be a better solution.

Apart from ants, bees, and birds, which have been successfully simulated, many other creatures, such as bacteria and fireflies (Parpinelli and Lopes, 2011), deserve attention, and the dolphin is one of them. The dolphin has many biological characteristics and living habits worth learning from and simulating, such as echolocation, cooperation and division of labor, and information exchange. Its predatory process consists of several phases, including the search, call, reception, and predation phases, and these characteristics and habits help the dolphin achieve its goal throughout the process. By simulating the biological characteristics and living habits shown in the dolphin’s predatory process, we propose a new algorithm named the ‘dolphin swarm algorithm’.

The characteristics and habits simulated in the dolphin swarm algorithm conform to the ideas of swarm intelligence but differ from traditional swarm intelligence algorithms in their details. Take particle swarm optimization as an example. Although both methods use the concept of the ‘individual’ and the interactions between multiple individuals to find the optimized solution, the ways they do so are quite different. Unlike particle swarm optimization, which simply moves toward the solution, the dolphin swarm algorithm takes advantage of echolocation and adopts different strategies to obtain the solution more effectively, which may be a breakthrough. Furthermore, to our knowledge no existing algorithm is based on the dolphin’s biological characteristics and living habits, which is why we attempt to simulate dolphins here.

2 Behavior of dolphin swarm

The dolphin is generally acknowledged as one of the smartest animals, and it has many interesting biological characteristics and living habits worth our attention:

  1. Echolocation: The dolphin has good eyesight, but eyesight alone helps it little with predation in poor light conditions. Instead, the dolphin uses a special ability, echolocation, to search for prey. The dolphin is one of the few creatures adept at using echolocation: it emits sounds and estimates the location, distance, and even the shape of the prey according to the echo intensity. With the help of echoes, the dolphin gains a better perception of the surrounding environment.

  2. Cooperation and division of labor: In most cases, predation is not carried out by one dolphin alone but through the joint efforts of many dolphins with cooperation and division of labor. Facing large prey, a single dolphin is unlikely to succeed, so it calls other dolphins to help with the predation. Moreover, there is a specific division of labor among the dolphins. For instance, the dolphins close to the prey track its movements, while the dolphins farther from the prey form a circle to surround it.

  3. Information exchange: Current studies show that dolphins have the ability to exchange information. They can express different ideas using sounds of different frequencies and have their own language system. In the predatory process, especially under cooperation and division of labor, this ability is frequently used to call other dolphins and to update the location of the prey. With the exchanged information, each dolphin can take better-suited actions and make the predation more effective.

The whole predatory process of the dolphin consists of three stages. In the first stage, each dolphin independently uses sounds to search for nearby prey and evaluates the surrounding environment using echoes. In the second stage, the dolphins exchange information: those that find large prey call the others for help, and the dolphins that receive the information move toward the prey and surround it together. In the last stage, once the prey is surrounded, the dolphins take turns feeding, which means the predation is accomplished.

3 Dolphin swarm algorithm

The dolphin swarm algorithm (DSA) is implemented mainly by simulating the biological characteristics and living habits shown in the dolphin’s actual predatory process. The simulated predatory process is similar to the one described in Section 2. The main definitions and pivotal phases are introduced in Sections 3.1 and 3.2, respectively, and the remaining definitions are introduced close to where they are first used in Section 3.2. The complete algorithm is expounded in Section 3.3.

3.1 Main definitions

3.1.1 Dolphin

Based on the idea of swarm intelligence, we need a certain number of dolphins to simulate the biological characteristics and living habits shown in the dolphin’s actual predatory process. In the optimization problem, each dolphin represents a feasible solution. Because the expressions of feasible solutions differ across optimization problems, for better understanding the dolphin in this study is defined as Dol i = [x 1, x 2, …, x D]^T (i = 1, 2, …, N), namely a feasible D-dimensional solution, where N is the number of dolphins and x j (j = 1, 2, …, D) is the component of each dimension to be optimized.

3.1.2 Individual optimal solution and neighborhood optimal solution

The individual optimal solution (denoted as L) and the neighborhood optimal solution (denoted as K) are two variables associated with each dolphin. For each Dol i (i = 1, 2, …, N), there are two corresponding variables L i (i = 1, 2, …, N) and K i (i = 1, 2, …, N), where L i represents the best solution that Dol i finds in a single search and K i represents the best solution that Dol i has found by itself or obtained from others.

3.1.3 Fitness

Fitness E is the basis for judging whether one solution is better than another. In DSA, E is calculated by a fitness function, and the closer E is to zero, the better the solution. Because the fitness functions corresponding to different optimization problems differ, the fitness function is represented as Fitness(X) in this study for better understanding, and specific examples can be found in Section 4.

3.1.4 Distance

In DSA, three types of distances are used in total. The first is the distance between Dol i and Dol j , named DD i,j , which can be formulated as follows:

$${\rm{D}}{{\rm{D}}_{i,j}} = \left\Vert {{\bf{Do}}{{\bf{l}}_i} - {\bf{Do}}{{\bf{l}}_j}} \right\Vert ,\quad i,j = 1,2, \ldots ,N,\;i \neq j{.}$$
((1))

The second is the distance between Dol i and K i , named DK i , which can be formulated as follows:

$${\rm{D}}{{\rm{K}}_i} = \left\Vert {{\bf{Do}}{{\bf{l}}_i} - {K_i}} \right\Vert ,\quad i = 1,2, \ldots ,N{.}$$
((2))

The third is the distance between L i and K i , named DKL i , which can be formulated as follows:

$${\rm{DK}}{{\rm{L}}_i} = \left\Vert {{L_i} - {K_i}} \right\Vert ,\quad i = 1,2, \ldots ,N{.}$$
((3))

DD i,j affects the information transfer between Dol i and Dol j , while DK i and DKL i influence the movement of Dol i in the predation phase.
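To make the notation concrete, the following is a minimal Python sketch (not part of the original algorithm description) of how the dolphins, their associated solutions L and K, and the three distances of Eqs. (1)–(3) might be represented; the array layout, variable names, and NumPy usage are our own assumptions.

```python
import numpy as np

N, D = 10, 5                         # number of dolphins, problem dimension (example values)
rng = np.random.default_rng(0)

# Each row is one dolphin Dol_i, i.e., a feasible D-dimensional solution
Dol = rng.uniform(-100.0, 100.0, size=(N, D))

# Individual optimal solutions L_i and neighborhood optimal solutions K_i,
# here simply initialized to the dolphins' starting positions
L = Dol.copy()
K = Dol.copy()

def DD(i, j):
    """Distance between Dol_i and Dol_j, Eq. (1)."""
    return np.linalg.norm(Dol[i] - Dol[j])

def DK(i):
    """Distance between Dol_i and K_i, Eq. (2)."""
    return np.linalg.norm(Dol[i] - K[i])

def DKL(i):
    """Distance between L_i and K_i, Eq. (3)."""
    return np.linalg.norm(L[i] - K[i])
```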

3.2 Pivotal phases

DSA can be divided into six phases: the initialization phase, search phase, call phase, reception phase, predation phase, and termination phase. This subsection expounds the four pivotal phases of DSA, i.e., the search, call, reception, and predation phases.

3.2.1 Search phase

In the search phase, each dolphin searches its nearby area by making sounds toward M random directions. Similarly, a sound is defined as V i = [v 1, v 2, …, v D]^T (i = 1, 2, …, M) in this study, where M is the number of sounds and v j (j = 1, 2, …, D) is the component of each dimension, namely the direction attribute of the sound. In addition, each sound satisfies ‖V i ‖ = speed (i = 1, 2, …, M), where ‘speed’ is a constant representing the speed attribute of sound. To prevent dolphins from getting stuck in the search phase, we set a maximum search time T1. Within the maximum search time T1, the sound V j that Dol i (i = 1, 2, …, N) makes at time t will search for a new solution X ijt , which can be formulated as

$${X_{ijt}} = {\bf{Do}}{{\bf{l}}_i} + {V_j}t{.}$$
((4))

For the new solution X ijt that Dol i gets, its fitness E ijt is calculated as follows:

$${E_{ijt}} = {\rm{Fitness}}({X_{ijt}}){.}$$
((5))

If

$$\begin{array}{*{20}c}{{E_{iab}} = {{\min }_{j = 1,2, \ldots ,M;t = 1,2, \ldots ,{T_1}}}{E_{ijt}}\quad \quad \quad \quad \quad \;\;} \\ { = {{\min }_{j = 1,2, \ldots ,M;t = 1,2, \ldots ,{T_1}}}{\rm{Fitness}}({X_{ijt}}),} \end{array}$$
((6))

then the individual optimal solution L i of Dol i is determined as

$${L_i} = {X_{iab}}{.}$$
((7))

If

$${\rm{Fitness}}({L_i}) < {\rm{Fitness}}({K_i}),$$
((8))

then K i is replaced by L i ; otherwise, K i does not change.

After all the Dol i (i = 1, 2, …, N) update their L i and K i (if they can be updated), DSA enters the call phase.
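As an illustration only, a possible implementation of one search phase, following Eqs. (4)–(8), is sketched below; the function name, parameter defaults (M, T1, speed), and NumPy usage are assumptions of ours rather than specifications from the paper.

```python
import numpy as np

def search_phase(Dol, K, fitness, M=3, T1=10, speed=1.0, rng=None):
    """One search phase: each dolphin probes M random directions for T1 time steps
    and updates L_i and K_i (a sketch of Eqs. (4)-(8))."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = Dol.shape
    L = np.empty_like(Dol)
    for i in range(N):
        # M random sounds V_j with ||V_j|| = speed
        V = rng.normal(size=(M, D))
        V *= speed / np.linalg.norm(V, axis=1, keepdims=True)
        best_X, best_E = None, np.inf
        for j in range(M):
            for t in range(1, T1 + 1):
                X = Dol[i] + V[j] * t            # Eq. (4): new solution X_ijt
                E = fitness(X)                   # Eq. (5)
                if E < best_E:                   # Eq. (6): keep the minimum
                    best_X, best_E = X, E
        L[i] = best_X                            # Eq. (7): individual optimal solution
        if fitness(L[i]) < fitness(K[i]):        # Eq. (8)
            K[i] = L[i]
    return L, K
```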

3.2.2 Reception phase

Although the reception phase takes place after the call phase, it is necessary to expound on the reception phase first for better understanding. In DSA, the exchange process (including the call phase and the reception phase) is maintained by an N×N-order matrix named the ‘transmission time matrix’ (TS), where TS i,j represents the remaining time for the sound to move from Dol j to Dol i .

When DSA enters the reception phase, all the terms TS i,j (i=1, 2, …, N; j=1, 2, …, N) in the transmission time matrix decrease by one to indicate that the sounds spread over one unit of time. Then DSA needs to check every term TS i,j in the matrix, and if

$${\rm{T}}{{\rm{S}}_{i,j}} = 0,$$
((9))

then it means that the sound sent from Dol j to Dol i can be received by Dol i . Then we need to replace TS i,j by a new time term named ‘maximum transmission time’ (T2), to indicate that the corresponding sound has been received. Comparing K i and K j , if

$${\rm{Fitness}}({K_i}) > {\rm{Fitness}}({K_j}),$$
((10))

then K i is replaced by K j ; otherwise, K i does not change.

After all the terms in matrix TS satisfying Eq. (9) are handled, DSA enters the predation phase.
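A minimal sketch of the reception phase, under our reading of Eqs. (9) and (10), might look as follows; the in-place handling of the transmission time matrix TS and the treatment of the diagonal are our own assumptions.

```python
import numpy as np

def reception_phase(TS, K, fitness, T2):
    """One reception phase: all sounds spread one unit of time; any sound that
    arrives (TS_ij == 0) delivers K_j to dolphin i (a sketch of Eqs. (9)-(10))."""
    N = TS.shape[0]
    TS -= 1                                       # sounds spread over one unit of time
    for i in range(N):
        for j in range(N):
            if i != j and TS[i, j] == 0:          # Eq. (9): sound from Dol_j received
                TS[i, j] = T2                     # reset to the maximum transmission time
                if fitness(K[i]) > fitness(K[j]): # Eq. (10): K_j is better
                    K[i] = K[j].copy()
    return TS, K
```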

3.2.3 Call phase

In the call phase, each dolphin makes sounds to inform other dolphins of its result in the search phase, including whether a better solution is found and the better solution’s location. The transmission time matrix TS needs to be updated as follows:

For K i , K j , and TS i,j , if

$${\rm{Fitness}}({K_i}) > {\rm{Fitness}}({K_j}),$$
((11))

and

$${\rm{T}}{{\rm{S}}_{i,j}} > \left\lceil {{{{\rm{D}}{{\rm{D}}_{i,j}}} \over {A\cdot{\rm{speed}}}}} \right\rceil ,$$
((12))

where ‘speed’ is the constant representing the speed attribute of sound, as mentioned in Section 3.2.1, and A is a constant acceleration that makes sounds spread faster in case the speed is rather slow, then TS i,j is updated as follows:

$${\rm{T}}{{\rm{S}}_{i,j}} = \left\lceil {{{{\rm{D}}{{\rm{D}}_{i,j}}} \over {A\cdot{\rm{speed}}}}} \right\rceil {.}$$
((13))

After all the TS i,j (i=1, 2, …, N; j=1, 2, …, N) terms are updated (if they can be updated), DSA enters the reception phase.
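For illustration, the call phase could be sketched as below, following Eqs. (11)–(13); the default values of A and speed are placeholders of ours.

```python
import math
import numpy as np

def call_phase(Dol, K, TS, fitness, A=2.0, speed=1.0):
    """One call phase: a dolphin j with a better K_j shortens the remaining
    transmission time TS_ij toward every dolphin i whose K_i is worse
    (a sketch of Eqs. (11)-(13))."""
    N = Dol.shape[0]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if fitness(K[i]) > fitness(K[j]):                            # Eq. (11)
                t_need = math.ceil(np.linalg.norm(Dol[i] - Dol[j]) / (A * speed))
                if TS[i, j] > t_need:                                    # Eq. (12)
                    TS[i, j] = t_need                                    # Eq. (13)
    return TS
```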

3.2.4 Predation phase

In the predation phase, each dolphin calculates an encircling radius R2 according to the known information; R2 determines the distance between the dolphin’s new position after the predation phase and its neighborhood optimal solution. The dolphin then moves to that new position. For each dolphin, the known information comprises its own position, its individual optimal solution L, its neighborhood optimal solution K, the distances DK and DKL, and the search radius R1, which represents the maximum search range in the search phase and can be calculated as follows:

$${R_1} = {T_1} \times {\rm{speed}}{.}$$
((14))

Generally, the calculation of encircling radius R2 and the update of the dolphin’s position should be discussed in three cases. In the following we take Dol i (i=1, 2, …, N) as an example to illustrate these three cases.

    (a) For the currently known information of Dol i (i=1, 2, …, N), if

    $${\rm{D}}{{\rm{K}}_i} \leq {R_1},$$
    ((15))

    then it means that the neighborhood optimal solution K i of Dol i is within the search range. For simplicity, in this case, DSA also regards the individual optimal solution L i as K i (Fig. 1).

    In this case, the encircling radius R2 can be calculated as follows:

    $${R_2} = \left( {1 - {2 \over e}} \right){\rm{D}}{{\rm{K}}_i},\quad e > 2,$$
    ((16))

    where e is a constant named the ‘radius reduction coefficient’, which is greater than two and usually set as three or four. It can be easily seen that R2 gradually converges to zero.

    After obtaining the encircling radius R2, we can obtain Dol i ’s new position newDol i :

    $${\bf{newDo}}{{\bf{l}}_i} = {K_i} + {{{\bf{Do}}{{\bf{l}}_i} - {K_i}} \over {{\rm{D}}{{\rm{K}}_i}}}{R_2}{.}$$
    ((17))

    Namely, Dol i moves toward K i and stops at the position that is R2 distance away from K i (Fig. 2).

    (b) For the currently known information of Dol i (i=1, 2, …, N), if

    $${\rm{D}}{{\rm{K}}_i} > {R_1},$$
    ((18))

    and

    $${\rm{D}}{{\rm{K}}_i} \geq {\rm{DK}}{{\rm{L}}_i},$$
    ((19))

    then it means that Dol i updates K i by receiving information from others and L i is closer to Dol i than K i is (Fig. 3).

    In this case, the encircling radius R2 can be calculated as follows:

    $${R_2} = \left( {1 - {{{{{\rm{D}}{{\rm{K}}_i}} \over {{\rm{Fitness}}({K_i})}} + {{{\rm{D}}{{\rm{K}}_i} - {\rm{DK}}{{\rm{L}}_i}} \over {{\rm{Fitness}}({L_i})}}} \over {e\cdot{\rm{D}}{{\rm{K}}_i}{1 \over {{\rm{Fitness}}({K_i})}}}}} \right){\rm{D}}{{\rm{K}}_i},\;\;e > 2{.}$$
    ((20))

    After obtaining the encircling radius R2, we can obtain Dol i ’s new position newDol i :

    $${\bf{newDo}}{{\bf{l}}_i} = {K_i} + {{{\bf{Random}}} \over {\left\Vert {{\bf{Random}}} \right\Vert }}{R_2}{.}$$
    ((21))

    Namely, Dol i moves to a random position that is R2 distance away from K i (Fig. 4).

    (c) For the currently known information of Dol i (i=1, 2, …, N), if it satisfies inequality (18) and

    $${\rm{D}}{{\rm{K}}_i} < {\rm{DK}}{{\rm{L}}_i},$$
    ((22))

    then it means that Dol i updates K i by receiving information from others and K i is closer to Dol i than L i is (Fig. 5).

Fig. 1 Case (a) in the predation phase

Fig. 2 The result of Dol i ’s movement in case (a)

Fig. 3 Case (b) in the predation phase

Fig. 4 The result of Dol i ’s movement in case (b)

Fig. 5 Case (c) in the predation phase

In this case, the encircling radius R2 can be calculated as

$${R_2} = \left( {1 - {{{{{\rm{D}}{{\rm{K}}_i}} \over {{\rm{Fitness}}({K_i})}} - {{{\rm{DK}}{{\rm{L}}_i} - {\rm{D}}{{\rm{K}}_i}} \over {{\rm{Fitness}}({L_i})}}} \over {e\cdot{\rm{D}}{{\rm{K}}_i}{1 \over {{\rm{Fitness}}({K_i})}}}}} \right){\rm{D}}{{\rm{K}}_i},\;\;e > 2{.}$$
((23))

After obtaining the encircling radius R2, we can obtain Dol i ’s new position newDol i by Eq. (21). Namely, Dol i moves to a random position that is R2 distance away from K i (Fig. 6).

Fig. 6 The result of Dol i ’s movement in case (c)

After Dol i moves to the new position newDol i , newDol i is compared with K i in terms of fitness: if

$${\rm{Fitness}}({\bf{newDo}}{{\bf{l}}_i}) < {\rm{Fitness}}({K_i}),$$
((24))

then K i is replaced by newDol i ; otherwise, K i does not change.

After all the Dol i (i=1, 2, …, N) update their positions and K i (if it can be updated), DSA checks whether the end condition is met. If it is, DSA enters the termination phase; otherwise, DSA enters the search phase again.
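Putting the three cases together, one possible sketch of the predation phase is given below, following Eqs. (14)–(24); the handling of zero distances and the default values of e, speed, and T1 are our own assumptions.

```python
import numpy as np

def predation_phase(Dol, L, K, fitness, T1, speed=1.0, e=3.0, rng=None):
    """One predation phase: each dolphin computes the encircling radius R2 by
    cases (a)-(c) and moves, then updates K_i (a sketch of Eqs. (14)-(24))."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = Dol.shape
    R1 = T1 * speed                                          # Eq. (14): search radius
    for i in range(N):
        DKi = np.linalg.norm(Dol[i] - K[i])
        DKLi = np.linalg.norm(L[i] - K[i])
        fK, fL = fitness(K[i]), fitness(L[i])                # assumed to be nonzero here
        if DKi <= R1:                                        # case (a), Eq. (15)
            R2 = (1 - 2 / e) * DKi                           # Eq. (16)
            direction = (Dol[i] - K[i]) / DKi if DKi > 0 else np.zeros(D)
            newDol = K[i] + direction * R2                   # Eq. (17)
        else:
            if DKi >= DKLi:                                  # case (b), Eqs. (18)-(19)
                num = DKi / fK + (DKi - DKLi) / fL           # numerator of Eq. (20)
            else:                                            # case (c), Eq. (22)
                num = DKi / fK - (DKLi - DKi) / fL           # numerator of Eq. (23)
            R2 = (1 - num / (e * DKi / fK)) * DKi
            rand = rng.normal(size=D)                        # random direction
            newDol = K[i] + rand / np.linalg.norm(rand) * R2 # Eq. (21)
        Dol[i] = newDol
        if fitness(newDol) < fitness(K[i]):                  # Eq. (24)
            K[i] = newDol.copy()
    return Dol, K
```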

3.3 Overall implementation

Algorithm 1 shows the overall DSA. It includes the four pivotal phases as well as the initialization phase and the termination phase. The initialization phase of DSA consists of the initialization of dolphins and the initialization of parameters. The dolphins are best initialized so that they are distributed randomly and evenly, and the parameters need to be set according to the specific optimization problem. Many conditions can serve as the end condition, such as running out of a given time budget, reaching a certain precision, or exhausting the allotted number of fitness function calls. When the end condition is satisfied, the best of K i (i=1, 2, …, N) is output.
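Since Algorithm 1 itself is given in the paper, the following is only a compact runnable sketch that strings together the phase sketches above into the overall loop; all parameter defaults, the loop-count end condition, and the initialization of TS to T2 are illustrative assumptions.

```python
import numpy as np

def dolphin_swarm_algorithm(fitness, D, N=10, loops=50, M=3, T1=10, T2=5,
                            speed=1.0, A=2.0, e=3.0, bounds=(-100.0, 100.0), seed=0):
    """A sketch of the overall DSA loop built from the phase functions
    sketched above (search_phase, call_phase, reception_phase, predation_phase)."""
    rng = np.random.default_rng(seed)
    Dol = rng.uniform(bounds[0], bounds[1], size=(N, D))   # random, evenly spread dolphins
    K = Dol.copy()                                         # neighborhood optimal solutions
    TS = np.full((N, N), T2)                               # transmission time matrix
    for _ in range(loops):                                 # end condition: fixed loop budget
        L, K = search_phase(Dol, K, fitness, M, T1, speed, rng)
        TS = call_phase(Dol, K, TS, fitness, A, speed)
        TS, K = reception_phase(TS, K, fitness, T2)
        Dol, K = predation_phase(Dol, L, K, fitness, T1, speed, e, rng)
    best = min(range(N), key=lambda i: fitness(K[i]))      # output the best K_i
    return K[best], fitness(K[best])

# Example usage with the sphere function f1(x) of Eq. (25)
if __name__ == '__main__':
    best_x, best_f = dolphin_swarm_algorithm(lambda x: float(np.sum(x ** 2)), D=10)
    print(best_f)
```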

4 Experiments

The experiments are divided into two parts. In the first part, DSA is compared with particle swarm optimization (PSO), the artificial bee colony algorithm (ABC), and the genetic algorithm (GA) in terms of the convergence curves of four benchmark functions with different extreme value distributions. In the second part, these four algorithms are compared in terms of the results of 10 benchmark functions.

In the experiments, 10 benchmark functions with different properties are used, as given in Eqs. (25)–(34) (Yao et al., 1999). Considering the distribution of extreme values, there are unimodal functions (f1(x)–f4(x)), a step function (f5(x)), a function with random numbers (f6(x)), and multimodal functions (f7(x)–f10(x)). Considering whether the variables are separable, f1(x), f5(x), f6(x), and f8(x) are separable and the others are nonseparable. In addition, some benchmark functions have properties different from the others, such as the discontinuity of f5(x) and the asymmetry of f8(x). These properties help us verify the performance and characteristics of DSA, such as the convergence rate and the likelihood of falling into local optima, in various situations.

$${f_1}(x) = \sum\limits_{i = 1}^n {x_i^2} ,$$
((25))
$${f_2}(x) = \sum\limits_{i = 1}^n {\left\vert {{x_i}} \right\vert + \prod\limits_{i = 1}^n {\left\vert {{x_i}} \right\vert } } ,$$
((26))
$${f_3}(x) = {\sum\limits_{i = 1}^n {\left( {\sum\limits_{j = 1}^i {{x_i}} } \right)} ^2},$$
((27))
$${f_4}(x) = \max \left\{ {\left\vert {{x_i}} \right\vert :1 \leq i \leq n} \right\},$$
((28))
$${f_5}(x) = \sum\limits_{i = 1}^n {{{(\left\lfloor {{x_i} + 0{.}5} \right\rfloor )}^2},}$$
((29))
$${f_6}(x) = \sum\limits_{i = 1}^n {ix_i^4} + ({\rm{random}} - \left\lfloor {{\rm{random}}} \right\rfloor ),$$
((30))
$${f_7}(x) = \sum\limits_{i = 1}^{n - 1} {\left[ {100{{({x_{i + 1}} - x_i^2)}^2} + {{({x_i} - 1)}^2}} \right]} ,$$
((31))
$${f_8}(x) = \sum\limits_{i = 1}^n {\left[ {x_i^2 - 10\cos (2\pi {x_i}) + 10} \right]} ,$$
((32))
$${f_9}(x) = {1 \over {4000}}\sum\limits_{i = 1}^n {x_i^2} - \prod\limits_{i = 1}^n {\cos } \left( {{x_i}/\sqrt i } \right) + 1,$$
((33))
$$\begin{array}{*{20}c}{{f_{10}}(x) = 0{.}1\left\{ {{{\sin }^2}(3\pi {x_1}) + \sum\limits_{i = 1}^{n - 1} {{{({x_i} - 1)}^2}\left[ {1 + {{\sin }^2}(3\pi {x_{i + 1}})} \right]} } \right.} \\ {\left. {\quad \quad \quad + {{({x_n} - 1)}^2}\left[ {1 + {{\sin }^2}(2\pi {x_n})} \right]} \right\} + \sum\limits_{i = 1}^n {u({x_i},5,100,4),} } \\ {{\rm{where}}\;u({x_i},a,k,m) = \left\{ {\begin{array}{*{20}c}{k{{({x_i} - a)}^m},\;\;} & {{x_i} > a,} \\ {0,\quad \quad \quad \quad } & { - a \leq {x_i} \leq a,} \\ {k{{( - {x_i} - a)}^m},} & {{x_i} < a{.}} \end{array} } \right.\;\;} \end{array}$$
((34))
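As an illustration, three of these benchmarks could be implemented as fitness functions in Python as follows; this is a sketch following the standard forms in Yao et al. (1999), not code from the paper.

```python
import numpy as np

def f1(x):
    """Sphere function, Eq. (25)."""
    x = np.asarray(x)
    return float(np.sum(x ** 2))

def f8(x):
    """Rastrigin function, Eq. (32)."""
    x = np.asarray(x)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def f9(x):
    """Griewank function, Eq. (33)."""
    x = np.asarray(x)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)
```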

Apart from the 10 benchmark functions, the initial distributions and the parameters used by the four algorithms in the experiments are also very important. To ensure a fair comparison, the initial distributions of individuals of the four algorithms are kept the same within a single experiment. Moreover, the parameters of PSO, GA, and ABC chosen for the experiments have been widely used, and the parameters of DSA were decided after running some preliminary tests. All of them are shown in Table 1.

Table 1 Parameters used for different algorithms

4.1 Comparison of convergence rate

We compare the four algorithms on the benchmark functions f1(x), f5(x), f6(x), and f7(x), respectively, with 10 dimensions (each dimension in the range [−100, 100]), 10 individuals, and 10 000 calls of the benchmark functions. We plot the results of the first 50 loops, which are typical and easy to observe, and obtain four convergence curves (Figs. 7–10).

Fig. 7 The first 50 loops’ convergence curves under a unimodal function f1(x)

Fig. 8 The first 50 loops’ convergence curves under a step function f5(x)

Fig. 9 The first 50 loops’ convergence curves under a random number function f6(x)

Fig. 10 The first 50 loops’ convergence curves under a multimodal function f7(x)

Figs. 7–10 show that although f1(x), f5(x), f6(x), and f7(x) represent a unimodal function, a step function, a function with random numbers, and a multimodal function, respectively, with completely different distributions of extreme values, the convergence behavior of the four algorithms does not change much across these distributions. In the initial loops, ABC has the fastest convergence and DSA the slowest. As the number of loops increases, the convergence of the other three algorithms slows down, while DSA retains a relatively high convergence rate and becomes the fastest once the number of loops reaches 25. In addition, the convergence curve of DSA does not decline linearly as the curve of PSO does, but shows a periodic convergence, similar to GA.

The above analysis shows that DSA possesses some notable features, such as first-slow-then-fast convergence and periodic convergence. The reason is that information exchange causes a delay. As Eq. (13) shows, the delay is proportional to the distance between dolphins. Thus, in the first few loops, the convergence is strongly influenced by the delay, but the influence diminishes as the distances between the dolphins decrease. The advantage is that each dolphin has more time to search its own surrounding area, which enlarges the whole swarm’s search area at the group level and helps DSA avoid premature convergence.

4.2 Comparison of results

DSA, PSO, GA, and ABC are compared with each other under four conditions: (1) 10 dimensions, 10 individuals, and 10 000 calls of the benchmark functions; (2) 30 dimensions, 10 individuals, and 10 000 calls; (3) 30 dimensions, 10 individuals, and 20 000 calls; (4) 30 dimensions, 20 individuals, and 20 000 calls (in each case, every dimension ranges from −100 to 100). We use the 10 benchmark functions with different properties to test the four algorithms and compare the results, so as to examine their performance under different dimensions, different numbers of benchmark function calls, and different numbers of individuals. To obtain a better comparison and to reduce the influence of accidental factors, all the experimental results are obtained by carrying out each experiment 20 times. Furthermore, for each benchmark function, the algorithm that obtains the best result is compared with the other three algorithms using Wilcoxon’s tests at the 5% significance level to assess whether the differences are significant.
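For reference, the significance testing described above could be carried out as sketched below; the use of scipy.stats.wilcoxon on 20 paired run results is our own assumption about the procedure, and the sample data are made up.

```python
import numpy as np
from scipy.stats import wilcoxon

def significantly_different(best_runs, other_runs, alpha=0.05):
    """Wilcoxon signed-rank test at the 5% level between the best algorithm's
    repeated results and another algorithm's; returns 1 if significantly
    different, 0 otherwise (as reported in Tables 6-9)."""
    stat, p = wilcoxon(best_runs, other_runs)
    return int(p < alpha)

# Hypothetical example with 20 repeated results per algorithm
rng = np.random.default_rng(1)
best_runs = rng.normal(0.01, 0.005, 20)
other_runs = rng.normal(0.05, 0.02, 20)
print(significantly_different(best_runs, other_runs))
```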

Considering five significant figures, the results of 10 dimensions, 10 individuals, and 10 000 calls of benchmark functions are shown in Table 2.

Table 2 Results of 10 dimensions, 10 individuals, and 10 000 calls of benchmark functions*

Considering five significant figures, the results of 30 dimensions, 10 individuals, and 10 000 calls of benchmark functions are shown in Table 3.

Table 3 Results of 30 dimensions, 10 individuals, and 10 000 calls of benchmark functions*

Considering five significant figures, the results of 30 dimensions, 10 individuals, and 20 000 calls of benchmark functions are shown in Table 4.

Table 4 Results of 30 dimensions, 10 individuals, and 20 000 calls of benchmark functions*

Considering five significant figures, the results of 30 dimensions, 20 individuals, and 20 000 calls of benchmark functions are shown in Table 5.

Table 5 Results of 30 dimensions, 20 individuals, and 20 000 calls of benchmark functions*

The Wilcoxon test results of 10 dimensions, 10 individuals, and 10 000 calls of benchmark functions are shown in Table 6.

Table 6 Results of Wilcoxon’s tests of the four algorithms*

The Wilcoxon test results of 30 dimensions, 10 individuals, and 10 000 calls of benchmark functions are shown in Table 7.

Table 7 Results of Wilcoxon’s tests of the four algorithms*

The Wilcoxon test results of 30 dimensions, 10 individuals, and 20 000 calls of benchmark functions are shown in Table 8.

Table 8 Results of Wilcoxon’s tests of the four algorithms*

The Wilcoxon test results of 30 dimensions, 20 individuals, and 20 000 calls of benchmark functions are shown in Table 9.

Table 9 Results of Wilcoxon’s tests of the four algorithms*

In Tables 2–5, the best means and standard deviations among the four algorithms are shown in bold font. In Tables 6–9, the algorithm that obtains the best result is marked with ‘—’, ‘1’ means that the algorithm is significantly different from the best one, and ‘0’ means that it is not.

Tables 2–9 show that DSA performs better in most cases and outperforms the other three algorithms by orders of magnitude in more than half of the cases. On the benchmark function f8(x), GA obtains the best results, but DSA (with a smaller standard deviation) and PSO are not significantly different from GA. On some benchmark functions, taking f2(x) as an example, DSA obtains unsatisfactory results while GA obtains the best ones. Eq. (26) shows that f2(x) contains a continued product term that dominates its behavior, making the landscape of f2(x) very steep. This is unfavorable for DSA, because dolphins move around the neighborhood optimal solution: if the fitness landscape around the neighborhood optimal solution is too steep, the predation phase is more likely to move the dolphin to a random position. GA, on the contrary, simply exchanges genes and selects the best individuals, so the shape of the fitness function has little influence on its performance. In other words, the performance of DSA varies with the shape of the fitness function, but in most cases DSA performs well.

Comparing Tables 2 and 3, we can see that, with all other parameters unchanged except the dimension, DSA performs better on low-dimensional unimodal functions and high-dimensional multimodal functions, and that its performance in low dimensions is generally better than in high dimensions. Comparison of Tables 3 and 4 shows that DSA obtains better results with more calls of the benchmark function, which agrees with the conclusions in Section 4.1: because of the delay caused by information exchange, DSA benefits more than the other three algorithms from an increase in the number of benchmark function calls. Comparison of Tables 4 and 5 shows that DSA performs worse than before when the number of individuals increases, which also agrees with Section 4.1: increasing the number of individuals means decreasing the total number of loops, so there are not enough loops for DSA, with its first-slow-then-fast convergence, to overtake the other three algorithms. Based on the above analysis, when the optimization problem becomes more complex, better results can be achieved by appropriately increasing the number of benchmark function calls, whereas increasing the number of individuals does not help.

5 Conclusions

In this paper, we first introduce some of the noteworthy biological characteristics and living habits of dolphins and use the dolphin swarm’s predatory process to explain how these characteristics and habits work. Then, we propose a brand new algorithm, the dolphin swarm algorithm (DSA), based on the idea of swarm intelligence, by simulating these biological characteristics and living habits, and give specific descriptions of the four pivotal phases of the algorithm, namely the search, call, reception, and predation phases.

In the experiment section, DSA is compared with PSO, GA, and ABC in terms of the convergence rate and the benchmark function results. After comparing the four algorithms’ convergence curves under four benchmark functions with different extreme value distributions, we conclude that DSA possesses some notable features, such as first-slow-then-fast convergence and periodic convergence, and that DSA is more suitable for optimization problems in which the fitness function is called more often. After comparing the four algorithms’ results under 10 benchmark functions with different properties, we conclude that DSA performs better in more than half of the cases, especially on low-dimensional unimodal functions, high-dimensional multimodal functions, step functions, and functions with random numbers. The experimental results also indicate that DSA performs better with fewer individuals, that DSA is sensitive to increases in dimensionality, and that its performance on low-dimensional multimodal functions and high-dimensional unimodal functions may not be good enough and can be further improved.

As for follow-up work, applying DSA to other models, such as the extreme learning machine, is a further goal.