Swallow swarm optimization algorithm: a new method to optimization
Abstract
This paper presents a new swarm intelligence-based optimization method that models the movement and other behaviors of a swallow swarm. There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has individual features, but all fly in one central colony. Each particle behaves intelligently and perpetually explores its surroundings with an adaptive radius; it considers the situations of neighboring particles, the local leader, and the public leader, and then makes a move. The swallow swarm optimization (SSO) algorithm has proved highly efficient: it moves quickly across flat areas (areas where there is no hope of finding food and the derivative is zero), does not get stuck at local extremum points, has a high convergence speed, and its particles participate intelligently in the different groups. The SSO algorithm has been tested on 19 benchmark functions and achieved good results on multimodal, rotated, and shifted functions. Its results have been compared with standard PSO, the FSO algorithm, and ten different variants of PSO.
Keywords
Computational intelligence · Swallow swarm optimization (SSO) · Benchmark function · Fish swarm optimization · Particle swarm optimization

1 Introduction
Swarm intelligence (SI) [1], as observed in natural swarms, is the result of actions that individuals in the swarm perform by exploiting local information. Usually, the swarm behavior serves to accomplish certain complex colony-level goals. Examples include group foraging by ants, division of labor among scouts and recruits in honeybee swarms, predator evasion by fish schools, flocking of birds, and group hunting as observed in canids, herons, and several cetaceans.
The decentralized decision-making mechanisms found in the above examples, and others in the natural world, offer insight into how to design distributed algorithms that solve complex problems in diverse fields, such as optimization, multi-agent decision making, and collective robotics. Ant colony optimization [2, 3, 4], particle swarm optimization [5, 6, 7], artificial fish swarm optimization [8, 9, 10], glowworm swarm optimization [11, 12, 13, 14], and several swarm-based collective robotic algorithms are different methods of swarm intelligence [15, 16, 17].
In this paper, we present a novel algorithm called swallow swarm optimization (SSO) for the simultaneous computation of multimodal functions. The algorithm shares some common features with particle swarm optimization (PSO) and fish swarm optimization (FSO), but with several significant differences. Swallows possess high swarm intelligence. They fly fast and are able to cover long distances when migrating from one point to another, and they fly in great colonies. By flying collectively, they mislead hunters in dangerous situations. Swallow swarm life has many particular features that are more complicated and bewildering than those of fish schools and ant colonies. Consequently, it was chosen as the subject of this research and algorithm simulation.
The second section describes the method; the third section, the benchmark functions; and the fourth section, the experimental results, examining the diverse variants of PSO and FSO and comparing them with the proposed method.
2 Method
A review of hybrid methods in recent optimization research
Method | Authors | Paper | Publisher |
---|---|---|---|
PSO-GA [79] | Esmin, A. A. A. Lambert-Torres, G. Alvarenga, G. B. UFLA, Brazil | Hybrid evolutionary algorithm based on PSO and GA mutation | Sixth International Conference on Hybrid Intelligent Systems, 2006. HIS ‘06 |
PSO-GA [80] | Matthew Settles and Terence Soule | Breeding swarms: a GA/PSO hybrid | In GECCO ‘05: Proceedings of the 2005 conference on Genetic and evolutionary computation (2005), pp. 161–168 |
ACO-PSO [81] | Yan Meng and Ọlọrundamilọla Kazeem | A hybrid ACO/PSO control algorithm for distributed swarm robots | Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007) |
PSO-ACO [82] | D. Gómez-Cabrero, D. N. Ranasinghe | Fine-tuning the ant colony system algorithm through particle swarm optimization | Proceedings of the International Conference on Information and Automation, 2005 |
PSO-FSO [83] | Huadong Chen, Shuzong Wang, Jingxi Li, Yunfan Li | A hybrid of artificial fish swarm algorithm and particle swarm optimization for feedforward neural network training | 2007 International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2007) |
ACO-FSO [84] | Hongyan Shi, Zhaoyu Bei | Application of improved ant colony algorithm | Fourth International Conference on Natural Computation, 2008. ICNC ‘08 |
ACO-FSO [85] | Hong-yan Shi, Zhao-yu Bei | A mixed ant colony algorithm for function optimization | Proceedings of the 21st annual international conference on Chinese control and decision IEEE Press Piscataway, NJ, USA 3919–3923, 2009 |
2.1 PSO and its developments
2.1.1 PSO structure
The initial ideas on particle swarms of Kennedy (a social psychologist) and Eberhart (an electrical engineer) were basically aimed at producing computational intelligence by exploiting simple analogs of social interaction, rather than purely individual cognitive abilities. The first simulations [5] were influenced by Heppner and Grenander’s work [19] and involved analogs of bird flocks searching for corn. These soon developed [5, 20, 21] into a powerful optimization method—PSO [22].
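As context for the variants discussed next, the canonical (global-best) PSO these works build on can be sketched as follows; the parameter values w = 0.7 and c1 = c2 = 1.5 are illustrative defaults, not the settings used in any of the cited papers:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over [lo, hi]^dim (a sketch)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # canonical update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(xi * xi for xi in x), 5, (-5.12, 5.12))` minimizes the 5-dimensional sphere function.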
2.1.2 New developments of the PSO
Given its simple concept and efficiency, the PSO has become a popular optimizer and has widely been applied in practical problem solving. Thus, theoretical studies and performance improvements of the algorithm have become important and attractive. Convergence analysis and stability studies have been reported by Clerc and Kennedy [24], Trelea [25], Yasuda et al. [26], Kadirkamanathan et al. [27], and van den Bergh and Engelbrecht [28].
Another active research trend in PSO is hybrid PSO, which combines PSO with other evolutionary paradigms. Angeline [38] first introduced into PSO a selection operation similar to that in a genetic algorithm (GA). Hybridization of GA and PSO has been used in [39] for recurrent artificial neural network design. In addition to the normal GA operators, for example, selection [38], crossover [40], and mutation [41], other techniques such as local search [42] and differential evolution [43] have been used to combine with PSO. Cooperative approach [44], self-organizing hierarchical technique [45], deflection, stretching, and repulsion techniques [46] have also been hybridized with traditional PSO to enhance performance. Inspired by biology, some researchers introduced niche [47, 48] and speciation [49] techniques into PSO to prevent the swarm from crowding too closely and to locate as many optimal solutions as possible and adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization [50]. The orthogonal PSO (OPSO) reported in [41] uses an “intelligent move mechanism” (IMM) operation to generate two temporary positions, H and R, for each particle X, according to the cognitive learning and social learning components, respectively. Then, OED is performed on H and R to obtain the best position X* for the next move, and then, the particle velocity is obtained by calculating the difference between the new position X* and the current position X. Such an IMM was also used in [51] to orthogonally combine the cognitive learning and social learning components to form the next position, and the velocity was determined by the difference between the new position and the current position. The OED in [52] was used to help generate the initial population evenly. 
Differing from previous work and going a step further, in [53] the OED is used to form an orthogonal learning (OL) strategy, which discovers and preserves useful information in the personal best and neighborhood best positions in order to construct a promising and efficient exemplar. This exemplar is used to guide the particle to fly toward the globally optimal region. The OL strategy is a generic operator and can be applied to any kind of topology structure: if the OL is used for the GPSO, then P_{n} is P_{g}; if it is used for the LPSO, then P_{n} is P_{l}. For either the global or the local version, when constructing the vector P_{o}, if P_{i} is the same as P_{n} (e.g., for the globally best particle, P_{i} and P_{g} are identical vectors), the OED (orthogonal experimental design) makes no contribution. In such a case, OLPSO randomly selects another particle P_{r} and then constructs P_{o} using the information of P_{i} and P_{r} through the OED. Two OLPSO versions, based on a global topology (OLPSO-G) and a local topology (OLPSO-L), are simulated [53].
In addition to research on parameter control and auxiliary techniques, PSO topological structures are also widely studied. The LPSO with a ring topological structure and the von Neumann topological structure PSO (VPSO) have been proposed by Kennedy and Mendes [54, 55] to enhance the performance in solving multimodal problems. Further, dynamically changing neighborhood structures have been proposed by Suganthan [36], Hu and Eberhart [56], and Liang and Suganthan [57] to avoid the deficiencies of fixed neighborhoods. Moreover, in the “fully informed particle swarm” (FIPS) algorithm [58], the information of the entire neighborhood is used to guide the particles. The CLPSO in [59] lets the particle use different pBests to update its flying on different dimensions for improved performance in multimodal applications.
2.2 Artificial fish swarm algorithm (AFSA)
A new evolutionary computation technique, the artificial fish swarm algorithm (AFSA), was first proposed in 2002 [60]. The idea of AFSA is based on simulating the simplified natural social behavior of fish schooling and on swarming theory. AFSA possesses attractive features similar to those of the genetic algorithm (GA), such as independence from gradient information of the objective function and the ability to solve complex, non-linear, high-dimensional problems. Furthermore, it can achieve faster convergence and requires few parameters to be adjusted. AFSA does not use the crossover and mutation operators of the GA, so it is easier to implement. AFSA is also a population-based optimizer: the system is first initialized as a set of randomly generated potential solutions and then searches for the optimum iteratively [61].
An artificial fish (AF) is a fictitious entity modeled on a real fish; it is used to carry out the analysis and explanation of the problem and can be realized using concepts from animal ecology. With the aid of object-oriented analysis, we can regard the artificial fish as an entity encapsulating its own data and a series of behaviors: it receives information from the environment through its sense organs and reacts by controlling its tail and fins. The environment in which the artificial fish lives is mainly the solution space and the states of the other artificial fish. Its next behavior depends on its current state and the state of its environment (including the quality of the current problem solutions and the states of its companions), and it influences the environment through its own activities and those of its companions [62].
2.2.1 The basic functions of AFSA
Fish usually stay in places with a lot of food, so we simulate the behaviors of fish based on this characteristic to find the global optimum; this is the basic idea of the AFSA. The basic behaviors of the AF are defined [9, 10] as follows (for maximization):
(4) AF_Move: fish swim randomly in the water; in effect, they are seeking food or companions over larger ranges.
(5) AF_Leap: fish stop somewhere in the water. When every AF's behavior result gradually becomes the same and the differences in objective values (food concentration, FC) grow smaller over some iterations, the search may have fallen into a local extremum; the parameters of some still states are then changed randomly so as to leap out of the current state.
The detailed behavior pseudo-code can be seen in [63]. AF_Swarm makes the few fish confined in local extreme values move in the direction of the few fish tending toward the global extreme value, which results in AF fleeing from local extreme values. AF_Follow accelerates AF moving to better states and, at the same time, accelerates AF moving from local extreme values to the global extreme value field.
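One such behavior can be sketched in code as follows; the function name and the parameter names (`visual`, `step`, `try_number`) follow the usual AFSA descriptions and are illustrative assumptions, not any specific published implementation:

```python
import random

def af_prey(f, x, visual=1.0, step=0.5, try_number=10):
    """AF prey-style behavior sketch (maximization): probe random points
    within the visual range and move toward the first probe with a better
    objective value; if none is found, make a random move instead."""
    for _ in range(try_number):
        probe = [xi + visual * random.uniform(-1, 1) for xi in x]
        if f(probe) > f(x):                  # better food concentration found
            # step a random fraction of `step` toward the probe point
            return [xi + step * random.random() * (pi - xi)
                    for xi, pi in zip(x, probe)]
    # no better point found: random move within the visual range
    return [xi + step * random.uniform(-1, 1) for xi in x]
```

The same skeleton (probe neighborhood, move on improvement, otherwise perturb) underlies the AF_Swarm and AF_Follow behaviors, with the target point taken from swarm-center or best-neighbor information instead of a random probe.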
2.3 Swallow swarm optimization (SSO)
2.3.1 Swallows natural life
Many creatures have a social life and live in groups, including birds. A variety of birds live socially in small and large colonies, and each has certain characteristics and social behaviors. Various optimization algorithms, such as PSO, have been presented by investigating these behaviors and drawing inspiration from them. Swallows, too, have unique characteristics and social behaviors, and these attracted our attention.
The swallow is an insect-eating bird; 83 species of swallows have been identified so far [66]. Owing to its high adaptability to the environment, this bird lives almost everywhere on Earth. It has special characteristics that distinguish it from other birds, and many studies have been conducted on the lives of various swallow species, with remarkable results, which are discussed below:
2.3.1.1 Immigration
Swallows annually travel 17,000 km, migrating from one continent to another in very large groups of up to several hundred thousand birds [67]. Such a social life in large groups indicates the high swarm intelligence of these birds.
2.3.1.2 High-speed flying
Swallows hold the speed record among migratory birds: they can travel 4,000 km in 24 h, that is, at an average speed of about 170 km/h. This feature can be very effective for the particles' convergence speed and for solving optimization problems in the least time.
2.3.1.3 Skilled hunters
Swallows have adapted to hunting insects on the wing by developing slender, streamlined bodies and long pointed wings, which allow great maneuverability and endurance as well as frequent periods of gliding. Their body shape allows very efficient flight, which costs swallows 50–75 % less energy than equivalent passerines of the same size. Swallows usually forage at around 30–40 km/h. They are excellent flyers and use these skills to feed and to attract a mate.
The swallows generally forage for prey that is on the wing, but they will on occasion snap prey off branches or on the ground. The flight may be fast and involve a rapid succession of turns and banks when actively chasing fast-moving prey; less agile prey may be caught with a slower more leisurely flight that includes flying in circles and bursts of flapping mixed with gliding. Where several species of swallow feed together, they will be separated into different niches based on height off the ground: some species feeding closer to the ground and others feeding at higher levels. Similar separation occurs where feeding overlaps with swifts. Niche separation may also occur with the size of prey chosen.
2.3.1.4 Different calls
Swallows are able to produce many different calls or songs, which are used to express excitement, to communicate with others of the same species, during courtship, or as an alarm when a predator is in the area. The songs of males are related to the body condition of the bird and are presumably used by females to judge the physical condition and suitability for mating of males. Begging calls are used by the young when soliciting food from their parents. The typical song of swallows is a simple, sometimes musical twittering [68].
Swallows regularly communicate through various sounds, and if one of them finds a food source, it immediately calls the other birds in the colony. These special sounds help them maintain a better social life.
2.3.1.5 Information centers
Cliff swallow colonies function as “information centers” in which individuals unsuccessful at finding food locate other individuals that have found food and follow them to a food source [69]. The advantages of sharing information on the whereabouts of food are substantial and probably represent a major reason why cliff swallows live in colonies. Cliff swallows are also one of the few birds (and indeed non-human vertebrates) that actively communicate the presence of food to others by giving distinct signals (calls) used only in that context [70]. The evolution of such information sharing is perplexing, because the typical beneficiaries of calling are individuals unrelated to the caller.
2.3.1.6 Floating swallow
When migrating, a few swallows always fly outside the colony, and at first glance it might look as if they disturb the colony's order. But these swallows, which are generally young (newly matured), play an important role in the colony. First, they have the chance to find food outside the internal areas of the colony and call the other swallows as soon as they find it. Second, if a hunting bird intends to attack, these floating swallows quickly notice it and inform the other members of the colony. They fly between colonies and can change colonies. This behavior has been used in this study: such particles increase the chance of finding the optimum points, and if the other particles mistakenly converge on a local optimum, these particles improve the chance of finding better points through their random and independent movement. In this paper, these particles are called aimless particles.
2.3.1.7 The interest in social life
Swallows are one of the most important species of birds that prefer colony life to individual life. Sometimes thousands of swallows can be seen in the sky as a single colony; however, large colonies are composed of several smaller ones. The number of birds in the group plays a critical role in more successful reproduction, fighting off predators, and a better search for food [71, 72]. A factor in the group life of swallows called social stimulation is an important influence [72]. The size of a swallow colony can directly affect the hormone levels in the birds' bodies: the larger the colony, the higher the hormone levels and the more successful the swallows' lives and reproduction [73]. Over its long evolution, this bird has acquired high swarm intelligence and a successful social life.
2.3.1.8 Leaders
Each colony is divided into several subcolonies that nest side by side at a site. Each colony has a leader, commonly an experienced bird. If the leader's suitability declines for any reason, another bird immediately supersedes it. Swallows always follow the leader, provided it has the necessary competencies. In this study, two types of leaders are used: local leaders, which conduct the internal colonies and mark a local optimum point, and a head leader, which leads the entire colony and marks the global optimum point.
2.3.1.9 Escaping from predators
Because of their small bodies, swallows are good prey for many birds. Two of their strategies are synchronous flying and safety in numbers; their intelligent behaviors against hunting birds are very interesting. Swallows are highly adaptable to different environments, and these adaptive, intelligent behaviors can be clearly seen in their feeding and nest-building architecture. Swallows are very successful in their search for food and have specific, very complex strategies for finding it [74]. Many behaviors of swallows remain unknown, and it is very difficult to implement all of this bird's behaviors in a working collective intelligence algorithm, because such an algorithm would have high time complexity and, owing to the high complexity of these behaviors, would not achieve the necessary optimality. In this paper, only the behaviors of Sects. 2.3.1.1, 2.3.1.2, 2.3.1.6, and 2.3.1.8 have been used, and good results have been obtained.
2.3.2 Algorithm SSO
In SSO, there are three kinds of particles:

1. Explorer particle (e_{i})
2. Aimless particle (o_{i})
3. Leader particle (l_{i})
These particles move parallel to each other and are always interacting. Each particle in the colony (each colony can consist of several subcolonies) is responsible for a task that, by performing it, guides the colony toward a better situation.
2.3.2.1 Explorer particle
Every particle e_{i} uses the nearest local leader LL_{i} in order to compute the vector \( V_{{{\text{LL}}_{i} }} \).
2.3.2.2 Aimless particle
These particles do not initially have a good position compared with the other particles, and their f(o_{i}) values are poor. Once recognized, they are differentiated from the explorer particles e_{i} and given a new responsibility in the group (o_{i}): exploratory, random search. They start moving randomly, independent of the positions of HL_{i} and LL_{i}. They are the swallows that, as scouts of the colony, explore remote areas and inform the group if they find a good point. In many optimization problems, an inappropriate distribution of particles in the search space keeps the optimum response hidden from the group's eyes, and the group converges on a local optimum; this is the greatest difficulty in optimization problems (early convergence to local optimum points). The particles o_{i} may appear to behave aimlessly and uselessly, but they guard against the probability of neglecting the globally optimal response: with their long jumps they visit the diverse surrounding points and examine the situation there. The particle o_{i} compares its position with the local and global optimum points LL_{i} and HL_{i}.
For example, the range of the Rosenbrock function is defined as (−50, 50). If the function rand(min, max) produces the random number 25 and the function rand() produces 0.5, the fraction evaluates to 25/(1 + 0.5) ≈ 16.7. This amount may then be added to or subtracted from the position of o_{i}, which increases the chance of examining different areas of the environment.
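The aimless particle's jump (the o_{i+1} update in step 28 of the pseudocode) can be sketched as follows; the particle is represented as a plain list of coordinates:

```python
import random

def aimless_move(o, min_s, max_s):
    """One random jump of an aimless particle o: each coordinate moves by
    +/- rand(min_s, max_s) / (1 + rand()), per the o_{i+1} update rule."""
    return [oi + random.choice([-1, 1])
            * random.uniform(min_s, max_s) / (1 + random.random())
            for oi in o]
```

Because the jump magnitude is drawn from the whole benchmark range (min_s, max_s), a single move can carry the particle into a distant region of the search space, which is exactly the long-jump exploration described above.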
2.3.2.3 Leader particle
Real boundaries between subcolonies may never be marked, because swallows move at high speed and the swarm is highly dynamic. The arrangement of the swallows and their numbers change according to the sizes of the subcolonies.
All three kinds of particles (e_{i}, o_{i}, and l_{i}) interact with each other continuously, and each particle (swallow) can play any of these three roles. During the search, particles may frequently change their roles, but the primary goal (finding the optimum point) remains the most important task.
2.3.2.4 Pseudocode SSO
1. Initialize population (random all particles e_{i})
2. iter = 1
3. While (iter < max_iter)
4. For every particle (e_{i}) calculate f(e_{i})
5. Sort f(e_{1}, e_{2}, …, e_{n}) → min to max
6. All particles e_{i} = f(e_{i})
7. HL = e_{min}
8. For (j = 2 to j < m) LL_{j} = e_{j}
9. For (j = 1 to j < b) o_{j} = e_{n−j+1}
10. i = 1
11. While (k < iter)
12. While (j < N)
13. if (e_{best} > e_{i}) e_{best} = e_{i}
14. While (i < N)
15. Search (nearest LL_{i} to e_{i})
16. α_{HL} = {if(e_{i} = 0||e_{best} = 0)→1.5}
17. \( \alpha_{\text{HL}} = \left\{ {\begin{array}{*{20}c} {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} < {\text{HL}}_{i} ) \to } \hfill & {\frac{{rand().e_{i} }}{{e_{i} \cdot e_{\text{best}} }}} \hfill & {e_{i} ,e_{\text{best}} \ne 0} \hfill \\ {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} > {\text{HL}}_{i} ) \to } \hfill & {\frac{{2rand() \cdot e_{\text{best}} }}{{1/(2.e_{i} )}}} \hfill & {e_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} > e_{\text{best}} ) \to } \hfill & {\frac{{e_{\text{best}} }}{1/(2.rand())}} \hfill & {} \hfill \\ \end{array} } \right. \)
18. β_{HL} = {if(e_{i} = 0||e_{best} = 0)→1.5}
19. \( \beta_{\text{HL}} = \left\{ {\begin{array}{*{20}c} {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} < {\text{HL}}_{i} ) \to } \hfill & {\frac{{rand().e_{i} }}{{e_{i} \cdot {\text{HL}}_{i} }}} \hfill & {e_{i} ,{\text{HL}}_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} > {\text{HL}}_{i} ) \to } \hfill & {\frac{{2rand() \cdot {\text{HL}}_{i} }}{{1/(2.e_{i} )}}} \hfill & {e_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} > e_{\text{best}} ) \to } \hfill & {\frac{{{\text{HL}}_{i} }}{1/(2.rand())}} \hfill & {} \hfill \\ \end{array} } \right. \)
20. \( V_{{{\text{HL}}_{i + 1} }} = V_{{{\text{HL}}_{i} }} + \alpha_{\text{HL}} rand()(e_{\text{best}} - e_{i} ) + \beta_{\text{HL}} rand()({\text{HL}}_{i} - e_{i} ) \)
21. α_{LL} = {if(e_{i} = 0||e_{best} = 0)→2}
22. \( \alpha_{\text{LL}} = \left\{ {\begin{array}{*{20}c} {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} < {\text{LL}}_{i} ) \to } \hfill & {\frac{{rand().e_{i} }}{{e_{i} \cdot e_{\text{best}} }}} \hfill & {e_{i} ,e_{\text{best}} \ne 0} \hfill \\ {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} > {\text{LL}}_{i} ) \to } \hfill & {\frac{{2rand() \cdot e_{\text{best}} }}{{1/(2.e_{i} )}}} \hfill & {e_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} > e_{\text{best}} ) \to } \hfill & {\frac{{e_{\text{best}} }}{1/(2.rand())}} \hfill & {} \hfill \\ \end{array} } \right. \)
23. β_{LL} = {if(e_{i} = 0||e_{best} = 0)→2}
24. \( \beta_{\text{LL}} = \left\{ {\begin{array}{*{20}c} {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} < {\text{LL}}_{i} ) \to } \hfill & {\frac{{rand().e_{i} }}{{e_{i} \cdot {\text{LL}}_{i} }}} \hfill & {e_{i} ,{\text{LL}}_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} < e_{\text{best}} )\& \& (e_{i} > {\text{LL}}_{i} ) \to } \hfill & {\frac{{2rand() \cdot {\text{LL}}_{i} }}{{1/(2.e_{i} )}}} \hfill & {e_{i} \ne 0} \hfill \\ {{\text{if}}(e_{i} > e_{\text{best}} ) \to } \hfill & {\frac{{{\text{LL}}_{i} }}{1/(2.rand())}} \hfill & {} \hfill \\ \end{array} } \right. \)
25. \( V_{{{\text{LL}}_{i + 1} }} = V_{{{\text{LL}}_{i} }} + \alpha_{\text{LL}} rand()(e_{\text{best}} - e_{i} ) + \beta_{\text{LL}} rand ( )({\text{LL}}_{i} - e_{i} ) \)
26. \( V_{i + 1} = V_{{{\text{HL}}_{i + 1} }} + V_{{{\text{LL}}_{i + 1} }} \)
27. e_{i+1} = e_{i} + V_{i+1}
28. \( o_{i + 1} = o_{i} + rand(\{ - 1,1\} )*\frac{{rand(\min_{s} ,\max_{s} )}}{1 + rand()} \)
29. if (\( f(o_{i + 1} ) > f({\text{HL}}_{i} ) \)) e_{nearest} = o_{i+1} → go to 30
30. While (k_{o} ≤ b)
31. While (l ≤ n_{l})
32. if \( (f(o_{{k_{o} }} ) > f({\text{LL}}_{l} )) \)
33. e_{nearest} = o_{k_{o}}
34. Loop // i < N
35. Loop // iter < max_iter
36. End.
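Steps 20 and 25–27 of the pseudocode, the explorer particle's velocity and position update, can be sketched as follows; the α/β coefficients are assumed to have been computed from the piecewise rules of steps 16–24, and all vectors are plain Python lists:

```python
import random

def sso_move(e, e_best, hl, ll, v_hl, v_ll, a_hl, b_hl, a_ll, b_ll):
    """One explorer-particle move (a sketch): update the velocity toward
    the head leader (V_HL) and toward the nearest local leader (V_LL),
    sum them, and step: V_{i+1} = V_HL + V_LL, e_{i+1} = e_i + V_{i+1}."""
    dim = len(e)
    v_hl = [v_hl[d] + a_hl * random.random() * (e_best[d] - e[d])
                    + b_hl * random.random() * (hl[d] - e[d])
            for d in range(dim)]
    v_ll = [v_ll[d] + a_ll * random.random() * (e_best[d] - e[d])
                    + b_ll * random.random() * (ll[d] - e[d])
            for d in range(dim)]
    v = [v_hl[d] + v_ll[d] for d in range(dim)]   # V_{i+1} = V_HL + V_LL
    e_next = [e[d] + v[d] for d in range(dim)]    # e_{i+1} = e_i + V_{i+1}
    return e_next, v_hl, v_ll
```

Note how the update mirrors PSO's structure but pulls each explorer toward two leaders (HL and LL) at once, in addition to its personal best e_best.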
Parameters of SSO algorithm
# | Parameter | Description |
---|---|---|
1 | Iter | Algorithm iteration |
2 | e_{i} | Explorer particle |
3 | o_{i} | Aimless particle |
4 | HL | Head leader |
5 | LL_{i} | Local leader |
6 | n_{l} | Number of local leaders |
7 | e_{best} | The best position the particle has ever had |
8 | α_{HL} β_{HL} | Control parameters of convergence speed toward particle HL |
9 | α_{LL} β_{LL} | Control parameters of convergence speed toward particle LL |
10 | V_{HL} | Velocity vector of particle toward HL |
11 | V_{LL} | Velocity vector of particle toward LL |
12 | min_{s}, max_{s} | Minimum and maximum of the benchmark function domain |
13 | b | Number of aimless particles |
3 Benchmark functions
Benchmarks that are used to test the ability of the proposed algorithm
# | Function | Equation | Domain | F_{min} | D | |
---|---|---|---|---|---|---|
f_{1} | Sphere | \( \sum\nolimits_{i = 1}^{D} {x_{i}^{2} } \) | ±5.12 | 0 | 30 | Unimodal |
f_{2} | Rosenbrock | \( \sum\nolimits_{i = 1}^{D - 1} {\left( {100\left( {x_{i + 1} - x_{i}^{2} } \right)^{2} + (x_{i} - 1)^{2} } \right)} \) | ±50 | 0 | 30 | |
f_{3} | Schwefel’s P2.22 | \( \sum\nolimits_{i = 1}^{D} {|x_{i} | + \prod\nolimits_{i = 1}^{D} {|x_{i} |} } \) | ±10 | 0 | 30 | |
f_{4} | Quadric | \( \sum\nolimits_{i = 1}^{D} {\left( {\sum\nolimits_{j = 1}^{i} {x_{j} } } \right)^{2} } \) | ±100 | 0 | 30 | |
f_{5} | Step | \( \sum\nolimits_{i = 1}^{D} {\left( {\left\lfloor {x_{i} + 0.5} \right\rfloor } \right)}^{2} \) | ±100 | 0 | 30 | |
f_{6} | Quadric noise | \( \sum\nolimits_{i = 1}^{D} {ix_{i}^{4} } + {\text{rand}}[0,1) \) | ±1.28 | 0 | 30 | |
f_{7} | Ackley | \( 20 + e - 20e^{{ - 0.2\sqrt {\frac{1}{D}\sum\nolimits_{i = 1}^{D} {x_{i}^{2} } } }} - e^{{\frac{1}{D}\sum\nolimits_{i = 1}^{D} {\cos (2\pi x_{i} )} }} \) | ±32 | 0 | 30 | Multimodal |
f_{8} | Griewank | \( \sum\nolimits_{i = 1}^{D} {\left( {\frac{{x_{i}^{2} }}{4000}} \right)} - \prod\nolimits_{i = 1}^{D} {\cos \left( {\frac{{x_{i} }}{\sqrt i }} \right)} + 1 \) | ±600 | 0 | 30 | |
f_{9} | Rastrigin | \( \sum\nolimits_{i = 1}^{D} {\left( {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right)} \) | ±5.12 | 0 | 30 | |
f_{10} | Perm #1 [86] | \( \sum\nolimits_{k = 1}^{4} {\left[ {\sum\nolimits_{i = 1}^{4} {(i^{k} + \beta )((x_{i} /i)^{k} - 1)} } \right]}^{2} \) | ±4 β = 50 | 0 | 30 | |
f_{11} | Schwefel | \( \sum\nolimits_{i = 1}^{D} { - x_{i} \sin (\sqrt {|x_{i} |} )} \) | ±500 | −12569.5 | 30 | |
f_{12} | Non-continuous Rastrigin | \( \begin{gathered} \sum\nolimits_{i = 1}^{D} {\left( {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right)} \hfill \\ {\text{where}}\quad y_{i} = \left\{ {\begin{array}{*{20}c} {x_{i} } & {|x_{i} | < 0.5} \\ {\frac{{{\text{round}}(2x_{i} )}}{2}} & {|x_{i} | \ge 0.5} \\ \end{array} } \right. \hfill \\ \end{gathered} \) | ±5.12 | 0 | 30 | |
f_{13} | Generalized penalized | \( \frac{\pi }{D}\left\{ {10\sin^{2} (\pi y_{1} ) + \sum\nolimits_{i = 1}^{D - 1} {(y_{i} - 1)^{2} \left[ {1 + 10\sin^{2} (\pi y_{i + 1} )} \right]} + (y_{D} - 1)^{2} } \right\} + \sum\nolimits_{i = 1}^{D} {u(x_{i} ,10,100,4)} \) \( {\text{where}}\quad y_{i} = 1 + \frac{1}{4}(x_{i} + 1),\quad u(x_{i} ,a,k,m) = \left\{ {\begin{array}{*{20}c} {k(x_{i} - a)^{m} } & {x_{i} > a} \\ 0 & { - a \le x_{i} \le a} \\ {k( - x_{i} - a)^{m} } & {x_{i} < - a} \\ \end{array} } \right. \) | ±50 | 0 | 30 | |
f_{14} | Rotated Schwefel | \( 418.9829*D - \sum\nolimits_{i = 1}^{D} {z_{i} } \) \( z_{i} = \left\{ {\begin{array}{*{20}c} {y_{i} \sin (\sqrt {|y_{i} |} )} & {{\text{if}}|y_{i} | \le 500} \\ 0 & {\text{otherwise}} \\ \end{array} } \right. \) \( y_{i} = y_{i}^{\prime } + 420.96\quad y^{\prime } = M*(x - 420.96), \) M is an orthogonal matrix | ±500 | 0 | 30 | Rotated and shifted |
f_{15} | Rotated Rastrigin | \( \begin{gathered} \sum\nolimits_{i = 1}^{D} {\left( {y_{i}^{2} - 10\cos (2\pi y_{i} ) + 10} \right)} \hfill \\ y = M*x \hfill \\ \end{gathered} \) | ±5.12 | 0 | 30 | |
f_{16} | Rotated Ackley | \( \begin{gathered} - 20\exp \left( { - 0.2\sqrt {\frac{1}{D}\sum\nolimits_{i = 1}^{D} {y_{i}^{2} } } } \right) - \exp \left( {\frac{1}{D}\sum\nolimits_{i = 1}^{D} {\cos 2\pi y_{i} } } \right) + 20 + e \hfill \\ y = M*x \hfill \\ \end{gathered} \) | ±32 | 0 | 30 | |
f_{17} | Rotated Griewank | \( \begin{gathered} \sum\nolimits_{i = 1}^{D} {\left( {\frac{{y_{i}^{2} }}{4,000}} \right)} - \prod\nolimits_{i = 1}^{D} {\cos \left( {\frac{{y_{i} }}{\sqrt i }} \right) + 1} \hfill \\ y = M*x \hfill \\ \end{gathered} \) | ±600 | 0 | 30 | |
f_{18} | Shifted Rosenbrock | \( \sum\nolimits_{i = 1}^{D} {\left( {100\left( {z_{i}^{2} - z_{i + 1} } \right)^{2} + (z_{i} - 1)^{2} } \right) + f\_{\text{bias}}_{6} } \) \( \begin{gathered} z = x - o + 1,\quad x = [x_{1} ,x_{2} , \ldots ,x_{D} ] \hfill \\ o = [o_{1} ,o_{2} , \ldots o_{D} ]:{\text{the}}\,{\text{shifted}}\,{\text{global}}\,{\text{optimum}} \hfill \\ \end{gathered} \) | ±100 | 390 | 30 | |
f_{19} | Shifted Rastrigin | \( \sum\nolimits_{i = 1}^{D} {\left( {z_{i}^{2} - 10\cos (2\pi z_{i} ) + 10} \right) + f\_{\text{bias}}_{9} } \) \( z = x - o,\quad o = [o_{1} ,o_{2} , \ldots ,o_{D} ]:{\text{the}}\,{\text{shifted}}\,{\text{global}}\,{\text{optimum}} \) | ±5 | −330 | 30 |
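Four of the benchmarks in the table can be implemented directly as follows (a sketch; the dimension D is implied by the length of the input vector):

```python
import math

def sphere(x):          # f1: global minimum 0 at x = (0, ..., 0)
    return sum(xi * xi for xi in x)

def rosenbrock(x):      # f2: global minimum 0 at x = (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):       # f9: global minimum 0 at x = (0, ..., 0)
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def ackley(x):          # f7: global minimum 0 at x = (0, ..., 0)
    d = len(x)
    return (20 + math.e
            - 20 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / d))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / d))
```

These plain implementations also serve as the building blocks for the rotated and shifted variants f14–f19, which compose them with y = M*x or z = x − o.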
The first problem is the sphere function, which is easy to solve. The second problem is the Rosenbrock function. It can be treated as a multimodal problem: it has a narrow valley from the perceived local optima to the global optimum. In the experiments below, we find that algorithms that perform well on the sphere function also perform well on the Rosenbrock function. Ackley’s function has one narrow global optimum basin and many minor local optima; it is probably one of the easiest of these problems, as its local optima are shallow. Griewank’s function has a \( \prod\nolimits_{i = 1}^{D} {\cos (x_{i} /\sqrt i )} \) component causing linkages among variables, thereby making it difficult to reach the global optimum. An interesting phenomenon of Griewank’s function is that it is more difficult for lower dimensions than for higher dimensions [75].
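For reference, the four functions just described can be written as a short NumPy sketch (an illustration only; the paper’s experiments were run in MATLAB):

```python
import numpy as np

def sphere(x):
    # f_1: sum of squares; global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def rosenbrock(x):
    # Narrow curved valley; global minimum 0 at x = (1, ..., 1).
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def ackley(x):
    # One narrow global basin at the origin with many shallow local optima.
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def griewank(x):
    # The product term links the variables together.
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

Each function evaluates to (numerically) zero at its known global optimum, which is a quick sanity check before plugging them into any optimizer.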
Rastrigin’s function is a complex multimodal problem with a large number of local optima. When attempting to solve it, algorithms may easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results. The non-continuous Rastrigin’s function is constructed from Rastrigin’s function and has the same number of local optima as the continuous version. The complexity of Schwefel’s function is due to its deep local optima being far from the global optimum; it is hard to find the global optimum if many particles fall into one of these deep local optima.
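The Rastrigin family and Schwefel’s function can likewise be sketched in NumPy. The rounding rule used for the non-continuous variant (coordinates with \(|x_i| \ge 0.5\) rounded to the nearest multiple of 0.5) is the commonly used one and is an assumption here, since this section does not spell it out:

```python
import numpy as np

def rastrigin(x):
    # Many regularly spaced local optima; global minimum 0 at the origin.
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def noncontinuous_rastrigin(x):
    # Assumed rounding rule: snap coordinates with |x_i| >= 0.5 to the
    # nearest multiple of 0.5, then evaluate the continuous Rastrigin.
    y = np.where(np.abs(x) < 0.5, x, np.round(2.0 * x) / 2.0)
    return rastrigin(y)

def schwefel(x):
    # Deep local optima far from the global optimum at
    # x ~ (420.9687, ..., 420.9687); global minimum ~ 0 there.
    return float(418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x)))))
```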
In rotated Schwefel’s function, in order to keep the global optimum in the search range after rotation, and noting that the original global optimum of Schwefel’s function is at [420.96, 420.96, …, 420.96], y′ = M * (x − 420.96) and y = y′ + 420.96 are used instead of y = M * x. Since Schwefel’s function has better solutions outside the search range [−500, 500]^{D}, when |y_{i}| > 500, z_{i} = 0.001(|y_{i}| − 500)^{2}; that is, z_{i} is set in proportion to the squared distance between y_{i} and the bound.
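The shift-rotate-shift transformation and out-of-range handling just described can be sketched as follows. The orthogonal matrix M is supplied by the caller, and the symmetric treatment of both bounds (via \(|y_i|\)) is an assumption based on the description above:

```python
import numpy as np

def rotated_schwefel(x, M):
    # Shift to the optimum frame, rotate, shift back, so the rotated
    # optimum stays at [420.96, ..., 420.96] inside the search range.
    y = M @ (x - 420.96) + 420.96
    # Coordinates leaving [-500, 500] after rotation are replaced by a
    # term proportional to the squared distance from the bound.
    z = np.where(np.abs(y) <= 500.0,
                 y * np.sin(np.sqrt(np.abs(y))),
                 0.001 * (np.abs(y) - 500.0) ** 2)
    return float(418.9829 * x.size - np.sum(z))
```

With the identity matrix for M, the function reduces to plain Schwefel and evaluates to approximately zero at the known optimum, which confirms the shift bookkeeping.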
4 Experiments
A number of benchmarks have been used to test the proposed algorithm. Each function was chosen for a particular reason; for instance, the Ackley function is used to test the aimless particles. These particles are expected to find one of the minimum areas through their random movements and guide the group toward it. Each of these functions has particular features and has been widely used in the literature to examine optimization methods. The SSO method has been tested on every benchmark function with different numbers of particles and different numbers of iterations, and then compared to FSO and PSO.
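The experiment design — running each optimizer on each benchmark with varying particle counts and iteration budgets — can be mirrored with a small harness. The `random_search` stand-in below is purely hypothetical (the paper’s SSO, PSO, and FSO implementations are not reproduced here); it only shows the shape of the loop into which those algorithms would be plugged:

```python
import numpy as np

def random_search(f, dim, bound, n_particles, n_iters, seed=0):
    # Hypothetical stand-in optimizer: sample n_particles points per
    # iteration uniformly in [-bound, bound]^dim and keep the best value.
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_iters):
        pts = rng.uniform(-bound, bound, size=(n_particles, dim))
        best = min(best, min(f(p) for p in pts))
    return best

def run_experiments(f, bound, dim=30,
                    particle_counts=(2, 3, 5, 10, 30), n_iters=100):
    # One best-value-found entry per particle count, as in the tables.
    return {n: random_search(f, dim, bound, n, n_iters)
            for n in particle_counts}

sphere = lambda x: float(np.sum(x ** 2))
results = run_experiments(sphere, bound=100.0)
```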
Comparison of the performance of two optimization methods (PSO and FSO) and the proposed method (SSO) on the Ackley function
# | Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|---|
1 | 10 | 100 | 1.8703e-005 | 7.1776e-007 | 3.8612e-009 |
2 | 20 | 100 | 1.8703e-005 | 3.5021e-007 | 4.7025e-012 |
3 | 30 | 100 | 1.8703e-005 | 2.8182e-008 | 1.2436e-014 |
4 | 40 | 100 | 1.8703e-005 | 2.5565e-009 | 3.7155e-015 |
5 | 5 | 100 | 5.2714e-004 | 9.4081e-006 | 2.1381e-006 |
6 | 3 | 100 | 2.5799 | 3.6774e-001 | 5.1442e-001 |
7 | 2 | 100 | 14.4370 | 1.0895 | 7.0286 |
8 | 50 | 1,000 | 1.003e-015 | 1.003e-015 | 5.0842e-22 |
Comparison between SSO, PSO, and FSO on the Sphere function
Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|
100 | 1,000 | 1.9240e-082 | 0 | 0 |
50 | 1,000 | 7.5243e-078 | 0 | 0 |
30 | 1,000 | 3.4892e-064 | 2.2378e-091 | 0 |
10 | 1,000 | 4.9240e-041 | 6.4182e-087 | 8.0016e-095 |
5 | 1,000 | 2.2926e-038 | 2.5051e-076 | 6.0971e-085 |
3 | 1,000 | 6.5324e-034 | 1.0036e-061 | 4.0176e-032 |
2 | 1,000 | 4.5519e-028 | 7.0584e-041 | 7.4218e-012 |
Comparison between SSO, PSO, and FSO on the Griewank function
Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|
100 | 1,000 | 2.4192e-003 | 2.0008e-009 | 7.5301e-012 |
50 | 1,000 | 7.7481e-002 | 1.7812e-007 | 1.4704e-011 |
30 | 1,000 | 3.4521e-002 | 3.2304e-006 | 3.3501e-009 |
10 | 1,000 | 1.0081e-002 | 1.4102e-005 | 4.8516e-008 |
5 | 1,000 | 1.2047e-001 | 2.5051e-005 | 4.0072e-003 |
3 | 1,000 | 2.0489e-001 | 6.1099e-004 | 1.0051e-002 |
2 | 1,000 | 2.1105e-001 | 4.0529e-004 | 1.0063e-002 |
SSO with two particles performs well on the Griewank function. It behaves better than the other two methods for fewer than 100 iterations, but beyond 100 iterations, FSO escapes better from local minima.
Comparison between SSO, PSO, and FSO on the Rastrigin function
Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|
100 | 1,000 | 1.08254e-004 | 7.3841e-009 | 3.3557e-015 |
50 | 1,000 | 2.3961e-002 | 1.0014e-006 | 7.2471e-013 |
30 | 1,000 | 8.73 | 4.5106e-005 | 8.4519e-012 |
10 | 1,000 | 10.26 | 9.0045e-005 | 1.0041e-009 |
5 | 1,000 | 10.57 | 1.5067e-002 | 6.7581e-004 |
3 | 1,000 | 14.81 | 2.15 | 2.0076e-002 |
2 | 1,000 | 14.73 | 4.78 | 2.12 |
Comparison between SSO, PSO, and FSO on the Rosenbrock function
Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|
100 | 1,000 | 5.0158 | 8.4573e-001 | 4.1217e-004 |
50 | 1,000 | 11.2104 | 1.381 | 9.2483e-003 |
30 | 1,000 | 18.0025 | 3.4502 | 2.1172e-003 |
10 | 1,000 | 26.5891 | 4.8604 | 1.0043e-001 |
5 | 1,000 | 29.4508 | 8.955 | 1.2541 |
3 | 1,000 | 38.25 | 13.104 | 3.402 |
2 | 1,000 | 38.6271 | 13.8004 | 4.815 |
As shown in Table 8, PSO produces poor results, while FSO with 100 particles gives a fairly good response. SSO has been able to traverse different areas of this function and recognize different minimum points. Appointing local leaders and using aimless particles o_{i} greatly increase the chance of finding the minimum points.
Comparison between SSO, PSO, and FSO on the Perm function
Particle | Iteration | PSO_{gbest} | FSO_{gbest} | SSO_{gbest} |
---|---|---|---|---|
100 | 1,000 | 3.4573e-003 | 2.0792e-004 | 1.0102e-004 |
50 | 1,000 | 9.1824e-002 | 4.0482e-004 | 2.5724e-004 |
30 | 1,000 | 3.0045e-001 | 1.6614e-003 | 7.8254e-004 |
20 | 1,000 | 8.4793e-001 | 5.3554e-003 | 4.1047e-003 |
15 | 1,000 | 0.14 | 7.0008e-002 | 8.4471e-003 |
10 | 1,000 | 1.004 | 9.7329e-002 | 2.5711e-003 |
5 | 1,000 | 3.024 | 1.3381e-001 | 6.1049e-002 |
Convergence speed on the Perm function is very good: in fewer than 200 iterations, SSO was able to reach considerably good minima. The three methods share a common weakness beyond 400 iterations: all three got stuck in local minima and could not free themselves by the end. FSO did not have a good start, behaving worse than PSO for fewer than 100 iterations, but beyond 100 iterations it found good minima.
Global best has been used so far to compare the proposed method with other optimization methods; however, this topology performs poorly on multimodal functions [77].
Local best PSO, or lbest PSO, uses a ring social network topology where smaller neighborhoods are defined for each particle. The social component reflects the information exchanged within the neighborhood of the particle, reflecting local knowledge of the environment. With reference to the velocity equation, the social contribution to particle velocity is proportional to the distance between a particle and the best position found by the neighborhood of particles [78].
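A minimal sketch of the ring neighborhood underlying lbest PSO, assuming minimization and a neighborhood of k particles on each side (the exact neighborhood size is not specified here, so k is a parameter):

```python
import numpy as np

def ring_neighbors(i, n, k=1):
    # Indices of particle i's ring neighborhood: itself plus k particles
    # on each side, wrapping around the swarm.
    return [(i + d) % n for d in range(-k, k + 1)]

def lbest_positions(positions, fitness, k=1):
    # For each particle, the best position found within its ring
    # neighborhood (lower fitness is better). This is the "local best"
    # that replaces the single global best in the velocity update.
    n = len(fitness)
    out = np.empty_like(positions)
    for i in range(n):
        nb = ring_neighbors(i, n, k)
        out[i] = positions[nb[int(np.argmin(fitness[nb]))]]
    return out
```

Because information spreads only one neighborhood per step, a ring topology slows convergence but preserves diversity, which is why it tends to help on multimodal functions.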
Comparison between SSO, PSO, and FSO using the local best (lbest) topology
Benchmark | | PSO_{Lbest} | FSO_{Lbest} | SSO_{Lbest} |
---|---|---|---|---|
Ackley | Mean | 7.547e-002 | 2.84e-004 | 4.841e-007 |
 | Min | 7.99e-015 | 3.045e-019 | 7.172e-024 |
 | Max | 1.5017 | 5.21e-001 | 1.842 |
Griewank | Mean | 9.399e-002 | 2.874e-002 | 1.895e-005 |
 | Min | 0 | 0 | 0 |
 | Max | 5.407e-001 | 1.472e-001 | 2.647e-001 |
Quadric | Mean | 1.039e-011 | 3.542e-014 | 1.521e-015 |
 | Min | 8.2216e-016 | 5.157e-021 | 2.708e-023 |
 | Max | 1.6906e-010 | 2.657e-012 | 1.952e-011 |
Quadric noise | Mean | 1.325e-002 | 2.238e-004 | 1.411e-004 |
 | Min | 4.8734e-004 | 2.416e-006 | 5.274e-007 |
 | Max | 2.9155e-002 | 1.286e-002 | 1.561e-002 |
Rastrigin | Mean | 54.2849 | 7.124e-003 | 1.604e-005 |
 | Min | 25.8689 | 2.206e-005 | 4.522e-010 |
 | Max | 85.5663 | 2.071e-001 | 1.055e-002 |
Rosenbrock | Mean | 3.2552 | 5.602e-004 | 8.741e-005 |
 | Min | 4.128e-005 | 1.0477e-008 | 8.525e-010 |
 | Max | 19.091 | 2.255e-002 | 2.581e-002 |
Sphere | Mean | 5.514e-160 | 4.117e-171 | 2.11e-185 |
 | Min | 1.3621e-177 | 2.554e-180 | 0 |
 | Max | 2.757e-158 | 1.478e-142 | 3.344e-161 |
According to the results of Table 10, the SSO method, under the lbest topology, outperforms both FSO and PSO. The reasons are its local search of the Sphere function as well as its local leaders, which keep the particles from converging prematurely and from getting stuck in local optima. SSO has found more optima in the multimodal functions and achieved better results. A notable point about SSO is that its local-best results improved over its global-best results; this is an attribute of SSO. The FSO results have also been noticeable.
Different PSO variants used in the comparison
Algorithm | Year | Topology | Parameters setting | Reference |
---|---|---|---|---|
GPSO | 1998 | Global star | w: 0.9−0.4, c_{1}, c_{2} = 2 | [23] |
LPSO | 2002 | Local ring | w: 0.9−0.4, c_{1}, c_{2} = 2 | [54] |
VPSO | 2002 | Local von Neumann | w: 0.9−0.4, c_{1}, c_{2} = 2 | [55] |
FIPS | 2004 | Local URing | \( \chi = 0.729,\sum {c_{i} = 4.1} \) | [58] |
HPSO-TVAC | 2004 | Global star | w: 0.9−0.4, c_{1} = 2.5 – 0.5, c_{2} = 0.5 – 2.5 | [37] |
DMS-PSO | 2005 | Dynamic multiswarm | w: 0.9−0.2, c_{1} = c_{2} = 2, m = 3, R = 5 | [42] |
CLPSO | 2006 | Comprehensive learning | w: 0.9−0.2, c = 1.49445, m = 7 | [59] |
OPSO | 2008 | Orthogonal particle swarm | w: 0.9−0.4, c_{1} = c_{2} = 0.2, V_{max} = 0.5*range | [87] |
APSO | 2009 | Adaptive swarm | Adaptation of the inertia weight | [50] |
OLPSO | 2010 | Orthogonal learning particle swarm | w: 0.9−0.4, c = 2, G = 5, V_{max} = 0.2*range | [53] |
Comparison between SSO and optimization several methods
Algorithm | Sphere | Rosenbrock | Ackley | Griewank | Rastrigin | Schwefel |
---|---|---|---|---|---|---|
GPSO | 1.98e-053 | 28.1 | 1.15e-014 | 2.37e-002 | 8.68 | −10,090.16 |
LPSO | 4.77e-029 | 21.8627 | 1.85e-014 | 1.10e-02 | 7.25 | −9,628.35 |
VPSO | 5.11e-038 | 37.6469 | 1.40e-014 | 1.31e-02 | 8.07 | −9,845.27 |
FIPS | 3.21e-030 | 22.5387 | 7.69e-015 | 9.04e-04 | 10.92 | −10,113.8 |
HPSO-TVAC | 3.38e-041 | 13 | 2.06e-010 | 1.07e-02 | 3.71 | −10,868.57 |
DMS-PSO | 3.85e-054 | 32.3 | 8.52e-015 | 1.31e-02 | 6.42 | −9,593.33 |
CLPSO | 1.89e-019 | 11 | 2.01e-012 | 6.45e-13 | 6.64e-011 | −12,557.65 |
OPSO | 6.45e-018 | 49.61 | 6.23e-009 | 2.29e-03 | 6.97 | −8,402.53 |
APSO | 1.45e-150 | 2.84 | 1.11e-014 | 1.67e-02 | 1.01e-14 | −12,569.5 |
OLPSO-G | 4.12e-054 | 21.52 | 7.98e-015 | 4.83e-03 | 1.07 | −9,821.74 |
OLPSO-L | 1.11e-038 | 1.26 | 4.14e-015 | 0 | 0 | −12,150.63 |
SSO | 0 | 2.4373e-001 | 4.7025e-012 | 4.8516e-008 | 1.8104e-010 | −12,569.5 |
Best method | SSO | SSO | OLPSO-L | OLPSO-L | OLPSO-L | SSO&APSO |
Algorithm | Schwefel’s P2.22 | Quadric | Step | Quadric noise | Perm | N_Rastrigin | Generalized Penalized |
---|---|---|---|---|---|---|---|
GPSO | 2.51e-034 | 6.45e-002 | 0 | 7.77e-003 | 1.02e-001 | 15.5 | 1.04e-002 |
LPSO | 2.03e-020 | 18.6 | 0 | 1.49e-002 | 1.41e-002 | 30.4 | 2.18e-030 |
VPSO | 6.29e-027 | 1.44 | 0 | 1.08e-002 | 12.5 | 21.33 | 3.46e-003 |
FIPS | 1.32e-017 | 0.77 | 0 | 2.55e-003 | 5.68e-001 | 35.91 | 1.22e-031 |
HPSO-TVAC | 6.90e-023 | 2.89e-007 | 0 | 5.54e-002 | 2.02e-002 | 1.83 | 7.07e-030 |
DMS-PSO | 2.61e-029 | 47.5 | 0 | 1.10e-002 | 2.78 | 32.8 | 2.05e-032 |
CLPSO | 1.01e-013 | 395 | 0 | 3.92e-003 | 4.05e-001 | 1.67e-002 | 1.59e-021 |
OPSO | 1.26e-010 | 2.44e-002 | 0 | 4.87e-002 | 2.33e-002 | 2.49e-006 | 1.56e-019 |
APSO | 5.15e-084 | 1.13e-010 | 0 | 4.66e-003 | 2.94e-003 | 4.14e-016 | 3.27e-031 |
OLPSO-G | 9.85e-030 | 5.59e-006 | 0 | 6.21e-003 | 1.28 | 1.05e-011 | 1.59e-032 |
OLPSO-L | 7.67e-022 | 1.56e-001 | 0 | 1.32e-002 | 5.31e-002 | 6.32e-009 | 1.57e-032 |
SSO | 1.58e-078 | 4.16e-015 | 0 | 2.86e-003 | 1.01e-004 | 6.04e-019 | 1.84e-031 |
Best method | APSO | SSO | # | FIPS | SSO | SSO | OLPSO-L |
Algorithm | Rotated Schwefel | Rotated Rastrigin | Rotated Ackley | Rotated Griewank | Shifted Rosenbrock | Shifted Rastrigin |
---|---|---|---|---|---|---|
GPSO | 4.61e-003 | 60.02 | 1.93 | 1.80e-002 | 427.93 | −223.18 |
LPSO | 4.50e-003 | 53.36 | 1.55 | 1.68e-003 | 432.33 | −234.95 |
VPSO | 4.29e-003 | 71.05 | 2.56e-002 | 4.91e-003 | 501.29 | −284.39 |
FIPS | 4.41e-003 | 1.50e-02 | 3.16e-007 | 1.28e-008 | 424.83 | −245.77 |
HPSO-TVAC | 5.32e-003 | 52.9 | 9.29 | 9.26e-003 | 494.2 | −318.33 |
DMS-PSO | 4.04e-003 | 41.97 | 2.42e-014 | 1.02e-002 | 502.51 | −303.17 |
CLPSO | 4.39e-003 | 87.14 | 5.91e-005 | 7.96e-005 | 403.07 | −330 |
OPSO | 4.48e-003 | 63.78 | 1.49e-008 | 1.28e-003 | 2.45e+007 | −284.11 |
APSO | 2.98e-003 | 51.78 | 6.41e-012 | 2.25e-008 | 431.47 | −314.21 |
OLPSO-G | 4.00e-003 | 46.09 | 7.69e-015 | 1.68e-003 | 424.75 | −328.57 |
OLPSO-L | 3.13e-003 | 53.35 | 4.28e-015 | 4.19e-08 | 415.94 | −330 |
SSO | 3.11e-003 | 41.02 | 1.08e-14 | 1.93e-011 | 403.48 | −330 |
Best method | APSO | SSO | OLPSO-L | SSO | SSO | CLPSO&OLPSO-L&SSO |
The ranking of different methods in every benchmark function
f_{1} | f_{2} | f_{3} | f_{4} | f_{5} | f_{6} |
---|---|---|---|---|---|
SSO | SSO | OLPSO_L | OLPSO_L | OLPSO_L | SSO(1) |
APSO | OLPSO_L | FIPS | CLPSO | APSO | APSO(1) |
DMS-PSO | APSO | OLPSO_G | SSO | CLPSO | CLPSO |
OLPSO_G | CLPSO | DMS_PSO | FIPS | SSO | OLPSO_L |
GPSO | HPSO | APSO | OPSO | OLPSO_G | HPSO |
HPSO | OLPSO_G | GPSO | OLPSO_G | HPSO | FIPS |
OLPSO_L | LPSO | VPSO | HPSO | DMS-PSO | GPSO |
VPSO | FIPS | LPSO | LPSO | OPSO | VPSO |
FIPS | GPSO | CLPSO | VPSO(8) | LPSO | OLPSO_G |
LPSO | DMS_PSO | SSO | DMS_PSO(8) | VPSO | LPSO |
CLPSO | VPSO | HPSO | APSO | GPSO | DMS_PSO |
OPSO | OPSO | OPSO | GPSO | FIPS | OPSO |
f_{7} | f_{8} | f_{9} | f_{10} | f_{11} | f_{12} |
---|---|---|---|---|---|
APSO | SSO | All equals | FIPS | SSO | SSO |
SSO | APSO | … | SSO | APSO | APSO |
GPSO | HPSO | … | CLPSO | LPSO | OLPSO_G |
OLPSO_G | OLPSO_G | … | APSO | HPSO | OLPSO_L |
DMS_PSO | OPSO | … | OLPSO_G | OPSO | OPSO |
VPSO | GPSO | … | GPSO | OLPSO_L | CLPSO |
HPSO | OLPSO_L | … | VPSO | GPSO | HPSO |
OLPSO_L | FIPS | … | DMS_PSO | CLPSO | GPSO |
LPSO | VPSO | … | OPSO | FIPS | VPSO |
FIPS | LPSO | … | OLPSO_L | OLPSO_G | LPSO |
CLPSO | DMS_PSO | … | LPSO | DMS_PSO | DMS_PSO |
OPSO | CLPSO | … | HPSO | VPSO | FIPS |
f_{13} | f_{14} | f_{15} | f_{16} | f_{17} | f_{18} |
---|---|---|---|---|---|
APSO | SSO | OLPSO_L | SSO | CLPSO | SSO(1) |
SSO | DMS_PSO | OLPSO_G | FIPS | SSO | CLPSO(1) |
OLPSO_L | OLPSO_G | SSO | APSO | OLPSO_L | OLPSO_L(1) |
OLPSO_G | APSO | DMS_PSO | OLPSO_L | OLPSO_G | OLPSO_G |
DMS_PSO | HPSO | APSO | CLPSO | FIPS | HPSO |
VPSO | OLPSO_L | OPSO | OPSO | GPSO | APSO |
CLPSO | LPSO | FIPS | OLPSO_G(7) | APSO | DMS_PSO |
FIPS | GPSO | CLPSO | LPSO(7) | LPSO | VPSO |
OPSO | OPSO | VPSO | VPSO | HPSO | OPSO |
LPSO | VPSO | LPSO | HPSO | VPSO | FIPS |
GPSO | CLPSO | GPSO | DMS_PSO | DMS_PSO | LPSO |
HPSO | FIPS | HPSO | GPSO | OPSO | GPSO |
Various behaviors of the eleven PSO methods are seen on the Ackley function. The best result is achieved by OLPSO-L, while OLPSO-G, FIPS, and DMS-PSO achieved almost the same results. HPSO and OPSO got trapped in local optima and obtained the worst results. For fewer than 250 iterations, SSO had the fastest convergence among the methods, but its particles were trapped in a local optimum and could not escape until iteration 600. At iteration 600 the particles escaped and reached a better result, but at iteration 700 they got trapped in another local optimum and failed to improve by the end of the run. Some methods such as APSO show behavior similar to SSO but performed better overall.
The best performance on the Griewank function has been achieved by OLPSO-L, CLPSO, and SSO, respectively. The other methods have nearly identical performance; getting trapped in local optima is a serious problem for all of them, and they have been unable to achieve good results. For fewer than 300 iterations, SSO converges faster than the other PSO methods.
The best results on the non-continuous Rastrigin function were obtained by SSO. Its speed of convergence and its escape from local optima in the early iterations are clearly visible. This function is one of the difficult ones for the various PSO methods. The APSO and OLPSO-G methods also achieved good results.
The best result on the Rosenbrock function was obtained by SSO; however, in the initial iterations (fewer than 100), FIPS managed to reach a good optimum very quickly. Between 100 and 400 iterations, APSO performed better than the other methods, but unfortunately it was trapped in a local optimum and could not improve further. After iteration 400, SSO managed to escape from a local optimum and presented the best performance.
SSO has had the best performance on the rotated Griewank function. The APSO, FIPS, and OLPSO-L methods also achieved acceptable results. For fewer than 250 iterations, APSO converged faster than SSO and performed better, but after iteration 250, SSO became more efficient.
The ranking of different methods in three groups of benchmark functions
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Unimodal | OLPSO_L | SSO | APSO | OLPSO_G | CLPSO | HPSO-TVAC | FIPS | DMS-PSO | GPSO | VPSO | LPSO | OPSO |
Multimodal | SSO | APSO | OLPSO_G | OLPSO_L | GPSO | FIPS | OPSO | DMS-PSO | CLPSO | LPSO | VPSO HPSO_TVAC | |
Rotated_shifted | SSO | OLPSO_L | OLPSO_G | APSO | CLPSO | DMS-PSO | FIPS | VPSO | OPSO | LPSO HPSO_TVAC | GPSO |
According to the results of Table 14, in the unimodal functions, the OLPSO_L, the proposed method, and the APSO are placed first to third, respectively. In the multimodal functions, the proposed method, the APSO, and the OLPSO_G are placed first to third, respectively. In the rotated and shifted functions, the proposed method, the OLPSO_L, and the OLPSO_G are placed first to third, respectively.
5 Conclusion
In this paper, SSO, a new optimization method inspired by the PSO and FSO methods, has been presented. The method was derived by examining swallow swarms and their behavioral features. In this method, the particles can take on three distinct duties during the search: explorer particles, aimless particles, and leader particles. Every particle behaves differently depending on its position, and the particles interact with each other continuously. The method has particular strengths, such as high convergence speed on different functions and not getting stuck in local minima; if a particle does get stuck, assistance from local leader particles and/or aimless particles gives it a chance to escape. FSO and various kinds of PSO were implemented in MATLAB and, together with SSO, tested on different benchmark functions. On some functions, SSO produced a better-optimized response than the other methods; on others, it is rated second or third. It is not claimed in this research that SSO is the best optimization method, but it can be applied to optimize functions and problems in different engineering domains. Given the results achieved, it can be argued that it is one of the better swarm-intelligence optimization methods. The method should be examined and refined by other researchers. Using fuzzy logic to increase the flexibility of the method is part of future work.
References
- 1. Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Oxford University Press, New York
- 2. Dorigo M, Maniezzo V, Colorni A (1996) The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B 26(1):29–41
- 3. Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans Evol Comput 1(1):53–66
- 4. Dorigo M, Stützle T (2004) Ant colony optimization. MIT Press, Cambridge
- 5. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks. IEEE Press, Piscataway, pp 1942–1948
- 6. Clerc M (2007) Particle swarm optimization. ISTE Ltd., London
- 7. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization: an overview. Swarm Intell 1(1):33–57
- 8. Li XL (2003) A new intelligent optimization-artificial fish swarm algorithm. PhD thesis, Zhejiang University, China, June 2003
- 9. Jiang MY, Yuan DF (2006) Artificial fish swarm algorithm and its applications. In: Proceedings of the international conference on sensing, computing and automation (ICSCA’2006), Chongqing, China, 8–11 May 2006, pp 1782–1787
- 10. Xiao JM, Zheng XM, Wang XH (2006) A modified artificial fish-swarm algorithm. In: Proceedings of the IEEE 6th world congress on intelligent control and automation (WCICA’2006), Dalian, China, 21–23 June 2006, pp 3456–3460
- 11. Krishnanand KN, Ghose D (2005) Detection of multiple source locations using a glowworm metaphor with applications to collective robotics. In: Proceedings of IEEE swarm intelligence symposium. IEEE Press, Piscataway, pp 84–91
- 12. Krishnanand KN, Ghose D (2006) Glowworm swarm based optimization algorithm for multimodal functions with collective robotics applications. Multiagent Grid Syst 2(3):209–222
- 13. Krishnanand KN, Ghose D (2006) Theoretical foundations for multiple rendezvous of glowworm inspired mobile agents with variable local-decision domains. In: Proceedings of American control conference. IEEE Press, Piscataway, pp 3588–3593
- 14. Krishnanand KN, Ghose D (2009) Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell 3:87–124. doi:10.1007/s11721-008-0021-5
- 15. Dorigo M, Trianni V, Sahin E, Gross R, Labella TH, Baldassarre G, Nolfi S, Deneubourg J-L, Mondada F, Floreano D, Gambardella LM (2004) Evolving self-organizing behaviors for a swarm-bot. Auton Robots 17(2–3):223–245
- 16. Fronczek JW, Prasad NR (2005) Bio-inspired sensor swarms to detect leaks in pressurized systems. In: Proceedings of IEEE international conference on systems, man and cybernetics. IEEE Press, Piscataway, pp 1967–1972
- 17. Zarzhitsky D, Spears DF, Spears WM (2005) Swarms for chemical plume tracing. In: Proceedings of IEEE swarm intelligence symposium. IEEE Press, Piscataway, pp 249–256
- 18. Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353
- 19. Heppner H, Grenander U (1990) A stochastic non-linear model for coordinated bird flocks. In: Krasner S (ed) The ubiquity of chaos. AAAS, Washington, pp 233–238
- 20. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, Nagoya, Japan. IEEE Press, Piscataway, pp 39–43
- 21. Eberhart RC, Simpson PK, Dobbins RW (1996) Computational intelligence PC tools. Academic Press, Boston
- 22. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization: an overview. Swarm Intell 1:33–57. doi:10.1007/s11721-007-0002-0
- 23. Shi Y, Eberhart RC (1998) A modified particle swarm optimizer. In: Proceedings of IEEE world congress on computational intelligence, pp 69–73
- 24. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
- 25. Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85(6):317–325
- 26. Yasuda K, Ide A, Iwasaki N (2003) Stability analysis of particle swarm optimization. In: Proceedings of the 5th metaheuristics international conference, pp 341–346
- 27. Kadirkamanathan V, Selvarajah K, Fleming PJ (2006) Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans Evol Comput 10(3):245–255
- 28. van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176(8):937–971
- 29. Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1945–1950
- 30. Shi Y, Eberhart RC (2001) Fuzzy adaptive particle swarm optimization. IEEE Congr Evol Comput 1:101–106
- 31. Eberhart RC, Shi Y (2001) Tracking and optimizing dynamic systems with particle swarms. In: Proceedings of IEEE congress on evolutionary computation, Seoul, Korea, pp 94–97
- 32. Clerc M (1999) The swarm and the queen: toward a deterministic and adaptive particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1951–1957
- 33. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
- 34. Eberhart RC, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 84–88
- 35. Kennedy J (1997) The particle swarm: social adaptation of knowledge. In: Proceedings of IEEE international conference on evolutionary computation, Indianapolis, IN, pp 303–308
- 36. Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of IEEE congress on evolutionary computation, Washington, DC, pp 1958–1962
- 37. Ratnaweera A, Halgamuge S, Watson H (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
- 38. Angeline PJ (1998) Using selection to improve particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, Anchorage, AK, pp 84–89
- 39. Juang CF (2004) A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans Syst Man Cybern B Cybern 34(2):997–1006
- 40. Chen YP, Peng WC, Jian MC (2007) Particle swarm optimization with recombination and dynamic linkage discovery. IEEE Trans Syst Man Cybern B Cybern 37(6):1460–1470
- 41. Andrews PS (2006) An investigation into mutation operators for particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, Vancouver, BC, Canada, pp 1044–1051
- 42. Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer with local search. In: Proceedings of IEEE congress on evolutionary computation, pp 522–528
- 43. Zhang WJ, Xie XF (2003) DEPSO: hybrid particle swarm with differential evolution operator. In: Proceedings of IEEE conference on systems, man, and cybernetics, pp 3816–3821
- 44. van den Bergh F, Engelbrecht AP (2004) A cooperative approach to particle swarm optimization. IEEE Trans Evol Comput 8(3):225–239
- 45. Ratnaweera A, Halgamuge S, Watson H (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
- 46. Parsopoulos KE, Vrahatis MN (2004) On the computation of all global minimizers through particle swarm optimization. IEEE Trans Evol Comput 8(3):211–224
- 47. Brits R, Engelbrecht AP, van den Bergh F (2002) A niching particle swarm optimizer. In: Proceedings of 4th Asia-Pacific conference on simulated evolution and learning, pp 692–696
- 48. Brits R, Engelbrecht AP, van den Bergh F (2007) Locating multiple optima using particle swarm optimization. Appl Math Comput 189(2):1859–1883
- 49. Parrott D, Li XD (2006) Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans Evol Comput 10(4):440–458
- 50. Zhan Z, Zhang J, Li Y, Chung HS-H (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 39(6):1362–1381
- 53.Zhan Z-H, Zhang J, Li Y, Shi Y-H (2011) Orthogonal learning particle swarm optimization. IEEE Trans Evol Comput 15(6):832–847 Google Scholar
- 54.Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of IEEE congress on evolution and computation. Honolulu, HI, pp 1671–1676Google Scholar
- 55.Kennedy J, Mendes R (2006) Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans Syst Man Cyber Part C Appl Rev 36(4):515–519CrossRefGoogle Scholar
- 56.Hu X, Eberhart RC (2002) Multiobjective optimization using dynamic neighborhood particle swarm optimization. In: Proceedings of IEEE congress on evolution and computation. Honolulu, HI, pp 1677–1681Google Scholar
- 57.Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of swarm intelligence symposium, pp 124–129Google Scholar
- 58.Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: Simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210CrossRefGoogle Scholar
- 59.Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295CrossRefGoogle Scholar
- 60.Li LX, Shao ZJ, Qian JX (2002) An Optimizing method based on autonomous animals: fish-swarm algorithm. Syst Eng Theory Pract 22(11):32–38Google Scholar
- 61.Zhang M, Shao C, Li F, Gan Y, Sun J (2006) Evolving neural network classifiers and feature subset using artificial fish swarm. In: Proceedings of the 2006 IEEE international conference on mechatronics and automation, June 25–28. Luoyang, ChinaGoogle Scholar
- 62.Jiang M, Wang Y, Rubio F, Yuan D (2007) Spread spectrum code estimation by artificial fish swarm algorithm. In: IEEE international symposium on intelligent signal processing (WISP)Google Scholar
- 63.Jiang MY, Yuan DF (2005) Wavelet threshold optimization with artificial fish swarm algorithm. In: Proceedings of the IEEE international conference on neural networks and brain, (ICNN&B’2005), Beijing, China, 13–15, pp 569–572Google Scholar
- 64.Gorenzel WP, Salmon TP (1994) Swallows. In: Prevention and control of wildlife damage
- 65.Lazareck LJ, Moussavi Z Adaptive swallowing sound segmentation by variance dimension
- 66.Turner A, Rose C (1989) Swallows and martins: an identification guide and handbook. Houghton Mifflin. ISBN 0-395-51174-7
- 67.Bijlsma RG, van den Brink B (2005) A Barn Swallow Hirundo rustica roost under attack: timing and risks in the presence of African Hobbies Falco cuvieri. Ardea 93(1):37–48
- 68.Saino N, Galeotti P, Sacchi R, Møller A (1997) Song and immunological condition in male barn swallows (Hirundo rustica). Behav Ecol 8(4):364–371. doi:10.1093/beheco/8.4.364
- 69.Brown CR (1986) Cliff swallow colonies as information centers. Science 234:83–85
- 70.Brown CR, Brown M, Shaffer ML (1991) Food-sharing signals among socially foraging cliff swallows. Anim Behav 42:551–564
- 71.Safran R (2010) Barn swallows: sexual and social behavior. Encycl Anim Behav 1:139–144 (Elsevier)
- 72.Snapp BD (1976) Colonial breeding in the barn swallow (Hirundo rustica) and its adaptive significance. Condor 78(3):471–480
- 73.Smith LC, Raouf SA, Brown MB, Wingfield JC, Brown CR (2005) Testosterone and group size in cliff swallows: testing the "challenge hypothesis" in a colonial bird. Horm Behav 47:76–82
- 74.McCarty JP, Winkler DW (1999) Foraging ecology and diet selectivity of tree swallows feeding nestlings. Condor 101:246–254
- 75.Whitley D, Rana S, Dzubera J, Mathias KE (1996) Evaluating evolutionary algorithms. Artif Intell 85(1–2):245–276
- 76.Salomon R (1996) Reevaluating genetic algorithm performance under coordinate rotation of benchmark functions. BioSystems 39:263–278
- 77.Esquivel SC, Coello CAC (2003) On the use of particle swarm optimization with multimodal functions. IEEE Congr Evol Comput 2:1130–1136
- 78.Engelbrecht AP (2005) Fundamentals of computational swarm intelligence. Wiley, New York
- 79.Esmin AAA, Lambert-Torres G, Alvarenga GB (2006) Hybrid evolutionary algorithm based on PSO and GA mutation. In: Sixth international conference on hybrid intelligent systems (HIS '06)
- 80.Settles M, Soule T (2005) Breeding swarms: a GA/PSO hybrid. In: GECCO '05: proceedings of the 2005 conference on genetic and evolutionary computation, pp 161–168
- 81.Meng Y, Kazeem O (2007) A hybrid ACO/PSO control algorithm for distributed swarm robots. In: Proceedings of the 2007 IEEE swarm intelligence symposium (SIS 2007)
- 82.Gomez-Cabrero D, Ranasinghe DN (2005) Fine-tuning the ant colony system algorithm through particle swarm optimization. Technical report TR07-2005, Departamento de Estadistica e Investigacion Operativa, Universitat de Valencia, Burjassot, Spain
- 83.Chen H, Wang S, Li J, Li Y (2007) A hybrid of artificial fish swarm algorithm and particle swarm optimization for feedforward neural network training. In: 2007 international conference on intelligent systems and knowledge engineering (ISKE 2007)
- 84.Shi H, Bei Z (2008) Application of improved ant colony algorithm. In: 4th international conference on natural computation (ICNC '08)
- 85.Shi H, Bei Z (2009) A mixed ant colony algorithm for function optimization. In: Proceedings of the 21st annual international Chinese control and decision conference. IEEE Press, Piscataway, NJ, USA, pp 3919–3923
- 86.Mishra SK (2006) Performance of differential evolution and particle swarm methods on some relatively harder multi-modal benchmark functions. Available at SSRN: http://ssrn.com/abstract=937147
- 87.Ho S-Y, Lin H-S, Liauh W-H, Ho S-J (2008) OPSO: orthogonal particle swarm optimization and its application to task assignment problems. IEEE Trans Syst Man Cybern Part A 38(2):288–298
- 88.Berliner S (2004) The Birders Report. http://home.earthlink.net/~s.berliner/