Abstract
In previous work, we showed empirically that using \(6_{\pm 2}\) informants endows particle swarm optimization (PSO) with a better learning procedure than other informant counts. The resulting version, PSO6, which evolves new particles from six informants (neighbors), is more accurate than other existing PSO versions and keeps generating good particles for longer. Despite this advantage, PSO6 may be attracted to local basins, as suggested by its moderate performance on non-separable complex problems (a behavior typically observed in PSO versions). In this paper, we incorporate a local search procedure into PSO6 with the aim of correcting this disadvantage. We compare the performance of our proposal (PSO6-Mtsls) on a set of 40 benchmark functions against that of other PSO versions, as well as against the best recent proposals in the current state of the art (with and without local search). The results support our conjecture that a (quasi-)optimally informed PSO, hybridized with local search mechanisms, reaches a high success rate on a large number of complex (non-separable) continuous optimization functions.
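The informant-based update described above can be sketched as follows. This is a minimal, illustrative constriction-factor PSO (Clerc and Kennedy 2002) in which each particle learns from the best personal best among k = 6 randomly drawn informants; it is not the authors' exact PSO6 or its Mtsls hybrid, and all function names, bounds, and parameter values here are assumptions for the sketch:

```python
import random

def sphere(x):
    """Separable benchmark function; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso_k_informants(f, dim=10, swarm=30, k=6, iters=200, seed=1):
    """Toy PSO where each particle is guided by the best of k randomly
    chosen informants (k=6 mirrors the paper's setting). Constriction
    coefficients chi and c follow the standard values."""
    rng = random.Random(seed)
    chi, c = 0.7298, 1.49618
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]       # personal best positions
    Pf = [f(x) for x in X]      # personal best values
    for _ in range(iters):
        for i in range(swarm):
            # pick k informants at random; follow the best of them (or self)
            informants = rng.sample(range(swarm), k)
            g = min(informants + [i], key=lambda j: Pf[j])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = chi * (V[i][d]
                                 + c * r1 * (P[i][d] - X[i][d])
                                 + c * r2 * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:      # update personal best
                Pf[i], P[i] = fx, X[i][:]
    return min(Pf)
```

In the hybrid proposed in the paper, a local search procedure would additionally be applied to promising particles between such swarm iterations; the sketch omits that step.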
Notes
MALLBA Library, Directory Mallba/rep/PSO/soco2010
The complete information about the featured algorithms in SOCO'10 is available at http://sci2s.ugr.es/EAMHCO/.
References
Alba E, Luque G, García-Nieto J, Ordoñez G, Leguizamón G (2007) MALLBA: a software library to design efficient optimisation algorithms. Int J Innovative Comput Appl (IJICA) 1(1):74–85
Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. IEEE Congr Evol Comput 2:1769–1776
Chen J, Qin Z, Liu Y, Lu J (2005) Particle swarm optimization with local search. In: Neural Networks and Brain, 2005. ICNN&B '05. International Conference on, vol 1, pp 481–484
Clerc M, Kennedy J (2002) The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
Das S, Koduru P, Gui M, Cochran M, Wareing A, Welch SM, Babin BR (2006) Adding local search to particle swarm optimization. In: IEEE Congress on Evolutionary Computation, CEC 2006, pp 428–433
dos Santos Coelho L, Mariani VC (2006) Particle swarm optimization with quasi-Newton local search for solving economic dispatch problem. In: IEEE International Conference on Systems, Man and Cybernetics, SMC 2006, vol 4, pp 3109–3113
Eberhart R, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation CEC'00, vol 1, La Jolla, pp 84–88
El Dor A, Clerc M, Siarry P (2012) A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization. Comput Optim Appl 53(1):271–295
García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005. J Heuristics 15(6):617–644
García-Nieto J, Alba E (2011) Empirical computation of the quasi-optimal number of informants in particle swarm optimization. In: Proceedings of the 13th annual conference on Genetic and evolutionary computation, GECCO ’11. ACM, New York, pp 147–154
García-Nieto J, Alba E (2012) Why six informants is optimal in PSO. In: Proceedings of the fourteenth international conference on Genetic and evolutionary computation conference, GECCO ’12. ACM, New York, pp 25–32
Herrera F, Lozano M (2009) Workshop for evolutionary algorithms and other metaheuristics for continuous optimization problems: a scalability test. Technical report, SCI2S, University of Granada, Pisa
Herrera F, Lozano M, Molina D (2010) Test suite for the special issue of soft computing on scalability of evolutionary algorithms and other metaheuristics for large scale continuous optimization problems. Technical report, SCI2S, University of Granada, Spain
Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann Pub, San Francisco
Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the Congress of Evolutionary Computation CEC’02, vol 2. IEEE Computer Society, Washington, DC, pp 1671–1676
Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybern Part B Cybern 42(3):627–646
Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer with local search. In: The IEEE Congress on Evolutionary Computation, CEC 2005, vol 1, pp 522–528
Liao T, Montes de Oca MA, Aydin D, Stützle T, Dorigo M (2011) An incremental ant colony algorithm with local search for continuous optimization. In: Proceedings of the 13th annual conference on Genetic and evolutionary computation, GECCO ’11. ACM, New York, pp 125–132
Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210
Mohais AS, Mendes R, Ward C, Posthoff C (2005) Neighborhood re-structuring in particle swarm optimization. In: LNCS 3809. Proceedings of the 18th Australian Joint Conference on Artificial Intelligence. Springer, New York, pp 776–785
Monson CK, Seppi KD (2005) Exposing origin-seeking bias in PSO. In: Proceedings of the 2005 conference on Genetic and evolutionary computation, GECCO ’05. ACM, New York, pp 241–248
Montes de Oca MA, Aydin D, Stützle T (2011) An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms. Soft Comput 15:2233–2255
Montes de Oca MA, Stützle T, Van den Enden K, Dorigo M (2011) Incremental social learning in particle swarms. IEEE Trans Syst Man Cybern Part B 41(2):368–384
Muelas S, La Torre A, Peña J (2009) A memetic differential evolution algorithm for continuous optimization. In: Proceedings of the 2009 Ninth International Conference on Intelligent Systems Design and Applications, ISDA ’09. IEEE Computer Society, Washington, DC, pp 1080–1084
Muelas S, Peña J, La Torre A, Robles V (2010) A new initialization procedure for the distributed estimation of distribution algorithms. Soft Comput 15(4):713–720
Müller CL, Baumgartner B, Sbalzarini IF (2009) Particle swarm CMA evolution strategy for the optimization of multi-funnel landscapes. In: Proceedings of the Eleventh conference on Congress on Evolutionary Computation, CEC'09. IEEE Press, Piscataway, pp 2685–2692
Powell MJD (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput J 7(2):155–162
PSO-Central-Group (2011) Standard PSO 2006, 2007, and 2011. Technical Report [online] http://www.particleswarm.info/. Particle Swarm Central, Jan 2011
Qu B, Liang J, Suganthan P (2012) Niching particle swarm optimization with local search for multi-modal optimization. Inform Sci 197:131–143
Sheskin DJ (2007) Handbook of parametric and nonparametric statistical procedures. Chapman & Hall/CRC, New York
Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, Tiwari S (2005) Problem definitions and evaluation criteria for the CEC’05 special session on real-parameter optimization. Technical Report KanGAL Report 2005005, Nanyang Technological University, Singapore and Kanpur, India
Sutton AM, Whitley D, Lunacek M, Howe A (2006) PSO and multi-funnel landscapes: how cooperation might limit exploration. In: Proceedings of the 8th annual conference on Genetic and evolutionary computation, GECCO ’06. ACM, pp 75–82
Tang K, Yao X, Suganthan PN, MacNish C, Chen YP, Chen CM, Yang Z (2007) Benchmark functions for the CEC’08 special session and competition on large scale global optimization. Technical report, Nature Inspired Computation and Applications Laboratory, USTC, China, November
Thain D, Tannenbaum T, Livny M (2005) Distributed computing in practice: the condor experience. Concurr Pract Exp 17(2–4):323–356
Tseng L, Chen C (2008) Multiple trajectory search for large scale global optimization. In: IEEE Congress on Evolutionary Computation, CEC'2008. IEEE Press, pp 3052–3059
Tseng L, Chen C (2009) Multiple trajectory search for unconstrained/constrained multi-objective optimization. In: Proceedings of the Eleventh conference on Congress on Evolutionary Computation, CEC’09. IEEE Press, Piscataway, pp 1951–1958
Acknowledgments
The authors acknowledge funds from the Spanish Ministry of Economy and Competitiveness (MEC) and FEDER under contract TIN2011-28194 (RoadMe project, http://roadme.lcc.uma.es). This work is also partially funded by project number 8.06/5.47.4142 in collaboration with the VSB-Technical University of Ostrava.
Communicated by V. Loia.
García-Nieto, J., Alba, E. Hybrid PSO6 for hard continuous optimization. Soft Comput 19, 1843–1861 (2015). https://doi.org/10.1007/s00500-014-1368-8