Science China Information Sciences, Volume 53, Issue 5, pp 980–989

An adaptive hybrid optimizer based on particle swarm and differential evolution for global optimization

Research Papers

Abstract

This paper presents extensive experiments on DEPSO, a hybrid optimization algorithm we recently developed by combining the advantages of two powerful population-based metaheuristics: differential evolution (DE) and particle swarm optimization (PSO). The hybrid optimizer adapts the evolution method applied to each individual on the fly by means of statistical learning. Two primary parameters of the algorithm, its learning period and population size, are analyzed empirically. The dynamics of the hybrid optimizer are revealed by tracking and analyzing the relative success ratio of PSO versus DE in the optimization of several typical problems. The comparison between the proposed DEPSO and the competitors from our previous research is enriched by using multiple rotated functions. Benchmark tests, including a scalability test, validate that DEPSO is competent for the global optimization of numerical functions owing to its high optimization quality and wide applicability.
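To make the adaptation scheme concrete, the following is a minimal Python sketch of a DEPSO-style hybrid: each individual is evolved by either a canonical PSO update or a DE/rand/1/bin step, and the probability of choosing PSO is re-estimated at the end of every learning period from the recent success counts of the two operators. This is an illustrative sketch, not the authors' exact algorithm; the test function, the success criterion, and all parameter values (population size, PSO coefficients, F, CR, learning period) are assumptions.

```python
# DEPSO-style adaptive hybrid: a minimal, self-contained sketch (assumed
# parameters throughout; not the authors' exact method).
import numpy as np

def sphere(x):
    """Assumed test function: the sphere model, minimum 0 at the origin."""
    return float(np.sum(x * x))

def depso_sketch(f=sphere, dim=10, pop=40, gens=500, learning_period=20,
                 w=0.729, c1=1.494, c2=1.494, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (pop, dim))            # positions
    v = np.zeros((pop, dim))                       # PSO velocities
    pbest = x.copy()
    pbest_f = np.array([f(xi) for xi in x])
    g_idx = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]
    p_pso = 0.5                                    # probability of a PSO update
    succ = {"pso": 1, "de": 1}                     # Laplace-smoothed success counts
    used = {"pso": 2, "de": 2}
    for g in range(gens):
        for i in range(pop):
            use_pso = rng.random() < p_pso
            if use_pso:                            # canonical PSO update
                r1, r2 = rng.random(dim), rng.random(dim)
                v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                        + c2 * r2 * (gbest - x[i]))
                trial = np.clip(x[i] + v[i], lo, hi)
            else:                                  # DE/rand/1/bin step
                a, b, c = rng.choice(
                    [j for j in range(pop) if j != i], 3, replace=False)
                mutant = x[a] + F * (x[b] - x[c])
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True    # guarantee one mutated gene
                trial = np.clip(np.where(cross, mutant, x[i]), lo, hi)
            ft = f(trial)
            key = "pso" if use_pso else "de"
            used[key] += 1
            if ft < pbest_f[i]:                    # success = personal-best improvement
                succ[key] += 1
                pbest[i], pbest_f[i] = trial, ft
                if ft < gbest_f:
                    gbest, gbest_f = trial.copy(), ft
            x[i] = trial
        if (g + 1) % learning_period == 0:         # statistical-learning step:
            rp = succ["pso"] / used["pso"]         # re-estimate the selection
            rd = succ["de"] / used["de"]           # probability from success ratios
            p_pso = rp / (rp + rd)
            succ = {"pso": 1, "de": 1}
            used = {"pso": 2, "de": 2}
    return gbest, gbest_f

if __name__ == "__main__":
    best_x, best_f = depso_sketch()
    print(f"best objective value found: {best_f:.3e}")
```

The Laplace-smoothed counters keep both operators selectable even when one temporarily dominates, which mirrors the statistical-learning flavor of the adaptation described in the abstract.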

Keywords

global optimization; statistical learning; differential evolution; particle swarm optimization; hybridization; adaptation; rotated function

Copyright information

© Science China Press and Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. School of Automation, Beijing Institute of Technology, Beijing, China
  2. Key Laboratory of Complex System Intelligent Control and Decision, Ministry of Education, Beijing, China
