
Running-Time Analysis of Particle Swarm Optimization with a Single Particle Based on Average Gain

  • Wu Hongyue
  • Huang Han
  • Yang Shuling
  • Zhang Yushan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10593)

Abstract

Running-time analysis of particle swarm optimization (PSO) is a challenging problem in the field of swarm intelligence, especially for PSO variants whose solutions and velocities are encoded continuously. In this study, the running time of particle swarm optimization with a single particle (PSO-SP) is analyzed. An elite selection strategy and a stochastic disturbance are incorporated into PSO-SP to improve its optimization capacity and to adjust the direction of the single particle's velocity. The average gain model is applied to analyze the running time of PSO-SP in two situations: disturbance drawn from a uniform distribution and from a standard normal distribution. The theoretical results show that the running time of PSO-SP with stochastic disturbance is exponential under both distributions. Moreover, for the same accuracy and the same fitness difference value, the running time of PSO-SP with uniformly distributed disturbance is shorter than that with standard normally distributed disturbance.
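The abstract does not give the exact update equations, so the following is a minimal illustrative sketch in Python of a single-particle PSO with an elite (best-so-far) selection step and an additive stochastic disturbance on the velocity. All names and parameters here (pso_sp, w, c, disturbance) are assumptions for illustration, not the paper's notation; the two disturbance options correspond to the uniform and standard normal cases discussed above.

import numpy as np

def pso_sp(fitness, dim, bounds, steps=1000, w=0.7, c=1.5,
           disturbance="uniform", seed=0):
    # Illustrative single-particle PSO (assumed formulation, not the paper's exact rules).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, dim)           # current position
    v = np.zeros(dim)                      # current velocity
    best_x, best_f = x.copy(), fitness(x)  # elite (best-so-far) solution

    for _ in range(steps):
        # Stochastic disturbance: uniform on [-1, 1] or standard normal.
        if disturbance == "uniform":
            r = rng.uniform(-1.0, 1.0, dim)
        else:
            r = rng.standard_normal(dim)
        # Velocity is pulled toward the elite solution and perturbed by r.
        v = w * v + c * rng.random(dim) * (best_x - x) + r
        x = np.clip(x + v, lo, hi)
        f = fitness(x)
        # Elite selection: keep the new position only if it improves fitness.
        if f < best_f:
            best_x, best_f = x.copy(), f
    return best_x, best_f

# Usage example on the sphere function.
if __name__ == "__main__":
    sol, val = pso_sp(lambda z: float(np.sum(z ** 2)), dim=10, bounds=(-5.0, 5.0))
    print(sol, val)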

Keywords

Swarm intelligence · Particle swarm optimization · Running-time analysis · Average gain model

Notes

Acknowledgement

This work is supported by the National Natural Science Foundation of China (61370102), the Guangdong Natural Science Funds for Distinguished Young Scholar (2014A030306050), the Ministry of Education - China Mobile Research Funds (MCM20160206), and the Guangdong High-level Personnel of Special Support Program (2014TQ01X664).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Wu Hongyue (1)
  • Huang Han (1)
  • Yang Shuling (1)
  • Zhang Yushan (2)
  1. School of Software Engineering, South China University of Technology, Guangzhou, China
  2. School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou, China
