How Much Forcing Is Necessary to Let the Results of Particle Swarms Converge?

  • Bernd Bassimir
  • Manuel Schmitt
  • Rolf Wanka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8472)


In order to improve the behavior of Particle Swarm Optimization (PSO), the classical method is often extended by additional operations. Here, we are interested in how much “PSO” remains under such an extension, and how often the extension takes over the computation. We study the variant of PSO that applies random velocities (called forced moves) as soon as the so-called potential of the swarm falls below a certain bound. We show experimentally that the number of iterations in which the swarm actually deviates from the classical PSO behavior is small as long as the particles are sufficiently far away from any local optimum. As soon as the swarm comes close to a local optimum, the number of forced moves increases significantly and approaches a value that depends on the swarm size and the problem dimension, but not on the actual fitness function, an observation that can be used as a stopping criterion. Additionally, we provide an explanation for the observed phenomenon in terms of the swarm’s potential.
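The mechanism described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the per-dimension potential measure (here, summed velocity magnitudes plus distances to the global attractor), the threshold `delta`, and all parameter names are assumptions for the sake of the sketch; the paper's exact definitions and forced-move rule may differ.

```python
import random

def pso_with_forced_moves(f, dim, swarm_size=5, iters=300, delta=1e-7, seed=0):
    """Minimize f over [-5, 5]^dim with PSO; apply a forced (random) move in a
    dimension whenever the swarm's potential in that dimension drops below delta.
    Illustrative sketch only; potential definition and delta are assumptions."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    P = [x[:] for x in X]                      # local attractors (personal bests)
    pbest = [f(x) for x in X]
    g = P[min(range(swarm_size), key=lambda i: pbest[i])][:]  # global attractor
    w, c1, c2 = 0.72984, 1.496172, 1.496172    # common standard PSO parameters
    forced = 0                                 # counts forced moves performed
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                # Assumed potential in dimension d: total velocity magnitude
                # plus total distance of the swarm to the global attractor.
                phi_d = sum(abs(V[j][d]) + abs(g[d] - X[j][d])
                            for j in range(swarm_size))
                if phi_d < delta:
                    # Forced move: overwrite the velocity with a small random value.
                    V[i][d] = rng.uniform(-delta, delta)
                    forced += 1
                else:
                    # Classical PSO velocity update.
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (P[i][d] - X[i][d])
                               + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < f(g):
                    g = X[i][:]
    return g, forced
```

Counting `forced` over a window of iterations gives exactly the quantity the experiments track: it stays small far from local optima and rises sharply once the swarm closes in on one.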


Keywords: Particle Swarm, Local Optimum, Particle Swarm Optimization Algorithm, Global Attractor, Swarm Size





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Computer Science, University of Erlangen-Nuremberg, Erlangen, Germany
