
Population statistics for particle swarm optimization: Single-evaluation methods in noisy optimization problems


Abstract

Particle swarm optimization (PSO) is a metaheuristic whose quality of results deteriorates significantly on optimization problems subject to noise. The underlying reason for this deterioration is that noise hinders the ability of particles to distinguish good from bad solutions, leading them to suffer from deception, blindness and disorientation. A deceived particle is not attracted, even partially, to the true best solution in its neighborhood; a blinded particle misses an opportunity to improve upon its personal best solution; and a disoriented particle mistakenly prefers a worse solution. These conditions need to be addressed with noise mitigation mechanisms to prevent, or at least reduce, such deterioration. We use the term single-evaluation methods to refer to PSO algorithms that address the effect of noise without performing additional function evaluations. The first such algorithm was PSO with evaporation (PSO-E), which was proposed to reduce blindness in the swarm and has been reported to find better solutions than the regular PSO on different stochastic and dynamic optimization problems. However, PSO-E depends on an evaporation factor whose value must be determined empirically, and its swarm is always at risk of exhibiting divergent behaviour. In this article, we propose a method to determine the evaporation factor for PSO-E a priori, and we propose a new PSO with probabilistic updates (PSO-PU) to eliminate the risk of divergence. Additionally, we take a different approach and develop a new PSO with average neighborhoods (PSO-AN) that blurs the effect of noise and thereby reduces deception. Experiments on 20 large-scale benchmark functions subject to different levels of noise show that the regular PSO (which lacks a noise mitigation mechanism) generally finds better solutions than PSO-E and PSO-PU because their approaches cause too much disorientation. However, PSO-AN finds better solutions than the regular PSO thanks to the improved quality of its neighborhood best solutions, which partially attract the swarm towards better regions of the search space.
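To make the three single-evaluation variants concrete, the sketch below illustrates one plausible form of their update rules in Python, for a minimization problem with non-negative objective values. It is a reconstruction from the descriptions above rather than the authors' code: the exact evaporation rule, the probabilistic acceptance rule and the centroid neighborhood are assumptions, and the function names (evaporation_update, probabilistic_update, average_neighborhood) are hypothetical.

```python
import random
from typing import List, Sequence


def evaporation_update(pbest_value: float, sample: float, rho: float) -> float:
    # PSO-E-style update (sketch): worsen the remembered personal-best
    # objective value by the evaporation factor rho at each iteration, so
    # that a blinded particle eventually accepts a fresh evaluation.
    # Assumes minimization with non-negative objective values.
    pbest_value *= 1.0 + rho
    return min(pbest_value, sample)


def probabilistic_update(pbest_value: float, sample: float, p: float) -> float:
    # PSO-PU-style update (a hypothetical reading of "probabilistic
    # updates"): with a fixed probability p, accept the new sample
    # unconditionally; otherwise apply the usual greedy comparison. A
    # bounded acceptance probability avoids the runaway replacement of
    # personal bests that a too-aggressive evaporation factor can cause.
    if random.random() < p:
        return sample
    return min(pbest_value, sample)


def average_neighborhood(pbests: Sequence[Sequence[float]]) -> List[float]:
    # PSO-AN-style attractor (sketch): replace the single neighborhood best
    # with the centroid of the neighbors' personal-best positions, blurring
    # the effect of noise on the social component of the velocity update.
    dim = len(pbests[0])
    return [sum(pb[d] for pb in pbests) / len(pbests) for d in range(dim)]
```

Under this reading, low and high settings of rho and p would correspond to the PSO-LE/PSO-HE and PSO-LP/PSO-HP variants abbreviated in the appendix figures.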



Author information

Correspondence to Juan Rada-Vilela.

Communicated by V. Loia.

Appendices

Appendix A: Quality of results—sets \(\mathcal {A}\), \(\mathcal {E}\), \(\mathcal {B}\)

See Fig. 8.

Fig. 8

Quality of results. The boxplots represent the true objective values (left axis) of the best solutions found by the algorithms (bottom axis) at each level of noise (top axis) in all independent runs. The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. The boxplots are coloured from light to dark gray to ease the comparison. The benchmark functions are minimization problems; therefore, lower objective values indicate better solutions

Appendix B: Quality of results—sets \(\mathcal {C}\), \(\mathcal {D}\)

See Fig. 9.

Fig. 9

Quality of results. The boxplots represent the true objective values (left axis) of the best solutions found by the algorithms (bottom axis) at each level of noise (top axis) in all independent runs. The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. The boxplots are coloured from light to dark gray to ease the comparison. The benchmark functions are minimization problems; therefore, lower objective values indicate better solutions

Appendix C: Regular operation, blindness and disorientation

See Fig. 10.

Fig. 10

Regular operation, blindness and disorientation. The stacked barplots represent the average proportions (left axis) of regular operation (dark gray), blindness (medium gray) and disorientation (light gray) experienced by a particle for each algorithm (bottom axis) on the benchmark functions subject to levels of noise \(\sigma \in \{0.06, 0.12, 0.18, 0.24, 0.30\}\) (bars from left to right). The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. Larger proportions of regular operation and smaller ones of blindness and disorientation are better
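
The proportions in Fig. 10 can be read against the abstract's definitions of blindness and disorientation. As an illustration only, the following sketch shows how a single personal-best update attempt could be classified when the true (noiseless) objective values of the benchmark function are available; the helper below is hypothetical and not the authors' instrumentation.

```python
def classify_update(true_old: float, true_new: float,
                    noisy_old: float, noisy_new: float) -> str:
    # Classify one personal-best update attempt in a minimization problem.
    # true_*  : noiseless objective values (current personal best, new sample)
    # noisy_* : the corresponding noisy evaluations the particle actually sees
    accepted = noisy_new < noisy_old   # decision the particle makes
    improved = true_new < true_old     # decision a noiseless oracle would make
    if accepted and not improved:
        return "disorientation"        # mistakenly prefers a worse solution
    if not accepted and improved:
        return "blindness"             # misses an opportunity to improve
    return "regular operation"         # accepts or discards correctly
```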

Appendix D: Updates in regular operation

See Fig. 11.

Fig. 11

Regular updates and discards. The stacked barplots represent the average proportions (left axis) of regular updates (dark gray) and discards (light gray) experienced by a particle for each algorithm (bottom axis) on the benchmark functions subject to levels of noise \(\sigma \in \{0.00, 0.06, 0.12, 0.18, 0.24, 0.30\}\) (bars from left to right). The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. Larger proportions of regular updates are better

Appendix E: Causes of blindness

See Fig. 12.

Fig. 12

Causes of blindness. The stacked barplots represent the average proportions (left axis) of blindness caused by memory (dark gray) and the environment (light gray) in a particle for each algorithm (bottom axis) on the benchmark functions subject to levels of noise \(\sigma \in \{0.06, 0.12, 0.18, 0.24, 0.30\}\) (bars from left to right). The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN

Appendix F: Causes of disorientation

See Fig. 13.

Fig. 13

Causes of disorientation. The stacked barplots represent the average proportions (left axis) of disorientation caused by memory (dark gray) and the environment (light gray) in a particle for each algorithm (bottom axis) on the benchmark functions subject to levels of noise \(\sigma \in \{0.00, 0.06, 0.12, 0.18, 0.24, 0.30\}\) (bars from left to right). The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. Particles from the regular PSO and PSO-AN do not suffer from disorientation in the absence of noise

Appendix G: Effect of disorientation and blindness

See Fig. 14.

Fig. 14

Effect of disorientation and blindness. The stacked barplots represent the average proportions of weighted magnitudes (left axis) of deterioration caused by disorientation (dark gray) and hypothetical improvements missed by blindness (light gray) on the benchmark functions subject to the levels of noise \(\sigma \in \{0.00, 0.06, 0.12, 0.18, 0.24, 0.30\}\) (bars from left to right). The algorithms are abbreviated as (e) PSO-LE, (E) PSO-HE, (p) PSO-LP, (P) PSO-HP, (a) PSO, and (n) PSO-AN. Particles from the regular PSO and PSO-AN do not suffer from blindness or disorientation in the absence of noise

Appendix H: Optimization curves

See Fig. 15.

Fig. 15

Optimization curves. The plots represent the average objective values (left axis) of the personal best solutions over all the independent runs at each iteration (bottom axis) of PSO-LE (marked with bullet) and PSO-LP (no marks) on the benchmark functions subject to levels of noise \(\sigma _{00}\) (solid line), \(\sigma _{06}\) (long-dashed line), \(\sigma _{18}\) (short-dashed line), and \(\sigma _{30}\) (dotted line). The benchmark functions are minimization problems; therefore, lower objective values indicate better solutions


About this article


Cite this article

Rada-Vilela, J., Johnston, M. & Zhang, M. Population statistics for particle swarm optimization: Single-evaluation methods in noisy optimization problems. Soft Comput 19, 2691–2716 (2015). https://doi.org/10.1007/s00500-014-1438-y
