GuASPSO: a new approach to hold a better exploration–exploitation balance in PSO algorithm

  • Methodologies and Application

Abstract

This paper presents a new variant of the particle swarm optimization (PSO) algorithm, named the guided adaptive search-based particle swarm optimizer (GuASPSO). In this algorithm, the personal best (Pbest) particles are divided into a linearly decreasing number of clusters. The unique global best guide of a particle belonging to a given cluster is then computed as the weighted average of the best particles of the other clusters. Because the clustering process distributes the clustered particles well over the whole search space, a moderate distance is maintained between each particle and its unique global best guide, which helps the particles neither become trapped in local optima nor drift in a way that loses diversity in the search space. In this approach, the number of clusters is high in the early iterations and is gradually decreased as the iterations proceed, placing less stress on diversity and more on fitness so that the particles converge better to the optimal point. Maintaining this balance between the roles of the global and personal bests in attracting the particles, on the one hand, and between convergence and diversity, on the other, gives the proposed algorithm a better exploration–exploitation balance. To test the performance of GuASPSO, four popular meta-heuristic algorithms, namely the genetic algorithm, the gravitational search algorithm, the grey wolf optimizer, and the PSO algorithm, are employed for comparison, with 23 standard benchmark functions as the test beds. The experimental results validate GuASPSO as a robust, well-designed algorithm for handling various optimization problems.
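
To make the mechanism above concrete, the following is a minimal sketch of the guided-Gbest construction, not the authors' implementation: it substitutes k-means for the SOM clustering used in the paper, assumes cluster weights proportional to cluster size (the paper's weights \( W_{c}^{t} \) may be defined differently), assumes minimization, and uses a hypothetical linearly decreasing cluster-count schedule with parameters n_max and n_min in place of the paper's Eq. 4.

    import numpy as np
    from sklearn.cluster import KMeans  # stand-in for the SOM clustering used in the paper

    def n_clusters_at(t, t_max, n_max=10, n_min=2):
        # Assumed linearly decreasing cluster count; the paper's Eq. 4 (with k_max, k_min)
        # is not reproduced here, so n_max and n_min are hypothetical schedule parameters.
        return max(n_min, int(round(n_max - (n_max - n_min) * t / t_max)))

    def guided_gbest(pbest_pos, pbest_fit, n_clusters):
        # pbest_pos: (N, D) Pbest positions; pbest_fit: (N,) Pbest fitnesses (minimization assumed).
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit(pbest_pos).labels_
        cbest, weight = {}, {}
        for c in range(n_clusters):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            cbest[c] = pbest_pos[members[np.argmin(pbest_fit[members])]]  # Cbest of cluster c
            weight[c] = float(members.size)  # assumed weight: number of Pbests in the cluster
        gbest = np.empty_like(pbest_pos)
        for i in range(pbest_pos.shape[0]):
            # Weighted average over the OTHER clusters' best Pbests (fall back to all clusters).
            others = [c for c in cbest if c != labels[i]] or list(cbest)
            w = np.array([weight[c] for c in others])
            gbest[i] = np.average(np.stack([cbest[c] for c in others]), axis=0, weights=w)
        return gbest

In the velocity update, each particle i would then be attracted toward its own row gbest[i] instead of a single swarm-wide global best.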

Abbreviations

  • D: Number of dimensions of the optimization problem
  • N: Swarm size
  • t: Iteration number
  • w: Inertia weight (this and the acceleration terms below appear in the PSO update sketched after this list)
  • r_1: First random vector
  • r_2: Second random vector
  • c_1: Cognitive acceleration coefficient
  • c_2: Social acceleration coefficient
  • χ: Constriction coefficient
  • k_max: Maximum value of k (appearing in Eq. 4)
  • k_min: Minimum value of k (appearing in Eq. 4)
  • t_max: Maximum number of iterations
  • X: Input vectors to the SOM network (the Pbest particles)
  • W_i: Weight vectors of the SOM network neurons
  • M: Number of neurons/clusters
  • η(t): Variable learning-rate parameter
  • τ_1: Maximum number of SOM iterations
  • Ncluster(t): Number of clusters in the tth iteration
  • \( W_{c}^{t} \): Weight of the cth active cluster
  • \( \left| C_{c}^{t} \right| \): Number of Pbest particles collected in the cth active cluster at the tth iteration
  • \( \mathrm{Gbest}_{i}^{t} \): Unique Gbest particle of the ith particle at the tth iteration
  • \( \mathrm{Cbest}_{j}^{t} \): Cluster best, i.e., the best Pbest particle in the jth cluster at the tth iteration
  • c(i): Cluster to which the ith Pbest particle belongs
  • w_i: Weight assigned to the ith criterion in compromise programming
  • Z_i: The ith criterion
  • Z_{i,best}: Best value of the ith criterion
  • Z_{i,worst}: Worst value of the ith criterion
  • CPI: Compromise programming index (a common form is noted after this list)
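
For context, the symbols w, c_1, c_2, r_1 and r_2 listed above enter the canonical PSO velocity/position update that GuASPSO builds on. The sketch below shows only the standard inertia-weighted form (the constriction-coefficient variant multiplies the bracketed update by χ); it is not the paper's modified update, but in GuASPSO the per-particle guide \( \mathrm{Gbest}_{i}^{t} \) would take the place of the single global best.

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
        # One standard PSO update. x, v, pbest, gbest: arrays of shape (N, D);
        # in GuASPSO, row i of gbest would hold the guided Gbest of particle i.
        rng = np.random.default_rng() if rng is None else rng
        r1 = rng.random(x.shape)  # first random vector
        r2 = rng.random(x.shape)  # second random vector
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v_new, v_new

As for the compromise programming symbols, a common (p = 1) form of such an index is \( \mathrm{CPI} = \sum_i w_i (Z_i - Z_{i,\mathrm{best}})/(Z_{i,\mathrm{worst}} - Z_{i,\mathrm{best}}) \) for minimized criteria, with smaller values indicating a better compromise; the exact form used in the paper is not reproduced here.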

Author information

Corresponding author

Correspondence to Hamid R. Safavi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by V. Loia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

See Tables 10–17.

Table 10 \( a_{ij} \) in \( F_{14} \)
Table 11 \( a_{i} \) and \( b_{i} \) in \( F_{15} \)
Table 12 \( a_{ij} \) and \( c_{i} \) in \( F_{19} \)
Table 13 \( P_{ij} \) in \( F_{19} \)
Table 14 \( a_{ij} \) and \( c_{i} \) in \( F_{20} \)
Table 15 \( P_{ij} \) in \( F_{20} \)
Table 16 \( a_{ij} \) and \( c_{i} \) in \( F_{21} \), \( F_{22} \) and \( F_{23} \)
Table 17 Optima in functions presented in Table 3

About this article

Cite this article

Rezaei, F., Safavi, H.R. GuASPSO: a new approach to hold a better exploration–exploitation balance in PSO algorithm. Soft Comput 24, 4855–4875 (2020). https://doi.org/10.1007/s00500-019-04240-8
