A survey, taxonomy and progress evaluation of three decades of swarm optimisation

Published in: Artificial Intelligence Review

Abstract

While the concept of swarm intelligence was introduced in the 1980s, the first swarm optimisation algorithm appeared a decade later, in 1992. In this paper, nineteen representative original swarm optimisation algorithms are analysed to extract their common features and to design a taxonomy for swarm optimisation. We use twenty-nine benchmark problems to compare the performance of these nineteen algorithms, in the form in which they were first introduced in the literature, against five state-of-the-art swarm algorithms. This comparison reveals the advancements made in this field over three decades. It shows that, while the state-of-the-art swarm optimisation algorithms are indeed competitive in terms of the quality of the solutions they find, they have evolved to be more computationally demanding than the nineteen original swarm optimisation algorithms. The investigation suggests an urgent need to continue designing swarm optimisation algorithms that are simpler, while maintaining their current competitive performance.

Data availability

The data are available.

References

  • Ab Wahab MN, Nefti-Meziani S, Atyabi A (2015) A comprehensive review of swarm optimization algorithms. PLoS ONE 10(5)

  • Abbass HA (2001a) MBO: Marriage in honey bees optimization-a haplometrosis polygynous swarming approach. In: Proceedings of the 2001 Congress on Evolutionary Computation CEC2001, pp 207–214

  • Abbass HA (2001b) A single queen single worker honey–bees approach to 3-sat. In: Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, pp 807–814

  • Adarsh B, Raghunathan T, Jayabarathi T, Yang XS (2016) Economic dispatch using chaotic bat algorithm. Energy 96:666–675

  • Afshar A, Haddad OB, Marino MA, Adams B (2007) Honey-bee mating optimization (HBMO) algorithm for optimal reservoir operation. J Franklin Inst 344(5):452–462

  • Akbari R, Hedayatzadeh R, Ziarati K, Hassanizadeh B (2012) A multi-objective artificial bee colony algorithm. Swarm Evol Comput 2:39–52

  • Al-Kheraif AA, Hashem M, Al Esawy MSS (2018) Developing charcot-marie-tooth disease recognition system using bacterial foraging optimization algorithm based spiking neural network. J Med Syst 42(10):192

  • AlRashidi MR, El-Hawary ME (2009) A survey of particle swarm optimization applications in electric power systems. IEEE Trans Evol Comput 13(4):913–918

  • Alswaitti M, Albughdadi M, Isa NAM (2018) Density-based particle swarm optimization algorithm for data clustering. Expert Syst Appl 91:170–186

  • Amiri B, Fathian M, Maroosi A (2009) Application of shuffled frog-leaping algorithm on clustering. Int J Adv Manuf Technol 45(1–2):199–209

  • Arun B, Kumar TV (2015) Materialized view selection using marriage in honey bees optimization. Int J Nat Comput Res (IJNCR) 5(3):1–25

  • Awad N, Ali M, Liang J, Qu B, Suganthan P (2016) Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization. Technical report, Nanyang Technological University, Singapore

  • Azad MAK, Rocha AMA, Fernandes EM (2014) Improved binary artificial fish swarm algorithm for the 0–1 multidimensional knapsack problems. Swarm Evol Comput 14:66–75

  • Bahrami S, Hooshmand RA, Parastegari M (2014) Short term electric load forecasting by wavelet transform and grey model improved by PSO (particle swarm optimization) algorithm. Energy 72:434–442

  • Bell JE, McMullen PR (2004) Ant colony optimization techniques for the vehicle routing problem. Adv Eng Inform 18(1):41–48

  • Beni G (1988) The concept of cellular robotic system. In: Proceedings intelligent control, 1988, IEEE International Symposium on, IEEE, pp 57–62

  • Beni G, Wang J (1993) Swarm intelligence in cellular robotic systems. In: Robots and Biological Systems: Towards a New Bionics?, Springer, pp 703–712

  • Biswas A, Dasgupta S, Das S, Abraham A (2007) Synergy of PSO and bacterial foraging optimization-a comparative study on numerical benchmarks. Innovations in hybrid intelligent systems. Springer, pp 255–263

  • Blackwell T, Branke J, Li X (2008) Particle swarms for dynamic optimization problems. Swarm intelligence. Springer, pp 193–217

  • Blum C, Vallès MY, Blesa MJ (2008) An ant colony optimization algorithm for DNA sequencing by hybridization. Comput Op Res 35(11):3620–3635

  • Bolaji AL, Babatunde BS, Shola PB (2018) Adaptation of binary pigeon-inspired algorithm for solving multidimensional knapsack problem. Soft computing: theories and applications. Springer, pp 743–751

  • Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Oxford University Press

  • Bozorg Haddad O, Afshar A (2004) MBO (marriage bees optimization), a new heuristic approach in hydrosystems design and operation. In: Proceedings of the 1st international conference on managing rivers in the 21st century: issues and challenges. Penang, Malaysia, pp 21–23

  • Cai X, Gao XZ, Xue Y (2016) Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int J Bio-Inspired Comput 8(4):205–214

  • Camacho-Villalón CL, Dorigo M, Stützle T (2019) The intelligent water drops algorithm: why it cannot be considered a novel algorithm. Swarm Intell 13(3–4):173–192

  • Celik Y, Ulker E (2013) An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization. The Sci World J

  • Chakraborty A, Kar AK (2017) Swarm intelligence: A review of algorithms. Nature-inspired computing and optimization. Springer, pp 475–494

  • Chen K, Xue B, Zhang M, Zhou F (2020) An evolutionary multitasking-based feature selection method for high-dimensional classification. IEEE Transactions on Cybernetics

  • Chen WN, Zhang J (2009) An ant colony optimization approach to a grid workflow scheduling problem with various QoS requirements. IEEE Trans Syst Man Cybern Part C (Appl Rev) 39(1):29–43

  • Chen WN, Zhang J, Lin Y, Chen N, Zhan ZH, Chung HSH, Li Y, Shi YH (2013) Particle swarm optimization with an aging leader and challengers. IEEE Trans Evol Comput 17(2):241–258

  • Cheng Y, Jiang M, Yuan D (2009) Novel clustering algorithms based on improved artificial fish swarm algorithm. In: Fuzzy Systems and Knowledge Discovery, 2009. FSKD’09. Sixth International Conference on, IEEE, vol 3, pp 141–145

  • Choong SS, Wong LP, Lim CP (2019) An artificial bee colony algorithm with a modified choice function for the traveling salesman problem. Swarm Evol Comput 44:622–635

  • Chu SC, Tsai PW, Pan JS (2006) Cat swarm optimization. In: Pacific Rim International Conference on Artificial Intelligence, Springer, pp 854–858

  • Chu SC, Tsai PW et al (2007) Computational intelligence based on the behavior of cats. Int J Innov Comput Inform Control 3(1):163–173

  • Chu X, Wu T, Weir JD, Shi Y, Niu B, Li L (2018) Learning–interaction–diversification framework for swarm intelligence optimizers: a unified perspective. Neural Comput Appl, pp 1–21

  • Coello CAC (2019) Constraint-handling techniques used with evolutionary algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp 485–506

  • Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8(3):256–279

  • Coello CC, Lechuga MS (2002) Mopso: A proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), IEEE, vol 2, pp 1051–1056

  • Crawford B, Soto R, Berríos N, Johnson F, Paredes F, Castro C, Norero E (2015) A binary cat swarm optimization algorithm for the non-unicost set covering problem. Math Probl Eng. https://doi.org/10.1155/2015/578541

  • Darwish A, Hassanien AE, Das S (2020) A survey of swarm and evolutionary computing approaches for deep learning. Artif Intell Rev 53(3):1767–1812

  • Dasgupta S, Das S, Abraham A, Biswas A (2009) Adaptive computational chemotaxis in bacterial foraging optimization: an analysis. IEEE Trans Evol Comput 13(4):919–941

  • Del Ser J, Osaba E, Molina D, Yang XS, Salcedo-Sanz S, Camacho D, Das S, Suganthan PN, Coello CAC, Herrera F (2019) Bio-inspired computation: Where we stand and what’s next. Swarm and Evolutionary Computation, 48:220–250

  • Ding S, An Y, Zhang X, Wu F, Xue Y (2017) Wavelet twin support vector machines based on glowworm swarm optimization. Neurocomputing 225:157–163

  • Dorigo M (1992) Optimization, learning and natural algorithms. PhD Thesis, Politecnico di Milano

  • Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans Evol Comput 1(1):53–66

  • Dorigo M, Stützle T (2003) The ant colony optimization metaheuristic: Algorithms, applications, and advances. Handbook of metaheuristics. Springer, Berlin, pp 250–285

  • Dorigo M, Stützle T (2009) Ant colony optimization: overview and recent advances. Techreport, IRIDIA, Universite Libre de Bruxelles

  • Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B (Cybernetics) 26(1):29–41

  • Dorigo M, Birattari M, Stützle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39

  • Dou R, Duan H (2016) Pigeon inspired optimization approach to model prediction control for unmanned air vehicles. Aircr Eng Aerospace Technol: An Int J 88(1):108–116

  • Dou R, Duan H (2017) Lévy flight based pigeon-inspired optimization for control parameters optimization in automatic carrier landing system. Aerosp Sci Technol 61:11–20

  • Duan H, Li C (2015) Quantum-behaved brain storm optimization approach to solving loney’s solenoid problem. IEEE Trans Magn 51(1):1–7

  • Duan H, Luo Q (2015) New progresses in swarm intelligence-based computation. Int J Bio-Inspired Comput 7(1):26–35

  • Duan H, Qiao P (2014) Pigeon-inspired optimization: a new swarm intelligence optimizer for air robot path planning. Int J Intell Comput Cybern 7(1):24–37

  • Duan H, Li S, Shi Y (2013) Predator-prey brain storm optimization for dc brushless motor. IEEE Trans Magn 49(10):5336–5340

  • Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Micro Machine and Human Science, 1995. MHS’95., Proceedings of the Sixth International Symposium on, IEEE, pp 39–43

  • Ebrahimi J, Hosseinian SH, Gharehpetian GB (2010) Unit commitment problem solution using shuffled frog leaping algorithm. IEEE Trans Power Syst 26(2):573–581

  • Eiben AE, Smith J (2015) From evolutionary computation to the evolution of things. Nature 521(7553):476–482

  • Elattar EE (2019) Environmental economic dispatch with heat optimization in the presence of renewable energy based on modified shuffle frog leaping algorithm. Energy 171:256–269

  • Elbeltagi E, Hegazy T, Grierson D (2007) A modified shuffled frog-leaping optimization algorithm: applications to project management. Struct Infrastruct Eng 3(1):53–60

  • Elsawy A, Selim MM, Sobhy M (2019) A hybridised feature selection approach in molecular classification using CSO and GA. Int J Comput Appl Technol 59(2):165–174

  • Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Eng Optim 38(2):129–154

  • Eusuff MM, Lansey KE (2003) Optimization of water distribution network design using the shuffled frog leaping algorithm. J Water Resour Plan Manag 129(3):210–225

  • Fang C, Wang L (2012) An effective shuffled frog-leaping algorithm for resource-constrained project scheduling problem. Comput Op Res 39(5):890–901

  • Fang N, Zhou J, Zhang R, Liu Y, Zhang Y (2014) A hybrid of real coded genetic algorithm and artificial fish swarm algorithm for short-term optimal hydrothermal scheduling. Int J Electr Power Energy Syst 62:617–629

  • Farzi S (2009) Efficient job scheduling in grid computing with modified artificial fish swarm algorithm. Int J comput Theory Eng 1(1):13

  • Feng L, Zhou L, Zhong J, Gupta A, Ong YS, Tan KC, Qin AK (2018) Evolutionary multitasking via explicit autoencoding. IEEE Trans Cybern 49(9):3457–3470

  • Fu H, Li Z, Liu Z, Wang Z (2018) Research on big data digging of hot topics about recycled water use on micro-blog based on particle swarm optimization. Sustainability 10(7):2488

  • Gandomi AH, Yang XS (2014) Chaotic bat algorithm. J Comput Sci 5(2):224–232

  • Garcia MP, Montiel O, Castillo O, Sepúlveda R, Melin P (2009) Path planning for autonomous mobile robot navigation with ant colony optimization and fuzzy cost function evaluation. Appl Soft Comput 9(3):1102–1110

  • García-Martínez C, Cordón O, Herrera F (2007) A taxonomy and an empirical analysis of multiple objective ant colony optimization algorithms for the bi-criteria tsp. Eur J Oper Res 180(1):116–148

  • Gravel M, Price WL, Gagné C (2002) Scheduling continuous casting of aluminum using a multiple objective ant colony optimization metaheuristic. Eur J Oper Res 143(1):218–229

  • Guilford T, Roberts S, Biro D, Rezek I (2004) Positional entropy during pigeon homing II: navigational interpretation of bayesian latent state models. J Theor Biol 227(1):25–38

  • Gülcü Ş, Mahi M, Baykan ÖK, Kodaz H (2018) A parallel cooperative hybrid method based on ant colony optimization and 3-opt algorithm for solving traveling salesman problem. Soft Comput 22(5):1669–1685

  • Gupta A, Ong YS, Feng L (2015) Multifactorial evolution: toward evolutionary multitasking. IEEE Trans Evol Comput 20(3):343–357

  • Gupta S, Deep K (2019) A novel random walk grey wolf optimizer. Swarm Evol Comput 44:101–112

  • Halim AH, Ismail I, Das S (2020) Performance assessment of the metaheuristic optimization algorithms: an exhaustive review. Artificial Intelligence Review pp 1–87

  • Han M, Liu S (2017) An improved binary chicken swarm optimization algorithm for solving 0-1 knapsack problem. In: 2017 13th International Conference on Computational Intelligence and Security (CIS), IEEE, pp 207–210

  • Hansen N, Müller SD, Koumoutsakos P (2003) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol Comput 11(1):1–18

  • Hao R, Luo D, Duan H (2014) Multiple UAVs mission assignment based on modified pigeon-inspired optimization algorithm. In: Proceedings of 2014 IEEE Chinese Guidance, Navigation and Control Conference, IEEE, pp 2692–2697

  • He L, Li W, Zhang Y, Cao Y (2019) A discrete multi-objective fireworks algorithm for flowshop scheduling with sequence-dependent setup times. Swarm and Evolutionary Computation p 100575

  • Hernández-Ocaña B, Chávez-Bosquez O, Hernández-Torruco J, Canul-Reich J, Pozos-Parra P (2018) Bacterial foraging optimization algorithm for menu planning. IEEE Access 6:8619–8629

  • Houssein EH, Gad AG, Hussain K, Suganthan PN (2021) Major advances in particle swarm optimization: Theory, analysis, and application. Swarm Evol Comput 63:100868

  • Huang C, Li Y, Yao X (2019) A survey of automatic parameter tuning methods for metaheuristics. IEEE Trans Evol Comput 24(2):201–216

  • Jayabarathi T, Raghunathan T, Adarsh B, Suganthan PN (2016) Economic dispatch using hybrid grey wolf optimizer. Energy 111:630–641

  • Jayakumar DN, Venkatesh P (2014) Glowworm swarm optimization algorithm with topsis for solving multiple objective environmental economic dispatch problem. Appl Soft Comput 23:375–386

  • Jhang JY, Lin CJ, Lin CT, Young KY (2018) Navigation control of mobile robots using an interval type-2 fuzzy controller based on dynamic-group particle swarm optimization. Int J Control Autom Syst 16(5):2446–2457

  • Jiang M, Wang Z, Qiu L, Guo S, Gao X, Tan KC (2020) A fast dynamic evolutionary multiobjective algorithm via manifold transfer learning. IEEE Transactions on Cybernetics

  • Jin X, Xie S, He J, Lin Y, Wang Y, Wang N (2018) Optimization of tuned mass damper parameters for floating wind turbines by using the artificial fish swarm algorithm. Ocean Eng 167:130–141

  • Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department

  • Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471

  • Karaboga D, Okdem S, Ozturk C (2012) Cluster based wireless sensor network routing using artificial bee colony algorithm. Wireless Netw 18(7):847–860

  • Karaboga D, Gorkemli B, Ozturk C, Karaboga N (2014) A comprehensive survey: artificial bee colony (abc) algorithm and applications. Artif Intell Rev 42(1):21–57

  • Kennedy J (2006) Swarm intelligence. Handbook of nature-inspired and innovative computing. Springer, Berlin, pp 187–219

  • Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Neural Networks, 1995. Proceedings., IEEE International Conference on, IEEE, vol 4, pp 1942–1948

  • Khairuzzaman AKM, Chaudhury S (2017) Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst Appl 86:64–76

  • Komaki G, Kayvanfar V (2015) Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time. J Comput Sci 8:109–120

  • Kowsalya M et al (2014) Optimal size and siting of multiple distributed generators in distribution system using bacterial foraging optimization. Swarm Evol Comput 15:58–65

  • Krause J, Cordeiro J, Parpinelli RS, Lopes HS (2013) A survey of swarm algorithms applied to discrete optimization problems. Swarm intelligence and bio-inspired computation. Elsevier, Amsterdam, pp 169–191

  • Krishnanand K, Ghose D (2006) Glowworm swarm based optimization algorithm for multimodal functions with collective robotics applications. Multiagent Grid Syst 2(3):209–222

  • Krishnanand K, Ghose D (2009) A glowworm swarm optimization based multi-robot system for signal source localization. Design and control of intelligent robotic systems. Springer, Berlin, pp 49–68

  • Krishnanand K, Ghose D (2009b) Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell 3(2):87–124

  • Kumar A, Misra RK, Singh D (2015) Butterfly optimizer. In: 2015 IEEE Workshop on Computational Intelligence: Theories, Applications and Future Directions (WCI), pp 1–6. https://doi.org/10.1109/WCI.2015.7495523

  • Kumar A, Misra RK, Singh D (2017a) Improving the local search capability of effective butterfly optimizer using covariance matrix adapted retreat phase. In: 2017 IEEE Congress on Evolutionary Computation (CEC), IEEE, pp 1835–1842

  • Kumar A, Maini T, Misra RK, Singh D (2019a) Butterfly constrained optimizer for constrained optimization problems. In: Computational Intelligence: Theories, Applications and Future Directions, Volume II. Springer, pp 477–486

  • Kumar A, Misra RK, Singh D, Mishra S, Das S (2019b) The spherical search algorithm for bound-constrained global optimization problems. Appl Soft Comput 85:105734

  • Kumar A, Das S, Zelinka I (2020) A self-adaptive spherical search algorithm for real-world constrained optimization problems. In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, pp 13–14

  • Kumar B, Kalra M, Singh P (2017b) Discrete binary cat swarm optimization for scheduling workflow applications in cloud systems. In: 2017 3rd International Conference on Computational Intelligence & Communication Technology (CICT), IEEE, pp 1–6

  • Kumar KS, Jayabarathi T (2012) Power system reconfiguration and loss minimization for an distribution systems using bacterial foraging optimization algorithm. Int J Elect Power Energy Syst 36(1):13–17

  • Kumar PB, Sahu C, Parhi DR (2018) A hybridized regression-adaptive ant colony optimization approach for navigation of humanoids in a cluttered environment. Appl Soft Comput 68:565–585

  • Langari RK, Sardar S, Mousavi SAA, Radfar R (2019) Combined fuzzy clustering and firefly algorithm for privacy preserving in social networks. Expert Syst Appl, p 112968

  • Lee JH, Song JY, Kim DW, Kim JW, Kim YJ, Jung SY (2018) Particle swarm optimization algorithm with intelligent particle number control for optimal design of electric machines. IEEE Trans Industr Electron 65(2):1791–1798

  • Li J, Pan Q, Xie S (2012) An effective shuffled frog-leaping algorithm for multi-objective flexible job shop scheduling problems. Appl Math Comput 218(18):9353–9371

  • Li J, Zheng S, Tan Y (2016) The effect of information utilization: Introducing a novel guiding spark in the fireworks algorithm. IEEE Trans Evol Comput 21(1):153–166

  • Li XL (2002) An optimizing method based on autonomous animats: fish-swarm algorithm. Syst Eng Theory Pract 22(11):32–38

  • Liang JJ, Baskar S, Suganthan PN, Qin AK (2006a) Performance evaluation of multiagent genetic algorithm. Nat Comput 5(1):83–96

  • Liang JJ, Qin AK, Suganthan PN, Baskar S (2006b) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295

  • Lin Q, Lin W, Zhu Z, Gong M, Li J, Coello CAC (2020) Multimodal multi-objective evolutionary optimization with dual clustering in decision and objective spaces. IEEE Trans Evolut Comput

  • Liu W, Wang Z, Liu X, Zeng N, Bell D (2018) A novel particle swarm optimization approach for patient clustering from emergency departments. IEEE Trans Evolut Comput

  • Lynn N, Ali MZ, Suganthan PN (2018) Population topologies for particle swarm optimization and differential evolution. Swarm Evol Comput 39:24–35

  • Majhi R, Panda G, Majhi B, Sahoo G (2009) Efficient prediction of stock market indices using adaptive bacterial foraging optimization (ABFO) and BFO based techniques. Expert Syst Appl 36(6):10097–10104

  • Mavrovouniotis M, Yang S, Yao X (2014) Multi-colony ant algorithms for the dynamic travelling salesman problem. In: 2014 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE), IEEE, pp 9–16

  • Mavrovouniotis M, Li C, Yang S (2017) A survey of swarm intelligence for dynamic optimization: Algorithms and applications. Swarm Evol Comput 33:1–17

  • Meng X, Liu Y, Gao X, Zhang H (2014) A new bio-inspired algorithm: chicken swarm optimization. In: International conference in swarm intelligence, Springer, pp 86–94

  • Merkle D, Middendorf M, Schmeck H (2002) Ant colony optimization for resource-constrained project scheduling. IEEE Trans Evol Comput 6(4)

  • Van der Merwe D, Engelbrecht AP (2003) Data clustering using particle swarm optimization. In: Evolutionary Computation, 2003. CEC’03. The 2003 Congress on, IEEE, vol 1, pp 215–220

  • Mezura-Montes E, Coello CAC (2011) Constraint-handling in nature-inspired numerical optimization: past, present and future. Swarm Evol Comput 1(4):173–194

  • Mirjalili S (2015) How effective is the grey wolf optimizer in training multi-layer perceptrons. Appl Intell 43(1):150–161

  • Mirjalili S, Mirjalili SM, Lewis A (2014a) Grey wolf optimizer. Adv Eng Softw 69:46–61

  • Mirjalili S, Mirjalili SM, Yang XS (2014b) Binary bat algorithm. Neural Comput Appl 25(3–4):663–681

  • Mirjalili S, Saremi S, Mirjalili SM, Coelho LdS (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst Appl 47:106–119

  • Mishra S, Kumar A, Singh D, Misra RK (2019) Butterfly optimizer for placement and sizing of distributed generation for feeder phase balancing. In: Computational Intelligence: Theories, Applications and Future Directions, Volume II. Springer, pp 519–530

  • Mousavi SM, Tavana M, Alikar N, Zandieh M (2019) A tuned hybrid intelligent fruit fly optimization algorithm for fuzzy rule generation and classification. Neural Comput Appl 31(3):873–885

  • Niknam T, Narimani MR, Jabbari M, Malekpour AR (2011) A modified shuffle frog leaping algorithm for multi-objective optimal power flow. Energy 36(11):6420–6432

  • Niu B, Liu J, Bi Y, Xie T, Tan L (2014) Improved bacterial foraging optimization algorithm with information communication mechanism. In: Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, IEEE, pp 47–51

  • Niu B, Liu J, Wu T, Chu X, Wang Z, Liu Y (2017) Coevolutionary structure-redesigned-based bacterial foraging optimization. IEEE/ACM Trans Comput Biology Bioinform

  • Niu P, Niu S, Chang L et al (2019) The defect of the grey wolf optimization algorithm and its verification method. Knowl-Based Syst 171:37–43

  • Nouiri M, Bekrar A, Jemai A, Niar S, Ammari AC (2018) An effective and distributed particle swarm optimization algorithm for flexible job-shop scheduling problem. J Intell Manuf 29(3):603–615

  • Oliveira M, Pinheiro D, Macedo M, Bastos-Filho C, Menezes R (2018) Unveiling swarm intelligence with network science - the metaphor explained. arXiv preprint arXiv:1811.03539

  • Ong YS, Gupta A (2016) Evolutionary multitasking: a computer science view of cognitive multitasking. Cogn Comput 8(2):125–142

  • Ouaarab A, Ahiod B, Yang XS (2014) Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput Appl 24(7–8):1659–1669

  • Pai PF, Yang SL, Chang PT (2009) Forecasting output of integrated circuit industry by support vector regression models with marriage honey-bees optimization algorithms. Expert Syst Appl 36(7):10746–10751

  • Pan QK, Tasgetiren MF, Suganthan PN, Chua TJ (2011) A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. Inf Sci 181(12):2455–2468

  • Pan QK, Sang HY, Duan JH, Gao L (2014) An improved fruit fly optimization algorithm for continuous function optimization problems. Knowl-Based Syst 62:69–83

  • Pan WT (2011) A new evolutionary computation approach: fruit fly optimization algorithm. In: 2011 Conference of Digital Technology and Innovation Management, pp 382–391

  • Pan WT (2012) A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst 26:69–74

  • Parpinelli RS, Lopes HS (2011) New inspirations in swarm intelligence: a survey. Int J Bio-Inspired Comput 3(1):1–16

  • Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst 22(3):52–67

  • de Paula Garcia R, de Lima BSLP, de Castro Lemonge AC, Jacob BP (2017) A rank-based constraint handling technique for engineering design optimization problems solved by genetic algorithms. Comput Struct 187:77–87

  • Pradhan PM, Panda G (2012) Solving multiobjective problems using cat swarm optimization. Expert Syst Appl 39(3):2956–2964

  • Qiu H, Duan H (2014) Receding horizon control for multiple UAV formation flight based on modified brain storm optimization. Nonlinear Dyn 78(3):1973–1988

  • Qiu H, Duan H (2018) A multi-objective pigeon-inspired optimization approach to UAV distributed flocking among obstacles. Inform Sci

  • Rajasekhar A, Lynn N, Das S, Suganthan PN (2017) Computing with the collective intelligence of honey bees-a survey. Swarm Evol Comput 32:25–48

  • Rao H, Shi X, Rodrigue AK, Feng J, Xia Y, Elhoseny M, Yuan X, Gu L (2019) Feature selection based on artificial bee colony and gradient boosting decision tree. Appl Soft Comput 74:634–642

  • Santosa B, Ningrum MK (2009) Cat swarm optimization for clustering. In: 2009 International Conference of Soft Computing and Pattern Recognition, IEEE, pp 54–59

  • Schyns M (2015) An ant colony system for responsive dynamic vehicle routing. Eur J Oper Res 245(3):704–718

  • Sekhar GC, Sahu RK, Baliarsingh A, Panda S (2016) Load frequency control of power system under deregulated environment using optimal firefly algorithm. Int J Elect Power Energy Syst 74:195–211

  • Senthilnath J, Omkar S, Mani V (2011) Clustering using firefly algorithm: performance study. Swarm Evol Comput 1(3):164–171

  • Sharafi Y, Khanesar MA, Teshnehlab M (2013) Discrete binary cat swarm optimization algorithm. In: 2013 3rd IEEE International Conference on Computer, Control and Communication (IC4), IEEE, pp 1–6

  • Shehab M, Khader AT, Laouchedi M, Alomari OA (2019) Hybridizing cuckoo search algorithm with bat algorithm for global numerical optimization. J Supercomput 75(5):2395–2422

  • Shen W, Guo X, Wu C, Wu D (2011) Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm. Knowl-Based Syst 24(3):378–385

  • Shi Y (2011a) Brain storm optimization algorithm. In: International Conference in Swarm Intelligence, Springer, pp 303–309

  • Shi Y (2011b) An optimization algorithm based on brainstorming process. Int J Swarm Intell Res (IJSIR) 2(4):35–62

  • Shi Y, Eberhart R (1998) A modified particle swarm optimizer. In: Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, IEEE, pp 69–73

  • Socha K, Dorigo M (2008) Ant colony optimization for continuous domains. Eur J Oper Res 185(3):1155–1173

  • Song Z, Peng J, Li C, Liu PX (2018) A simple brain storm optimization algorithm with a periodic quantum learning strategy. IEEE Access 6:19968–19983

  • Sörensen K (2015) Metaheuristics—the metaphor exposed. Int Trans Oper Res 22(1):3–18

  • Soto R, Crawford B, Aste Toledo A, Castro C, Paredes F, Olivares R et al (2019) Solving the manufacturing cell design problem through binary cat swarm optimization with dynamic mixture ratios. Comput Intell Neurosci

  • Subudhi B, Pradhan R (2018) Bacterial foraging optimization approach to parameter extraction of a photovoltaic module. IEEE Trans Sustain Energy 9(1):381–389

  • Sun C, Duan H, Shi Y (2013) Optimal satellite formation reconfiguration based on closed-loop brain storm optimization. IEEE Comput Intell Mag 8(4):39–51

  • Sun Y (2014) A hybrid approach by integrating brain storm optimization algorithm with grey neural network for stock index forecasting. In: Abstract and Applied Analysis, Hindawi, vol 2014

  • Szeto WY, Wu Y, Ho SC (2011) An artificial bee colony algorithm for the capacitated vehicle routing problem. Eur J Oper Res 215(1):126–135

  • Tan Y, Zheng ZY (2013) Research advance in swarm robotics. Def Technol 9(1):18–39

  • Tan Y, Zhu Y (2010) Fireworks algorithm for optimization. In: International Conference in Swarm Intelligence, Springer, pp 355–364

  • Teo J, Abbass HA (2003) A true annealing approach to the marriage in honey-bees optimization algorithm. Int J Comput Intell Appl 3(02):199–211

  • Tsai PW, Pan JS, Chen SM, Liao BY (2012) Enhanced parallel cat swarm optimization based on the taguchi method. Expert Syst Appl 39(7):6309–6319

  • Valian E, Mohanna S, Tavakoli S (2011) Improved cuckoo search algorithm for feedforward neural network training. Int J Artif Intell Appl 2(3):36–43

  • Wachowiak MP, Smolíková R, Zheng Y, Zurada JM, Elmaghraby AS et al (2004) An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Trans Evolut Comput 8(3):289–301

  • Wahid F, Alsaedi AKZ, Ghazali R (2019) Using improved firefly algorithm based on genetic algorithm crossover operator for solving optimization problems. Journal of Intelligent & Fuzzy Systems (Preprint):1–16

  • Walton S, Hassan O, Morgan K, Brown M (2011) Modified cuckoo search: a new gradient free optimisation algorithm. Chaos, Solitons Fractals 44(9):710–718

  • Wang CR, Zhou CL, Ma JW (2005) An improved artificial fish-swarm algorithm and its application in feed-forward neural networks. In: Machine Learning and Cybernetics, 2005. Proceedings of 2005 International Conference on, IEEE, vol 5, pp 2890–2894

  • Wang G, Chu HE, Zhang Y, Chen H, Hu W, Li Y, Peng X (2015) Multiple parameter control for ant colony optimization applied to feature selection problem. Neural Comput Appl 26(7):1693–1708

  • Wang H, Wang W, Sun H, Rahnamayan S (2016a) Firefly algorithm with random attraction. Int J Bio-Inspired Comput 8(1):33–41

  • Wang J, Hou R, Wang C, Shen L (2016b) Improved v-support vector regression model based on variable selection and brain storm optimization for stock price forecasting. Appl Soft Comput 49:164–178

  • Wang L, Zheng XL, Wang SY (2013) A novel binary fruit fly optimization algorithm for solving the multidimensional knapsack problem. Knowl-Based Syst 48:17–23

  • Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82

  • Wu B, Qian C, Ni W, Fan S (2012) The improvement of glowworm swarm optimization for continuous optimization problems. Expert Syst Appl 39(7):6335–6342

  • Wu CC, Chen JY, Lin WC, Lai K, Liu SC, Yu PW (2018) A two-stage three-machine assembly flow shop scheduling with learning consideration to minimize the flowtime by six hybrids of particle swarm optimization. Swarm and Evolutionary Computation

  • Wu D, Kong F, Gao W, Shen Y, Ji Z (2015) Improved chicken swarm optimization. In: 2015 IEEE international conference on cyber technology in automation, control, and intelligent systems (CYBER), IEEE, pp 681–686

  • Wu G, Mallipeddi R, Suganthan PN (2019) Ensemble strategies for population-based optimization algorithms-a survey. Swarm Evol Comput 44:695–711

  • Xiong J, Liu J, Chen Y, Abbass HA (2014) A knowledge-based evolutionary multiobjective approach for stochastic extended resource investment project scheduling problems. IEEE Trans Evol Comput 18(5):742–763

  • Xu P, Luo W, Lin X, Qiao Y, Zhu T (2019) Hybrid of PSO and CMA-ES for global optimization. In: 2019 IEEE Congress on Evolutionary Computation (CEC), IEEE, pp 27–33

  • Xue Q, Duan H (2017) Robust attitude control for reusable launch vehicles based on fractional calculus and pigeon-inspired optimization. IEEE/CAA J Automatica Sinica 4(1):89–97

  • Yang C, Tu X, Chen J (2007) Algorithm of marriage in honey bees optimization based on the wolf pack search. In: The 2007 International Conference on Intelligent Pervasive Computing (IPC 2007), IEEE, pp 462–467

  • Yang XS (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms, Springer, pp 169–178

  • Yang XS (2010a) Firefly algorithm, stochastic test functions and design optimisation. arXiv preprint arXiv:1003.1409

  • Yang XS (2010b) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010), Springer, pp 65–74

  • Yang XS (2011) Bat algorithm for multi-objective optimisation. Int J Bio-Inspired Comput 3(5):267–274

  • Yang XS (2013) Multiobjective firefly algorithm for continuous optimization. Eng Comput 29(2):175–184

  • Yang XS, Deb S (2009) Cuckoo search via lévy flights. In: 2009 World congress on nature & biologically inspired computing (NaBIC), IEEE, pp 210–214

  • Yang XS, Deb S (2010) Engineering optimisation by cuckoo search. Int J Math Model Numer Optim 1(4):330–343

  • Yang XS, Gandomi AH (2012) Bat algorithm: a novel approach for global engineering optimization. Eng Comput

  • Yang XS, Deb S, Zhao YX, Fong S, He X (2018) Swarm intelligence: past, present and future. Soft Comput 22(18):5923–5933

  • Yazdani D, Cheng R, Yazdani D, Branke J, Jin Y, Yao X (2021) A survey of evolutionary continuous dynamic optimization over two decades-part b. IEEE Trans Evolut Comput

  • Yildizdan G, Baykan ÖK (2020) A novel modified bat algorithm hybridizing by differential evolution algorithm. Expert Syst Appl 141:112949

  • Yin PY, Glover F, Laguna M, Zhu JX (2010) Cyber swarm algorithms-improving particle swarm optimization using adaptive memory strategies. Eur J Oper Res 201(2):377–389

  • Yiyue W, Hongmei L, Hengyang H (2012) Wireless sensor network deployment using an optimized artificial fish swarm algorithm. In: Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on, IEEE, vol 2, pp 90–94

  • Yu B, Yang ZZ, Yao B (2009) An improved ant colony optimization for vehicle routing problem. Eur J Oper Res 196(1):171–176

  • Yuan Y, Ong YS, Gupta A, Tan PS, Xu H (2016) Evolutionary multitasking in permutation-based combinatorial optimization problems: Realization with tsp, qap, lop, and jsp. In: 2016 IEEE Region 10 Conference (TENCON), IEEE, pp 3157–3164

  • Zhang B, Duan H (2015) Three-dimensional path planning for uninhabited combat aerial vehicle based on predator-prey pigeon-inspired optimization in dynamic environment. IEEE/ACM Trans Comput Biol Bioinf 14(1):97–107

  • Zhang G, Shi Y (2018) Hybrid sampling evolution strategy for solving single objective bound constrained problems. In: 2018 IEEE Congress on Evolutionary Computation (CEC), IEEE, pp 1–7

  • Zhang S, Lee CK, Yu K, Lau HC (2017) Design and development of a unified framework towards swarm intelligence. Artif Intell Rev 47(2):253–277

  • Zhang X, Duan H, Yang C (2014) Pigeon-inspired optimization approach to multiple UAVs formation reconfiguration controller design. In: Proceedings of 2014 IEEE Chinese Guidance, Navigation and Control Conference, IEEE, pp 2707–2712

  • Zhao B, Gao J, Chen K, Guo K (2018) Two-generation Pareto ant colony algorithm for multi-objective job shop scheduling problem with alternative process plans and unrelated parallel machines. J Intell Manuf 29(1):93–108

  • Zhao J, Wen F, Dong ZY, Xue Y, Wong KP (2012) Optimal dispatch of electric vehicles and wind power using enhanced particle swarm optimization. IEEE Trans Industr Inf 8(4):889–899

  • Zheng YJ, Xu XL, Ling HF, Chen SY (2015) A hybrid fireworks optimization method with differential evolution operators. Neurocomputing 148:75–82

  • Zhou J, Nekouie A, Arslan CA, Pham BT, Hasanipanah M (2019a) Novel approach for forecasting the blast-induced aop using a hybrid fuzzy system and firefly algorithm. Engineering with Computers pp 1–10

  • Zhou J, Yao X, Chan FT, Lin Y, Jin H, Gao L, Wang X (2019b) An individual dependent multi-colony artificial bee colony algorithm. Inf Sci 485:114–140

  • Zhou Y, He F, Hou N, Qiu Y (2018) Parallel ant colony optimization on multi-core SIMD CPUs. Futur Gener Comput Syst 79:473–487

  • Zhu G, Kwong S (2010) Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl Math Comput 217(7):3166–3173

Funding

This work is supported by the China Scholarship Council (Grant No. 201708440307) and the UNSW-Canberra scholarship program.

Author information

Corresponding author

Correspondence to Jing Liu.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Code availability

The code is available.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Bacterial foraging optimisation

In 2002, Passino (2002) modelled the foraging activity of an E. coli bacteria swarm as a distributed optimisation process and proposed the bacterial foraging optimisation (BFO) algorithm, which consists of four main steps: chemotaxis, swarming, reproduction, and elimination and dispersal. The E. coli swarm moves in small steps to obtain maximum nutrients. During the chemotaxis step, bacteria have two ways of moving: swimming and tumbling. Each bacterium can keep swimming in one direction or tumble to another direction, and it alternates between these two modes via its flagella to search for places with better nutrient concentration. Swarming occurs when chemical signals are released that help the bacteria aggregate. Here we consider the case where no chemical signals are released, since swarming has only a small effect in BFO. Natural selection eliminates bacteria in poor health, while sudden or prolonged environmental changes can kill a group of bacteria. These phenomena inspire the reproduction and elimination-dispersal steps.

The pseudo code of BFO is shown in Algorithm 12. The position of each bacterium \(X_i\) is considered a potential solution of the optimisation problem, whose objective value is denoted as \(f(X_i)\). After initialisation, BFO executes the chemotaxis step, where each bacterium i updates its position by tumbling in a random direction \(\frac{\varDelta (i)}{\sqrt{\varDelta ^T(i)\varDelta (i)}}\) and moving a distance given by the step size Step(i). As long as it keeps reaching areas with a better \(f(X_i)\), the bacterium continues to swim in that direction, up to a maximum number of swims \(N_s\); otherwise it tumbles to a new random direction. After \(N_c\) chemotaxis steps, a reproduction step is executed in which the healthiest N/2 bacteria are reproduced and replace the least healthy N/2 bacteria. When \(N_{re}\) reproduction steps are exhausted, an elimination-dispersal step follows, in which each bacterium may be eliminated with probability \(p_{ed}\) and placed at a new random location. Bacterial positions keep updating until \(N_{ed}\) elimination-dispersal steps have been executed, and the best solution found is returned.

[Algorithm 12: pseudo code of BFO]
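
For concreteness, the following is a minimal Python sketch of the loop structure described above, with the swarming term omitted as in the text. The sphere objective, the bounds and all parameter values are illustrative assumptions rather than settings taken from the survey; the variables n_c, n_re, n_ed, n_s and p_ed correspond to \(N_c\), \(N_{re}\), \(N_{ed}\), \(N_s\) and \(p_{ed}\).

```python
import numpy as np

def bfo(f, dim=2, n=20, n_ed=2, n_re=4, n_c=30, n_s=4, p_ed=0.25,
        step=0.1, bounds=(-5.0, 5.0), seed=0):
    """Minimal bacterial foraging optimisation sketch (no swarming term)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))          # bacterial positions
    best_x, best_f = None, np.inf

    for _ in range(n_ed):                      # elimination-dispersal loop
        for _ in range(n_re):                  # reproduction loop
            health = np.zeros(n)               # accumulated cost over chemotaxis
            for _ in range(n_c):               # chemotaxis loop
                for i in range(n):
                    fi = f(X[i])
                    d = rng.normal(size=dim)
                    d /= np.linalg.norm(d)     # random tumble direction
                    for _ in range(n_s):       # keep swimming while improving
                        cand = np.clip(X[i] + step * d, lo, hi)
                        fc = f(cand)
                        if fc < fi:
                            X[i], fi = cand, fc
                        else:
                            break
                    health[i] += fi
                    if fi < best_f:
                        best_x, best_f = X[i].copy(), fi
            # reproduction: the healthiest half replaces the least healthy half
            order = np.argsort(health)
            X[order[n // 2:]] = X[order[:n // 2]].copy()
        # elimination-dispersal: each bacterium relocated with probability p_ed
        relocate = rng.random(n) < p_ed
        X[relocate] = rng.uniform(lo, hi, (relocate.sum(), dim))
    return best_x, best_f

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    print(bfo(sphere))
```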

Since the appearance of BFO, its improvement and application have become popular research topics in areas such as management and engineering. A hybrid algorithm combining PSO and BFO was presented by Biswas et al. (2007b) for the optimisation of multi-modal and high-dimensional functions. Because chemotactic movement is one of the most important driving forces of BFO, Dasgupta et al. (2009) proposed adaptive computational chemotaxis schemes to accelerate the convergence rate and successfully optimised the frequency-modulated sound wave synthesis problem. Moreover, Niu et al. (2017) designed a coevolutionary strategy with convergence status evaluation to balance the exploration and exploitation of BFO. To date, BFO has been successfully applied to a wide range of real-world optimisation problems, including menu planning (Hernández-Ocaña et al. 2018), parameter extraction (Subudhi and Pradhan 2018), disease recognition (Al-Kheraif et al. 2018), power system optimisation (Kumar and Jayabarathi 2012; Kowsalya et al. 2014) and stock market index prediction (Majhi et al. 2009), among others.

1.2 Shuffled frog leaping algorithm

Inspired by natural memetics, Eusuff and Lansey (2003) first presented the shuffled frog leaping algorithm (SFLA) for optimising water distribution network design; it is also considered an extension of PSO (Eusuff et al. 2006). It is assumed that frogs are leaping in a swamp that contains a number of stones, aiming to find the stone with the maximum concentration of food. In SFLA, frogs are considered hosts of memes and can be partitioned into different memeplexes, which are introduced as groups of mutually supported memes. SFLA performs local search within each memeplex, where frogs interact with each other by obtaining information from the best frog of the memeplex or the best frog of the entire population. This local search is similar in concept to particle swarm optimisation. After a certain number of local search steps, global exploration happens periodically by shuffling the virtual frogs and reorganising them into new memeplexes. This shuffling strategy allows the exchange of information between different memeplexes and helps the frogs move toward a global optimum.

The pseudo code of SFLA is shown in Algorithm 13. The position of each frog \(X_i\) is a candidate solution of the optimisation problem, whose objective value is denoted as \(f(X_i)\). After initialisation, the frogs are ranked in decreasing order of fitness and partitioned into \(N_{mp}\) memeplexes according to their rank. Each memeplex evolves by local search according to the frog-leaping algorithm. Within each memeplex, q frogs are selected to construct a submemeplex according to the probability \(p_j\), and the worst frog's position is improved by using the information of the best frog of the memeplex or the best frog of the whole population. If no improvement is made, a new frog is generated randomly to replace the worst frog. Each memeplex is updated based on the updated worst frog. This evolutionary step is executed \(N_{es}\) times for each memeplex in the frog-leaping algorithm, followed by the shuffling of memeplexes, where all the updated memeplexes are gathered to generate the new population X and the best position of the whole population, Gbest, is updated. The process then starts again from partitioning the population into memeplexes, until the termination conditions are met.

[Algorithm 13: pseudo code of SFLA]
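
A minimal Python sketch of this rank-partition-evolve-shuffle cycle is given below. It simplifies Algorithm 13 by letting the worst frog of each memeplex (rather than of a probabilistically sampled submemeplex) leap towards the memeplex best and then the global best; the objective, bounds and parameter values are illustrative assumptions.

```python
import numpy as np

def sfla(f, dim=2, n_mp=5, frogs_per_mp=6, n_es=5, n_iter=50,
         bounds=(-5.0, 5.0), seed=0):
    """Minimal shuffled frog leaping sketch (submemeplex sampling omitted)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    n = n_mp * frogs_per_mp
    X = rng.uniform(lo, hi, (n, dim))
    F = np.array([f(x) for x in X])

    for _ in range(n_iter):
        order = np.argsort(F)                 # rank frogs, best first
        X, F = X[order], F[order]
        gbest = X[0].copy()
        for m in range(n_mp):                 # the i-th ranked frog joins memeplex i % n_mp
            idx = np.arange(m, n, n_mp)
            for _ in range(n_es):
                w = idx[np.argmax(F[idx])]    # worst frog in the memeplex
                b = idx[np.argmin(F[idx])]    # best frog in the memeplex
                for leader in (X[b], gbest):  # try memeplex best, then global best
                    cand = np.clip(X[w] + rng.random(dim) * (leader - X[w]), lo, hi)
                    fc = f(cand)
                    if fc < F[w]:
                        X[w], F[w] = cand, fc
                        break
                else:                         # no improvement: random replacement
                    X[w] = rng.uniform(lo, hi, dim)
                    F[w] = f(X[w])
        # shuffling is implicit: the next iteration re-ranks and re-partitions
    i = int(np.argmin(F))
    return X[i], F[i]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    print(sfla(sphere))
```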

SFLA has been further improved and applied to various real-world problems, ranging from electrical power production (Ebrahimi et al. 2010; Niknam et al. 2011) and job scheduling (Li et al. 2012) to clustering (Amiri et al. 2009). Elbeltagi et al. (2007) introduced a new search acceleration parameter into the original SFLA to balance global and local search for solving project management problems. Fang and Wang (2012) presented a new SFLA for the resource-constrained project scheduling problem, combining a permutation-based local search (PBLS) and a forward-backward improvement (FBI) to enhance the exploitation ability of SFLA. A modified SFLA (MSFLA) was proposed by Elattar (2019), introducing the movement inertia equation from PSO and the crossover and mutation operators from GA, to optimise the combined heat, emission and economic dispatch (CHEED) problem.

1.3 Cat swarm optimisation

Cat swarm optimisation (CSO) was presented by Chu et al. in 2006 by observing and simulating cat behaviour; it consists of two major modes named the seeking mode and the tracing mode (Chu et al. 2006, 2007). Each cat has a position representing a candidate solution and a fitness value, which is the objective value. The seeking mode mimics a cat resting and looking around for its next position. The positions sought by the cat within the defined Seeking Range of the Dimension (SRD) are put into the Seeking Memory Pool (SMP), and the cat selects one of them to move to according to their probabilities. In the tracing mode, a cat traces the cat with the best fitness value by moving according to its own velocity.

The pseudo code of CSO is shown in Algorithm 14. The Boolean variable SPC decides whether the current position of the cat will be put into the seeking memory pool, and the Counts of Dimension to Change (CDC) decides how many dimensions will be altered. After initialisation, the Mixture Ratio (MR) is applied to decide whether a cat is in seeking mode or tracing mode. For each cat i, if it is in seeking mode, it looks around to find j candidate positions by randomly adding or subtracting SRD percent of the current value in the dimensions selected according to CDC. If SPC equals 1, the current position of the cat is retained as one of the candidate positions. The cat then moves to one candidate position selected randomly according to the probability \(p_{it}\). If the cat is in tracing mode, its move is decided by its own velocity \(V_i\), which is guided by the cat with the best fitness value, Gbest. These two modes are applied to the cats in every iteration until the termination conditions are met, and the best position \(X_*\) found by the cat swarm is output as the optimal solution.

[Algorithm 14: pseudo code of CSO]
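
The following Python sketch illustrates how the mixture ratio splits the swarm between the two modes. It deviates from Algorithm 14 in one place: the seeking-mode candidate is chosen greedily rather than by roulette-wheel selection on \(p_{it}\). The objective, bounds and parameter values (MR, SMP, SRD, CDC) are illustrative assumptions.

```python
import numpy as np

def cat_swarm(f, dim=2, n=20, n_iter=100, mr=0.3, smp=5, srd=0.2, cdc=0.8,
              c=2.0, bounds=(-5.0, 5.0), seed=0):
    """Minimal cat swarm optimisation sketch (greedy seeking-mode selection)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros((n, dim))
    F = np.array([f(x) for x in X])
    g = int(np.argmin(F))
    gbest, gbest_f = X[g].copy(), F[g]

    for _ in range(n_iter):
        tracing = rng.random(n) < mr          # MR decides the mode of each cat
        for i in range(n):
            if tracing[i]:
                # tracing mode: velocity pulled towards Gbest
                V[i] = V[i] + c * rng.random(dim) * (gbest - X[i])
                X[i] = np.clip(X[i] + V[i], lo, hi)
            else:
                # seeking mode: SMP candidates, each perturbing about CDC of the
                # dimensions by plus or minus SRD percent of the current value
                cands = np.repeat(X[i][None, :], smp, axis=0)
                mask = rng.random((smp, dim)) < cdc
                signs = rng.choice([-1.0, 1.0], size=(smp, dim))
                cands = np.clip(cands + mask * signs * srd * cands, lo, hi)
                X[i] = min(cands, key=f)      # greedy pick instead of roulette wheel
            F[i] = f(X[i])
            if F[i] < gbest_f:
                gbest, gbest_f = X[i].copy(), F[i]
    return gbest, gbest_f

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    print(cat_swarm(sphere))
```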

CSO has been extended and applied to many problems, such as clustering (Santosa and Ningrum 2009), since its appearance. Pradhan and Panda (2012) presented a new multi-objective algorithm based on CSO by adopting the concept of Pareto dominance and an external archive. An enhanced parallel CSO was presented by adding the Taguchi method into the tracing mode of CSO to improve the accuracy and reduce the computational time (Tsai et al. 2012). Binary Cat Swarm Optimisation (BCSO) was proposed for dealing with discrete problems and was applied to the 0-1 knapsack problem (Sharafi et al. 2013). BCSO has also been used for the non-unicost set covering problem (Crawford et al. 2015), scheduling workflow applications in cloud environments (Kumar et al. 2017b) and the manufacturing cell design problem (Soto et al. 2019).

1.4 Glowworm swarm optimisation

In 2006, Krishnanand and Ghose presented glowworm swarm optimisation (GSO) for the optimisation of multimodal function problems (Krishnanand and Ghose 2006, 2009b). The agents in GSO are called glowworms; they carry luciferin, emit light and interact with other glowworms within their neighbourhood. Positions of glowworms represent candidate solutions, and the fitness values of the positions are evaluated by the objective function. The luciferin level of each glowworm is associated with the fitness of its current location and is updated over time. In the movement phase, glowworms are attracted by neighbours with a higher luciferin value, that is, a brighter glow.

The pseudo code of GSO is shown in Algorithm 15, where \(Dt_{ij}\) is the Euclidean distance between glowworms i and j. Each iteration of GSO consists of a luciferin-update phase and a movement phase. The algorithm initialises each glowworm i with a random position \(X_i\) and an equal level of luciferin \(l_i\). The luciferin \(l_i\) is then updated based on its previous value, which decays with time, and the fitness value \(f(X_i)\), which enhances the luciferin of the current position. Each glowworm then moves towards a neighbour j selected with probability \(p_{ij}\) from the set of neighbours \(NB_i\), which contains the glowworms that lie within the neighbourhood range \(r_d^i\) and have higher luciferin \(l_j\). The neighbourhood range \(r_d^i\) is updated adaptively to control the current number of neighbours \(|NB_i|\). The positions of the glowworms are updated through the movement phase to search for the optimal solution until the termination conditions are met.

[Algorithm 15: pseudo code of GSO]
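
A minimal Python sketch of the luciferin-update and movement phases follows. Since GSO is described for maximisation, the sketch maximises the negative of the objective; the initial luciferin level, decay rate, step size, neighbourhood parameters and bounds are illustrative assumptions rather than values from the original papers.

```python
import numpy as np

def gso(f, dim=2, n=30, n_iter=200, rho=0.4, gamma=0.6, step=0.05,
        beta=0.08, n_t=5, r_s=3.0, bounds=(-5.0, 5.0), seed=0):
    """Minimal glowworm swarm optimisation sketch (maximises -f, i.e. minimises f)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))
    luc = np.full(n, 5.0)                     # equal initial luciferin
    r_d = np.full(n, r_s)                     # per-glowworm neighbourhood range

    for _ in range(n_iter):
        fit = np.array([-f(x) for x in X])    # brighter = better (lower f)
        luc = (1 - rho) * luc + gamma * fit   # luciferin-update phase
        new_X = X.copy()
        for i in range(n):
            dist = np.linalg.norm(X - X[i], axis=1)
            nb = np.where((dist < r_d[i]) & (luc > luc[i]))[0]
            if nb.size:                       # movement phase: follow a brighter neighbour
                p = luc[nb] - luc[i]
                j = rng.choice(nb, p=p / p.sum())
                d = X[j] - X[i]
                new_X[i] = np.clip(X[i] + step * d / (np.linalg.norm(d) + 1e-12), lo, hi)
            # adaptive neighbourhood range, aiming for about n_t neighbours
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - nb.size)))
        X = new_X
    best = int(np.argmax(luc))
    return X[best], f(X[best])

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    print(gso(sphere))
```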

GSO has been successfully applied to different problems, and a number of variants of GSO have been presented to improve its performance. For example, Krishnanand and Ghose (2009a) implemented GSO in a multi-robot system to locate multiple signal sources such as light, heat and sound. Ding et al. (2017) applied GSO to optimise the parameters of the wavelet twin support vector machine, determining the parameters automatically before the training process. For the multi-objective environmental economic dispatch (MOEED) problem, Jayakumar and Venkatesh (2014) presented a new GSO incorporating an overall fitness ranking tool and a time-varying step size. An improved GSO was proposed by Wu et al. (2012) to enhance the accuracy and convergence rate by using a greedy acceptance criterion and new movement formulas in the movement phase.

1.5 Firefly algorithm

Yang (2009, 2010a) proposed the firefly algorithm (FA) in 2009, drawing inspiration from the flashing characteristics of fireflies, and applied it to the pressure vessel design optimisation problem in 2010. The attractiveness of a firefly is decided by its brightness (light intensity), which is evaluated by the objective function and decays with the distance from it. A firefly is attracted to brighter fireflies and explores the search space by moving towards them.

The pseudo code of FA is shown in Algorithm 16. The firefly swarm is initialised randomly in the D-dimensional search space, and the light intensity of firefly i is evaluated by the objective function \(f(X_i)\). Each firefly moves towards every other firefly in the swarm with a higher light intensity. The movement distance is related to the attractiveness \(\chi\) between firefly i and firefly j, and to a random parameter \(\psi\). The attractiveness \(\chi\) is updated after each movement via \(e^{(-\varphi \cdot Dt_{ij}^2)}\), since it varies with the distance \(Dt_{ij}\). The best position found during the iteration process is recorded as Gbest and is output when the algorithm stops.

Algorithm 16 Pseudo code of FA (pseudocode figure not reproduced)
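The movement rule can be sketched in a few lines of Python; the base attractiveness chi0, absorption coefficient varphi and randomisation weight psi are illustrative placeholders rather than recommended settings, and a minimisation objective is assumed (lower f means brighter).

```python
import numpy as np

def firefly_step(X, f, chi0=1.0, varphi=1.0, psi=0.2):
    """One FA iteration for a minimisation problem (lower f means brighter)."""
    intensity = np.array([f(x) for x in X])      # light intensity from the objective
    n, d = X.shape
    for i in range(n):
        for j in range(n):
            if intensity[j] < intensity[i]:      # firefly j is brighter than i
                dist2 = np.sum((X[i] - X[j]) ** 2)
                chi = chi0 * np.exp(-varphi * dist2)   # attractiveness decays with distance
                X[i] += chi * (X[j] - X[i]) + psi * (np.random.rand(d) - 0.5)
                intensity[i] = f(X[i])
    return X

# Toy usage on the sphere function in 5-D.
f = lambda x: np.sum(x ** 2)
X = np.random.uniform(-5, 5, (15, 5))
for _ in range(50):
    X = firefly_step(X, f)
gbest = X[np.argmin([f(x) for x in X])]
```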

Many variants of FA have been proposed to improve its performance on different problems. For example, Yang (2013) extended FA to solve multi-objective continuous problems. To accelerate the convergence rate and enhance the global search ability, a random attraction model and a Cauchy jump were incorporated into FA by Wang et al. (2016a). Wahid et al. (2019) combined the crossover operator of GA with the firefly position movement stage of FA to enhance its exploitation capability. FA has also been used in a wide range of applications such as clustering (Senthilnath et al. 2011), load frequency control (Sekhar et al. 2016), privacy preserving (Langari et al. 2019) and blast-induced air overpressure forecasting (Zhou et al. 2019a).

1.6 Cuckoo search

Cuckoo Search (CS) was proposed by Yang and Deb (2009) based on the breeding (brood-parasitism) behaviour of cuckoos and the Lévy-flight behaviour of some birds. Each egg in a nest represents a solution and a cuckoo egg represents a new solution. The algorithm assumes that each cuckoo lays one egg at a time and that each nest holds one egg. A cuckoo generates a new egg (solution) via a Lévy flight and leaves it in a host nest, where only the egg of higher quality is kept. The number of available host nests is fixed, and a host bird may abandon its nest and build a new one with a certain probability. The solution update in CS is similar to that of PSO, but the Lévy flight used in CS is more beneficial to exploration.

The pseudo code of CS is shown in Algorithm 17. The N host nests are initialised randomly in the D-dimensional search space. Lévy flights are then performed to generate new solutions, which are compared with the old ones. The moving step size Step is also related to the difference between \(X_i\) and the best position Gbest. If a new solution is better, it is accepted as the new egg in the nest. After that, a fraction \(P_{ab}\) of the worst nests is abandoned and replaced by new nests with new eggs generated via Lévy flights at new locations. The best position found during the evolution is recorded as Gbest and is output when the termination conditions are met.

Algorithm 17 Pseudo code of CS (pseudocode figure not reproduced)
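A minimal sketch of the CS loop is given below, under stated assumptions: Lévy steps are drawn with Mantegna's method, the step is scaled by the difference between \(X_i\) and Gbest, and the abandoned worst nests are simply rebuilt at random; the constants alpha, p_ab and the search bounds are illustrative.

```python
import numpy as np
from math import gamma as gamma_fn

def levy(d, beta=1.5):
    """Draw a d-dimensional Lévy step using Mantegna's algorithm."""
    sigma = (gamma_fn(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, d)
    v = np.random.normal(0, 1, d)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, n=25, d=10, p_ab=0.25, alpha=0.01, iters=200):
    nests = np.random.uniform(-5, 5, (n, d))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        gbest = nests[np.argmin(fit)]
        for i in range(n):
            # New egg via a Lévy flight; step scaled by the distance to Gbest.
            new = nests[i] + alpha * levy(d) * (nests[i] - gbest)
            if f(new) < fit[i]:                      # keep the better egg
                nests[i], fit[i] = new, f(new)
        # Abandon a fraction p_ab of the worst nests and rebuild them randomly.
        k = max(1, int(p_ab * n))
        worst = np.argsort(fit)[-k:]
        nests[worst] = np.random.uniform(-5, 5, (k, d))
        fit[worst] = [f(x) for x in nests[worst]]
    return nests[np.argmin(fit)]

best = cuckoo_search(lambda x: np.sum(x ** 2))
```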

CS has attracted much attention since its introduction due to its strong performance. Yang and Deb (2010) validated the effectiveness of CS on engineering optimisation problems such as spring design and welded beam structures. Walton et al. (2011) proposed a modified CS that allows information exchange among the top solutions. For training feedforward neural networks, an improved CS was proposed that employs a parameter tuning strategy (Valian et al. 2011). To solve the travelling salesman problem (TSP), a discrete CS was presented by reconstructing its population and employing a new category of cuckoos (Ouaarab et al. 2014).

1.7 Bat algorithm

Yang (2010b) presented the Bat Algorithm (BA) in 2010, based on the echolocation behaviour of bats, which enables them to locate prey and avoid obstacles in the dark. Bats sense distance and distinguish prey from background barriers via echolocation. To search for prey, a bat at position \(X_i\) flies with velocity \(V_i\), pulse frequency \(Pf_i\), pulse emission rate \(Pr_i\) and loudness \(A_i\), all of which are automatically adjusted based on the proximity of the prey. BA can be considered a combination of PSO and a local search controlled by the loudness and pulse rate (Yang and Gandomi 2012).

The pseudo code of BA is shown in Algorithm 18, where \(\bar{A}\) is the average loudness of the bat swarm at the current iteration t. The position of a bat represents a candidate solution and the proximity of the prey represents the quality of that solution. After the initialisation, the velocity \(V_i\) and position \(X_i\) of each bat i are updated based on its pulse frequency \(Pf_i\) and the position of the global best bat Gbest. The pulse rate \(Pr_i\) controls whether the local search is performed, in which a new solution is generated via a local random walk around the best solution. If the new solution is better than the old one and a randomly generated value is less than the loudness \(A_i\), the new solution is accepted, \(Pr_i\) is increased and \(A_i\) is decreased. After that, the best solution Gbest is recorded and this process loops until the algorithm stops.

Algorithm 18 Pseudo code of BA (pseudocode figure not reproduced)
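The sketch below follows a common BA implementation of the update described above, assuming a minimisation objective; the frequency range, the initial loudness and pulse rate, and the constants used to decrease \(A_i\) and increase \(Pr_i\) are illustrative placeholders.

```python
import numpy as np

def bat_algorithm(f, n=20, d=10, iters=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    X = np.random.uniform(-5, 5, (n, d))
    V = np.zeros((n, d))
    A = np.full(n, 1.0)          # loudness A_i
    r0 = np.full(n, 0.5)         # initial pulse rate
    r = r0.copy()
    fit = np.array([f(x) for x in X])
    gbest = X[np.argmin(fit)].copy()

    for t in range(1, iters + 1):
        for i in range(n):
            # Global move guided by a random pulse frequency and Gbest.
            pf = f_min + (f_max - f_min) * np.random.rand()
            V[i] += (X[i] - gbest) * pf
            new = X[i] + V[i]
            # Local random walk around the best solution, gated by the pulse rate.
            if np.random.rand() > r[i]:
                new = gbest + 0.01 * np.random.randn(d) * A.mean()
            # Accept only if better and a random draw is below the loudness A_i.
            if f(new) < fit[i] and np.random.rand() < A[i]:
                X[i], fit[i] = new, f(new)
                A[i] *= alpha                            # loudness decreases
                r[i] = r0[i] * (1 - np.exp(-gamma * t))  # pulse rate increases
        gbest = X[np.argmin(fit)].copy()
    return gbest

best = bat_algorithm(lambda x: np.sum(x ** 2))
```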

Some variants of BA have been proposed to further improve its effectiveness and to address various optimisation problems such as multi-objective problems (Yang 2011) and binary problems (Mirjalili et al. 2014b). Gandomi and Yang (2014) proposed a chaotic BA, in which different chaotic systems replace the parameters of BA, and validated its effectiveness on numerical benchmark functions; the chaotic BA was later applied to the constrained economic dispatch problem (Adarsh et al. 2016). An improved BA was proposed by designing an optimal foraging strategy to guide the search direction of the bats and by employing a random disturbance strategy to increase the global search capability (Cai et al. 2016). Hybridising with other algorithms is also an effective way to improve the search ability, such as hybridisation with DE (Yildizdan and Baykan 2020) and with CS (Shehab et al. 2019).

1.8 Fireworks algorithm

The Fireworks Algorithm (FWA) was proposed by Tan and Zhu (2010) based on the phenomenon of fireworks explosions. After an explosion, a number of sparks fill the local area around the firework, which is regarded in FWA as a local search in the space around a specific position. Two search mechanisms are employed in FWA by simulating two kinds of fireworks explosion: a well-manufactured explosion generates numerous sparks within a small amplitude, whereas a bad explosion generates only a few sparks spread over a larger amplitude. After this, N positions are selected from the current sparks and set as the fireworks for the next explosion.

The pseudo code of FWA is shown in Algorithm 19, where \(N_{sp}\) controls the number of sparks yielded by a firework and \(\varepsilon\) is the smallest representable machine constant, used to avoid division by zero. In the initialisation, N fireworks are set off randomly in the search space. For each firework i, the number of sparks \(\hat{N_{sf}^i}\) and the amplitude of explosion \(A_i\) are calculated. \(\hat{N_{sf}^i}\) sparks are generated randomly within the amplitude \(A_i\), while a Gaussian explosion is designed to generate \({\hat{m}}\) special sparks to maintain diversity. The positions of all new sparks are evaluated and the best position \(X_*\) is kept for the next explosion. \(N-1\) other positions are also kept, selected from all the sparks based on the probability \(p(X_i)\), which is related to their distance \(R(X_i)\) to the other sparks. This process repeats until the termination conditions are met, and the best position is output as the optimal solution.

Algorithm 19 Pseudo code of FWA (pseudocode figure not reproduced)
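The sketch below illustrates the core allocation idea under stated assumptions: spark counts and explosion amplitudes are derived from relative fitness (better fireworks receive more sparks and smaller amplitudes), sparks are scattered uniformly within the amplitude, and the next generation keeps the best position plus N-1 randomly chosen sparks; the Gaussian sparks and the distance-based selection probability \(p(X_i)\) are omitted for brevity.

```python
import numpy as np

def fwa_step(F, f, n_sp=50, a_hat=5.0, bounds=(-5, 5)):
    """One simplified FWA explosion step for minimisation (best spark kept)."""
    eps = np.finfo(float).eps
    fit = np.array([f(x) for x in F])
    n, d = F.shape
    # Better fireworks (smaller fitness) yield more sparks ...
    counts = n_sp * (fit.max() - fit + eps) / (np.sum(fit.max() - fit) + eps)
    # ... and explode with smaller amplitudes.
    amps = a_hat * (fit - fit.min() + eps) / (np.sum(fit - fit.min()) + eps)

    sparks = []
    for i in range(n):
        for _ in range(max(1, int(counts[i]))):
            s = F[i] + np.random.uniform(-amps[i], amps[i], d)
            sparks.append(np.clip(s, *bounds))
    sparks = np.vstack([F] + sparks)
    s_fit = np.array([f(x) for x in sparks])

    # Keep the best position; fill the rest of the next generation at random
    # (the original FWA uses a distance-based selection probability instead).
    best = sparks[np.argmin(s_fit)]
    others = sparks[np.random.choice(len(sparks), n - 1, replace=False)]
    return np.vstack([best, others])

# Toy usage: 5 fireworks in 2-D on the sphere function.
f = lambda x: np.sum(x ** 2)
F = np.random.uniform(-5, 5, (5, 2))
for _ in range(30):
    F = fwa_step(F, f)
```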

FWA has been developed and applied to many problems. For example, Zheng et al. (2015) improved FWA by adding DE operators to guide the generation of new positions, improving diversity and avoiding premature convergence. Considering the effect of information utilisation, a novel Guiding Spark (GS) was introduced by Li et al. (2016) to improve the exploration and exploitation abilities of FWA. He et al. (2019) presented a Discrete Multi-Objective FWA (DMOFWA) to solve the Multi-Objective Flow-Shop Scheduling Problem with Sequence-Dependent Setup Times (MOFSP-SDST); opposition-based learning and cluster analysis are adopted in DMOFWA to improve its exploration ability and to cluster firework individuals, respectively.

1.9 Fruit fly optimisation

Based on the foraging behaviour of fruit flies, the Fruit Fly Optimisation Algorithm (FOA) (Pan 2011, 2012) was proposed by Pan in 2011 with two search processes: osphresis search and vision search. In the osphresis search process, fruit flies locate the food source by smell and fly towards the corresponding positions; in the vision search process, they use vision to converge on the fruit fly with the highest smell concentration, as evaluated by the fitness function.

The pseudo code of FOA is shown in Algorithm 20. The initial position of the fruit fly swarm is generated randomly in the search space as \((x_{axis}, y_{axis})\). In the osphresis search process, each fruit fly i flies to a new position \((x_{i}, y_{i})\) around the swarm position, in a random direction and over a random distance RandomValue, to search for food. The smell concentration judgement value \(Sm_i\) is calculated as the reciprocal of the distance to the origin \(Dist_i\). In the vision search process, the best fruit fly Gbest with the highest smell concentration Smell, as evaluated by the fitness function \(f(Sm_i)\), is selected and its position is set as the fruit fly swarm position \((x_{axis}, y_{axis})\) for the next generation. These two processes repeat until FOA stops.

Algorithm 20 Pseudo code of FOA (pseudocode figure not reproduced)
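A minimal sketch of FOA in its original two-dimensional formulation is shown below; the swarm size, search radius and the toy fitness function of the smell judgement value are illustrative.

```python
import numpy as np

def foa(fitness, n=20, iters=200, radius=1.0):
    """Minimal FOA: osphresis (random local flights) then vision (move swarm to the best fly)."""
    x_axis, y_axis = np.random.uniform(-10, 10, 2)   # initial swarm position
    gbest_smell, gbest_pos = -np.inf, (x_axis, y_axis)

    for _ in range(iters):
        # Osphresis search: each fly moves randomly around the swarm position.
        xs = x_axis + np.random.uniform(-radius, radius, n)
        ys = y_axis + np.random.uniform(-radius, radius, n)
        dist = np.sqrt(xs ** 2 + ys ** 2)
        sm = 1.0 / (dist + 1e-12)                    # smell concentration judgement value
        smell = np.array([fitness(s) for s in sm])
        best = np.argmax(smell)
        # Vision search: the swarm converges on the fly with the highest smell.
        if smell[best] > gbest_smell:
            gbest_smell = smell[best]
            x_axis, y_axis = xs[best], ys[best]
            gbest_pos = (x_axis, y_axis)
    return gbest_pos, gbest_smell

# Toy usage: the fitness peaks when the smell judgement value Sm is close to 2.
pos, val = foa(lambda sm: -(sm - 2.0) ** 2)
```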

A number of variants of FOA have been proposed and the algorithm has been applied to different problems. To solve the multidimensional knapsack problem (MKP), for instance, Wang et al. (2013) proposed a binary FOA that uses a group generating probability vector to generate new solutions. Pan et al. (2014) introduced a new control parameter to adaptively tune the search range in the osphresis search process and designed a new solution generation scheme to improve the accuracy of FOA. Mousavi et al. (2019) presented a hybrid intelligent FOA by combining it with a heuristic algorithm to generate and classify fuzzy rules and to select the best rules.

1.10 Chicken swarm optimisation

Chicken Swarm Optimisation (CSO) is a relatively new SOA proposed by Meng et al. (2014); it simulates the hierarchical order and behaviour of a chicken swarm. The chicken swarm is divided into groups, each consisting of one rooster, several hens and some chicks. In CSO, the chickens with the best fitness values are designated as roosters and those with the worst fitness values act as chicks; the rest are designated as hens, and the mother-child relationship between hens and chicks is established randomly. Different kinds of chickens follow different rules to search for food, which leads to different position-update rules.

The pseudo code of CSO is shown in Algorithm 21, where \(Gaussian(0,\sigma ^2)\) is a Gaussian distribution with mean 0 and variance \(\sigma ^2\), \(\varepsilon\) is the smallest representable machine constant (used to avoid division by zero), and \(X_r\) is a randomly selected rooster. In the initialisation, the chicken swarm is randomly distributed in the search space, followed by the grouping of the swarm and the determination of the hierarchical order in each group, which are only updated every G iterations. Different chickens in the swarm adopt different movement rules. Roosters with better fitness \(f(X)\) can search in a wider area than those with worse fitness. The movement of a hen is determined by the position of the rooster in its group \(X_{ri}\) and of a randomly selected rooster or hen \(X_{rh}\), while chicks follow their mother \(X_{mi}\) when foraging. The new positions are evaluated and a chicken's position is updated if the new position is better.

Algorithm 21 Pseudo code of CSO (pseudocode figure not reproduced)
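The sketch below illustrates the three movement rules under stated assumptions: the arrays `roles`, `groups` (each hen's index maps to the rooster of its group) and `mothers` (each chick's index maps to its mother hen) are assumed to have been pre-computed when the hierarchy was last re-established, a minimisation objective is used, and the chick learning factor FL and the exponent clipping are illustrative simplifications.

```python
import numpy as np

def cso_step(X, fit, roles, groups, mothers, f, eps=np.finfo(float).eps):
    """One simplified CSO iteration: roosters, hens and chicks use different update rules."""
    n, d = X.shape
    X_new = X.copy()
    roosters = np.where(roles == 'rooster')[0]
    for i in range(n):
        if roles[i] == 'rooster':
            # Roosters with better fitness perturb their position more widely.
            k = np.random.choice(roosters[roosters != i]) if len(roosters) > 1 else i
            sigma2 = 1.0 if fit[i] <= fit[k] else np.exp((fit[k] - fit[i]) / (abs(fit[i]) + eps))
            X_new[i] = X[i] * (1 + np.random.normal(0, np.sqrt(sigma2), d))
        elif roles[i] == 'hen':
            ri = groups[i]                     # rooster of this hen's group
            cand = [j for j in range(n) if roles[j] != 'chick' and j not in (i, ri)] or [ri]
            rh = np.random.choice(cand)        # another rooster or hen from the swarm
            c1 = np.exp((fit[i] - fit[ri]) / (abs(fit[i]) + eps))
            c2 = np.exp(np.clip(fit[rh] - fit[i], -50, 50))   # clipped to avoid overflow
            X_new[i] = X[i] + c1 * np.random.rand(d) * (X[ri] - X[i]) \
                            + c2 * np.random.rand(d) * (X[rh] - X[i])
        else:                                  # chick follows its mother hen
            fl = np.random.uniform(0, 2)
            X_new[i] = X[i] + fl * (X[mothers[i]] - X[i])
    # Greedy replacement: keep the new position only if it improves the fitness.
    new_fit = np.array([f(x) for x in X_new])
    improved = new_fit < fit
    X[improved], fit[improved] = X_new[improved], new_fit[improved]
    return X, fit
```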

CSO has been developed and applied to a number of problems. For example, to solve the 0-1 knapsack problem, Han and Liu (2017) proposed a binary improved CSO by integrating a greedy strategy and a mutation strategy. An inertia weight and a learning factor were adopted in CSO to avoid premature convergence when dealing with high-dimensional optimisation problems (Wu et al. 2015). Elsawy et al. (2019) presented a new wrapper feature selection algorithm by combining CSO and GA.

1.11 Parameter tuning of PSO and ACO

Here we tuned two of the most well-known algorithms (PSO and ACO) to demonstrate that fine-tuning these algorithms in their original form still does not close the gap to the current state-of-the-art algorithms. As is well known, the weighting coefficient w of the previous velocity V is the most important parameter in PSO, and \(\varPhi\) and q are the two most important parameters in ACO. The effects of different values of \(w\), \(\varPhi\) and q on the performance of PSO/ACO when addressing 30-D problems are presented in Tables 5, 6 and 7. The best results are shown in bold and '*' indicates statistical significance. These tables show that different parameter values yield the best results on different problems, and even the best of them do not match the results obtained by the state-of-the-art algorithms.
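For reference, the classic PSO velocity update in which the inertia weight w appears is sketched below; the acceleration coefficients c1 and c2 are illustrative defaults rather than the settings used in the experiments. Varying only w, as in Table 5, changes how strongly each particle retains its previous search direction: larger values favour exploration, smaller values favour exploitation.

```python
import numpy as np

def pso_update(X, V, pbest, gbest, w, c1=2.0, c2=2.0):
    """One PSO velocity/position update; w weights the previous velocity V."""
    n, d = X.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V, V
```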

Table 5 The effects of w on the performance of PSO when addressing 30-D problems
Table 6 The effects of \(\varPsi\) on the performance of ACO when addressing 30-D problems
Table 7 The effects of q on the performance of ACO when addressing 30-D problems
Table 8 Nomenclatures
Fig. 6 The taxonomy based on the proposed general framework where G is Guided, UG is Unguided, C is Continuous, E is Event-based (for linear fusion)

Fig. 7 The taxonomy based on the proposed general framework where G is Guided, UG is Unguided, C is Continuous, E is Event-based (for nonlinear fusion)

Table 9 Results for 10-D problems
Table 10 Results for 10-D problems (continued from Table 9)
Table 11 Results for 30-D problems
Table 12 Results for 30-D problems (continued from Table 11)
Table 13 Results for 50-D problems
Table 14 Results for 50-D problems (continued from Table 13)
Table 15 Results for 100-D problems
Table 16 Results for 100-D problems (continued from Table 15)
Table 17 Rank of all algorithms based on average fitness for 10-D problems
Table 18 Rank of all algorithms based on average fitness for 30-D problems
Table 19 Rank of all algorithms based on average fitness for 50-D problems
Table 20 Rank of all algorithms based on average fitness for 100-D problems
Table 21 Computational complexity
