Improved monarch butterfly optimization for unconstrained global search and neural network training

Published in: Applied Intelligence

Abstract

This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm, which suffers from premature convergence and is therefore less suitable for solving real-world problems. The position-updating mechanism of MBO is modified to involve previous solutions in addition to the best solution obtained so far. To demonstrate the efficiency of the Improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from strong local-optima avoidance and fast convergence, which helps it outperform the basic MBO and a recent variant of the algorithm called greedy strategy and self-adaptive crossover operator MBO (GCMBO). For verification, the results of the proposed algorithm are also compared with those of nine other approaches from the literature; the comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO to challenging practical problems, it is further employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository, and the results are compared with a variety of techniques from the literature, including the original MBO and GCMBO. It is observed that IMBO significantly improves the learning of neural networks, demonstrating the merits of this algorithm for solving challenging problems.
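
The abstract does not reproduce the IMBO update equations, so the following is only a minimal Python sketch of the idea it describes: a position update that mixes a butterfly's previous solution with the best solution found so far. The function name, the blending scheme, and the noise term are illustrative assumptions, not the operator defined in the paper.

```python
import numpy as np

def update_position(x_prev, x_best, lower, upper, rng):
    """Illustrative IMBO-style move (assumed form, not the paper's equation):
    pull the previous solution toward the best-so-far solution with
    per-dimension random weights, add a small perturbation, and clip
    the result to the search bounds."""
    r = rng.random(x_prev.shape)                      # mixing weights in [0, 1)
    step = r * (x_best - x_prev)                      # attraction to the global best
    noise = 0.01 * (upper - lower) * rng.standard_normal(x_prev.shape)
    return np.clip(x_prev + step + noise, lower, upper)
```

Likewise, the neural-network training described in the abstract can be pictured as evolving the flat weight vector of a single-hidden-layer perceptron so that its training error is minimized. The network layout, the weight bounds, and the greedy replacement below are assumptions made for illustration; the paper's trainer uses the full IMBO algorithm with its own parameter settings.

```python
def mlp_forward(w, X, n_hidden):
    """Single-hidden-layer MLP with sigmoid units; the layout of the flat
    weight vector w is assumed here purely for illustration."""
    n_in = X.shape[1]
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden]; i += n_hidden
    b2 = w[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # single output unit

def train_mlp(X, y, n_hidden=5, pop_size=30, iters=200, seed=0):
    """Evolve the MLP weights with the toy update above, minimizing the
    mean squared error on the training data (binary targets in y)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    lower, upper = -10.0, 10.0
    pop = rng.uniform(lower, upper, (pop_size, dim))
    mse = lambda w: np.mean((mlp_forward(w, X, n_hidden) - y) ** 2)
    fitness = np.array([mse(w) for w in pop])
    for _ in range(iters):
        best = pop[np.argmin(fitness)].copy()         # best solution so far
        for k in range(pop_size):
            cand = update_position(pop[k], best, lower, upper, rng)
            f = mse(cand)
            if f < fitness[k]:                        # keep only improvements
                pop[k], fitness[k] = cand, f
    i_best = np.argmin(fitness)
    return pop[i_best], fitness[i_best]
```

With a binary-classification dataset X, y in hand (for example one of the UCI sets mentioned in the abstract), a call such as `w, err = train_mlp(X, y)` would return the best weight vector found and its training error under this simplified scheme.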

Notes

  1. http://archive.ics.uci.edu/ml/

Author information

Corresponding author

Correspondence to Ibrahim Aljarah.

About this article

Cite this article

Faris, H., Aljarah, I. & Mirjalili, S. Improved monarch butterfly optimization for unconstrained global search and neural network training. Appl Intell 48, 445–464 (2018). https://doi.org/10.1007/s10489-017-0967-3
