Applied Intelligence, Volume 43, Issue 1, pp 150–161

How effective is the Grey Wolf optimizer in training multi-layer perceptrons

  • Seyedali Mirjalili

Abstract

This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training Multi-Layer Perceptrons (MLPs) for the first time. Eight standard datasets, five for classification and three for function approximation, are used to benchmark the performance of the proposed method. For verification, the results are compared with those of some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results show that the GWO algorithm provides very competitive results in terms of local optima avoidance. The results also demonstrate that the proposed trainer achieves a high level of accuracy in both classification and function approximation.
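To make the training scheme in the abstract concrete, the sketch below shows one common way an evolutionary trainer such as GWO can be wired to an MLP: each wolf is a flattened vector of all connection weights and biases, the fitness function is the mean squared error on the training set, and the standard GWO position update pulls each wolf toward the three best solutions (alpha, beta, delta). This is a minimal illustrative sketch in Python, not the paper's implementation; the XOR dataset, network size, population size, and iteration count are hypothetical choices made here for brevity.

# A minimal sketch (not the paper's code) of GWO-based MLP training:
# each wolf is a flattened weight/bias vector, fitness is training MSE,
# and the standard GWO update moves wolves toward alpha, beta, and delta.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (hypothetical stand-in for the UCI benchmarks): XOR classification.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hidden, n_out = 2, 4, 1
dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out  # all weights + biases

def mse(vec):
    """Decode a candidate vector into MLP weights and return training MSE."""
    i = 0
    W1 = vec[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = vec[i:i + n_hidden]; i += n_hidden
    W2 = vec[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = vec[i:i + n_out]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output layer
    return np.mean((out - y) ** 2)

n_wolves, max_iter = 20, 500
wolves = rng.uniform(-1, 1, (n_wolves, dim))

for t in range(max_iter):
    fitness = np.array([mse(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three best wolves
    a = 2 - 2 * t / max_iter                  # linearly decreases from 2 to 0
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0  # average of the three pulls
        wolves[i] = new_pos

best = wolves[np.argmin([mse(w) for w in wolves])]
print("final MSE:", mse(best))

Because every weight and bias becomes one dimension of the search space, the dimensionality grows quickly with network size; this is why evolutionary trainers are typically compared, as in this paper, on their ability to avoid the many local optima of the resulting error surface.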

Keywords

Grey Wolf optimizer · MLP · Learning neural network · Evolutionary algorithm · Multi-layer perceptron


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. School of Information and Communication Technology, Griffith University, Nathan, Brisbane, Australia
