Automatic selection of hidden neurons and weights in neural networks using grey wolf optimizer based on a hybrid encoding scheme

  • Hossam Faris
  • Seyedali Mirjalili
  • Ibrahim Aljarah
Original Article

Abstract

In neural networks, finding optimal values for the number of hidden neurons and the connection weights simultaneously is considered a challenging task. This is because altering the number of hidden neurons substantially changes the entire structure of the network and increases the complexity of the training process, which therefore requires special consideration. In fact, the number of decision variables grows in proportion to the number of hidden nodes when training neural networks. As one of the seminal attempts to address these challenges, a hybrid encoding scheme is first proposed in which each candidate solution encodes both the number of hidden neurons and the connection weights. A set of recent and well-regarded stochastic population-based algorithms is then employed to optimize both quantities in a single-hidden-layer feedforward neural network (FFNN). In the experiments, twenty-three standard classification datasets are used to benchmark the proposed technique qualitatively and quantitatively. The results show that the hybrid encoding scheme allows optimization algorithms to conveniently find optimal values for both the number of hidden nodes and the connection weights. Moreover, the recently proposed grey wolf optimizer (GWO) outperformed the other algorithms.
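The abstract does not spell out the encoding details, but the core idea, a single real-valued vector whose first gene selects the number of hidden neurons and whose remaining genes hold the weights of a maximally sized network, can be sketched as below. This is a minimal illustration, not the paper's exact scheme; the gene layout, sigmoid activations, and all function names here are assumptions.

```python
import numpy as np

def decode(vector, n_inputs, max_hidden):
    """Decode one hybrid-encoded candidate solution.

    vector[0] (assumed to lie in [0, 1]) selects h, the number of
    active hidden neurons; vector[1:] stores enough weights for a
    network with max_hidden hidden neurons, of which only the first
    h neurons' weights are actually used.
    """
    h = int(round(vector[0] * (max_hidden - 1))) + 1  # h in [1, max_hidden]
    w = vector[1:]
    i = 0
    W1 = w[i:i + n_inputs * h].reshape(n_inputs, h)   # input -> hidden weights
    i += n_inputs * h
    b1 = w[i:i + h]                                   # hidden biases
    i += h
    W2 = w[i:i + h]                                   # hidden -> output weights
    i += h
    b2 = w[i]                                         # output bias
    return h, W1, b1, W2, b2

def predict(vector, X, max_hidden):
    """Forward pass of the decoded single-hidden-layer FFNN."""
    h, W1, b1, W2, b2 = decode(vector, X.shape[1], max_hidden)
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))     # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output
```

Under this layout the optimizer (GWO or any other population-based algorithm) simply searches a fixed-length vector of dimension 1 + n_inputs*max_hidden + 2*max_hidden + 1, and the decoding step silently ignores the weight genes belonging to inactive hidden neurons, so structure and weights are optimized together without variable-length chromosomes.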

Keywords

Grey wolf optimizer · GWO · Global optimization · Multilayer perceptron · Neural network · Optimization


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Business Information Technology Department, King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan
  2. School of Information and Communication Technology, Griffith University, Brisbane, Australia
