
Automatic selection of hidden neurons and weights in neural networks using grey wolf optimizer based on a hybrid encoding scheme

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

In neural networks, simultaneously finding optimal values for the number of hidden neurons and the connection weights is a challenging task. This is because altering the number of hidden neurons substantially changes the structure of the network and increases the complexity of the training process, which requires special consideration. In fact, the number of decision variables changes in proportion to the number of hidden nodes when training neural networks. As one of the seminal attempts, a hybrid encoding scheme is first proposed to deal with these challenges. A set of recent and well-regarded stochastic population-based algorithms is then employed to optimize the number of hidden neurons and connection weights in a single-hidden-layer feedforward neural network (FFNN). In the experiments, twenty-three standard classification datasets are used to benchmark the proposed technique qualitatively and quantitatively. The results show that the hybrid encoding scheme allows the optimization algorithms to conveniently find optimal values for both the number of hidden nodes and the connection weights. The recently proposed grey wolf optimizer (GWO) also outperformed the other algorithms.
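To make the idea of a hybrid encoding concrete, the sketch below shows one plausible way to encode a single-hidden-layer FFNN as a fixed-length real vector: the first gene decodes to the number of hidden neurons, and the remaining genes hold the weights and biases of the largest admissible network, of which only the portion required by the decoded architecture is used when computing fitness. This is a minimal illustration rather than the paper's implementation; the decoding convention, the toy dataset, and the random-sampling loop standing in for GWO are all assumptions.

```python
import numpy as np

# Hypothetical hybrid encoding for a single-hidden-layer FFNN.
# Gene 0 encodes the number of hidden neurons; the remaining genes hold
# weights/biases sized for the largest allowed network, of which only the
# first block is used. The random search below is a placeholder for GWO.

N_IN, N_OUT, MAX_HIDDEN = 4, 1, 10

def vector_length():
    # 1 gene for the hidden-neuron count, plus weights/biases for the
    # largest admissible architecture.
    return 1 + MAX_HIDDEN * (N_IN + 1) + (MAX_HIDDEN + 1) * N_OUT

def decode(vec):
    """Split a flat candidate vector into (h, W1, b1, W2, b2)."""
    h = int(1 + np.rint((MAX_HIDDEN - 1) * np.clip(vec[0], 0.0, 1.0)))
    genes = vec[1:]
    W1 = genes[: MAX_HIDDEN * N_IN].reshape(MAX_HIDDEN, N_IN)[:h]
    b1 = genes[MAX_HIDDEN * N_IN: MAX_HIDDEN * (N_IN + 1)][:h]
    rest = genes[MAX_HIDDEN * (N_IN + 1):]
    W2 = rest[: MAX_HIDDEN * N_OUT].reshape(N_OUT, MAX_HIDDEN)[:, :h]
    b2 = rest[MAX_HIDDEN * N_OUT:]
    return h, W1, b1, W2, b2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fitness(vec, X, y):
    """Classification error of the decoded network (lower is better)."""
    _, W1, b1, W2, b2 = decode(vec)
    hidden = sigmoid(X @ W1.T + b1)
    out = sigmoid(hidden @ W2.T + b2).ravel()
    return np.mean((out > 0.5).astype(int) != y)

# Toy data: label is 1 when the sum of the 4 features exceeds 2.
rng = np.random.default_rng(0)
X = rng.random((200, N_IN))
y = (X.sum(axis=1) > 2.0).astype(int)

# Placeholder search loop; a GWO implementation would instead update a
# wolf population toward the alpha/beta/delta solutions each iteration.
best_vec, best_err = None, np.inf
for _ in range(500):
    cand = np.concatenate([rng.random(1),
                           rng.uniform(-1, 1, vector_length() - 1)])
    err = fitness(cand, X, y)
    if err < best_err:
        best_vec, best_err = cand, err

print(f"best hidden neurons: {decode(best_vec)[0]}, error: {best_err:.3f}")
```

Sizing every candidate for the largest allowed network keeps the vector length constant, so a continuous population-based optimizer such as GWO can vary the architecture without any change to its update equations.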



Author information


Corresponding author

Correspondence to Ibrahim Aljarah.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Faris, H., Mirjalili, S. & Aljarah, I. Automatic selection of hidden neurons and weights in neural networks using grey wolf optimizer based on a hybrid encoding scheme. Int. J. Mach. Learn. & Cyber. 10, 2901–2920 (2019). https://doi.org/10.1007/s13042-018-00913-2

