
RDE-OP: A Region-Based Differential Evolution Algorithm Incorporating Opposition-Based Learning for Optimising the Learning Process of Multi-layer Neural Networks

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12694)

Abstract

Learning in multi-layer neural networks (MLNNs) involves finding appropriate weights and biases and is a challenging and important task since the performance of an MLNN is directly dependent on its weights. Conventional algorithms such as back-propagation suffer from difficulties including a tendency to become stuck in local optima. Population-based metaheuristic algorithms can be used to address these issues. In this paper, we propose a novel learning approach, RDE-OP, based on differential evolution (DE) boosted by a region-based scheme and an opposition-based learning strategy. DE is a population-based metaheuristic algorithm which has shown good performance in solving optimisation problems. Our approach integrates two effective concepts with DE. First, using a clustering algorithm, we identify regions in the search space and select the cluster centres to represent them. Then, an updating scheme is proposed to include these cluster centres in the current population. In the next step, our proposed algorithm employs a quasi-opposition-based learning strategy for improved exploration of the search space. Experimental results on different datasets, and in comparison with both conventional and population-based approaches, convincingly indicate the excellent performance of RDE-OP.
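The components described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only, not the paper's implementation: it assumes DE/rand/1/bin as the base variant, k-means as the clustering algorithm, a greedy rule that lets cluster centres replace the worst individuals, and quasi-opposite sampling between the interval centre and the opposite point; the paper's actual updating scheme and parameter settings may differ. For neural network training, the fitness function `f` would decode the candidate vector into MLNN weights and biases and return, e.g., a classification error; a sphere function stands in here.

```python
import numpy as np

def quasi_opposite(pop, lo, hi, rng):
    # Quasi-opposite points: sampled uniformly between the interval
    # centre (lo + hi) / 2 and the opposite point lo + hi - x.
    centre = (lo + hi) / 2.0
    opposite = lo + hi - pop
    return centre + rng.random(pop.shape) * (opposite - centre)

def kmeans(points, k, rng, iters=10):
    # Plain k-means, used to locate promising regions of the search space;
    # the cluster centres act as region representatives.
    centres = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return centres

def rde_op(f, dim, lo=-1.0, hi=1.0, pop_size=20, k=3,
           F=0.5, CR=0.9, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        # Standard DE/rand/1/bin generation step.
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True  # guarantee one mutant gene
            trial = np.where(mask, mutant, pop[i])
            t_fit = f(trial)
            if t_fit < fit[i]:
                pop[i], fit[i] = trial, t_fit
        # Region scheme: cluster centres replace worse individuals.
        centres = kmeans(pop, k, rng)
        c_fit = np.array([f(c) for c in centres])
        worst = np.argsort(fit)[-k:]
        for j, w in enumerate(worst):
            if c_fit[j] < fit[w]:
                pop[w], fit[w] = centres[j], c_fit[j]
        # Quasi-opposition: keep the best pop_size individuals from the
        # union of the current and quasi-opposite populations.
        q_pop = quasi_opposite(pop, lo, hi, rng)
        q_fit = np.array([f(x) for x in q_pop])
        merged = np.vstack([pop, q_pop])
        merged_fit = np.concatenate([fit, q_fit])
        keep = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[keep], merged_fit[keep]
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Sphere function as a stand-in fitness; for MLNN training this would be
# the network error under the decoded weights.
best_x, best_f = rde_op(lambda x: float((x ** 2).sum()), dim=5)
```

The quasi-opposition selection step is elitist (the best individuals of the merged population always survive), so the best fitness is non-increasing across generations, while the cluster-centre injection and opposite sampling provide the extra exploration the abstract describes.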


Notes

  1.

    https://archive.ics.uci.edu/ml/index.php.


Acknowledgements

This work was financially supported by the RFBR under research project 18-29-03225.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Mousavirad, S.J., Schaefer, G., Korovin, I., Oliva, D. (2021). RDE-OP: A Region-Based Differential Evolution Algorithm Incorporating Opposition-Based Learning for Optimising the Learning Process of Multi-layer Neural Networks. In: Castillo, P.A., Jiménez Laredo, J.L. (eds.) Applications of Evolutionary Computation. EvoApplications 2021. Lecture Notes in Computer Science, vol. 12694. Springer, Cham. https://doi.org/10.1007/978-3-030-72699-7_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-72699-7_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72698-0

  • Online ISBN: 978-3-030-72699-7

  • eBook Packages: Computer Science (R0)
