Can Differential Evolution Be an Efficient Engine to Optimize Neural Networks?

  • Marco Baioletti
  • Gabriele Di Bari
  • Valentina Poggioni
  • Mirco Tracolli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10710)


In this paper we present an algorithm that optimizes artificial neural networks using Differential Evolution. The evolutionary algorithm is applied according to the conventional neuroevolution approach, i.e. it evolves the network weights in place of backpropagation or other gradient-based optimization methods. A batch system, similar to the one used in stochastic gradient descent, is adopted to reduce the computation time. Preliminary experimental results are very encouraging: we obtained good performance even on real classification datasets such as MNIST, which are usually considered prohibitive for this kind of approach.
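The scheme described in the abstract, namely a standard DE/rand/1/bin loop applied to flattened network weights, with fitness evaluated on random mini-batches as in stochastic gradient descent, can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual setup: the toy dataset, the 20-8-2 architecture, and the hyper-parameters (population size, F, CR, batch size) are chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in dataset (the paper uses real datasets such as MNIST):
# 200 samples, 20 features, 2 linearly separable classes.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Network 20 -> 8 -> 2; all weights flattened into one DE individual.
SHAPES = [(20, 8), (8,), (8, 2), (2,)]
DIM = sum(int(np.prod(s)) for s in SHAPES)  # 186 parameters

def unpack(theta):
    parts, i = [], 0
    for s in SHAPES:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts  # W1, b1, W2, b2

def forward(theta, Xb):
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(Xb @ W1 + b1) @ W2 + b2

def loss(theta, Xb, yb):
    # Cross-entropy on the batch serves as the fitness to minimize.
    logits = forward(theta, Xb)
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)
    return -float(np.mean(np.log(p[np.arange(len(yb)), yb] + 1e-12)))

NP, F, CR, BATCH = 30, 0.5, 0.9, 32   # illustrative hyper-parameters
pop = rng.normal(scale=0.5, size=(NP, DIM))
init_best = min(loss(ind, X, y) for ind in pop)

for gen in range(200):
    # Batch system: each generation, all individuals are scored on the
    # same random mini-batch, as in stochastic gradient descent.
    idx = rng.choice(len(X), size=BATCH, replace=False)
    Xb, yb = X[idx], y[idx]
    fit = [loss(ind, Xb, yb) for ind in pop]
    for i in range(NP):
        r = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[r[0]], pop[r[1]], pop[r[2]]
        mutant = a + F * (b - c)                  # DE/rand/1 mutation
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True            # binomial crossover
        trial = np.where(mask, mutant, pop[i])
        if loss(trial, Xb, yb) <= fit[i]:         # greedy selection
            pop[i] = trial

best = min(pop, key=lambda ind: loss(ind, X, y))
best_loss = loss(best, X, y)
acc = float(np.mean(forward(best, X).argmax(axis=1) == y))
```

Note the design choice the abstract alludes to: the fitness is re-evaluated on a fresh mini-batch every generation, so the cost per generation is bounded by the batch size rather than the dataset size, at the price of a noisy fitness signal.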



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Marco Baioletti (1)
  • Gabriele Di Bari (1)
  • Valentina Poggioni (1), corresponding author
  • Mirco Tracolli (1)

  1. University of Perugia, Perugia, Italy
