
Neural Processing Letters, Volume 17, Issue 1, pp 93–105

Differential Evolution Training Algorithm for Feed-Forward Neural Networks

  • Jarmo Ilonen
  • Joni-Kristian Kamarainen
  • Jouni Lampinen

Abstract

An evolutionary optimization method over continuous search spaces, differential evolution, has recently been applied successfully to real-world and artificial optimization problems, and has also been proposed for neural network training. However, differential evolution has not been studied comprehensively in the context of training neural network weights, i.e., how useful differential evolution is in finding the global optimum at the expense of convergence speed. In this study, differential evolution is analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient-based methods, differential evolution does not seem to provide any distinct advantage in terms of learning rate or solution quality. Rather, differential evolution can be used to validate reached optima and to develop regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.

Keywords: differential evolution, evolutionary algorithms, feed-forward neural network, neural network training
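The training scheme the abstract refers to, the classic DE/rand/1/bin strategy of Storn and Price applied directly to the flattened weight vector of a feed-forward network, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the XOR data, the 2-3-1 network, and the control parameters (population size, F, CR, generation count) are all assumptions chosen only to keep the example small and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a small classic feed-forward benchmark (illustrative choice).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# Network: 2 inputs, 3 tanh hidden units, 1 sigmoid output.
# Weight vector layout: W1 (2x3), b1 (3), w2 (3), b2 (1) -> 13 parameters.
DIM = 2 * 3 + 3 + 3 + 1

def unpack(v):
    return v[:6].reshape(2, 3), v[6:9], v[9:12], v[12]

def mse(v):
    """Cost of one candidate weight vector: mean squared error on the data."""
    W1, b1, w2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return np.mean((out - y) ** 2)

def de_train(f, dim, pop_size=30, F=0.8, CR=0.9, gens=300):
    """DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    cost = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: three distinct population members, none equal to i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # Binomial crossover with at least one component from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is not worse.
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = np.argmin(cost)
    return pop[best], cost[best]

weights, err = de_train(mse, DIM)
print(f"final MSE: {err:.4f}")
```

Note that the cost function is treated as a black box: no gradient of `mse` is ever computed, which is why the abstract can suggest DE for transfer functions and regularization terms that provide no gradient information.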


References

  1. Alander, J. T.: An Indexed Bibliography of Genetic Algorithms and Neural Networks, ftp.uwasa.fi/cs/report94-1/gaNNbib.ps.Z, 1999.
  2. Charalambous, C.: Conjugate gradient algorithm for efficient training of artificial neural networks, IEE Proceedings G (Circuits, Devices and Systems), 139(3) (1992), 301–310.
  3. Day, S. P. and Camporese, D. S.: A stochastic training technique for feed-forward neural networks, In: IJCNN International Joint Conference on Neural Networks, 3 (1990), 607–612.
  4. Doyle, S., Corcoran, D. and Connell, J.: Automated mirror design using an evolution strategy, Optical Engineering, 38(2) (1999), 323–333.
  5. Fischer, M. M., Hlavackova-Schindler, K. and Reismann, M.: A global search procedure for parameter estimation in neural spatial interaction modelling, Regional Science, 78(2) (1999), 119–134.
  6. Gang, L., Yiqing, T. and Fu, T.: A fast evolutionary algorithm for neural network training using differential evolution, In: ICYCS '99 Fifth International Conference for Young Computer Scientists, 1 (1999), 507–511.
  7. Hagan, M. T. and Menhaj, M. B.: Training feedforward networks with the Marquardt algorithm, IEEE Transactions on Neural Networks, 5(6) (1994).
  8. Japkowicz, N. and Hanson, S. J.: Adaptability of the backpropagation procedure, In: IJCNN '99 International Joint Conference on Neural Networks, 3 (1999), 1710–1715.
  9. Lampinen, J.: A Bibliography of Differential Evolution Algorithm, http://www.lut.fi/~jlampine/debiblio.htm, 2001.
  10. Lampinen, J. and Zelinka, I.: New Ideas in Optimization, Chapt. Mechanical Engineering Design Optimization by Differential Evolution, McGraw-Hill, (1999), pp. 127–146.
  11. Liang, M., Wang, S.-X. and Luo, Y.-H.: Fast learning algorithms for multi-layered feedforward neural network, In: IEEE 1994 National Aerospace and Electronics Conference NAECON 1994, 2 (1994), 787–790.
  12. Magoulas, G., Plagianakos, V. and Vrahatis, M.: Hybrid methods using evolutionary algorithms for on-line training, In: Proceedings of IJCNN '01, International Joint Conference on Neural Networks, 3 (2001), 2218–2223.
  13. Masters, T. and Land, W.: A new training algorithm for the general regression neural network, In: IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, 3 (1997), 1990–1994.
  14. Møller, M.: Efficient training of feed-forward neural networks, Ph.D. thesis, Computer Science Department, Aarhus University, Aarhus, Denmark, 1997.
  15. Neelaveni, R., Gurusamy, G. and Hemavathy, L.: Adaptive genetic algorithm and differential evolution based backpropagation neural network for epileptic pattern recognition, In: Proceedings of the National Conference on Technology Convergence for Information, Communication and Entertainment, (2001), 26–30.
  16. Prechelt, L.: PROBEN1 — A set of benchmarks and benchmarking rules for neural network training algorithms, Technical report, Fakultät für Informatik, Universität Karlsruhe, Germany, 1994.
  17. Prechelt, L.: Some notes on neural learning algorithm benchmarking, Neurocomputing, 9(3) (1995), 343–347.
  18. Rogalsky, T., Kocabiyik, S. and Derksen, R.: Differential evolution in aerodynamic optimization, Canadian Aeronautics and Space Journal, 46(4) (2000), 183–190.
  19. Schmitz, G. P. and Aldrich, C.: Combinatorial evolution of regression nodes in feedforward neural networks, Neural Networks, 12(1) (1999), 175–189.
  20. Storn, R. and Price, K.: Differential Evolution — A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces, Technical Report TR-95-012, International Computer Science Institute, Berkeley, CA, USA (http://www.icsi.berkeley.edu/techreports/1995.abstracts/tr-95-012.html), 1995.
  21. Storn, R. and Price, K.: Differential evolution — a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, 11(4) (1997), 341–359.
  22. Stumberger, G., Dolinar, D., Pahner, U. and Hameyer, K.: Optimization of radial active magnetic bearings using the finite element technique and differential evolution algorithm, IEEE Transactions on Magnetics, 36(4) (2000), 1009–1013.
  23. Wang, F.-S. and Sheu, J.-W.: Multiobjective parameter estimation problems of fermentation processes using a high ethanol tolerance yeast, Chemical Engineering Science, 55(18) (2000), 3685–3695.
  24. Yao, X.: Evolving artificial neural networks, Proceedings of the IEEE, 87(9) (1999), 1423–1447.
  25. Zelinka, I. and Lampinen, J.: An evolutionary learning algorithm for neural networks, In: 5th International Conference on Soft Computing MENDEL '99, (1999), 410–414.

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Jarmo Ilonen
  • Joni-Kristian Kamarainen
  • Jouni Lampinen

Laboratory of Information Processing, Lappeenranta University of Technology, Lappeenranta, Finland
