Abstract
Differential evolution, an evolutionary optimization method over continuous search spaces, has recently been applied successfully to real-world and artificial optimization problems, and has also been proposed for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful it is in finding the global optimum at the expense of convergence speed. In this study, differential evolution is analyzed as a candidate global optimization method for feed-forward neural networks. Compared to gradient-based methods, differential evolution does not appear to provide a distinct advantage in terms of learning rate or solution quality. Rather, it can be used to validate reached optima and to develop regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.
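To make the training scheme concrete, the following is a minimal sketch of the standard DE/rand/1/bin variant applied to the weights of a tiny feed-forward network. The 1-2-1 network layout, the weight encoding, and the parameter values (population size, F, CR) are illustrative assumptions for this sketch, not the experimental setup of the study.

```python
import math
import random

def forward(w, x):
    """Hypothetical 1-2-1 feed-forward net: 7 weights, tanh hidden units."""
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def mse(w, data):
    """Mean squared error of the network over (x, y) pairs."""
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def de_train(data, dim=7, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """DE/rand/1/bin over the flattened weight vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    cost = [mse(w, data) for w in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals, all different from the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            tc = mse(trial, data)
            if tc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

Note that the objective only needs function values, never gradients, which is why the abstract points to non-differentiable transfer functions and regularization terms as the more promising use of the method.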
Cite this article
Ilonen, J., Kamarainen, JK. & Lampinen, J. Differential Evolution Training Algorithm for Feed-Forward Neural Networks. Neural Processing Letters 17, 93–105 (2003). https://doi.org/10.1023/A:1022995128597