Evolving Feed-forward Neural Networks Through Evolutionary Mutation Parameters

  • M. Annunziato
  • I. Bertini
  • R. Iannone
  • S. Pizzuti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4224)


In this paper we present preliminary work on evolutionary mutation parameters, aimed at understanding whether mutation-parameter tuning can be skipped. In particular, rather than treating mutation parameters as global environmental features, we regard them as endogenous features of the individuals by encoding them directly in the genotype. In this way we let the optimal values emerge from the evolutionary process itself. As a case study, we apply the proposed methodology to the training of feed-forward neural networks on nine classification benchmarks and compare it with five other well-established techniques. Results show the effectiveness of the proposed approach, which achieves very promising results while avoiding the tedious task of off-line optimal parameter tuning.
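The core idea of the abstract can be sketched in a few lines: each individual carries its own mutation step size in the genotype, and mutation perturbs that step size before using it to perturb the network weights, so effective mutation settings emerge by selection rather than off-line tuning. This is a minimal illustrative sketch of self-adaptive mutation (in the style of evolution strategies), not the authors' actual code; the representation, the initial `sigma`, and the learning rate `tau` are assumptions.

```python
import math
import random

def make_individual(n_weights, rng):
    """Create an individual whose genotype holds both the network
    weights and its own (endogenous) mutation step size."""
    return {
        "weights": [rng.uniform(-1.0, 1.0) for _ in range(n_weights)],
        "sigma": 0.1,  # mutation parameter, evolved along with the weights
    }

def mutate(ind, rng, tau=0.2):
    """Self-adaptive mutation: first perturb the strategy parameter
    (log-normal update), then use the new sigma on the weights."""
    child = dict(ind)
    child["sigma"] = ind["sigma"] * math.exp(tau * rng.gauss(0.0, 1.0))
    child["weights"] = [w + rng.gauss(0.0, child["sigma"])
                        for w in ind["weights"]]
    return child

rng = random.Random(42)
parent = make_individual(n_weights=4, rng=rng)
child = mutate(parent, rng)
```

Because `sigma` is inherited and mutated like any other gene, lineages whose step sizes suit the current search landscape tend to survive, which is the sense in which "optimal values emerge from the evolutionary process itself".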


Keywords: Particle Swarm Optimisation, Travelling Salesman Problem, Complex Adaptive System, Artificial Life, Mutation Parameter




  1. Alander, J.T.: An indexed bibliography of genetic algorithms and neural networks. Technical Report 94-1-NN, University of Vaasa, Department of Information Technology and Production Economics (1998)
  2. Annunziato, M., Bertini, I., Lucchetti, M., Pannicelli, A., Pizzuti, S.: Adaptivity of Artificial Life Environment for On-Line Optimization of Evolving Dynamical Systems. In: Proc. EUNITE 2001, Tenerife, Spain (2001)
  3. Annunziato, M., Bertini, I., Pannicelli, A., Pizzuti, S.: Evolutionary feed-forward neural networks for traffic prediction. In: Proceedings of EUROGEN 2003, Barcelona, Spain (2003)
  4. Annunziato, M., Lucchetti, M., Orsini, G., Pizzuti, S.: Artificial Life And Online Flows Optimisation In Energy Networks. In: IEEE Swarm Intelligence Symposium, Pasadena, CA, USA (2005)
  5. Bäck, T.: Self-adaptation in genetic algorithms. In: Varela, F.J., Bourgine, P. (eds.) Towards a Practice of Autonomous Systems, pp. 263–271. Bradford/MIT Press, Cambridge (1992)
  6. Balakrishnan, K., Honavar, V.: Evolutionary Design of Neural Architectures – A Preliminary Taxonomy and Guide to Literature. Technical Report CS TR95-01, Dept. of Computer Science, Iowa State University (1995)
  7. Bedau, M.A., Seymour, R.: Adaptation of Mutation Rates in a Simple Model of Evolution. Complexity International 2 (1995)
  8. Blake, C.L., Merz, C.J.: UCI repository of machine learning databases, University of California, Irvine (1998)
  9. Cantú-Paz, E., Kamath, C.: An empirical comparison of combinations of evolutionary algorithms and neural networks for classification problems. IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, 915–927 (2005)
  10. Cantú-Paz, E., Kamath, C.: Evolving neural networks for the classification of galaxies. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2002, pp. 1019–1026. Morgan Kaufmann Publishers, San Francisco (2002)
  11. Cleary, J.G., Trigg, L.E.: K*: An Instance-based Learner Using an Entropic Distance Measure. In: Proceedings of the 12th International Conference on Machine Learning, pp. 108–114 (1995)
  12. Davis, M.W.: The natural formation of gaussian mutation strategies in evolutionary programming. In: Sebald, A.V., Fogel, L.J. (eds.) Proceedings of the 3rd Annual Conference on Evolutionary Programming, World Scientific, River Edge (1994)
  13. De Falco, I., Della Cioppa, A., Tarantino, E.: Impiego della particle swarm optimization per la classificazione in database [Using particle swarm optimization for classification in databases]. In: II Italian Artificial Life Workshop 2005, Rome, Italy, ISTC-CNR (2005)
  14. Demiroz, G., Guvenir, A.: Classification by voting feature intervals. In: ECML 1997 (1997)
  15. Fogel, D.B., Fogel, L.J., Atmar, J.W.: Meta-evolutionary programming. In: Chen, R.R. (ed.) Proceedings of the 25th Asilomar Conference on Signals, Systems and Computers, pp. 540–545. Maple Press, San Jose (1991)
  16. Hwang, M.W., Choi, J.Y., Park, J.: Evolutionary projection neural networks. In: Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, ICEC 1997, Piscataway, NJ, USA, pp. 667–671. IEEE Press, Los Alamitos (1997)
  17. Kim, J.T.: Energy Dependent Adaptation of Mutation Rates in Computer Models of Evolution. In: Proceedings of ALIFE VI, Los Angeles, CA (1998)
  18. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proc. IEEE International Conference on Neural Networks, IV, pp. 1942–1948. IEEE Service Center, Piscataway (1995)
  19. Stanley, K.O., Miikkulainen, R.: Evolving Neural Networks Through Augmenting Topologies. Evolutionary Computation 10(2), 99–127 (2002)
  20. Langton, C.G.: Artificial life. In: Langton, C.G. (ed.) Artificial Life. Addison-Wesley, Reading (1989)
  21. Langton, C.G.: The Garden in the Machine: The Emerging Science of Artificial Life. Princeton University Press, Princeton (1989)
  22. Liu, Y., Yao, X.: Evolutionary design of artificial neural networks with different nodes. In: Proc. of the 1996 IEEE Int’l Conf. on Evolutionary Computation (ICEC 1996), pp. 670–675. IEEE Press, New York (1996)
  23. Matteucci, M.: ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies. Technical Report CMU-CALD-02-103, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA (2002)
  24. Metzgar, D., Wills, C.: Evidence for the Adaptive Evolution of Mutation Rates. Cell 101, 581–584 (2000)
  25. Prudencio, R.B.C., Ludermir, T.B.: Evolutionary Design Of Neural Networks: Application To River Flow Prediction. In: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, AIA 2001, Marbella, Spain (2001)
  26. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
  27. Webb, G.I.: MultiBoosting: a technique for combining boosting and wagging. Machine Learning 40(2), 159–196 (2000)
  28. White, D., Ligomenides, P.: GANNet: a genetic algorithm for optimizing topology and weights in neural network design. In: Mira, J., Cabestany, J., Prieto, A.G. (eds.) IWANN 1993. LNCS, vol. 686, pp. 322–327. Springer, Heidelberg (1993)
  29. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)
  30. Yao, X.: Evolving Artificial Neural Networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • M. Annunziato (1)
  • I. Bertini (1)
  • R. Iannone (2)
  • S. Pizzuti (1)

  1. Energy New Technologies and Environment Agency (ENEA), ‘Casaccia’ Research Centre, Rome, Italy
  2. Dept. of Computer Science, University of Rome ‘La Sapienza’, Rome, Italy
