Topology Optimization and Training of Recurrent Neural Networks with Pareto-Based Multi-objective Algorithms: An Experimental Study

  • M. P. Cuéllar
  • M. Delgado
  • M. C. Pegalajar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4507)

Abstract

The simultaneous optimization of topology and training of neural networks has been widely studied in recent years, especially for feedforward models. In the case of recurrent neural networks, existing proposals attempt to optimize only the number of hidden units, since topology optimization is more difficult due to the feedback connections in the network structure. In this work, we study the effects of, and difficulties in, jointly optimizing the network connections, the hidden neurons, and the training of dynamical recurrent models. In the experimental section, the proposal is tested on time series prediction problems.
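As a rough illustration of the kind of approach the abstract describes (and not the authors' implementation), the sketch below evaluates a small Elman-style recurrent network under two conflicting objectives, one-step prediction error on a time series and the number of active connections, and extracts the Pareto (non-dominated) front from a random population. The network sizes, the binary connection-mask encoding, and the use of NumPy are illustrative assumptions; a full Pareto-based algorithm such as NSGA-II or SPEA2 would add selection, crossover, and mutation on top of this dominance test.

```python
# Minimal sketch, assuming a tiny Elman-style RNN and two minimization
# objectives: (prediction MSE, number of active connections).
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 1, 4, 1                          # hypothetical network size
N_W = N_HID * (N_IN + N_HID) + N_OUT * N_HID          # input + recurrent + output weights

def rnn_predict(weights, mask, series):
    """One-step-ahead prediction with an Elman-style recurrent network."""
    w = weights * mask                                 # masked (pruned) connections
    W_in  = w[:N_HID * N_IN].reshape(N_HID, N_IN)
    W_rec = w[N_HID * N_IN:N_HID * (N_IN + N_HID)].reshape(N_HID, N_HID)
    W_out = w[N_HID * (N_IN + N_HID):].reshape(N_OUT, N_HID)
    h = np.zeros(N_HID)
    preds = []
    for x in series[:-1]:
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)  # feedback (recurrent) connections
        preds.append((W_out @ h)[0])
    return np.array(preds)

def objectives(weights, mask, series):
    """Two conflicting objectives: accuracy and structural complexity."""
    mse = np.mean((rnn_predict(weights, mask, series) - series[1:]) ** 2)
    return mse, int(mask.sum())

def dominates(a, b):
    """Pareto dominance when both objectives are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy time series and a random population of (weights, connection mask) candidates.
t = np.linspace(0, 8 * np.pi, 200)
series = np.sin(t)
population = [(rng.normal(0, 0.5, N_W), rng.integers(0, 2, N_W)) for _ in range(30)]
scores = [objectives(w, m, series) for w, m in population]

# Non-dominated (Pareto) front: candidates not dominated by any other candidate.
front = [s for i, s in enumerate(scores)
         if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]
print("Pareto front (mse, active connections):", sorted(front))
```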

Keywords

Recurrent Neural Networks · Multi-objective Optimization


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • M. P. Cuéllar
  • M. Delgado
  • M. C. Pegalajar

  Dept. of Computer Science and Artificial Intelligence, E.T.S. Ingeniería Informática, C/. Pdta. Daniel Saucedo Aranda s/n, 18071, University of Granada, Spain