Genetic synthesis of discrete-time recurrent neural network

  • F. J. Marín
  • F. Sandoval
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 686)


In this paper we propose a new genetic model for optimizing both the network architecture and the connection weights of discrete-time recurrent neural networks within a single evolutionary process. Empirical studies show that the model can efficiently generate an appropriate network size and topology for small applications. We demonstrate it on two experiments: a parity function and a finite-state machine for the detection of sequences.
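
To make the idea concrete, here is a minimal sketch of evolving both the topology and the weights of a discrete-time recurrent network with a plain genetic algorithm. The abstract does not specify the paper's encoding, so the genome layout (a flat weight vector plus a sign-based pruning mask), the parity task setup, and all hyperparameters below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, POP, GENS = 1, 4, 60, 200  # assumed network and GA sizes

def decode(genome):
    """Split a flat genome into input weights, recurrent weights, and a
    binary mask that prunes recurrent connections (the topology genes)."""
    n_in_w, n_rec_w = N_HID * N_IN, N_HID * N_HID
    w_in = genome[:n_in_w].reshape(N_HID, N_IN)
    w_rec = genome[n_in_w:n_in_w + n_rec_w].reshape(N_HID, N_HID)
    mask = (genome[n_in_w + n_rec_w:] > 0).reshape(N_HID, N_HID)
    return w_in, w_rec * mask

def run(genome, seq):
    """Discrete-time recurrence h_t = tanh(W_rec h_{t-1} + W_in x_t);
    neuron 0 is read out as the network's output."""
    w_in, w_rec = decode(genome)
    h = np.zeros(N_HID)
    for x in seq:
        h = np.tanh(w_rec @ h + w_in @ np.array([x]))
    return h[0]

def fitness(genome, seqs):
    """Negative squared error against running-parity targets (assumed task)."""
    err = 0.0
    for seq in seqs:
        target = 1.0 if sum(seq) % 2 else -1.0
        err += (run(genome, seq) - target) ** 2
    return -err

# Training set: all binary sequences of length 4, labelled by their parity.
seqs = [[(i >> b) & 1 for b in range(4)] for i in range(16)]

glen = N_HID * N_IN + 2 * N_HID * N_HID  # weight genes + mask genes
pop = rng.normal(size=(POP, glen))
for gen in range(GENS):
    scores = np.array([fitness(g, seqs) for g in pop])
    elite = pop[np.argsort(-scores)[:POP // 4]]   # truncation selection
    children = []
    while len(children) < POP - len(elite):
        a, b = elite[rng.integers(len(elite), size=2)]
        cut = rng.integers(1, glen)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        # Gaussian mutation hits ~5% of genes; sign flips in the mask
        # segment add or remove recurrent connections.
        child += rng.normal(scale=0.1, size=glen) * (rng.random(glen) < 0.05)
        children.append(child)
    pop = np.vstack([elite] + children)

print("best fitness:", fitness(pop[0], seqs))
```

Because the pruning mask evolves alongside the weights, selection can simultaneously shrink the recurrent connectivity and tune the surviving connections, which is the joint architecture-and-weight search the abstract describes.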

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • F. J. Marín, Dpto. Arquitectura y Tecnología de Computadores y Electrónica, Universidad de Málaga, Málaga, Spain
  • F. Sandoval, Dpto. Tecnología Electrónica, Universidad de Málaga, Málaga, Spain