Rates of Learning in Gradient and Genetic Training of Recurrent Neural Networks

Conference paper, in: Artificial Neural Nets and Genetic Algorithms

Abstract

In this paper, gradient descent and genetic techniques are used for the on-line training of recurrent neural networks. A singular perturbation model of gradient learning of fixed points raises the question of the rate of learning, formulated as the relative speed of evolution of the network and of the adaptation process, and motivates an analogous study for genetic training. Bounds on the rate of learning that guarantee convergence are obtained for both gradient and genetic training, and computer simulations confirm the theoretical predictions.
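
To make the two-time-scale idea concrete: in a singular perturbation formulation of fixed-point learning, the network state x evolves on a fast time scale, dx/dt = -x + sigma(Wx + I), while the weights adapt on a slow one, dW/dt = -eps * grad_W E(x, W); the small parameter eps is the rate of learning whose admissible size the convergence bounds constrain. The Python sketch below is illustrative only, not the model analysed in the paper: it assumes a small sigmoid network with a quadratic error on the fixed point, and it computes the fixed-point gradient by implicit differentiation (a direct linear solve) rather than by relaxing Pineda's adjoint dynamics; the constants n, dt, eps and the 20 inner relaxation steps per weight update are arbitrary choices.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    rng = np.random.default_rng(0)
    n = 5
    W = 0.1 * rng.standard_normal((n, n))   # recurrent weights (slow variables)
    I_ext = rng.standard_normal(n)          # constant external input
    target = rng.uniform(0.2, 0.8, size=n)  # desired fixed point of the state
    x = np.zeros(n)                         # network state (fast variable)

    dt = 0.1    # integration step of the fast network dynamics
    eps = 0.05  # rate of learning: speed of the weights relative to the state

    for step in range(2000):
        # Fast time scale: a few relaxation steps of dx/dt = -x + sigmoid(Wx + I)
        for _ in range(20):
            x += dt * (-x + sigmoid(W @ x + I_ext))
        # Slow time scale: gradient of E = 0.5 * ||x - target||^2 through the
        # (approximate) fixed point x = sigmoid(Wx + I), by implicit
        # differentiation: grad_W E = v x^T with
        # v = D (I - W^T D)^{-1} (x - target), D = diag(sigmoid'(Wx + I)).
        s = sigmoid(W @ x + I_ext)
        D = np.diag(s * (1.0 - s))
        v = D @ np.linalg.solve(np.eye(n) - W.T @ D, x - target)
        W -= eps * np.outer(v, x)

    print("final error:", 0.5 * np.sum((x - target) ** 2))

In this sketch, keeping eps small relative to the inner relaxation lets the error decay; raising eps, or cutting the inner steps, lets the weights move before the state has settled, which is the kind of fast-slow interaction the convergence bounds are designed to rule out.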

Copyright information

© 1999 Springer-Verlag Wien

About this paper

Cite this paper

Riaza, R., Zufiria, P.J. (1999). Rates of Learning in Gradient and Genetic Training of Recurrent Neural Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_17

  • DOI: https://doi.org/10.1007/978-3-7091-6384-9_17

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83364-3

  • Online ISBN: 978-3-7091-6384-9
