
General transient length upper bound for recurrent neural networks

  • A. M. C.-L. Ho
  • Ph. De Wilde
Computational Models of Neurons and Neural Nets
Part of the Lecture Notes in Computer Science book series (LNCS, volume 930)

Abstract

We show how to construct a Lyapunov function for a discrete recurrent neural network using the variable-gradient method. The same method can also be used to obtain the Hopfield energy function. Using our Lyapunov function, we compute an upper bound on the transient length of the network dynamics. We also show how the Lyapunov function provides insight into how introducing self-feedback weights affects the sizes of the basins of attraction of the equilibrium points in the network's state space.
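The paper's own Lyapunov function is not reproduced on this page. As a minimal sketch of the kind of descent property involved, the example below uses the standard Hopfield energy, which the abstract notes the same variable-gradient method recovers, and checks that it never increases under asynchronous threshold updates. The symbols W and theta, the ±1 state convention, and the update rule are assumptions for this illustration, not details taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's construction): the standard Hopfield energy
# E(x) = -1/2 * x^T W x - theta^T x is non-increasing under asynchronous
# threshold updates when W is symmetric with zero diagonal.
def hopfield_energy(x, W, theta):
    return -0.5 * x @ W @ x - theta @ x

def async_update(x, W, theta, i):
    # Threshold update of a single neuron i; the sign convention is illustrative.
    x = x.copy()
    x[i] = 1 if W[i] @ x + theta[i] >= 0 else -1
    return x

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # zero self-feedback for the basic check
theta = rng.normal(size=n)
x = rng.choice([-1, 1], size=n)

# The energy should never increase along an asynchronous update sequence,
# so the number of strict decreases bounds the transient length.
for step in range(50):
    i = step % n
    x_new = async_update(x, W, theta, i)
    assert hopfield_energy(x_new, W, theta) <= hopfield_energy(x, W, theta) + 1e-12
    x = x_new
```

Because the energy takes finitely many values on the finite state space and cannot increase, counting how far it can drop per update yields the kind of transient-length upper bound the abstract describes.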

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • A. M. C.-L. Ho¹
  • Ph. De Wilde¹
  1. Department of Electrical and Electronic Engineering, Imperial College of Science, Technology and Medicine, London, UK
