Part of the Network Theory and Applications book series (NETA, volume 13)


Generally, neural networks can be divided into two large classes: feedforward neural networks (FNNs) and recurrent neural networks (RNNs). This book focuses on RNNs only. The essential difference between FNNs and RNNs is the presence of a feedback mechanism among the neurons in the latter: an FNN is a network without any feedback connections among its neurons, while an RNN has at least one feedback connection. Because RNNs allow feedback connections, the network topology can be very general: any neuron can be connected to any other, even to itself. Allowing feedback connections among neurons has an advantage: it leads naturally to an analysis of the networks as dynamic systems, in which the state of a network at one moment in time depends on the state at a previous moment in time. The topology of RNNs is shown in Figure 1.1.
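The dynamic-system view described above can be sketched in a few lines of code. The following is a minimal illustration (not taken from the book): the network state x evolves as x(t+1) = f(Wx(t) + b), where every neuron can feed back into every other neuron, including itself, through the weight matrix W. The function name `rnn_step` and the specific weights are hypothetical choices for illustration only.

```python
import numpy as np

def rnn_step(W, x, b):
    # One step of a fully connected recurrent network:
    # each neuron's next state depends on the current state of
    # every neuron (including itself) through the weight matrix W.
    return np.tanh(W @ x + b)

# Hypothetical 3-neuron network; weights chosen arbitrarily.
W = np.array([[ 0.0, 0.5, -0.5],
              [ 0.5, 0.0,  0.5],
              [-0.5, 0.5,  0.0]])
b = np.zeros(3)

x = np.array([1.0, 0.0, 0.0])   # initial state
for _ in range(50):
    x = rnn_step(W, x, b)       # iterate the dynamics
```

Questions such as whether this iteration settles to an equilibrium point, and how to prove it via an energy function, are exactly the convergence-analysis topics named in the keywords below.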


Keywords: Equilibrium Point, Energy Function, Convergence Analysis, Recurrent Neural Network, Cellular Neural Network



Copyright information

© Springer Science+Business Media Dordrecht 2004

Authors and Affiliations

  1. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
  2. Department of Electrical and Computer Engineering, The National University of Singapore, Singapore
