Approximation errors of state and output trajectories using recurrent neural networks
This paper addresses the problem of estimating training error bounds on state and output trajectories for a class of recurrent neural networks used as models of nonlinear dynamic systems. We present bounds on the error between trajectories of the recurrent neural network models and those of the target systems. The bounds hold provided the models have been trained on N trajectories whose N independent random initial values are uniformly distributed over [a, b]^m ⊂ R^m.
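The training setup described above can be sketched numerically: draw N independent initial states uniformly over [a, b]^m, roll out both the target system and the model from each, and record the worst-case mean squared trajectory deviation. This is a minimal illustration only; the dimensions, interval, dynamics, and "trained" weights below are all hypothetical placeholders, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (assumptions, not from the paper)
m = 3            # state dimension
N = 100          # number of training trajectories
a, b = -1.0, 1.0 # interval defining the cube [a, b]^m
T = 50           # number of discrete time steps per trajectory

# N independent initial values, uniformly distributed over [a, b]^m
x0 = rng.uniform(a, b, size=(N, m))

# Placeholder dynamics: a nonlinear "target" system and an imperfectly
# trained recurrent model whose weights deviate slightly from the target.
W_true = 0.5 * rng.normal(size=(m, m))
W_model = W_true + 0.01 * rng.normal(size=(m, m))

def target_step(x):
    return np.tanh(x @ W_true)

def model_step(x):
    return np.tanh(x @ W_model)

# Empirical trajectory error: worst over time of the mean (over the N
# trajectories) squared deviation between model and target states.
x_t, x_m = x0.copy(), x0.copy()
err = 0.0
for _ in range(T):
    x_t = target_step(x_t)
    x_m = model_step(x_m)
    err = max(err, float(np.mean(np.sum((x_t - x_m) ** 2, axis=1))))
```

An error bound of the kind the paper studies would guarantee, with high probability over the random draw of `x0`, that such an empirical deviation stays below a computable threshold.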