Abstract
The error back-propagation (BP) method and its variants are popular methods for the supervised learning of neural networks. The BP method can be regarded as an approximate steepest descent method for minimizing the sum of error functions, in which exact derivatives of each error function are used. Thus, it has the global convergence property under some natural conditions. The Real-Time Recurrent Learning (RTRL) method, on the other hand, is a variant of the BP method for recurrent neural networks (RNNs), which are suited to handling time sequences. Since real-time learning cannot utilize exact outputs from the network, approximate derivatives of each error function are used to update the weights. Consequently, although RTRL is widely used in practice, its global convergence property has not been established. In this paper, we show that RTRL has the global convergence property under almost the same conditions as other variants of the BP method.
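To make the approximate-derivative update concrete, here is a minimal sketch of an RTRL-style weight update for a single fully recurrent layer in the Williams–Zipser style. All names, dimensions, and hyperparameters below are illustrative assumptions, not taken from the paper; the point is only that the sensitivity matrices are propagated recurrently in real time rather than computed exactly.

```python
import numpy as np

# Illustrative RTRL sketch (assumed setup, not the paper's notation):
# a fully recurrent tanh layer with n units and m external inputs.
rng = np.random.default_rng(0)
n, m = 3, 2
W = rng.normal(scale=0.1, size=(n, n + m))  # weights on [y(t); x(t)]
y = np.zeros(n)                             # current unit outputs
P = np.zeros((n, n, n + m))                 # P[k, i, j] ~ d y_k / d W[i, j]
eta = 0.05                                  # learning rate (assumed)

def rtrl_step(x, target):
    """One real-time update: forward pass, sensitivity recursion, weight change."""
    global y, P, W
    z = np.concatenate([y, x])              # concatenated state and input
    s = W @ z
    y_new = np.tanh(s)
    fprime = 1.0 - y_new ** 2               # tanh derivative at s
    # Sensitivity recursion:
    #   P'[k,i,j] = f'(s_k) * ( sum_l W[k,l] P[l,i,j] + delta_{ki} z_j )
    P_new = np.einsum('kl,lij->kij', W[:, :n], P)
    P_new[np.arange(n), np.arange(n), :] += z   # delta_{ki} z_j term
    P_new = fprime[:, None, None] * P_new
    e = target - y_new                      # instantaneous error
    # Approximate gradient step: the sensitivities P are themselves only
    # approximate derivatives, since W changes while P is propagated.
    W += eta * np.einsum('k,kij->ij', e, P_new)
    y, P = y_new, P_new
    return 0.5 * float(e @ e)

# Drive the network toward a constant target; the instantaneous error shrinks.
losses = [rtrl_step(np.array([1.0, -1.0]), np.array([0.5, -0.5, 0.2]))
          for _ in range(200)]
```

The recursion carries `P` forward in time alongside the network state, which is what lets the weights be updated at every step without waiting for the end of the sequence; the cost is that the derivatives are only approximate, which is exactly the gap the paper's convergence analysis addresses.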
Copyright information
© 2002 Springer Science+Business Media New York
Cite this chapter
Tatsumi, K., Tanino, T., Fukushima, M. (2002). Global Convergence Property of Error Back-Propagation Method for Recurrent Neural Networks. In: Kozan, E., Ohuchi, A. (eds) Operations Research/Management Science at Work. International Series in Operations Research & Management Science, vol 43. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0819-9_15
DOI: https://doi.org/10.1007/978-1-4615-0819-9_15
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-5254-9
Online ISBN: 978-1-4615-0819-9
eBook Packages: Springer Book Archive