Abstract
This paper presents a new online training weight-update scheme for neural networks based on a compound gradient vector. The convergence analysis indicates that, because the compound gradient vector is employed in the weight update, the presented algorithm converges faster than the standard backpropagation (BP) algorithm. Comprehensive parameter adaptation and saturation compensation, both introduced in the scheme, further enhance convergence performance. Several simulations demonstrate the satisfactory convergence and strong robustness of the improved online learning scheme in real-time control under parameter uncertainty.
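The paper's exact update law is not given in the abstract, so the following is only a minimal sketch of the general idea: an online (per-sample) weight update driven by a "compound" gradient vector, assumed here to be a weighted combination of the current BP gradient and the previous one. The function name `train_online_compound` and the mixing coefficient `beta` are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def train_online_compound(X, y, lr=2.0, beta=0.5, epochs=2000, seed=0):
    """Online training of a single sigmoid neuron.

    The update uses a compound gradient: the current BP gradient plus
    beta times the previous gradient (an assumed stand-in for the
    paper's compound gradient vector).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    prev_grad = np.zeros_like(w)
    for _ in range(epochs):
        for x, t in zip(X, y):
            o = 1.0 / (1.0 + np.exp(-x @ w))      # sigmoid output
            grad = (o - t) * o * (1.0 - o) * x    # standard BP gradient
            compound = grad + beta * prev_grad    # compound gradient vector
            w -= lr * compound                    # online weight update
            prev_grad = grad
    return w

# Toy usage: learn an AND-like mapping (third column is a bias input).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = train_online_compound(X, y)
preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
```

Reusing the previous gradient this way is closely related to classical momentum; the paper's scheme additionally adapts parameters and compensates sigmoid saturation, which this sketch omits.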
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Chen, Z., Li, J., Yue, Y., Gao, Q., Zhao, H., Xu, Z. (2002). A Neural Network Online Training Algorithm Based on Compound Gradient Vector. In: McKay, B., Slaney, J. (eds) AI 2002: Advances in Artificial Intelligence. AI 2002. Lecture Notes in Computer Science, vol 2557. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36187-1_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-00197-3
Online ISBN: 978-3-540-36187-9