Abstract
A new algorithm for adapting the learning rate of the gradient descent method is presented. It is based on a second-order Taylor expansion of the error energy function with respect to the learning rate, evaluated at trial values chosen by an "award-punish" strategy. A detailed derivation of the algorithm as applied to RBF (radial basis function) networks is given. Simulation studies show that the algorithm increases the rate of convergence and improves the performance of the gradient descent method.
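To make the idea concrete: writing the error energy along the descent direction as a function of the learning rate, E(η) = E(w − η∇E(w)), a second-order Taylor expansion about a trial rate η₀ gives E(η) ≈ E(η₀) + E′(η₀)(η − η₀) + ½E″(η₀)(η − η₀)², which is minimized at η* = η₀ − E′(η₀)/E″(η₀) whenever E″(η₀) > 0. The following Python sketch applies this rate selection to a toy RBF regression problem; the finite-difference derivative estimates, the 1.1/0.5 award-punish factors, and the helper names (rbf_design, taylor_step) are illustrative assumptions rather than details from the paper, which derives the required derivatives analytically.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2)): Gaussian basis responses
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def error(w, Phi, y):
    # Error energy: half the summed squared residuals
    r = Phi @ w - y
    return 0.5 * float(r @ r)

def taylor_step(w, Phi, y, eta0, eps=1e-3):
    # One descent step whose rate minimizes the second-order Taylor
    # expansion of E(eta) = E(w - eta * g) about the trial rate eta0.
    g = Phi.T @ (Phi @ w - y)                   # gradient of E at w
    E = lambda eta: error(w - eta * g, Phi, y)  # error along the descent ray
    e0, ep, em = E(eta0), E(eta0 + eps), E(eta0 - eps)
    d1 = (ep - em) / (2.0 * eps)                # finite-difference E'(eta0)
    d2 = (ep - 2.0 * e0 + em) / eps ** 2        # finite-difference E''(eta0)
    eta = eta0 - d1 / d2 if d2 > 0 else eta0    # quadratic minimizer if convex
    return w - eta * g, eta

# Toy problem: fit y = sin(3x) with 10 fixed Gaussian units.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(-1.0, 1.0, 10)[:, None]
Phi = rbf_design(X, centers, width=0.3)

w, eta0 = np.zeros(10), 0.1
for _ in range(50):
    w_new, eta = taylor_step(w, Phi, y, eta0)
    if error(w_new, Phi, y) < error(w, Phi, y):
        w, eta0 = w_new, 1.1 * eta   # award: accept the step, raise trial rate
    else:
        eta0 *= 0.5                  # punish: reject the step, shrink trial rate
print("final error:", error(w, Phi, y))
```

Tying the award-punish rule to the trial expansion point η₀ keeps the quadratic model accurate near the rates actually in use, which is what allows the scheme to converge faster than gradient descent with a fixed rate.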
Additional information
Supported by the Open Foundation of State Key Lab of Transmission of Wide-Band Fiber and Technologies of Communication Systems.
Cite this article
Lin, J., Liu, Y. Efficient gradient descent method of RBF neural networks with adaptive learning rate. J. of Electron. (China) 19, 255–258 (2002). https://doi.org/10.1007/s11767-002-0046-7