
Efficient gradient descent method of RBF neural networks with adaptive learning rate

Published in: Journal of Electronics (China)

Abstract

A new algorithm for adapting the learning rate of the gradient descent method is presented. It is based on a second-order Taylor expansion of the error energy function with respect to the learning rate, evaluated at values chosen by an "award-punish" strategy. A detailed derivation of the algorithm as applied to RBF networks is given. Simulation studies show that the algorithm increases the rate of convergence and improves the performance of the gradient descent method.
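The idea in the abstract can be illustrated with a minimal sketch: along the gradient direction, the error energy E(η) is approximated by a quadratic in the learning rate η, and the quadratic's minimiser is used as the step size, with a simple accept/shrink rule standing in for the "award-punish" strategy. This is an assumption-laden illustration, not the paper's exact derivation: the Gaussian RBF network (fixed centres, trainable output weights), the finite-difference estimation of E'(0) and E''(0), and all names are invented for this sketch.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): gradient descent on the
# output weights of a Gaussian RBF network, with the learning rate chosen by
# fitting a second-order Taylor model of the error energy along the step.

# Toy regression data: y = sin(x) on [-3, 3]
x = np.linspace(-3.0, 3.0, 40)
y = np.sin(x)

# Fixed Gaussian RBF centres and width; only the output weights w are trained.
centers = np.linspace(-3.0, 3.0, 10)
width = 0.8
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
w = np.zeros(10)

def loss(w):
    r = Phi @ w - y
    return 0.5 * np.dot(r, r)            # error energy E(w)

def grad(w):
    return Phi.T @ (Phi @ w - y)         # dE/dw

def taylor_lr(w, d, h):
    """Fit E(eta) ~ a + b*eta + c*eta^2 along direction d from samples at
    0, h, 2h, then return the quadratic's minimiser -b / (2c)."""
    e0, e1, e2 = loss(w), loss(w + h * d), loss(w + 2.0 * h * d)
    b = (4.0 * e1 - 3.0 * e0 - e2) / (2.0 * h)    # estimate of E'(0)
    c = (e2 - 2.0 * e1 + e0) / (2.0 * h ** 2)     # estimate of E''(0) / 2
    if c <= 0.0:                                   # non-convex slice: keep probe step
        return h
    return -b / (2.0 * c)

e_init = loss(w)          # initial error energy, kept for comparison
eta = 0.01
for _ in range(50):
    d = -grad(w)
    eta = taylor_lr(w, d, eta)
    new_w = w + eta * d
    # crude stand-in for the "award-punish" strategy: accept only if the
    # error decreased, otherwise shrink the probe step for the next try
    if loss(new_w) < loss(w):
        w = new_w
        eta *= 1.05       # award: slightly larger probe step next time
    else:
        eta *= 0.5        # punish: shrink the probe step

print(f"initial loss: {e_init:.4f}, final loss: {loss(w):.4f}")
```

Because the output layer is linear in w, E(η) is exactly quadratic along any direction here, so each Taylor step is an exact line minimisation; on a genuinely nonlinear network (e.g. when centres and widths are also trained) the quadratic model is only local, which is where the safeguard step matters.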





Additional information

Supported by the Open Foundation of State Key Lab of Transmission of Wide-Band Fiber and Technologies of Communication Systems


Cite this article

Lin, J., Liu, Y. Efficient gradient descent method of RBF neural networks with adaptive learning rate. J. of Electron. (China) 19, 255–258 (2002). https://doi.org/10.1007/s11767-002-0046-7

