Abstract
In this paper, a new two-step gradient-based backpropagation training method for neural networks is proposed. Combining the Barzilai-Borwein steplength update rule with the sign-based technique of the resilient gradient descent (Rprop) method, we derive a new descent direction and steplength update rule. The new two-step learning rate improves both convergence speed and success rate. Experimental results show that the proposed method converges considerably faster and, on the chosen test problems, outperforms other well-known training methods.
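To make the Barzilai-Borwein (BB) ingredient concrete, the sketch below applies the BB steplength rule to plain gradient descent on a toy quadratic. This is only an illustration of the BB rule the paper builds on, not the authors' two-step method; the function names, the bootstrap step `eta0`, and the test problem are all assumptions made for the example.

```python
import numpy as np

def bb_gradient_descent(grad, w0, eta0=1e-3, iters=100):
    """Gradient descent with the Barzilai-Borwein (BB1) step length.

    eta_k = (s^T s) / (s^T y), where s is the change in the parameters
    and y is the change in the gradient between successive iterates.
    """
    w = w0.copy()
    g = grad(w)
    w_new = w - eta0 * g              # bootstrap with a small fixed step
    for _ in range(iters):
        g_new = grad(w_new)
        s = w_new - w                 # parameter change s_{k-1}
        y = g_new - g                 # gradient change y_{k-1}
        denom = s @ y
        eta = (s @ s) / denom if abs(denom) > 1e-12 else eta0
        w, g = w_new, g_new
        w_new = w - eta * g_new       # BB step: eta approximates 1 / curvature
    return w_new

# Toy quadratic f(w) = 0.5 * w^T A w - b^T w with minimizer A^{-1} b = [1, 1]
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 1.0])
grad = lambda w: A @ w - b            # gradient of the quadratic
w_star = bb_gradient_descent(grad, np.zeros(2))
```

The Rprop side of the paper's method, by contrast, adapts each weight's step from the *sign* of successive partial derivatives rather than from a curvature estimate; the paper combines both ideas into its two-step rule.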
References
McCulloch, W.S., Pitts, W.: A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. of Math. Bio. 1(5), 115–133 (1943)
Liu, Z., Liu, J.: Seismic-controlled Nonlinear Extrapolation of Well Parameters Using Neural Networks. Geophysics 63(6), 2035–2041 (1998)
Seiffert, U.: Training of large-scale feed-forward neural networks. In: 2006 International Joint Conference on Neural Networks, pp. 10780–10785 (2006)
Hammer, B., Strickert, M., Villmann, T.: Supervised neural gas with general similarity measure. Neural Processing Letters 21, 21–44 (2005)
Bishop, C.M.: Neural Networks for Pattern Recognition. Clarendon Press, Oxford (1995)
Plagianakos, V.P., Sotiropoulos, D.G., Vrahatis, M.N.: A Nonmonotone Backpropagation Training Method for Neural Networks. Dept. of Mathematics, Univ. of Patras, Technical Report No. 98-04 (1998)
Sotiropoulos, D.G., Kostopoulos, A.E., Grapsa, T.N.: A spectral version of Perry's conjugate gradient method for neural network training. In: Proceedings of the 4th GRACM Congress on Computational Mechanics, pp. 291–298 (2002)
Lippmann, R.P.: An introduction to Computing With Neural Nets. IEEE ASSP Magazine 4(87), 4–23 (1987)
Seiffert, U., Michaelis, B.: Directed random search for multiple layer perceptron training. In: Neural Networks for Signal Processing IEEE, pp. 193–202 (2001)
Seiffert, U.: Multiple layer perceptron training using genetic algorithms. In: Proceedings of the 9th European Symposium on Artificial Neural Networks, pp. 159–164 (2001)
Barzilai, J., Borwein, J.M.: Two point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)
Riedmiller, M., Braun, H.: A direct adaptive method for faster Backpropagation learning: The RPROP algorithm. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 586–591. IEEE, New Jersey (1993)
Møller, M.F.: A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6, 525–533 (1993)
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Mu, X., Zhang, Y. (2010). A New Two-Step Gradient-Based Backpropagation Training Method for Neural Networks. In: Zhang, L., Lu, BL., Kwok, J. (eds) Advances in Neural Networks - ISNN 2010. ISNN 2010. Lecture Notes in Computer Science, vol 6063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13278-0_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-13277-3
Online ISBN: 978-3-642-13278-0