A New Two-Step Gradient-Based Backpropagation Training Method for Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6063)

Abstract

A new two-step gradient-based backpropagation training method for neural networks is proposed in this paper. Building on the Barzilai and Borwein steplength update rule and on the technique of the resilient gradient descent (Rprop) method, we derive a new descent direction and steplength update rule. The new two-step learning rate improves both convergence speed and success rate. Experimental results show that the proposed method converges considerably faster and, on the chosen test problems, outperforms other well-known training methods.
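
The abstract names two standard ingredients without reproducing their update rules: the Barzilai-Borwein (BB) two-point steplength [11] and the resilient (Rprop) steplength adaptation [12]. As background only, here is a minimal sketch of each ingredient in isolation, not the authors' combined method: a BB1 steplength driving plain gradient descent, and a simplified Rprop step. The test problem, function names, and parameter defaults are illustrative assumptions, not taken from the paper.

    import numpy as np

    def bb_gradient_descent(grad, x0, alpha0=1e-3, iters=60):
        # Barzilai-Borwein (BB1) steplength [11]:
        #   alpha_k = (s.s) / (s.y),  s = x_k - x_{k-1},  y = g_k - g_{k-1}.
        # Background sketch only; not the paper's two-step rule.
        x_prev, g_prev = x0, grad(x0)
        x = x_prev - alpha0 * g_prev               # bootstrap with a fixed steplength
        for _ in range(iters):
            g = grad(x)
            s, y = x - x_prev, g - g_prev          # iterate and gradient displacements
            sty = s @ y
            alpha = (s @ s) / sty if sty > 1e-12 else alpha0  # guard the curvature term
            x_prev, g_prev = x, g
            x = x - alpha * g
        return x

    def rprop_step(w, g, g_prev, delta, eta_plus=1.2, eta_minus=0.5,
                   delta_max=50.0, delta_min=1e-6):
        # Simplified Rprop update [12]: grow each per-weight steplength while
        # successive partial derivatives keep their sign, shrink it on a sign
        # flip, then step against the gradient sign only (magnitudes ignored).
        # Shown for the shape of the rule; the demo below exercises BB only.
        same_sign = np.sign(g) == np.sign(g_prev)
        delta = np.where(same_sign,
                         np.minimum(delta * eta_plus, delta_max),
                         np.maximum(delta * eta_minus, delta_min))
        return w - np.sign(g) * delta, delta

    # Illustrative quadratic test problem (an assumption, not from the paper):
    # minimize 0.5 * x^T A x - b^T x, whose gradient is A x - b.
    A = np.diag([1.0, 2.0, 10.0])
    b = np.ones(3)
    grad = lambda x: A @ x - b
    print(np.allclose(bb_gradient_descent(grad, np.zeros(3)),
                      np.linalg.solve(A, b)))      # True: BB recovers A^{-1} b

Per the abstract, the paper's contribution is a new descent direction and steplength rule built from these two ingredients; the sketch above shows only the standard building blocks, each on its own.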


References

  1. McCulloch, W.S., Pitts, W.: A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943)

  2. Liu, Z., Liu, J.: Seismic-controlled Nonlinear Extrapolation of Well Parameters Using Neural Networks. Geophysics 63(6), 2035–2041 (1998)

  3. Seiffert, U.: Training of large-scale feed-forward neural networks. In: 2006 International Joint Conference on Neural Networks, pp. 10780–10785 (2006)

  4. Hammer, B., Strickert, M., Villmann, T.: Supervised neural gas with general similarity measure. Neural Processing Letters 21, 21–44 (2005)

  5. Bishop, C.M.: Neural Networks for Pattern Recognition. Clarendon Press, Oxford (1995)

  6. Plagianakos, V.P., Sotiropoulos, D.G., Vrahatis, M.N.: A Nonmonotone Backpropagation Training Method for Neural Networks. Dept. of Mathematics, Univ. of Patras, Technical Report No. 98-04 (1998)

  7. Sotiropoulos, D.G., Kostopoulos, A.E., Grapsa, T.N.: A spectral version of Perry's conjugate gradient method for neural network training. In: Proceedings of 4th GRACM Congress on Computational Mechanics, pp. 291–298 (2002)

  8. Lippmann, R.P.: An Introduction to Computing with Neural Nets. IEEE ASSP Magazine 4(2), 4–22 (1987)

  9. Seiffert, U., Michaelis, B.: Directed random search for multiple layer perceptron training. In: IEEE Workshop on Neural Networks for Signal Processing, pp. 193–202 (2001)

  10. Seiffert, U.: Multiple layer perceptron training using Genetic Algorithms. In: Proceedings of the 9th European Symposium on Artificial Neural Networks, pp. 159–164 (2001)

  11. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)

  12. Riedmiller, M., Braun, H.: A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 586–591. IEEE, New Jersey (1993)

  13. Møller, M.F.: A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6, 525–533 (1993)

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mu, X., Zhang, Y. (2010). A New Two-Step Gradient-Based Backpropagation Training Method for Neural Networks. In: Zhang, L., Lu, B.L., Kwok, J. (eds.) Advances in Neural Networks - ISNN 2010. Lecture Notes in Computer Science, vol. 6063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13278-0_13

  • DOI: https://doi.org/10.1007/978-3-642-13278-0_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13277-3

  • Online ISBN: 978-3-642-13278-0

  • eBook Packages: Computer Science (R0)
