Neural Processing Letters

AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm

  • Open Access
  • Published: 22 February 2011
  • Volume 33, pages 201–214 (2011)
  • Arman Didandeh1
  • Nima Mirbakhsh1
  • Ali Amiri2
  • Mahmood Fathy3
Abstract

A critical issue for neural-network-based large-scale data mining is how to speed up the learning algorithm. The problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layer Perceptron (MLP) neural networks, given their wide use in scientific and engineering applications. In this paper, we propose an Adaptive Variable Learning Rate EBP (AVLR-EBP) algorithm that addresses this problem, aiming for faster convergence than the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two closely related methods of calculating the learning rate. A mathematical analysis of the AVLR-EBP algorithm confirms its convergence. We apply the algorithm to data classification; simulation results on several well-known data sets demonstrate a considerable reduction in convergence time compared with the standard EBP algorithm. In classifying the IRIS, Wine, Breast Cancer, Semeion and SPECT Heart datasets, the proposed algorithm requires fewer learning epochs than the standard EBP algorithm.
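The abstract describes adapting the EBP learning rate over training, inspired by variable-step-size adaptive filtering. The paper's actual AVLR update rules are not reproduced here; the following is a minimal illustrative sketch of the general idea, using a hypothetical error-driven heuristic (grow the step on improvement, shrink it on regression) on a small MLP trained with standard EBP.

```python
import numpy as np

# Minimal sketch, NOT the authors' AVLR-EBP rule: a one-hidden-layer MLP
# trained with error back-propagation, where the learning rate is adapted
# each epoch by a simple variable-step-size heuristic.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR-like two-class data, one sample per row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network shape: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

lr, prev_err = 0.5, np.inf
for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = float(np.mean((y - out) ** 2))

    # Variable learning rate: increase slightly while the error keeps
    # falling, cut sharply when it rises (illustrative heuristic only).
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    lr = min(max(lr, 1e-3), 2.0)
    prev_err = err

    # Backward pass: EBP gradients for the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(f"final MSE: {err:.4f}")
```

The interesting part is only the two lines adjusting `lr`; everything else is a standard EBP loop. In the paper, the adaptation is instead derived from adaptive-filtering theory, with two semi-similar learning-rate formulas and a convergence proof.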



Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Author information

Authors and Affiliations

  1. Department of Computer Science and IT, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran

    Arman Didandeh & Nima Mirbakhsh

  2. Computer Engineering Group, Engineering Department, Zanjan University, Zanjan, Iran

    Ali Amiri

  3. Computer Engineering Department, Iran University of Science and Technology, Tehran, Iran

    Mahmood Fathy


Corresponding author

Correspondence to Nima Mirbakhsh.


About this article

Cite this article

Didandeh, A., Mirbakhsh, N., Amiri, A. et al. AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm. Neural Process Lett 33, 201–214 (2011). https://doi.org/10.1007/s11063-011-9173-1


  • Published: 22 February 2011

  • Issue Date: April 2011

  • DOI: https://doi.org/10.1007/s11063-011-9173-1


Keywords

  • Neural networks
  • MLP
  • EBP
  • Algorithm
  • Classification
