Abstract
A critical issue for neural-network-based large-scale data mining algorithms is how to speed up learning. The problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layer Perceptron (MLP) neural networks, given their wide use in scientific and engineering applications. In this paper, we propose an Adaptive Variable Learning Rate EBP (AVLR-EBP) algorithm that attacks the problem of reducing convergence time, aiming for high-speed convergence in comparison with the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two closely related methods of calculating the learning rate. Mathematical analysis of the AVLR-EBP algorithm confirms its convergence. The AVLR-EBP algorithm is applied to data classification. Simulation results on several well-known data sets demonstrate a considerable reduction in convergence time in comparison to the standard EBP algorithm: in classifying the IRIS, Wine, Breast Cancer, Semeion and SPECT Heart datasets, the proposed algorithm requires fewer learning epochs than standard EBP.
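The abstract does not give the paper's exact update rule, but the adaptive-filtering idea it references can be illustrated with a generic variable step-size rule in the spirit of variable step size LMS, applied to error back-propagation on a tiny one-hidden-layer MLP. This is a hedged sketch, not the authors' AVLR-EBP algorithm: the constants `alpha` and `gamma`, the clipping bounds, and the XOR task are all illustrative choices.

```python
import numpy as np

# Illustrative sketch only: a VSS-LMS-style learning-rate update
# (grow the rate with the squared error, decay it otherwise) grafted
# onto standard EBP for a one-hidden-layer MLP trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, alpha, gamma = 0.5, 0.97, 0.05   # initial rate and adaptation constants (assumed)
lr_min, lr_max = 1e-3, 2.0           # keep the step size bounded
errors = []
for epoch in range(2000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E = T - Y
    mse = float(np.mean(E ** 2))
    errors.append(mse)
    # backward pass: standard EBP gradients for sigmoid layers
    dY = E * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 += lr * H.T @ dY; b2 += lr * dY.sum(0)
    W1 += lr * X.T @ dH; b1 += lr * dH.sum(0)
    # variable step size: larger error -> larger rate, then geometric decay
    lr = float(np.clip(alpha * lr + gamma * mse, lr_min, lr_max))

print(f"MSE: {errors[0]:.4f} -> {errors[-1]:.4f}, final lr = {lr:.4f}")
```

The point of the adaptation line is that the learning rate is no longer a fixed hyperparameter: it tracks the instantaneous error, taking large steps early in training and shrinking as the network converges, which is the mechanism the paper credits for its reduced epoch counts.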
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Cite this article
Didandeh, A., Mirbakhsh, N., Amiri, A. et al. AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm. Neural Process Lett 33, 201–214 (2011). https://doi.org/10.1007/s11063-011-9173-1
Keywords
- Neural networks
- MLP
- EBP
- Algorithm
- Classification