
Neural Computing & Applications, Volume 8, Issue 3, pp 218–225

An Enhanced Training Algorithm for Multilayer Neural Networks Based on Reference Output of Hidden Layer

  • Y. Li
  • A. B. Rad
  • W. Peng
Original Article

Abstract

In this paper, the authors propose a new training algorithm that relies not only upon the training samples but also upon the output of the hidden layer. Both the connecting weights and the outputs of the hidden layer are adjusted using the Least Square Backpropagation (LSB) algorithm. A set of ‘required’ outputs of the hidden layer is added to the input sets through a feedback path to accelerate convergence. Numerical simulation results demonstrate that the algorithm outperforms the conventional BP, quasi-Newton BFGS (an alternative to the conjugate gradient methods for fast optimisation) and LSB algorithms in terms of convergence speed and training error. The proposed method also avoids the drawback of the LSB algorithm, whose training error cannot be reduced further after three iterations.
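To illustrate the flavour of such layer-wise least-squares training, the following is a minimal sketch, not the authors' exact algorithm: a two-layer sigmoid network where, in each sweep, the output weights are fitted by linear least squares (linearised through the inverse sigmoid), a ‘required’ hidden-layer output is back-solved from the targets, and the hidden weights are then fitted to reproduce it. All names, the toy data, and the pseudo-inverse feedback step are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of LSB-style training: alternate linear least-squares fits
# per layer, feeding a 'required' hidden output back as a fitting target.
# This is an illustrative reconstruction, not the paper's exact method.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inv_sigmoid(a, eps=1e-6):
    # Clip into (0, 1) so the logit is always defined.
    a = np.clip(a, eps, 1.0 - eps)
    return np.log(a / (1.0 - a))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Xb = np.hstack([X, np.ones((200, 1))])            # inputs with a bias column
Y = sigmoid(np.sin(np.pi * X[:, :1]) + X[:, 1:])  # toy targets in (0, 1)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(3, n_hidden))    # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))    # hidden -> output weights

for _ in range(5):  # a few alternating least-squares sweeps
    H = sigmoid(Xb @ W1)                          # current hidden-layer output
    # Fit the output layer: H @ W2 should match the logit of the targets.
    W2, *_ = np.linalg.lstsq(H, inv_sigmoid(Y), rcond=None)
    # 'Required' hidden output: back-solve the targets through W2
    # (pseudo-inverse used here as an illustrative feedback path).
    H_ref = inv_sigmoid(Y) @ np.linalg.pinv(W2)
    # Fit the hidden layer so that sigmoid(Xb @ W1) approximates H_ref.
    W1, *_ = np.linalg.lstsq(Xb, inv_sigmoid(H_ref), rcond=None)

H = sigmoid(Xb @ W1)
err = np.mean((sigmoid(H @ W2) - Y) ** 2)
print(f"training MSE after least-squares sweeps: {err:.4f}")
```

Each sweep involves only linear solves, which is the source of the convergence-speed advantage the abstract claims over gradient-based BP.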

Key words: Backpropagation; BFGS quasi-Newton algorithm; Conjugate gradient algorithm; Least squares; Multilayer neural networks


Copyright information

© Springer-Verlag London Limited 1999

Authors and Affiliations

  • Y. Li (1)
  • A. B. Rad (1)
  • W. Peng (2)
  1. Department of Electrical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
  2. School of Engineering, The Flinders University of South Australia, Adelaide, Australia
