Automatic scaling using gamma learning for feedforward neural networks

  • A. P. Engelbrecht
  • I. Cloete
  • J. Geldenhuys
  • J. M. Zurada
Part of the Lecture Notes in Computer Science book series (LNCS, volume 930)

Abstract

Standard error back-propagation requires output data that is scaled to lie within the active area of the activation function. We show that normalizing data to conform to this requirement is not only time-consuming, but can also introduce inaccuracies in modelling the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates the need to scale output data before training. We show that the use of “self-scaling” output units results in faster convergence and more accurate results compared to the rescaled results of standard back-propagation.
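As a rough illustration of the idea described in the abstract, the sketch below trains a per-output scaling factor gamma by gradient descent together with the ordinary back-propagation weights, so raw (unscaled) targets can be fitted directly. This is only an assumed, minimal form of a self-scaling output unit, not the authors' exact gamma update rule; the network size, learning rates and toy regression task are illustrative choices. With plain back-propagation the same targets would first have to be mapped into the sigmoid's active range (e.g. into [0.1, 0.9]) and the outputs mapped back afterwards; here the learnable gamma absorbs that scale instead.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression task whose targets lie well outside the sigmoid's (0, 1) range,
# so standard back-propagation would first require rescaling them.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = 5.0 + 3.0 * np.sin(np.pi * X[:, :1]) + 2.0 * X[:, 1:]   # targets roughly in [0, 10]

n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)
gamma = np.ones(n_out)            # learnable output scale ("self-scaling" unit)

eta, eta_gamma = 0.02, 0.05
for epoch in range(5000):
    # Forward pass: each output is gamma * sigmoid(net), so its range is (0, gamma).
    H = sigmoid(X @ W1 + b1)
    S = sigmoid(H @ W2 + b2)
    O = gamma * S

    err = O - T                                     # error on the unscaled targets

    # Gradients of the mean squared error, including the gradient w.r.t. gamma.
    d_gamma = np.mean(err * S, axis=0)
    d_net2  = err * gamma * S * (1.0 - S) / len(X)
    d_W2, d_b2 = H.T @ d_net2, d_net2.sum(axis=0)
    d_net1  = (d_net2 @ W2.T) * H * (1.0 - H)
    d_W1, d_b1 = X.T @ d_net1, d_net1.sum(axis=0)

    # gamma is trained together with the weights, so the targets never have to
    # be normalised before training.
    gamma -= eta_gamma * d_gamma
    W2 -= eta * d_W2; b2 -= eta * d_b2
    W1 -= eta * d_W1; b1 -= eta * d_b1

print("learned output scale gamma:", gamma)
print("final MSE:", np.mean((gamma * sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2))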



Copyright information

© Springer-Verlag 1995

Authors and Affiliations

  • A. P. Engelbrecht (1)
  • I. Cloete (1)
  • J. Geldenhuys (1)
  • J. M. Zurada (1, 2)
  1. Computer Science Department, University of Stellenbosch, Stellenbosch, South Africa
  2. University of Louisville, Louisville, USA
