Automatic scaling using gamma learning for feedforward neural networks
Standard error back-propagation requires output data that is scaled to lie within the active area of the activation function. We show that normalizing data to conform to this requirement is not only time-consuming, but can also introduce inaccuracies in modelling the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates the need to scale output data before training. We show that the use of "self-scaling" units results in faster convergence and more accurate results compared with the rescaled results of standard back-propagation.
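As context for the requirement the gamma rule removes, the following is a minimal sketch of the conventional pre-scaling step that standard back-propagation needs: targets are min-max scaled into the near-linear region of a unipolar sigmoid (here [0.1, 0.9]), and network outputs must be mapped back afterwards. The function names, the chosen range, and the NumPy implementation are illustrative assumptions, not the paper's method.

```python
import numpy as np

def scale_targets(y, lo=0.1, hi=0.9):
    """Min-max scale targets into [lo, hi] (an assumed 'active area'
    of a unipolar sigmoid) and return the parameters needed to invert."""
    y_min, y_max = float(y.min()), float(y.max())
    y_scaled = lo + (hi - lo) * (y - y_min) / (y_max - y_min)
    return y_scaled, (y_min, y_max, lo, hi)

def unscale_outputs(y_scaled, params):
    """Map network outputs back to the original target range."""
    y_min, y_max, lo, hi = params
    return y_min + (y_scaled - lo) * (y_max - y_min) / (hi - lo)

# Example: raw targets outside the sigmoid's output range.
y = np.array([0.0, 5.0, 10.0])
y_scaled, params = scale_targets(y)       # -> [0.1, 0.5, 0.9]
y_back = unscale_outputs(y_scaled, params)  # round-trip recovers y
```

Any inaccuracy introduced at this stage (e.g. from outliers compressing the useful range) propagates into the trained model, which is the drawback the self-scaling units are designed to avoid.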