Abstract
This paper presents a theoretical and empirical analysis of the evolution of a feedforward neural network (FFNN) trained using backpropagation (BP). The results of two sets of experiments are presented which illustrate the nature of BP's search through weight space as the network learns to classify the training data. The search is shown to be driven by the initial values of the weights in the output layer of neurons.
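The setting described above can be illustrated with a minimal sketch, assuming a standard architecture and task (the XOR problem, a small sigmoid network, and NumPy are illustrative choices, not the authors' actual experimental setup): a feedforward network trained by backpropagation, with the output-layer weights snapshotted each epoch so their trajectory through weight space can be inspected.

```python
# Minimal sketch (illustrative, not the authors' code): a 2-3-1 feedforward
# network trained by backpropagation on XOR, logging the output-layer weights
# each epoch so their evolution through weight space can be examined.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data: XOR (a standard toy task; the paper's datasets differ).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small random initial weights: hidden layer (2 -> 3), output layer (3 -> 1).
# Per the abstract, the initial output-layer weights W2 drive the search.
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)

lr = 0.5
trajectory = []  # per-epoch snapshots of the output-layer weights

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight update.
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

    trajectory.append(W2.copy())

print("final max abs error:", float(np.abs(out - y).max()))
```

Plotting the successive `trajectory` snapshots (e.g. one curve per output weight against epoch) gives the kind of weight-space path whose dependence on the initial output-layer values the paper examines.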
Copyright information
© 1998 Springer-Verlag Wien
Cite this paper
McLean, D., Bandar, Z., O’Shea, J.D. (1998). The Evolution of a Feedforward Neural Network trained under Backpropagation. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6492-1_114
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-83087-1
Online ISBN: 978-3-7091-6492-1