Abstract
This book has introduced the Vector Decomposition Method (VDM), a method for mathematically analyzing the learning behavior of (multi-layer) feed-forward neural networks. Using the VDM, a large number of phenomena that occur during the training of feed-forward neural networks have been analyzed mathematically, yielding easy-to-read equations and thereby insight into the learning processes in these networks.
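As a minimal illustration of the general idea of decomposing a weight-update vector (a generic sketch, not the book's specific formulation; the symbols Δw and u are assumed here only for this example), an update Δw can be split with respect to a reference direction u as

Δw = ((Δw · u) / ||u||²) u + Δw⊥, with Δw⊥ · u = 0,

so that the first term captures the part of the update along u and the second term the part orthogonal to it; analyzing each component separately is what makes the resulting expressions compact.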
Copyright information
© 1995 Springer Science+Business Media New York
Cite this chapter
Annema, AJ. (1995). Conclusions. In: Feed-Forward Neural Networks. The Springer International Series in Engineering and Computer Science, vol 314. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2337-6_14
DOI: https://doi.org/10.1007/978-1-4615-2337-6_14
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-5990-6
Online ISBN: 978-1-4615-2337-6