Feed-Forward Neural Networks, pp. 27-37
The Vector Decomposition Method
Abstract
When implementing neural networks in either software or hardware, one needs specifications for the building blocks (software functions or hardware circuits) in order to arrive at a sufficiently good implementation. In this context, sufficiently good means that the minimum requirements for proper operation are satisfied and that, at the same time, the building blocks are not over-specified, since over-specification is usually expensive (economically or computationally). Specifications for the various building blocks of a neural network can be derived in a number of ways; we list three possible approaches and briefly discuss each.
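Although the abstract leaves the method itself unstated, the chapter title and keywords (weight vector, input vector, vector component) suggest splitting each input vector into a component parallel to a neuron's weight vector and a component orthogonal to it. The following minimal sketch illustrates that decomposition under this assumption; the names `decompose`, `x`, and `w` are illustrative choices, not the book's notation.

```python
import numpy as np

def decompose(x: np.ndarray, w: np.ndarray):
    """Split input vector x into a component parallel to the weight
    vector w and a component orthogonal to it (assumed reading of
    'vector decomposition'; not taken verbatim from the chapter)."""
    w_norm_sq = np.dot(w, w)
    if w_norm_sq == 0.0:
        # A zero weight vector has no direction: nothing is parallel to it.
        return np.zeros_like(x), x.copy()
    x_par = (np.dot(x, w) / w_norm_sq) * w  # projection of x onto w
    x_perp = x - x_par                      # residual, orthogonal to w
    return x_par, x_perp

# Only the parallel component affects the neuron's pre-activation,
# since the orthogonal component has zero dot product with w.
x = np.array([3.0, 1.0])
w = np.array([1.0, 1.0])
x_par, x_perp = decompose(x, w)
assert np.isclose(np.dot(x_perp, w), 0.0)   # orthogonality check
assert np.allclose(x_par + x_perp, x)       # exact reconstruction
print(x_par, x_perp)                        # [2. 2.] [ 1. -1.]
```

One appeal of such a decomposition is that the pre-activation depends only on the parallel component, which makes it possible to reason separately about how learning moves the weight vector along, versus away from, a given input direction.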
Keywords
Neural Network · Weight Vector · Input Vector · Input Space · Vector Component