Parallel Implementation of a Multi-Layer Perceptron
In this paper we describe a parallel implementation of a multi-layer perceptron for a message-passing parallel architecture, following the vertical-slicing approach. A theoretical analysis shows that linear scalability can be achieved in both recognition and learning, at the cost of suitably replicating some data structures to optimize the communication phase.
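To make the vertical-slicing scheme concrete, the following is a minimal sketch of one forward-pass layer step. It is written in C with MPI rather than the Occam/Transputer code of the original system, and the names (layer_step, W_slice, the tanh activation) and the assumption that each layer's width divides evenly among the processes are our own illustration. Each process owns a horizontal band of the weight matrix, i.e. a slice of the layer's neurons, computes its slice of the activations, and then all-gathers the full activation vector; this replication of the activations on every node is the kind of data-structure replication the communication phase relies on.

```c
/* Sketch of a vertically sliced MLP forward pass using MPI.
 * Illustrative only: names and activation are assumptions,
 * not the paper's Transputer implementation. */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Compute this process's slice of one layer:
 * n_in inputs, rows = n_out / p outputs. */
static void slice_forward(const double *x,       /* full input, n_in       */
                          const double *W_slice, /* rows x n_in weights    */
                          const double *b_slice, /* rows biases            */
                          double *y_slice,       /* rows outputs           */
                          int n_in, int rows)
{
    for (int i = 0; i < rows; i++) {
        double s = b_slice[i];
        for (int j = 0; j < n_in; j++)
            s += W_slice[i * n_in + j] * x[j];
        y_slice[i] = tanh(s);   /* assumed activation function */
    }
}

/* One layer step: local compute, then all-gather so every process
 * holds the full activation vector as input to the next layer.
 * Assumes n_out is divisible by the number of processes. */
static void layer_step(const double *x, const double *W_slice,
                       const double *b_slice, double *y_full,
                       int n_in, int n_out, MPI_Comm comm)
{
    int p;
    MPI_Comm_size(comm, &p);
    int rows = n_out / p;
    double *y_slice = malloc((size_t)rows * sizeof(double));
    slice_forward(x, W_slice, b_slice, y_slice, n_in, rows);
    MPI_Allgather(y_slice, rows, MPI_DOUBLE,
                  y_full,  rows, MPI_DOUBLE, comm);
    free(y_slice);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    enum { N = 8 };              /* toy layer width, divisible by p */
    int rows = N / p;
    double x[N], y[N];
    double W[rows * N], b[rows]; /* this process's slice, dummy values */
    for (int j = 0; j < N; j++)        x[j] = 1.0;
    for (int i = 0; i < rows * N; i++) W[i] = 0.01;
    for (int i = 0; i < rows; i++)     b[i] = 0.0;

    layer_step(x, W, b, y, N, N, MPI_COMM_WORLD);
    if (rank == 0) printf("y[0] = %f\n", y[0]);
    MPI_Finalize();
    return 0;
}
```

Only the all-gather requires communication; the matrix-vector work divides by p, which is where the linear-scalability argument comes from.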
Scalability depends on the number of neurons per processor, on the communication bandwidth, and on the ratio of processing time to communication time. We show how, for a given neural network, to determine the number of processing elements that minimizes execution time.
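The following cost model sketches the kind of argument involved, under our own assumptions rather than the paper's exact analysis: a fully connected layer of N neurons sliced over p processors, with a ring all-gather of the activations, where t_c is the time per multiply-accumulate, t_s the per-message start-up time, and t_w the per-word transfer time.

```latex
% Hypothetical per-layer cost model (our assumptions, not the paper's
% published analysis): compute divides by p, while the ring all-gather
% takes p-1 steps of N/p words each.
\[
  T(p) \;=\; t_c\,\frac{N^2}{p} \;+\; (p-1)\,t_s \;+\; \frac{p-1}{p}\,N\,t_w
       \;\approx\; t_c\,\frac{N^2}{p} \;+\; t_s\,p \;+\; t_w\,N .
\]
% Setting dT/dp = 0 gives the processor count minimizing execution time:
\[
  \frac{dT}{dp} \;=\; -\,t_c\,\frac{N^2}{p^2} \;+\; t_s \;=\; 0
  \qquad\Longrightarrow\qquad
  p^{\ast} \;=\; N\,\sqrt{t_c / t_s}.
\]
```

Under this model the optimum grows linearly with the layer width N; beyond p* the per-message start-up cost dominates, and adding processors slows the network down.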
The theoretical analysis is confirmed by an actual implementation on a Transputer-based system with 40 processing nodes.