Abstract
Progress in neural network R&D, and the transfer of that technology to real-world applications, depends critically upon the availability of high-speed, massively parallel computing. The MasPar computer is a SIMD architecture well-suited to neural network implementations. We present an implementation of multi-layer perceptron learning using the back-propagation algorithm. The algorithm design is based on a virtual processor approach. An initial implementation of an unoptimized, general design yields approximately 306K connection updates per second (CUPS). This is 75% of previously reported results for an IBM 3090 (listed in Watanabe, et al., 1989, Table II) and 17-fold greater than our own Sun 3/80. We expect significant improvement, to several tens of MCUPS, as we begin to address load balancing, make better use of the (1K) register space available to each processor, and develop virtual-processor depth-reduction techniques.
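To make the abstract's terminology concrete, the following is a minimal, illustrative sketch of multi-layer perceptron learning by back-propagation, the algorithm the chapter parallelizes. The network sizes, learning rate, and NumPy formulation here are arbitrary assumptions for exposition, not the authors' MasPar design; the per-step count of weight updates shows what a "connection update," the unit behind the CUPS figure, refers to.

```python
import numpy as np

# One-hidden-layer perceptron with sigmoid units (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 4, 2
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, lr=0.5):
    """One forward/backward pass; returns the squared error before the update."""
    global W1, W2
    h = sigmoid(x @ W1)                       # hidden activations
    y = sigmoid(h @ W2)                       # output activations
    err = t - y
    # Back-propagate deltas through the sigmoid derivatives.
    delta_out = err * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # Each weight is updated once per pass: one "connection update" per weight.
    W2 += lr * np.outer(h, delta_out)
    W1 += lr * np.outer(x, delta_hid)
    return float(err @ err)

x = rng.random(n_in)
t = np.array([1.0, 0.0])
losses = [train_step(x, t) for _ in range(200)]
updates_per_step = W1.size + W2.size  # 8*4 + 4*2 = 40 connection updates
```

A CUPS rate is simply this per-pass update count multiplied by the number of passes completed per second; a SIMD machine raises the rate by performing many such updates in lockstep across its processor array.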
© 1990 Springer Science+Business Media Dordrecht
Grajski, K.A., Chinn, G., Chen, C., Kuszmaul, C., Tomboulian, S. (1990). Neural Network Simulation on the MasPar MP-1 Massively Parallel Processor. In: International Neural Network Conference. Springer, Dordrecht. https://doi.org/10.1007/978-94-009-0643-3_38
DOI: https://doi.org/10.1007/978-94-009-0643-3_38
Publisher Name: Springer, Dordrecht
Print ISBN: 978-0-7923-0831-7
Online ISBN: 978-94-009-0643-3