Exploiting parallel computers to reduce neural network training time of real applications
Neural networks have been proposed for solving difficult problems such as speech and character recognition; however, no truly revolutionary system has emerged so far. This paper presents the results of a survey of ongoing research on neural network applications. Moreover, we point out the demands that neural applications place on their mapping onto parallel computer hardware, and we propose a flexible mapping of back-propagation-trained neural networks onto a highly parallel computer.
The experiments undertaken show the need for an application-specific mapping of the given neural network and training set.
Keywords: Neural Network, Output Layer, Training Pattern, Optical Character Recognition, Weight Update
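The paper itself contains no code, but the kind of mapping the abstract refers to, training-set (pattern) parallelism for back-propagation, is easy to illustrate. In the minimal sketch below, a handful of simulated "processors" each hold a replica of a small two-layer network and a slice of the training patterns; their local gradients are summed before a common weight update, standing in for the message passing a machine such as the AP1000 would perform. All names, shapes, and the toy network are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of training-set (pattern) parallelism for one back-propagation
# step on a two-layer feed-forward network. The "processors" are simulated
# by a loop; on a real message-passing machine each slice would be
# processed on its own node and the gradient sum would be an all-reduce.

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_proc = 8, 16, 4, 4

W1 = rng.standard_normal((n_in, n_hid)) * 0.1   # replicated on every processor
W2 = rng.standard_normal((n_hid, n_out)) * 0.1

def local_gradients(X, T, W1, W2):
    """Forward + backward pass over this processor's slice of patterns."""
    H = np.tanh(X @ W1)                 # hidden activations
    Y = np.tanh(H @ W2)                 # output activations
    dY = (Y - T) * (1.0 - Y**2)         # output-layer delta (MSE, tanh)
    dH = (dY @ W2.T) * (1.0 - H**2)     # hidden-layer delta
    return X.T @ dH, H.T @ dY           # local weight gradients

# One update over a batch of patterns, split across simulated processors.
X = rng.standard_normal((64, n_in))
T = rng.standard_normal((64, n_out))
lr = 0.01

g1 = np.zeros_like(W1)
g2 = np.zeros_like(W2)
for Xs, Ts in zip(np.array_split(X, n_proc), np.array_split(T, n_proc)):
    dW1, dW2 = local_gradients(Xs, Ts, W1, W2)  # runs concurrently in reality
    g1 += dW1                                   # stands in for an all-reduce
    g2 += dW2

W1 -= lr * g1   # identical update applied on every replica
W2 -= lr * g2
```

How often this summed update is applied, after every pattern, every slice, or every epoch, is the weight-update-frequency trade-off the keywords hint at: less frequent updates reduce communication but can slow convergence, which is one reason the mapping must be tuned to the given network and training set.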