
Exploiting parallel computers to reduce neural network training time of real applications

  • Jim Torresen
  • Shin-ichiro Mori
  • Hiroshi Nakashima
  • Shinji Tomita
  • Olav Landsverk
Part VII: Poster Session Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1336)

Abstract

Neural networks have been proposed as a way to solve difficult problems such as speech and character recognition. So far, however, no revolutionary system has emerged. This paper presents the results of a survey of ongoing research on neural network applications. Moreover, we point out the requirements for mapping neural network applications onto parallel computer hardware. We propose a flexible mapping of back-propagation-trained neural networks onto a highly parallel computer.

The experiments undertaken show the need for an application-specific mapping tailored to the given neural network and training set.
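
The flexible mapping referred to above combines several degrees of back-propagation parallelism. As a purely illustrative aid, the sketch below shows just one of them, training-set (pattern) parallelism: each processing cell holds a full copy of the weights and a disjoint slice of the training patterns, local gradients are summed, and the same weight update is then applied everywhere. The toy 2-16-1 network, the XOR-style data, the cell count P, and the in-process loop that stands in for the parallel machine's gradient summation are all assumptions made for this example, not details taken from the paper.

```python
# Minimal sketch of training-set ("pattern") parallelism for back-propagation.
# Everything below (network size, data, number of cells) is an illustrative
# assumption; the paper's actual mapping targets a highly parallel machine
# and selects the degree of parallelism per application.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-16-1 feed-forward network on XOR-like data; weights replicated on every cell.
X = rng.integers(0, 2, size=(64, 2)).astype(float)
T = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)
W1 = rng.normal(0.0, 0.5, (2, 16))
W2 = rng.normal(0.0, 0.5, (16, 1))

P = 4                                   # number of processing cells (assumed)
eta = 0.5                               # learning rate
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    grads = []
    # Each cell sees a disjoint slice of the training set and a full weight copy.
    for Xp, Tp in zip(np.array_split(X, P), np.array_split(T, P)):
        H = sigmoid(Xp @ W1)                 # forward pass, hidden layer
        Y = sigmoid(H @ W2)                  # forward pass, output layer
        dY = (Y - Tp) * Y * (1 - Y)          # output-layer delta (squared error)
        dH = (dY @ W2.T) * H * (1 - H)       # hidden-layer delta
        grads.append((Xp.T @ dH, H.T @ dY))  # local weight gradients
    # Sum the local gradients (the parallel machine would do this over its
    # interconnect), then apply the identical update on every cell.
    g1 = sum(g[0] for g in grads)
    g2 = sum(g[1] for g in grads)
    W1 -= eta * g1 / len(X)
    W2 -= eta * g2 / len(X)
```

On a real parallel machine the gradient summation runs over the interconnect, and how often the summed update is applied (per pattern, per block of patterns, or per epoch) is precisely the kind of application-specific choice the experiments point to.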

Keywords

Neural Network, Output Layer, Training Pattern, Optical Character Recognition, Weight Update



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Jim Torresen (1, 2)
  • Shin-ichiro Mori (1)
  • Hiroshi Nakashima (1)
  • Shinji Tomita (1)
  • Olav Landsverk (2)
  1. Department of Information Science, Faculty of Engineering, Kyoto University, Kyoto, Japan
  2. Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway
