
Parallel implementation of non-recurrent neural networks

  • T. Calonge
  • L. Alonso
  • R. Ralha
  • A. L. Sánchez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1215)

Abstract

The computational models introduced by neural network theory exhibit a natural parallelism, in the sense that a network can be decomposed into several cellular automata working simultaneously. Following this idea, we present in this paper a parallel implementation of the learning process for two of the main non-recurrent neural networks: the Multilayer Perceptron (MLP) and the Kohonen Self-Organising Map (SOM).

The system we propose integrates both neural networks, applied to an isolated word recognition task. The implementation was carried out on a transputer-based machine following the model given by the CSP (Communicating Sequential Processes) specification. Several experiments with different numbers of processors were performed in order to evaluate the performance of the proposed system, and aspects related to load balancing and communication overhead are discussed.

Index Terms

Neural Networks · Speech Recognition · Parallel Processing · Transputer



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • T. Calonge (1)
  • L. Alonso (2)
  • R. Ralha (3)
  • A. L. Sánchez (1)
  1. Dpto. de Informática, Universidad de Valladolid, Valladolid, Spain
  2. Dpto. Informática y Automática, Universidad de Salamanca, Salamanca, Spain
  3. Dpto. de Matemática, Universidade do Minho, Braga, Portugal
