
Optimal speedup conditions for a parallel back-propagation algorithm

  • Hélène Paugam-Moisy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 634)

Abstract

One way of implementing a parallel back-propagation algorithm is to distribute the examples to be learned among different processors. This method yields spectacular speedups for each epoch of back-propagation learning, but it has a major drawback: parallelization alters the gradient descent algorithm. This paper presents an implementation of this parallel algorithm on a transputer network. It reports experimental laws for the back-propagation convergence speed, and argues that optimal conditions still exist under which such a parallel algorithm achieves an actual speedup. It identifies theoretical and experimental optimal conditions, in terms of the number of processors and the size of the example packets.
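
To make the example-distribution scheme concrete, here is a minimal sketch in Python/NumPy: the training set is split into packets, each (simulated) processor computes the back-propagation gradient on its packet, and the summed gradient is applied as a single weight update per epoch. All names, network sizes and learning rates are illustrative assumptions; this is not the paper's transputer implementation, and the per-packet loop only simulates what would run concurrently on separate processors.

# Sketch of example-distributed (block-gradient) back-propagation.
# Assumptions: one-hidden-layer tanh network, squared-error loss,
# sequential loop standing in for p concurrent processors.
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_hid, n_out):
    return {"W1": rng.normal(0.0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0.0, 0.5, (n_hid, n_out))}

def packet_gradient(params, X, T):
    """Back-propagation gradient of the squared error on one example packet."""
    H = np.tanh(X @ params["W1"])              # hidden activations
    Y = np.tanh(H @ params["W2"])              # network outputs
    dY = (Y - T) * (1 - Y**2)                  # output-layer deltas
    dH = (dY @ params["W2"].T) * (1 - H**2)    # hidden-layer deltas
    return {"W1": X.T @ dH, "W2": H.T @ dY}

def block_gradient_epoch(params, X, T, n_proc, lr=0.1):
    """One epoch: each of the n_proc 'processors' handles one packet of
    examples; their gradients are summed and one shared update is applied."""
    packets = zip(np.array_split(X, n_proc), np.array_split(T, n_proc))
    total = {k: np.zeros_like(v) for k, v in params.items()}
    for Xp, Tp in packets:                     # concurrent on a real machine
        g = packet_gradient(params, Xp, Tp)
        for k in total:
            total[k] += g[k]
    for k in params:
        params[k] -= lr * total[k]
    return params

# Toy usage: XOR with 4 'processors', i.e. one example per packet.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[-1.], [1.], [1.], [-1.]])
params = init(2, 4, 1)
for _ in range(1000):
    params = block_gradient_epoch(params, X, T, n_proc=4)
print(np.round(np.tanh(np.tanh(X @ params["W1"]) @ params["W2"]), 2))

Summing the packet gradients before updating is the alteration of plain per-example gradient descent that the abstract refers to: with p processors and packets of n examples, one parallel epoch performs a single block update in place of p·n sequential updates, which is why the trade-off between packet size and number of processors matters for convergence speed.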

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Hélène Paugam-Moisy
  1. Laboratoire de l'Informatique du Parallélisme, École Normale Supérieure de Lyon, Lyon Cedex 07, France
