Efficiency Analysis of Parallel Batch Pattern NN Training Algorithm on General-Purpose Supercomputer

  • Volodymyr Turchenko
  • Lucio Grandinetti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5518)

Abstract

The theoretical and algorithmic description of the parallel batch pattern back propagation (BP) training algorithm for a multilayer perceptron is presented in this paper. The efficiency of the developed parallel algorithm is investigated as the dimension of the parallelized problem is progressively increased on the general-purpose parallel computer NEC TX-7.
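
The batch pattern scheme parallelizes BP training by data decomposition: each processor accumulates the error-gradient sums over its own subset of training patterns, the partial sums are combined in one global reduction per epoch, and every processor applies the identical weight update, so the replicated weights stay synchronized. The sketch below illustrates this scheme in C with MPI; it is an illustration only, not the authors' code, and for brevity it trains a single linear neuron rather than a full multilayer perceptron (the pattern counts, learning rate, and synthetic data are all assumed).

/*
 * Minimal sketch of batch pattern parallel training (hypothetical
 * illustration, not the paper's original implementation).
 * Each MPI process accumulates weight-gradient sums for its own
 * subset of training patterns; MPI_Allreduce combines the partial
 * sums so every process applies the identical batch update.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N_PATTERNS 1000   /* total training patterns (assumed) */
#define N_INPUTS   4      /* inputs per pattern (assumed)      */
#define N_EPOCHS   100
#define LRATE      0.01

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Synthetic training set, identical on every process (same seed). */
    static double x[N_PATTERNS][N_INPUTS], t[N_PATTERNS];
    srand(42);
    for (int p = 0; p < N_PATTERNS; p++) {
        t[p] = 0.0;
        for (int i = 0; i < N_INPUTS; i++) {
            x[p][i] = (double)rand() / RAND_MAX;
            t[p] += (i + 1) * x[p][i];   /* target = known linear map */
        }
    }

    /* Block distribution of the patterns among the processes. */
    int chunk = N_PATTERNS / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N_PATTERNS : lo + chunk;

    double w[N_INPUTS + 1] = {0};        /* weights + bias, replicated */

    for (int e = 0; e < N_EPOCHS; e++) {
        double g[N_INPUTS + 1] = {0};    /* local gradient sums  */
        double gsum[N_INPUTS + 1];       /* global gradient sums */

        for (int p = lo; p < hi; p++) {
            double y = w[N_INPUTS];      /* bias term */
            for (int i = 0; i < N_INPUTS; i++) y += w[i] * x[p][i];
            double err = y - t[p];       /* gradient of 0.5*err^2 */
            for (int i = 0; i < N_INPUTS; i++) g[i] += err * x[p][i];
            g[N_INPUTS] += err;
        }

        /* Combine partial gradients; every process receives the sum. */
        MPI_Allreduce(g, gsum, N_INPUTS + 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        /* Identical batch update everywhere keeps weights in sync. */
        for (int i = 0; i <= N_INPUTS; i++)
            w[i] -= LRATE * gsum[i] / N_PATTERNS;
    }

    if (rank == 0) {
        printf("trained weights:");
        for (int i = 0; i <= N_INPUTS; i++) printf(" %.3f", w[i]);
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}

Compiled and run, e.g., with mpicc bp_batch.c -o bp_batch and mpirun -np 4 ./bp_batch. Because only one reduction of the gradient sums occurs per epoch, the communication cost per epoch depends on the number of weights rather than the number of patterns, which is what makes the batch pattern approach coarse-grained.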

Keywords

Batch pattern training · Neural network · Parallelization · Efficiency

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Volodymyr Turchenko (1)
  • Lucio Grandinetti (1)
  1. Center of Excellence of High Performance Computing, University of Calabria, Rende (CS), Italy
