Efficiency Analysis of Parallel Batch Pattern NN Training Algorithm on General-Purpose Supercomputer
This paper presents the theoretical and algorithmic description of a parallel batch pattern back-propagation (BP) training algorithm for a multilayer perceptron. The efficiency of the developed parallel algorithm is studied as the dimension of the parallelized problem is progressively increased, running on the general-purpose parallel computer NEC TX-7.
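To make the batch pattern scheme concrete, the following is a minimal sketch of the general idea, not the paper's implementation: each worker accumulates the back-propagation gradient over its own subset of training patterns, the partial gradients are reduce-summed (the role an MPI all-reduce would play), and every copy of the weights is updated identically. The network layout (a 2-2-1 sigmoid perceptron), function names, and the XOR data set are all illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pattern_gradient(w, x, t):
    """Squared-error gradient for one pattern of a 2-2-1 sigmoid MLP.
    Flat layout (illustrative): w[0:6] hidden weights+biases,
    w[6:9] output weights+bias."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]),
         sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])]
    y = sigmoid(w[6] * h[0] + w[7] * h[1] + w[8])
    dy = (y - t) * y * (1 - y)                      # output-layer delta
    dh = [dy * w[6] * h[0] * (1 - h[0]),            # hidden-layer deltas
          dy * w[7] * h[1] * (1 - h[1])]
    return [dh[0] * x[0], dh[0] * x[1], dh[0],
            dh[1] * x[0], dh[1] * x[1], dh[1],
            dy * h[0], dy * h[1], dy]

def batch_step(w, patterns, lr, workers):
    """One batch epoch: partition patterns across `workers` chunks,
    sum each chunk's local gradients (the part that runs in parallel),
    then reduce-sum the partials and update the weights once."""
    chunks = [patterns[i::workers] for i in range(workers)]
    local = []
    for chunk in chunks:                  # each chunk would run concurrently
        g = [0.0] * len(w)
        for x, t in chunk:
            for i, gi in enumerate(pattern_gradient(w, x, t)):
                g[i] += gi
        local.append(g)
    total = [sum(gs) for gs in zip(*local)]   # stand-in for MPI_Allreduce
    return [wi - lr * gi for wi, gi in zip(w, total)]

random.seed(0)
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = [random.uniform(-1, 1) for _ in range(9)]
w1 = batch_step(w, xor, 0.5, workers=1)   # serial batch step
w4 = batch_step(w, xor, 0.5, workers=4)   # same step with 4 simulated workers
print(all(abs(a - b) < 1e-12 for a, b in zip(w1, w4)))  # prints True
```

Because batch gradients are a plain sum over patterns, the parallel partitioning produces the same weight update as the serial pass, which is what makes this pattern-level decomposition attractive for efficiency studies.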
Keywords: Batch pattern training, neural network, parallelization, efficiency