The Effect of Process Topology and Load Balancing on Parallel Programming Models for SMP Clusters and Iterative Algorithms
This article studies the effect of process topology and load balancing on several parallel programming models for SMP clusters, in the context of iterative algorithms. Specifically, we consider nested loop algorithms with constant flow dependencies that can be parallelized on SMP clusters with the aid of the tiling transformation. We investigate three parallel programming models: a popular pure message-passing implementation, and two hybrid ones that combine message passing with multi-threading. We conclude that selecting an appropriate mapping topology for the mesh of processes has a significant effect on overall performance, and we provide an algorithm that derives such an efficient topology from the iteration space and data dependencies of the algorithm. We also propose static load balancing techniques for distributing computation among threads, which mitigate the penalty incurred when the master thread assumes all inter-process communication due to limitations often imposed by the message passing library. Both improvements are implemented as compile-time optimizations and are evaluated experimentally. Finally, we provide an overall comparison of these parallel programming styles on SMP clusters based on micro-kernel experiments.
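The static load balancing idea can be illustrated with a small sketch (a hypothetical illustration, not the paper's actual algorithm): since the master thread also performs all inter-process communication, it is assigned a proportionally smaller compute share, with the remaining iterations redistributed evenly among the other threads. The communication cost is expressed here in iteration-equivalents, an assumed unit for the sake of the example.

```python
def balanced_chunks(total_iters, n_threads, master_comm_cost):
    """Split total_iters among n_threads, shrinking the master
    thread's share by an estimated communication cost expressed
    in iteration-equivalents, so all threads finish together."""
    # Effective per-thread work if the master's comm cost is
    # treated as extra iterations it must "execute".
    share = (total_iters + master_comm_cost) // n_threads
    master = max(share - master_comm_cost, 0)
    # Distribute the remaining iterations evenly over the workers.
    rest = total_iters - master
    base, extra = divmod(rest, n_threads - 1)
    return [master] + [base + (1 if i < extra else 0)
                       for i in range(n_threads - 1)]

# Example: 100 tile iterations, 4 threads, master pays the
# equivalent of 20 iterations in communication.
print(balanced_chunks(100, 4, 20))  # [10, 30, 30, 30]
```

With a zero communication cost the split degenerates to an even distribution, which is the usual static schedule; the imbalance parameter would in practice be estimated at compile time from the tile size and message volume.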
Keywords: parallel programming, high performance computing, SMP clusters, iterative algorithms, tiling, MPI, OpenMP, hybrid programming