Abstract
This article examines the effect of process topology and load balancing on different programming models for SMP clusters running iterative algorithms. Specifically, we consider nested loop algorithms with constant flow dependencies that can be parallelized on SMP clusters with the aid of the tiling transformation. We investigate three parallel programming models: a popular monolithic message passing implementation, and two hybrid models that combine message passing with multi-threading. We show that selecting an appropriate mapping topology for the mesh of processes has a significant effect on overall performance, and we provide an algorithm that determines such an efficient topology from the iteration space and data dependencies of the algorithm. We also propose static load balancing techniques for distributing computation among threads; these mitigate the disadvantage of the master thread assuming all inter-process communication, a restriction often imposed by the message passing library. Both improvements are implemented as compile-time optimizations and are evaluated experimentally. Finally, we provide an overall comparison of these parallel programming styles on SMP clusters based on micro-kernel benchmarks.
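As a hedged illustration of the load balancing idea (a sketch, not the paper's exact algorithm), consider a static distribution of tiles among threads in which the master thread, which also performs all inter-process communication, receives a proportionally smaller share of the computation. The function name `distribute_tiles` and the parameter `comm_ratio` (the assumed fraction of a full computation share that the master spends communicating) are illustrative assumptions:

```python
def distribute_tiles(num_tiles, num_threads, comm_ratio):
    """Statically split num_tiles among num_threads (>= 2), giving the
    master thread (id 0) a reduced share because it also performs all
    inter-process communication. comm_ratio in [0, 1) is the assumed
    fraction of a full share the master spends on communication.
    Illustrative sketch only, not the authors' exact scheme."""
    # Effective weights: the master counts as (1 - comm_ratio) of a worker.
    weights = [1.0 - comm_ratio] + [1.0] * (num_threads - 1)
    total = sum(weights)
    shares = [int(num_tiles * w / total) for w in weights]
    # Hand any rounding remainder to worker threads, not the master.
    remainder = num_tiles - sum(shares)
    i = 1
    while remainder > 0:
        shares[i] += 1
        i = i % (num_threads - 1) + 1
        remainder -= 1
    return shares
```

For example, with `comm_ratio = 0` every thread receives an equal share, while a positive ratio shifts tiles from the master to the worker threads, so that computation and communication complete in roughly the same time per tile step.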
Cite this article
Drosinos, N., Koziris, N. The Effect of Process Topology and Load Balancing on Parallel Programming Models for SMP Clusters and Iterative Algorithms. J Supercomput 35, 65–91 (2006). https://doi.org/10.1007/s11227-006-1156-z
Keywords
- parallel programming
- high performance computing
- SMP clusters
- iterative algorithms
- tiling
- MPI
- OpenMP
- hybrid programming