Abstract
In this paper we present a performance analysis case study of two multilevel parallel benchmark codes, each implemented in three programming paradigms applicable to shared-memory computer architectures. We describe how detailed analysis techniques help to distinguish the influence of the programming model itself from other factors, such as implementation-specific behavior of the operating system or architectural issues.
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Jost, G., Labarta, J., Gimenez, J. (2005). What Multilevel Parallel Programs Do When You Are Not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP. In: Chapman, B.M. (eds) Shared Memory Parallel Programming with Open MP. WOMPAT 2004. Lecture Notes in Computer Science, vol 3349. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-31832-3_4
Print ISBN: 978-3-540-24560-5
Online ISBN: 978-3-540-31832-3