Programming Effort vs. Performance with a Hybrid Programming Model for Distributed Memory Parallel Architectures
We investigate the programming effort and performance of a programming model that is a hybrid between shared memory and message-passing. This model permits an easy implementation in shared memory while still being able to benefit from the performance advantages of message-passing for performance-critical tasks. We have integrated message-passing with a software DSM system and evaluated the programming effort and performance with three different applications and varying degrees of message-passing in each application.
In two of the applications we found that only a small fraction of the source code lines responsible for interprocess communication were performance critical. It was therefore easy to convert only those lines to message-passing primitives and still approach the performance of pure message-passing.
Keywords: Shared Memory, Programming Effort, Master Node, Shared Memory Model, Work Pool
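To make the hybrid model concrete, the sketch below shows the style of program the abstract describes: most shared state lives in a software DSM and is accessed with ordinary loads and stores, while a single performance-critical boundary exchange is converted to explicit message-passing. This is a minimal illustration, not code from the paper: the Tmk_* calls mimic a TreadMarks-style DSM interface ("Tmk.h" is a hypothetical header), and running the DSM and MPI runtimes inside one process is an assumption about how such an integrated system could look.

```c
/*
 * Hybrid DSM + message-passing sketch (illustrative only).
 * Assumptions: Tmk_* calls follow a TreadMarks-like API, "Tmk.h" is a
 * hypothetical header, and both runtimes number processes identically.
 */
#include <stdlib.h>
#include <mpi.h>
#include "Tmk.h"

#define N 4096

int main(int argc, char **argv)
{
    Tmk_startup(argc, argv);            /* join the DSM system */
    MPI_Init(&argc, &argv);             /* and the MPI runtime */

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Non-critical shared state: allocate once in DSM, publish the
     * pointer, and let the DSM keep the pages coherent. */
    double *grid = NULL;
    if (rank == 0) {
        grid = (double *)Tmk_malloc(N * sizeof(double));
        Tmk_distribute((char *)&grid, sizeof(grid));
    }
    Tmk_barrier(0);

    int chunk = N / nprocs;
    int lo = rank * chunk, hi = lo + chunk;
    for (int i = lo; i < hi; i++)       /* compute own partition */
        grid[i] = (double)i;
    Tmk_barrier(1);                     /* DSM makes writes visible */

    /* Performance-critical halo exchange: bypass the DSM and send
     * the boundary value directly to the right-hand neighbour. */
    double halo = grid[hi - 1];
    MPI_Status status;
    if (rank + 1 < nprocs)
        MPI_Send(&halo, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    if (rank > 0)
        MPI_Recv(&halo, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD,
                 &status);

    MPI_Finalize();
    Tmk_exit(0);
    return 0;
}
```

In the spirit of the paper's finding, only the halo exchange is rewritten with message-passing primitives; all remaining communication keeps the simpler shared-memory style, so the conversion touches only a small fraction of the communication code.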