Programming Effort vs. Performance with a Hybrid Programming Model for Distributed Memory Parallel Architectures

  • Andreas Rodman
  • Mats Brorsson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1685)

Abstract

We investigate the programming effort and performance of a programming model that is a hybrid between shared memory and message-passing. This model permits an easy initial implementation in shared memory while still benefiting from the performance advantages of message-passing for performance-critical tasks. We have integrated message-passing with a software DSM (distributed shared memory) system, and evaluated the programming effort and performance on three different applications with varying degrees of message-passing.

In two of the applications we found that only a small fraction of the source code lines responsible for interprocess communication were performance-critical. It was therefore easy to convert only those lines to message-passing primitives and still approach the performance of pure message-passing.
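To make such a conversion concrete, the sketch below contrasts a shared-memory boundary exchange with its explicit message-passing counterpart. This is a minimal illustration, not the authors' implementation: dsm_barrier() is a hypothetical stand-in for the DSM system's primitives, and the explicit side uses standard MPI point-to-point calls in place of the paper's own message-passing API.

    /*
     * Hedged sketch: one performance-critical exchange, first as it
     * would look under the DSM model (in comments), then converted to
     * explicit message-passing. Build with an MPI C compiler (mpicc).
     */
    #include <mpi.h>

    #define N 1024

    double my_halo[N];    /* boundary row produced by this process     */
    double left_halo[N];  /* boundary row consumed from left neighbour */

    /*
     * Shared-memory version (for contrast; dsm_barrier() is a
     * hypothetical DSM primitive):
     *
     *     compute(my_halo);
     *     dsm_barrier();   // neighbours then read my_halo directly,
     *                      // and the DSM ships the pages on demand
     *
     * Message-passing version: the same exchange made explicit, so the
     * data moves in one bulk transfer instead of page-sized faults.
     */
    static void exchange_halo(int rank, int nprocs)
    {
        int left  = (rank + nprocs - 1) % nprocs;
        int right = (rank + 1) % nprocs;

        /* Send our boundary row to the right neighbour while
         * receiving the left neighbour's row, in one call. */
        MPI_Sendrecv(my_halo,   N, MPI_DOUBLE, right, 0,
                     left_halo, N, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    int main(int argc, char **argv)
    {
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        exchange_halo(rank, nprocs);  /* the one converted exchange */

        MPI_Finalize();
        return 0;
    }

Note that only the exchange itself changes; the rest of the program keeps its shared-memory structure, which is why converting the few performance-critical communication lines costs so little effort.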

Keywords

  • Shared Memory
  • Programming Effort
  • Master Node
  • Shared Memory Model
  • Work Pool

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Andreas Rodman (1)
  • Mats Brorsson (1)

  1. Department of Information Technology, Lund University, Lund, Sweden