
Parallelization for multiprocessors with memory hierarchies

  • Michael Gerndt
  • Hans Moritsch
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 591)

Abstract

Programming shared memory multiprocessors seems to be easier than developing programs for distributed memory systems. The reason is the existence of a global name space for parallel threads, providing uniform access to all global data. This programming model appears to be inadequate for systems with a larger number of processors, since memory hierarchies are introduced to eliminate the bottleneck of global memory access. Programming these systems therefore has to take the distribution of data into account, so that the parallel processes are forced to exploit locality of reference. This paper describes an ongoing project in which we investigate the applicability of the distributed memory parallelization strategy to shared memory systems with memory hierarchies.
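The paper's concrete strategy is not reproduced here; as a rough illustration of the underlying idea only, the sketch below (plain C with POSIX threads; all names, sizes, and the block-partitioning scheme are invented for this example) distributes an array in contiguous blocks so that each thread updates only the block it "owns", the owner-computes style of data distribution familiar from distributed-memory parallelization.

```c
/*
 * Illustrative sketch only, not taken from the paper: block distribution
 * of an array across threads, each thread updating only its own block.
 * Compile with: cc -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

#define N        1000000   /* problem size (example value) */
#define NTHREADS 4         /* number of parallel threads (example value) */

static double a[N];

struct arg { int id; };

static void *worker(void *p)
{
    int id = ((struct arg *)p)->id;

    /* Contiguous block owned by this thread: indices [lo, hi). */
    int chunk = (N + NTHREADS - 1) / NTHREADS;
    int lo = id * chunk;
    int hi = (lo + chunk < N) ? lo + chunk : N;

    /* Each thread touches only its own block, so its references stay
       local to the memory (or cache) module holding that block. */
    for (int i = lo; i < hi; i++)
        a[i] = 2.0 * i;

    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct arg args[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        args[i].id = i;
        pthread_create(&t[i], NULL, worker, &args[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```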

Keywords

  • shared memory multiprocessors
  • distributed memory multiprocessors
  • domain decomposition
  • program parallelization
  • program transformations



Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Michael Gerndt 1
  • Hans Moritsch 1
  1. Institute for Statistics and Computer Science, University of Vienna, Vienna, Austria
