Orthogonal Processor Groups for Message-Passing Programs

  • Thomas Rauber
  • Robert Reilein
  • Gudula Rünger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2110)

Abstract

We consider a generalization of the SPMD programming model to orthogonal processor groups. In this model, different partitions of the processor set into disjoint processor groups can be exploited simultaneously in a single parallel implementation. The programming model is suitable for grid-based applications working in horizontal or vertical directions as well as for mixed task and data parallel computations [2]. For such applications we propose a systematic development process for message-passing programs using orthogonal processor groups. The development process starts with a specification of tasks indicating horizontal and vertical sections; a mapping to orthogonal processor groups realizes a group SPMD execution model, and a final transformation step generates the corresponding message-passing program.
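The abstract itself does not fix a concrete realization, but a minimal sketch of the group SPMD idea can be given in MPI: a logical two-dimensional processor grid is split into row groups (the horizontal partition) and column groups (the vertical partition), and each program part then performs collective communication on the communicator of its own partition. The grid dimensions, variable names, and the reductions used below are illustrative assumptions, not the authors' transformation method.

/* Sketch (not from the paper): two orthogonal partitions of a
 * logical NROWS x NCOLS processor grid via MPI communicators. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NROWS 2   /* assumed grid height */
#define NCOLS 4   /* assumed grid width; run with NROWS*NCOLS processes */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Comm row_comm, col_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != NROWS * NCOLS) {      /* grid must match process count */
        if (rank == 0) fprintf(stderr, "run with %d processes\n", NROWS * NCOLS);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int my_row = rank / NCOLS;        /* position in the logical grid */
    int my_col = rank % NCOLS;

    /* Horizontal partition: one communicator per row of the grid. */
    MPI_Comm_split(MPI_COMM_WORLD, my_row, my_col, &row_comm);
    /* Vertical partition: one communicator per column of the grid. */
    MPI_Comm_split(MPI_COMM_WORLD, my_col, my_row, &col_comm);

    /* A horizontal program part uses collective communication on
     * row_comm; a vertical part uses col_comm. */
    double local = (double)rank, row_sum, col_sum;
    MPI_Allreduce(&local, &row_sum, 1, MPI_DOUBLE, MPI_SUM, row_comm);
    MPI_Allreduce(&local, &col_sum, 1, MPI_DOUBLE, MPI_SUM, col_comm);

    printf("rank %d: row sum %.0f, column sum %.0f\n", rank, row_sum, col_sum);

    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
    MPI_Finalize();
    return 0;
}

Each process belongs to exactly one row communicator and one column communicator, so both orthogonal partitions coexist in a single program, which is the essence of the group SPMD execution model described above.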

Keywords

Vertical section · Orthogonal group · Program part · Collective communication · Potential parallelism

References

  1. S.B. Baden and S.J. Fink. A Programming Methodology for Dual-Tier Multicomputers. IEEE Transactions on Software Engineering, 26(3):212–226, 2000.
  2. H. Bal and M. Haines. Approaches for Integrating Task and Data Parallelism. IEEE Concurrency, 6(3):74–84, July–August 1998.
  3. A. Dierstein, R. Hayer, and T. Rauber. The ADDAP System on the iPSC/860: Automatic Data Distribution and Parallelization. Journal of Parallel and Distributed Computing, 32(1):1–10, 1996.
  4. S.R. Kohn and S.B. Baden. Irregular Coarse-Grain Data Parallelism under LPARX. Scientific Programming, 5:185–201, 1995.
  5. T. Rauber and G. Rünger. Parallel Execution of Embedded and Iterated Runge-Kutta Methods. Concurrency: Practice and Experience, 11(7):367–385, 1999.
  6. T. Rauber and G. Rünger. A Transformation Approach to Derive Efficient Parallel Implementations. IEEE Transactions on Software Engineering, 26(4):315–339, 2000.
  7. T. Rauber and G. Rünger. Deriving Array Distributions by Optimization Techniques. Journal of Supercomputing, 15:271–293, 2000.
  8. E. van de Velde. Data Redistribution and Concurrency. Parallel Computing, 16:125–138, 1990.
  9. P.J. van der Houwen and B.P. Sommeijer. Parallel Iteration of High-Order Runge-Kutta Methods with Stepsize Control. Journal of Computational and Applied Mathematics, 29:111–127, 1990.
  10. G. Zhang, B. Carpenter, G. Fox, X. Li, and Y. Wen. A High Level SPMD Programming Model: HPspmd and its Java Language Binding. Technical report, NPAC at Syracuse University, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Thomas Rauber (1)
  • Robert Reilein (2)
  • Gudula Rünger (2)
  1. Institut für Informatik, Universität Halle-Wittenberg, Halle, Germany
  2. Fakultät für Informatik, Technische Universität Chemnitz, Chemnitz, Germany