Forward dependence folding as a method of communication optimization in SPMD programs

  • Zdzislaw Szczerbinski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1541)


In this paper, a method is proposed for optimizing communication in SPMD programs executed in distributed-memory environments. The programs in question result from parallelizing single loops whose dependence graphs are acyclic. After an introduction to the basics of data dependence theory, the idea of forward dependence folding is presented. It is then shown how dependence folding may be coupled with message aggregation to reduce the number of time-costly interprocessor message transfers. The theoretical considerations are accompanied by experimental results from applying the method to programs executed on a network of workstations.
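The paper's folding algorithm itself is not reproduced in this excerpt, but the cost argument behind message aggregation can be sketched. The toy model below (all names and the distance-`d` loop are illustrative assumptions, not the paper's notation) considers a parallel loop with a forward, loop-carried dependence of distance `d` under a two-process block distribution: iterations `block .. block+d-1` on the second process each read an array element owned by the first process, so sending each element separately costs `d` messages, while one aggregated transfer costs a single message.

```python
# Illustrative sketch only -- not the paper's algorithm. Model loop:
#
#     for i:  a[i] = b[i] + 1
#             c[i] = a[i - d] * 2    # reads a value produced d iterations earlier
#
# with iterations 0..block-1 on process 0 and block..n-1 on process 1.

def cross_boundary_reads(n, block, d):
    """Iterations on process 1 whose read of a[i-d] falls in process 0's block."""
    return [i for i in range(block, min(n, block + d))]

def message_count(n, block, d, aggregate):
    """Messages process 0 must send so process 1 can execute its iterations."""
    reads = cross_boundary_reads(n, block, d)
    if not reads:
        return 0
    # aggregated: one combined message carrying all d boundary elements;
    # naive: one message per cross-boundary read
    return 1 if aggregate else len(reads)

if __name__ == "__main__":
    n, block, d = 100, 50, 4
    print(message_count(n, block, d, aggregate=False))  # 4 per-element sends
    print(message_count(n, block, d, aggregate=True))   # 1 aggregated send
```

Since per-message startup latency typically dominates per-byte cost on a network of workstations, collapsing `d` small transfers into one is exactly the kind of saving the abstract attributes to coupling dependence folding with aggregation.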





Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Zdzislaw Szczerbinski
    Institute for Theoretical and Applied Computer Science, Polish Academy of Sciences, Gliwice, Poland
