Forward Dependence Folding as a Method of Communication Optimization in SPMD Programs
This paper proposes a method for optimizing communication in SPMD programs executed in distributed-memory environments. The programs in question result from parallelizing single loops whose dependence graphs are acyclic. After an introduction to the basics of data dependence theory, the idea of forward dependence folding is presented. It is then shown how dependence folding may be coupled with message aggregation to reduce the number of time-costly interprocessor message transfers. The theoretical considerations are accompanied by experimental results from applying the method to programs executed on a network of workstations.