Improving the Sparse Parallelization Using Semantical Information at Compile-Time
This work presents a novel strategy for parallelizing applications that contain sparse references. Our approach is a first step towards bridging data-parallel and automatic parallelization by exploiting the semantic relationship between the vectors that together form a higher-level sparse data structure. By applying sparse privatization and a multi-loop analysis at compile time, we improve performance and reduce the number of extra code annotations required. We also study the building and updating of a sparse matrix at run time, solving the problem posed by pointers and several levels of indirection on the left-hand side of assignments. The strategy has been evaluated on a Cray T3E with a matrix transposition algorithm, using different temporary buffers for the sparse communication.
Keywords: Sparse Matrix, Pointer Vector, Parallel Code, Automatic Parallelization, Compilation Strategy