
Improving the Sparse Parallelization Using Semantical Information at Compile-Time

  • Gerardo Bandera
  • Emilio L. Zapata
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1900)

Abstract

This work presents a novel strategy for parallelizing applications that contain sparse references. Our approach is a first step towards bridging data-parallel and fully automatic parallelization by exploiting the semantic relationship between the vectors that together form a higher-level data structure. By applying sparse privatization and a multi-loop analysis at compile-time, we improve performance and reduce the number of extra code annotations required. The paper also studies the building and updating of a sparse matrix at run-time, addressing the problem of pointers and several levels of indirection on the left-hand side of assignments. The strategy is evaluated on a Cray T3E with a sparse matrix transposition algorithm, using different temporary buffers for the sparse communication.
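To make the problem concrete, the following minimal sketch (not taken from the paper) shows the kind of sparse reference the abstract alludes to: a matrix stored in Compressed Row Storage (CRS) whose values are written through several levels of indirection on the left-hand side. The names crs_t, rowptr, colind, val and scale_row are illustrative assumptions, not identifiers from the paper.

    /* Minimal sketch, assuming a CRS representation of a sparse matrix. */
    #include <stdio.h>

    typedef struct {
        int     n;        /* number of rows                            */
        int    *rowptr;   /* entries of row i live in rowptr[i]..rowptr[i+1]-1 */
        int    *colind;   /* column index of each stored entry         */
        double *val;      /* value of each stored entry                */
    } crs_t;

    /* Scale row i in place: the write goes through two levels of
     * indirection (rowptr, then the offset k), which is the kind of
     * left-hand-side sparse reference a parallelizing compiler must analyse. */
    void scale_row(crs_t *a, int i, double factor)
    {
        for (int k = a->rowptr[i]; k < a->rowptr[i + 1]; ++k)
            a->val[k] *= factor;      /* indirect write on the left-hand side */
    }

    int main(void)
    {
        /* 2x2 matrix [[4, 0], [0, 5]] in CRS form. */
        int    rowptr[] = {0, 1, 2};
        int    colind[] = {0, 1};
        double val[]    = {4.0, 5.0};
        crs_t  a = {2, rowptr, colind, val};

        scale_row(&a, 1, 2.0);        /* second row becomes [0, 10] */
        printf("a[1][1] = %g\n", a.val[1]);
        return 0;
    }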

Keywords

Sparse Matrix · Pointer Vector · Parallel Code · Automatic Parallelization · Compilation Strategy


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Gerardo Bandera (1)
  • Emilio L. Zapata (1)
  1. Department of Computer Architecture, University of Málaga, Málaga, Spain
