
A technique for mapping sparse matrix computations into regular processor arrays

  • Roman Wyrzykowski
  • Juri Kanevski
Workshop 03: Automatic Parallelization and High-Performance Compilers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1300)

Abstract

A technique for mapping irregular sparse matrix computations onto regular parallel networks is proposed. It is based on regularization of the original, irregular graph of an algorithm. To this end, we map the original index space, which corresponds to dense matrices, into a new one that corresponds to a chosen sparse-matrix storage scheme. This regularization is followed by space-time mappings, which transform the algorithm graph into the resulting networks. The proposed approach is illustrated by the example of mapping matrix-vector multiplication.
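The core idea of the regularization step can be illustrated with a small sketch. The abstract does not name the storage scheme; the compressed sparse row (CSR) format below is an assumption chosen for illustration, and `csr_matvec` is a hypothetical helper, not the authors' code. The dense index space (i, j) is replaced by a regular index space (i, k) that ranges only over stored nonzeros, with j recovered from the stored column indices:

```python
# Hypothetical sketch (not the authors' code): sparse matrix-vector
# multiplication y = A*x using the CSR storage scheme as an example.
# The irregular iteration over nonzeros of row i becomes a regular
# loop over k in [row_ptr[i], row_ptr[i+1]).

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):                               # regular loop over rows
        for k in range(row_ptr[i], row_ptr[i + 1]):  # only stored entries
            y[i] += values[k] * x[col_idx[k]]        # j is recovered as col_idx[k]
    return y

# Dense matrix [[1, 0, 2],
#               [0, 3, 0],
#               [4, 0, 5]] in CSR form:
values = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]

print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Once the computation is expressed over this regular (i, k) index space, uniform space-time mappings of the kind used for dense algorithms can be applied to derive a processor array.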

Keywords

Sparse Matrix · Processor Array · Storage Scheme · Dependence Vector



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Roman Wyrzykowski (1)
  • Juri Kanevski (2)
  1. Dept. of Math. & Comp. Sci., Czestochowa Technical University, Czestochowa, Poland
  2. Dept. of Electronics, Technical University of Koszalin, Poland
