On automatic data structure selection and code generation for sparse computations

  • Aart J. C. Bik
  • Harry A. G. Wijshoff
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 768)


Traditionally, restructuring compilers could apply program transformations only to exploit certain characteristics of the target architecture; adaptation of data structures was limited to, for example, linearization or transposition of arrays. However, since more complex data structures are required to exploit characteristics of the data operated on, current compiler support appears to be inadequate. In this paper we present the implementation issues of a restructuring compiler that automatically converts programs operating on dense matrices into sparse code: after a suitable data structure has been selected for every dense matrix that is in fact sparse, the original code is adapted to operate on these data structures. This simplifies the task of the programmer and, in general, enables the compiler to apply more optimizations.

Index Terms

Restructuring Compilers · Sparse Computations · Sparse Matrices


References

  1. A.V. Aho, R. Sethi, and J.D. Ullman. Compilers: Principles, Techniques and Tools. Addison-Wesley, 1986.
  2. Randy Allen and Ken Kennedy. Automatic translation of Fortran programs to vector form. ACM Transactions on Programming Languages and Systems, 9:491–542, 1987.
  3. Vasanth Balasundaram. Interactive Parallelization of Numerical Scientific Programs. PhD thesis, Department of Computer Science, Rice University, 1989.
  4. Vasanth Balasundaram. A mechanism for keeping useful internal information in parallel programming tools: The data access descriptor. Journal of Parallel and Distributed Computing, 9:154–170, 1990.
  5. U. Banerjee. Dependence Analysis for Supercomputing. Kluwer Academic Publishers, Boston, 1988.
  6. U. Banerjee. Unimodular transformations of double loops. In Proceedings of the Third Workshop on Languages and Compilers for Parallel Computing, 1990.
  7. Aart J.C. Bik and Harry A.G. Wijshoff. Advanced compiler optimizations for sparse computations. In Proceedings of Supercomputing 93, 1993. To appear.
  8. Aart J.C. Bik and Harry A.G. Wijshoff. Compilation techniques for sparse matrix computations. In Proceedings of the International Conference on Supercomputing, pages 416–424, 1993.
  9. Aart J.C. Bik and Harry A.G. Wijshoff. A sparse compiler. Technical Report no. 93-04, Dept. of Computer Science, Leiden University, 1993.
  10. A.R. Curtis and J.K. Reid. The solution of large sparse unsymmetric systems of linear equations. J. Inst. Maths Applics, 8:344–353, 1971.
  11. David S. Dodson, Roger G. Grimes, and John G. Lewis. Sparse extensions to the Fortran basic linear algebra subprograms. ACM Transactions on Mathematical Software, 17:253–263, 1991.
  12. I.S. Duff, Roger G. Grimes, and John G. Lewis. Sparse matrix test problems. ACM Transactions on Mathematical Software, 15:1–14, 1989.
  13. I.S. Duff. Data structures, algorithms and software for sparse matrices. In David J. Evans, editor, Sparsity and Its Applications, pages 1–29. Cambridge University Press, 1985.
  14. I.S. Duff, A.M. Erisman, and J.K. Reid. Direct Methods for Sparse Matrices. Oxford Science Publications, 1990.
  15. I.S. Duff and J.K. Reid. Some design features of a sparse matrix code. ACM Transactions on Mathematical Software, pages 18–35, 1979.
  16. C. Eisenbeis, O. Temam, and H. Wijshoff. On efficiently characterizing solutions of linear diophantine equations and its application to data dependence analysis. In Proceedings of the Seventh International Symposium on Computer and Information Sciences, 1992.
  17. J. Engelfriet. Attribute grammars: Attribute evaluation methods. In B. Lorho, editor, Methods and Tools for Compiler Construction, pages 103–138. Cambridge University Press, 1984.
  18. Alan George and Joseph W. Liu. The design of a user interface for a sparse matrix package. ACM Transactions on Mathematical Software, 5:139–162, 1979.
  19. Fred G. Gustavson. Two fast algorithms for sparse matrices: Multiplication and permuted transposition. ACM Transactions on Mathematical Software, 4:250–269, 1978.
  20. David J. Kuck. The Structure of Computers and Computations, Volume 1. John Wiley and Sons, New York, 1978.
  21. John Michael McNamee. Algorithm 408: A sparse matrix package. Communications of the ACM, pages 265–273, 1971.
  22. Samuel P. Midkiff. The Dependence Analysis and Synchronization of Parallel Programs. PhD thesis, C.S.R.D., 1993.
  23. David A. Padua and Michael J. Wolfe. Advanced compiler optimizations for supercomputers. Communications of the ACM, pages 1184–1201, 1986.
  24. Sergio Pissanetsky. Sparse Matrix Technology. Academic Press, London, 1984.
  25. C.D. Polychronopoulos. Parallel Programming and Compilers. Kluwer Academic Publishers, Boston, 1988.
  26. Harry A.G. Wijshoff. Implementing sparse BLAS primitives on concurrent/vector processors: a case study. Technical Report no. 843, Center for Supercomputing Research and Development, University of Illinois, 1989.
  27. Michael E. Wolf and Monica S. Lam. A loop transformation theory and an algorithm to maximize parallelism. IEEE Transactions on Parallel and Distributed Systems, pages 452–471, 1991.
  28. Michael J. Wolfe. Optimizing Supercompilers for Supercomputers. Pitman, London, 1989.
  29. H. Zima. Supercompilers for Parallel and Vector Computers. ACM Press, New York, 1990.
  30. Zahari Zlatev. Computational Methods for General Sparse Matrices. Kluwer Academic Publishers, 1991.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Aart J. C. Bik (1)
  • Harry A. G. Wijshoff (1)
  1. High Performance Computing Division, Department of Computer Science, Leiden University, RA Leiden, The Netherlands