A Matrix-Type for Performance–Portability

  • N. Peter Drakenberg
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3732)

Abstract

When matrix computations are expressed in conventional programming languages, matrices are almost exclusively represented by arrays, but arrays are also used to represent many other kinds of entities, such as grids, lists, hash tables, etc. The responsibility for achieving efficient matrix computations is usually seen as resting on compilers, which in turn apply loop restructuring and reordering transformations to adapt programs and program fragments to different target architectures. Unfortunately, compilers are often unable to restructure conventional algorithms for matrix computations into their block or block-recursive counterparts, which are required to obtain acceptable levels of performance on most current (and future) hardware systems.
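
To make the distinction concrete, the following C sketch (not taken from the paper) contrasts a conventional triple-loop matrix product with the blocked counterpart a compiler would ideally derive from it; the block size BS is a placeholder for the target-dependent tuning parameter.

#include <stddef.h>

/* Conventional triple-loop product: C += A * B, with n-by-n row-major matrices. */
void matmul(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                C[i*n + j] += A[i*n + k] * B[k*n + j];
}

/* The blocked counterpart: the loops are tiled by BS so that BS-by-BS blocks of
 * A, B and C are reused while they fit in cache.  A good value of BS depends on
 * the cache hierarchy of the target machine; 64 here is only a placeholder. */
enum { BS = 64 };

void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += BS)
      for (size_t kk = 0; kk < n; kk += BS)
        for (size_t jj = 0; jj < n; jj += BS)
          for (size_t i = ii; i < ii + BS && i < n; i++)
            for (size_t k = kk; k < kk + BS && k < n; k++)
              for (size_t j = jj; j < jj + BS && j < n; j++)
                C[i*n + j] += A[i*n + k] * B[k*n + j];
}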

We present a datatype dedicated to the representation of dense matrices. In contrast to arrays, for which index-based element reference is the basic (primitive) operation, the primitive operations of our specialized matrix type are composition of, and decomposition into, submatrices. Decomposition of a matrix into submatrices (of unspecified sizes) is a key operation in the development of block algorithms for matrix computations. By expressing (ambiguous) decompositions of matrices into submatrices directly and explicitly, block algorithms can be stated explicitly, and at the same time the task of finding good decomposition parameters (i.e., block sizes) for each specific target system is exposed to, and made amenable to, compilers.
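
As a rough illustration of the idea (a sketch under our own assumptions, not the datatype defined in the paper), the following C fragment represents a dense matrix as a view with explicit dimensions and leading dimension, makes decomposition into submatrices the basic operation, and expresses a block-recursive matrix product in terms of it. Because the submatrices are views into the parent's storage, composition of the updated blocks is implicit here; the split points and the leaf threshold stand in for the decomposition parameters the paper proposes to expose to compilers.

#include <stddef.h>

/* A dense-matrix view: column-major storage with leading dimension ld. */
typedef struct { double *a; size_t m, n, ld; } Mat;

/* Decompose A into a 2x2 block structure, splitting rows at i and columns at j.
 * The resulting submatrices alias A's storage, so updating them in place
 * "composes" the result without any copying. */
static void decompose(Mat A, size_t i, size_t j,
                      Mat *A11, Mat *A12, Mat *A21, Mat *A22)
{
    *A11 = (Mat){ A.a,              i,       j,       A.ld };
    *A21 = (Mat){ A.a + i,          A.m - i, j,       A.ld };
    *A12 = (Mat){ A.a + j*A.ld,     i,       A.n - j, A.ld };
    *A22 = (Mat){ A.a + j*A.ld + i, A.m - i, A.n - j, A.ld };
}

/* Unblocked leaf kernel: C += A * B on small views. */
static void mul_leaf(Mat A, Mat B, Mat C)
{
    for (size_t j = 0; j < C.n; j++)
        for (size_t k = 0; k < A.n; k++)
            for (size_t i = 0; i < C.m; i++)
                C.a[i + j*C.ld] += A.a[i + k*A.ld] * B.a[k + j*B.ld];
}

/* Block-recursive product C += A * B, written entirely in terms of
 * decomposition into submatrices.  LEAF stands in for the block size a
 * compiler (or autotuner) would choose for the target machine. */
enum { LEAF = 64 };

void mul(Mat A, Mat B, Mat C)
{
    if (C.m <= LEAF || C.n <= LEAF || A.n <= LEAF) {
        mul_leaf(A, B, C);
        return;
    }
    Mat A11, A12, A21, A22, B11, B12, B21, B22, C11, C12, C21, C22;
    decompose(A, A.m / 2, A.n / 2, &A11, &A12, &A21, &A22);
    decompose(B, A.n / 2, B.n / 2, &B11, &B12, &B21, &B22);
    decompose(C, A.m / 2, B.n / 2, &C11, &C12, &C21, &C22);
    mul(A11, B11, C11);  mul(A12, B21, C11);   /* C11 += A11*B11 + A12*B21 */
    mul(A11, B12, C12);  mul(A12, B22, C12);   /* C12 += A11*B12 + A12*B22 */
    mul(A21, B11, C21);  mul(A22, B21, C21);   /* C21 += A21*B11 + A22*B21 */
    mul(A21, B12, C22);  mul(A22, B22, C22);   /* C22 += A21*B12 + A22*B22 */
}

In the paper's setting the analogue of decompose leaves the split sizes unspecified, so that choosing them becomes the compiler's task rather than the programmer's.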

Keywords

Dense Matrices, Matrix Computation, Cholesky Factorization, Decomposition Pattern, Program Fragment

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • N. Peter Drakenberg
  1. Department of Microelectronics and Information Technology, The Royal Institute of Technology, Stockholm, Sweden
