Implementing Linear Algebra Routines on Multi-core Processors with Pipelining and a Look Ahead

  • Jakub Kurzak
  • Jack Dongarra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4699)

Abstract

Linear algebra algorithms commonly encapsulate parallelism in the Basic Linear Algebra Subprograms (BLAS). This solution relies on the fork-join model of parallel execution, which may result in suboptimal performance on current and future generations of multi-core processors. To overcome the shortcomings of this approach, a pipelined model of parallel execution is presented, and the idea of look-ahead is used to suppress the negative effects of the sequential formulation of the algorithms. Application to the one-sided matrix factorizations LU, Cholesky, and QR is described. A shared-memory implementation using POSIX threads is presented.
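
The pipelined, look-ahead execution developed in the paper can be illustrated with a small sketch. The following C program is not the authors' implementation; it is a minimal illustration of a one-step look-ahead for a generic one-sided factorization. Here panel(k) and update(k, j) are hypothetical stand-ins for the BLAS/LAPACK-level kernels, and the shared counters panel_done and updated play the role of a progress table. One POSIX thread walks the critical path of panel factorizations while a second applies trailing updates; because block column k+1 is updated first, panel k+1 can overlap the remaining updates of step k instead of waiting behind a fork-join barrier.

/* Sketch: pipelined one-sided factorization with one-step look-ahead,
 * using POSIX threads and a shared progress table.  The task bodies
 * are placeholders; a real code would call BLAS/LAPACK kernels. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NB 6                      /* number of block columns */

static int panel_done[NB];        /* panel_done[k]: panel k is factored     */
static int updated[NB];           /* updated[j]: panels applied to column j */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void panel(int k)         { printf("panel(%d)\n", k);     usleep(1000); }
static void update(int k, int j) { printf("update(%d,%d)\n", k, j); usleep(1000); }

/* Critical path: factor panel k as soon as column k is fully updated. */
static void *lookahead_thread(void *arg) {
    (void)arg;
    for (int k = 0; k < NB; k++) {
        pthread_mutex_lock(&lock);
        while (updated[k] < k)            /* wait until column k is current */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        panel(k);
        pthread_mutex_lock(&lock);
        panel_done[k] = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Trailing updates: once panel k is ready, update columns k+1..NB-1.
 * Column k+1 is updated first, so the look-ahead thread can start
 * panel k+1 while the rest of the trailing matrix is still updated. */
static void *update_thread(void *arg) {
    (void)arg;
    for (int k = 0; k < NB - 1; k++) {
        pthread_mutex_lock(&lock);
        while (!panel_done[k])
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        for (int j = k + 1; j < NB; j++) {
            update(k, j);
            pthread_mutex_lock(&lock);
            updated[j]++;                 /* unblocks panel j when == j */
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&lock);
        }
    }
    return NULL;
}

int main(void) {
    pthread_t tp, tu;
    pthread_create(&tp, NULL, lookahead_thread, NULL);
    pthread_create(&tu, NULL, update_thread, NULL);
    pthread_join(tp, NULL);
    pthread_join(tu, NULL);
    return 0;
}

Compiled with cc -pthread, the interleaving of panel(k+1) messages with update(k, j) messages for j > k+1 shows the overlap; in a fork-join formulation, panel k+1 would start only after every update of step k had completed.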

Keywords

Cholesky Factorization · Numerical Linear Algebra · Gantt Chart · Shared Memory System · POSIX Thread

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Jakub Kurzak (1)
  • Jack Dongarra (2)
  1. University of Tennessee, Knoxville, TN 37996, USA
  2. University of Tennessee, Knoxville, TN 37996, USA; Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
