Accelerating Numerical Dense Linear Algebra Calculations with GPUs

  • Jack Dongarra
  • Mark Gates
  • Azzam Haidar
  • Jakub Kurzak
  • Piotr Luszczek
  • Stanimire Tomov
  • Ichitaro Yamazaki

Abstract

This chapter presents current best design and implementation practices for the acceleration of dense linear algebra (DLA) on GPUs. Examples are given with fundamental algorithms, from a matrix–matrix multiplication kernel written in CUDA to higher-level algorithms for solving linear systems, eigenvalue problems, and the SVD. The implementations are available through the MAGMA library, a redesign of the popular LAPACK library for GPUs. To generate the extreme level of parallelism needed for the efficient use of GPUs, the algorithms of interest are redesigned and split into well-chosen computational tasks. The execution of these tasks is scheduled over the computational components of a hybrid system of multicore CPUs and GPU accelerators, using either static scheduling or a lightweight runtime system. A lightweight runtime keeps the scheduling overhead low, comparable to static scheduling, while allowing parallelism to be expressed through sequential-like code. This simplifies the development effort and allows exploitation of the unique strengths of each hardware component.

Keywords

Singular value decomposition · Shared memory · Singular vector · Cholesky factorization · Thread block


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jack Dongarra (1, 2, 3)
  • Mark Gates (1)
  • Azzam Haidar (1)
  • Jakub Kurzak (1)
  • Piotr Luszczek (1)
  • Stanimire Tomov (1), corresponding author
  • Ichitaro Yamazaki (1)

  1. University of Tennessee Knoxville, Knoxville, USA
  2. Oak Ridge National Laboratory, Oak Ridge, USA
  3. University of Manchester, Manchester, UK
