
Basics and Practice of Linear Algebra Calculation Library BLAS and LAPACK

  • Maho Nakata
Chapter

Abstract

In this chapter, we explain the basic architecture and use of the linear algebra libraries BLAS and LAPACK, which carry out vector and matrix operations on computers. They are used by many programs, and their implementations are optimized for the computers they run on. These libraries should be used whenever possible for linear algebra operations, because algorithms programmed directly from the mathematical theorems in textbooks may be inefficient and may not achieve sufficient accuracy in practice; moreover, programming such algorithms is bothersome. However, performance may suffer if you use a non-optimized library. In fact, the difference in performance between a non-optimized library and an optimized one is likely to be very large, so you should choose the fastest one for your computer. The availability of optimized BLAS and LAPACK libraries has improved remarkably; for example, they are now included in Linux distributions such as Ubuntu. In this chapter, we refer to the libraries shipped with Ubuntu 16.04 so that readers can easily try them out for themselves. Unfortunately, we do not cover GPU implementations for lack of space. However, the basic ideas are the same as those presented in this chapter, so we believe that readers will easily be able to utilize those implementations as well.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. RIKEN Advanced Center for Computing and Communication, Wako, Japan
