CPU vs. GPU - Performance comparison for the Gram-Schmidt algorithm
The Gram-Schmidt method is a classical method for determining QR decompositions, which is commonly used in many applications in computational physics, such as orthogonalization of quantum mechanical operators or Lyapunov stability analysis. In this paper, we discuss how well the Gram-Schmidt method performs on different hardware architectures, including both state-of-the-art GPUs and CPUs. We explain, in detail, how a smart interplay between hardware and software can be used to speed up these rather compute-intensive applications, as well as the benefits and disadvantages of several approaches. In addition, we compare some highly optimized standard routines of the BLAS libraries against our own optimized routines on both processor types. Particular attention was paid to the strongly hierarchical memory of modern GPUs and CPUs, which requires cache-aware blocking techniques for optimal performance. Our investigations show that the performance depends strongly on the employed algorithm and compiler, and somewhat less on the employed hardware. Remarkably, the performance of the NVIDIA CUDA BLAS routines improved significantly from CUDA 3.2 to CUDA 4.0. Still, BLAS routines tend to be slightly slower than manually optimized code on GPUs, while we were not able to outperform the BLAS routines on CPUs. Comparing optimized implementations on different hardware architectures, we find that an NVIDIA GeForce GTX580 GPU is about 50% faster than a corresponding Intel X5650 Westmere hexa-core CPU. The self-written codes are included as supplementary material.
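For reference, the basic orthogonalization step that the paper benchmarks can be sketched in a few lines. The following is a minimal pure-Python sketch of classical Gram-Schmidt, assuming vectors given as plain lists of floats; it is a reference illustration only, not the paper's optimized BLAS/CUDA implementations.

```python
import math

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: orthonormalize a list of vectors.

    A plain reference sketch (not the paper's cache-blocked GPU/CPU code):
    each input vector has its projections onto the previously computed
    orthonormal basis vectors subtracted, then is normalized.
    """
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            # Projection coefficient <v, q>; classical GS uses the
            # original vector v for every inner product.
            dot = sum(vi * qi for vi, qi in zip(v, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > tol:  # skip (near-)linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

# Small example: two vectors in the plane.
Q = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```

In each outer iteration, the inner products and vector updates are exactly the BLAS level-1/level-2 operations (dot products, AXPY-like updates) whose memory-bandwidth behavior motivates the cache-aware blocking discussed in the paper.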
Keywords: Graphic Processing Unit, European Physical Journal Special Topic, Shared Memory, Memory Bandwidth, Thread Block