The development of Mellanox/NVIDIA GPUDirect over InfiniBand—a new model for GPU to GPU communications
The usage and adoption of general-purpose GPUs (GPGPU) in HPC systems is increasing due to the unparalleled performance advantage of the GPUs and their ability to fulfill the ever-increasing demand for floating-point operations. While the GPU can offload many of an application's parallel computations, the system architecture of a GPU-CPU-InfiniBand server requires the CPU to initiate and manage memory transfers between remote GPUs over the high-speed InfiniBand network. In this paper we introduce for the first time a new technology, GPUDirect, that enables Tesla GPUs to transfer data via InfiniBand without CPU involvement or intermediate buffer copies, dramatically reducing GPU communication time and increasing overall system performance and efficiency. We also present the performance benefits of GPUDirect using the Amber and LAMMPS applications.
Keywords: GPUDirect, InfiniBand, RDMA