Computer Science - Research and Development


MVAPICH2-GPU: optimized GPU to GPU communication for InfiniBand clusters

Authors

  • Hao Wang
    • Department of Computer Science and Engineering, The Ohio State University
  • Sreeram Potluri
    • Department of Computer Science and Engineering, The Ohio State University
  • Miao Luo
    • Department of Computer Science and Engineering, The Ohio State University
  • Ashish Kumar Singh
    • Department of Computer Science and Engineering, The Ohio State University
  • Sayantan Sur
    • Department of Computer Science and Engineering, The Ohio State University
  • Dhabaleswar K. Panda
    • Department of Computer Science and Engineering, The Ohio State University
Special Issue Paper

DOI: 10.1007/s00450-011-0171-3

Cite this article as:
Wang, H., Potluri, S., Luo, M. et al. Comput Sci Res Dev (2011) 26: 257. doi:10.1007/s00450-011-0171-3

Abstract

Data parallel architectures, such as General-Purpose Graphics Processing Units (GPGPUs), have seen a tremendous rise in their application for High End Computing. However, data movement in and out of GPGPUs remains the biggest hurdle to overall performance and programmer productivity. Applications executing on a cluster with GPUs have to manage data movement using CUDA in addition to MPI, the de facto parallel programming standard. Currently, data movement with the CUDA and MPI libraries is not integrated and is not as efficient as it could be. In addition, MPI-2 one-sided communication does not work for windows in GPU memory, as there is no way to remotely get or put data from GPU memory in a one-sided manner.

In this paper, we propose a novel MPI design that integrates CUDA data movement transparently with MPI. The programmer is presented with a single MPI interface that can communicate to and from GPUs. Data movement over the GPU and the network can now be overlapped. The proposed design is incorporated into the MVAPICH2 library. To the best of our knowledge, this is the first work of its kind to enable advanced MPI features and optimized pipelining in a widely used MPI library. We observe up to 45% improvement in one-way latency. In addition, we show that collective communication performance can be improved significantly: 32%, 37% and 30% improvement for the Scatter, Gather and Alltoall collective operations, respectively. Further, we enable MPI-2 one-sided communication with GPUs. We observe up to 45% improvement for Put and Get operations.
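To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of the two communication styles the abstract describes: the conventional approach, where the programmer stages GPU data through a host buffer before calling MPI, versus the integrated approach, where a device pointer is passed directly to the MPI call and the library overlaps the CUDA copy with the InfiniBand transfer. It assumes a CUDA-aware MVAPICH2 build; the buffer size, tags, and ranks are illustrative.

```c
/* Sketch: staged host-buffer transfer vs. CUDA-aware MPI transfer.
   Run with an MPI launcher on two ranks, each with a GPU, e.g.:
     mpirun -np 2 ./a.out */
#include <stdlib.h>
#include <mpi.h>
#include <cuda_runtime.h>

#define N (1 << 20)   /* illustrative message size: 1M floats */

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *d_buf;                                 /* buffer in GPU memory */
    cudaMalloc((void **)&d_buf, N * sizeof(float));

    /* Conventional approach: the application explicitly stages the
       data through host memory, serializing the copy and the send. */
    float *h_buf = (float *)malloc(N * sizeof(float));
    if (rank == 0) {
        cudaMemcpy(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice);
    }

    /* Integrated approach: the device pointer is passed directly to MPI;
       internally the library pipelines the device<->host copies with the
       network transfer, overlapping the two stages. */
    if (rank == 0)
        MPI_Send(d_buf, N, MPI_FLOAT, 1, 1, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, N, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(h_buf);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

The second pattern is what lets a single MPI interface hide the GPU copies: the application code shrinks to an ordinary send/receive pair, and the pipelining happens inside the library rather than in user code.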

Keywords

MPI, Clusters, GPGPU, CUDA, InfiniBand

Copyright information

© Springer-Verlag 2011