QCD Library for GPU Cluster with Proprietary Interconnect for GPU Direct Communication
- Cite this paper as:
- Fujita N. et al. (2014) QCD Library for GPU Cluster with Proprietary Interconnect for GPU Direct Communication. In: Lopes L. et al. (eds) Euro-Par 2014: Parallel Processing Workshops. Euro-Par 2014. Lecture Notes in Computer Science, vol 8805. Springer, Cham
QUDA is a Lattice QCD library that targets NVIDIA Graphics Processing Unit (GPU) accelerators and is widely used as a framework for Lattice QCD applications. In this paper, we apply our novel proprietary interconnect, the Tightly Coupled Accelerators (TCA) architecture, to inter-node GPU communication in QUDA. The TCA architecture was developed for low-latency inter-node communication among accelerators attached to the PCI Express (PCIe) bus on PC clusters. It enables direct memory copies between accelerators, such as GPUs, across nodes in the same manner as an intra-node PCIe transaction. We assess the performance of TCA on QUDA using HA-PACS/TCA, a high-density GPU cluster built as a proof-of-concept testbed for the TCA architecture. The results show that our interconnect significantly reduces communication latency and thereby achieves stronger scaling than ordinary InfiniBand solutions on GPU-equipped PC clusters. Measured over Conjugate Gradient (CG) iterations, the TCA implementation is 2.14 times faster than the peer-to-peer MPI implementation and 1.96 times faster than the MPI remote-memory access (RMA) implementation, both of which use an InfiniBand QDR x2-rail network.