A middleware for efficient stream processing in CUDA
This paper presents a middleware capable of out-of-order execution of kernels and data transfers for efficient stream processing in the compute unified device architecture (CUDA). Our middleware runs on CUDA-compatible graphics processing units (GPUs). Using the middleware, application developers can easily overlap kernel computation with data transfer between main memory and video memory. To maximize the efficiency of this overlap, our middleware performs out-of-order execution of commands such as kernel invocations and data transfers. This run-time capability can be exploited simply by replacing the original CUDA API calls with our API calls. We have applied the middleware to a practical application to evaluate its run-time overhead. It reduces execution time by 19% and enables processing of data too large to be stored entirely in the video memory.
Keywords: Stream processing · Overlap · CUDA · GPU
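The overlap described in the abstract can be sketched with the standard CUDA streams API: while one stream runs a kernel on its chunk, another stream transfers the next chunk. This is a minimal double-buffered pipeline, not the paper's middleware; the `scale` kernel, chunk sizes, and buffer layout are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative kernel: scales each element. It stands in for the
// application kernel, whose details are not given in the abstract.
__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main(void) {
    const int chunks = 4, n = 1 << 20;   // input split into chunks
    float *h;                            // pinned host buffer (required for async copies)
    cudaMallocHost(&h, chunks * n * sizeof(float));
    for (int i = 0; i < chunks * n; ++i) h[i] = 1.0f;

    float *d[2];                         // double-buffered device memory
    cudaStream_t s[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d[b], n * sizeof(float));
        cudaStreamCreate(&s[b]);
    }

    // Pipeline: commands issued to different streams may overlap, so the
    // host-to-device copy of chunk c+1 can run while chunk c is computed.
    for (int c = 0; c < chunks; ++c) {
        int b = c % 2;
        cudaMemcpyAsync(d[b], h + c * n, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[b]);
        scale<<<(n + 255) / 256, 256, 0, s[b]>>>(d[b], n, 2.0f);
        cudaMemcpyAsync(h + c * n, d[b], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[b]);
    }
    cudaDeviceSynchronize();
    printf("h[0] = %.1f\n", h[0]);       // 2.0 after scaling

    for (int b = 0; b < 2; ++b) { cudaFree(d[b]); cudaStreamDestroy(s[b]); }
    cudaFreeHost(h);
    return 0;
}
```

Because each device buffer is reused every other chunk, this sketch also hints at how data larger than video memory can be streamed through the GPU, as the abstract claims. The middleware goes further by reordering such commands automatically behind its API.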