A middleware for efficient stream processing in CUDA

Special Issue Paper


This paper presents a middleware capable of out-of-order execution of kernels and data transfers for efficient stream processing on the compute unified device architecture (CUDA). Our middleware runs on CUDA-compatible graphics processing units (GPUs). Using the middleware, application developers can easily overlap kernel computation with data transfer between the main memory and the video memory. To maximize the efficiency of this overlap, the middleware executes commands such as kernel invocations and data transfers out of order. This run-time capability can be exploited simply by replacing the original CUDA API calls with our API calls. We have applied the middleware to a practical application to measure its run-time overhead. The middleware reduces execution time by 19% and allows us to process data too large to be stored entirely in the video memory.
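The overlap the abstract describes is conventionally realized with CUDA streams and asynchronous copies. The following is a minimal sketch of that baseline technique using only the standard CUDA runtime — it is not the authors' middleware API, and the kernel, buffer sizes, and double-buffering scheme are illustrative assumptions:

```cuda
#include <cuda_runtime.h>

// Hypothetical element-wise kernel standing in for the application's computation.
__global__ void process(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void) {
    const int CHUNK  = 1 << 20;   // elements per chunk
    const int CHUNKS = 8;         // total chunks streamed through the GPU

    float *h;                     // pinned host buffer (required for true async copies)
    cudaMallocHost(&h, (size_t)CHUNKS * CHUNK * sizeof(float));

    float *d[2];                  // two device buffers, one per stream
    cudaStream_t s[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d[b], CHUNK * sizeof(float));
        cudaStreamCreate(&s[b]);
    }

    for (int c = 0; c < CHUNKS; ++c) {
        int b = c & 1;            // alternate buffers/streams (double buffering)
        // While stream b copies and computes chunk c, the other stream can
        // still be transferring or computing the previous chunk, so data
        // transfer overlaps with kernel execution.
        cudaMemcpyAsync(d[b], h + (size_t)c * CHUNK,
                        CHUNK * sizeof(float), cudaMemcpyHostToDevice, s[b]);
        process<<<(CHUNK + 255) / 256, 256, 0, s[b]>>>(d[b], CHUNK);
        cudaMemcpyAsync(h + (size_t)c * CHUNK, d[b],
                        CHUNK * sizeof(float), cudaMemcpyDeviceToHost, s[b]);
    }
    cudaDeviceSynchronize();      // wait for both streams to drain

    for (int b = 0; b < 2; ++b) { cudaFree(d[b]); cudaStreamDestroy(s[b]); }
    cudaFreeHost(h);
    return 0;
}
```

Within each stream the commands above still execute in issue order; the paper's contribution is a run-time layer that additionally reorders such commands across streams to maximize the transfer/compute overlap, which this fixed-schedule sketch does not attempt.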


Keywords: Stream processing · Overlap · CUDA · GPU





Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Shinta Nakagawa¹
  • Fumihiko Ino¹
  • Kenichi Hagihara¹
  1. Graduate School of Information Science and Technology, Osaka University, Osaka, Japan
