MGMR: Multi-GPU Based MapReduce

  • Yi Chen
  • Zhi Qiao
  • Hai Jiang
  • Kuan-Ching Li
  • Won Woo Ro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7861)

Abstract

MapReduce is a programming model introduced by Google for large-scale data processing. Several studies have implemented the MapReduce model on graphics processing units (GPUs); however, most of them target a single GPU, are bounded by GPU memory capacity, and rely on inefficient atomic operations. This paper develops a standalone MapReduce system, called MGMR, that utilizes multiple GPUs, handles large-scale data processing beyond the GPU memory limit, and eliminates serial atomic operations. Experimental results demonstrate MGMR's effectiveness in handling large data sets.
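
The two techniques named in the abstract, processing host-resident data in chunks small enough to fit in each GPU's memory and avoiding atomic operations when reserving output slots, can be illustrated with a short sketch. The CUDA code below is not the authors' MGMR implementation; it is a minimal, hypothetical example that assumes a toy map function (emit even integers), uses Thrust for the prefix scan, and assigns chunks round-robin to whatever GPUs are visible.

    // Hypothetical sketch, not the MGMR code from the paper: a chunked,
    // multi-GPU map phase with atomic-free output allocation. Each thread's
    // write position is fixed by an exclusive prefix scan of per-thread
    // output counts, so no atomic increments are needed.
    #include <algorithm>
    #include <cstdio>
    #include <vector>
    #include <thrust/device_vector.h>
    #include <thrust/scan.h>

    // Pass 1: each thread counts how many key/value pairs it will emit.
    __global__ void countKernel(const int *in, int n, int *counts) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) counts[i] = (in[i] % 2 == 0) ? 1 : 0;  // toy map: keep even values
    }

    // Pass 2: each thread writes at the offset fixed by the prefix scan.
    __global__ void mapKernel(const int *in, int n, const int *offsets, int *out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && in[i] % 2 == 0) out[offsets[i]] = in[i];
    }

    int main() {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);
        if (deviceCount < 1) { printf("no CUDA device found\n"); return 1; }

        const int total = 1 << 22;   // host-resident input; stands in for a data set
                                     // too large to fit in a single GPU's memory
        const int chunk = 1 << 20;   // chunk size chosen to fit in device memory
        std::vector<int> host(total);
        for (int i = 0; i < total; ++i) host[i] = i;

        for (int c = 0, g = 0; c < total; c += chunk, g = (g + 1) % deviceCount) {
            cudaSetDevice(g);                           // round-robin chunks over GPUs
            int n = std::min(chunk, total - c);

            thrust::device_vector<int> dIn(host.begin() + c, host.begin() + c + n);
            thrust::device_vector<int> dCounts(n), dOffsets(n);

            int threads = 256, blocks = (n + threads - 1) / threads;
            countKernel<<<blocks, threads>>>(thrust::raw_pointer_cast(dIn.data()), n,
                                             thrust::raw_pointer_cast(dCounts.data()));

            // Exclusive scan turns per-thread counts into non-overlapping offsets.
            thrust::exclusive_scan(dCounts.begin(), dCounts.end(), dOffsets.begin());
            int outSize = dOffsets.back() + dCounts.back();

            thrust::device_vector<int> dOut(outSize);
            mapKernel<<<blocks, threads>>>(thrust::raw_pointer_cast(dIn.data()), n,
                                           thrust::raw_pointer_cast(dOffsets.data()),
                                           thrust::raw_pointer_cast(dOut.data()));
            cudaDeviceSynchronize();
            printf("chunk at %d on GPU %d emitted %d pairs\n", c, g, outSize);
        }
        return 0;
    }

Because the exclusive scan fixes every thread's write offset before the second kernel launches, no thread competes for output positions, which is the general idea behind removing the serialization that atomic counters introduce on GPUs. Inter-GPU data movement (e.g., via GPUDirect) and the shuffle and reduce stages are beyond the scope of this sketch.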

Keywords

MapReduce, multi-GPU, atomic-free, CUDA, GPUDirect



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yi Chen (1)
  • Zhi Qiao (1)
  • Hai Jiang (1)
  • Kuan-Ching Li (2)
  • Won Woo Ro (3)
  1. Dept. of Computer Science, Arkansas State University, USA
  2. Dept. of Computer Science & Information Engr., Providence University, Taiwan
  3. School of Electrical and Electronic Engineering, Yonsei University, Korea
