
RCKMPI – Lightweight MPI Implementation for Intel’s Single-chip Cloud Computer (SCC)

  • Isaías A. Comprés Ureña
  • Michael Riepen
  • Michael Konow
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)

Abstract

The Single-chip Cloud Computer (SCC) is an experimental processor created by Intel Labs. It is a distributed-memory architecture that also offers shared-memory capabilities and an on-die Message Passing Buffer (MPB). This paper presents an MPI implementation (RCKMPI) that uses an efficient mix of MPB and DDR3 shared memory for low-level communication. The on-die buffer of the SCC provides higher bandwidth and lower latency than the available shared memory. Nevertheless, message passing can be faster through DDR3 because of protocol overheads caused by the small size of the MPB and the need to split and reassemble large packets, together with the possibility that the data is not resident in the cache. Beyond certain message sizes these overheads dominate, so the choice of buffer type must be made at run time in order to achieve higher performance. In the current implementation, the decision is based on the number of bytes still to be transferred for packets in transit. MPI benchmarks demonstrate that using both types of buffers yields transmission times equal to or lower than communicating through the on-die buffer alone.
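The run-time buffer selection described above can be illustrated with a minimal sketch in C. The function and constant names (select_channel, CHAN_MPB, CHAN_DDR3_SHM, DDR3_SWITCH_BYTES) and the concrete threshold are illustrative assumptions, not the actual RCKMPI channel code; only the idea of deciding per in-transit packet, based on the bytes still to be transferred, follows the paper.

  /* Illustrative sketch of the buffer-selection idea from the abstract.
   * All names and the threshold value are assumptions, not RCKMPI code. */
  #include <stddef.h>
  #include <stdio.h>

  /* Per-core MPB payload capacity on the SCC (8 KiB per core; part of it
   * typically holds flags/metadata in practice). */
  #define MPB_CHUNK_BYTES   (8 * 1024)

  /* Hypothetical threshold: once the remaining payload exceeds this many
   * bytes, splitting and reassembling it through the small MPB is assumed
   * to cost more than copying through DDR3 shared memory. */
  #define DDR3_SWITCH_BYTES (4 * MPB_CHUNK_BYTES)

  typedef enum { CHAN_MPB, CHAN_DDR3_SHM } channel_t;

  /* Pick the low-level channel for the next chunk of an in-transit packet,
   * based on how many of its bytes still have to be transferred. */
  static channel_t select_channel(size_t bytes_remaining)
  {
      if (bytes_remaining <= DDR3_SWITCH_BYTES)
          return CHAN_MPB;      /* small remainder: low-latency on-die buffer */
      return CHAN_DDR3_SHM;     /* large remainder: avoid MPB split/reassembly */
  }

  int main(void)
  {
      printf("2 KiB -> %s\n",
             select_channel(2048) == CHAN_MPB ? "MPB" : "DDR3 shared memory");
      printf("1 MiB -> %s\n",
             select_channel(1u << 20) == CHAN_MPB ? "MPB" : "DDR3 shared memory");
      return 0;
  }

The single-threshold rule is only one way to realize the decision; the point is that it is evaluated at run time for each packet rather than fixed per message size class.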

Keywords

Many-Core Processors · Message Passing · MPI · RCKMPI



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Isaías A. Comprés Ureña (1)
  • Michael Riepen (1)
  • Michael Konow (1)

  1. Microprocessor and Programming Research Labs (MPR), Braunschweig, Germany
