Performance Tuning of SCC-MPICH by Means of the Proposed MPI-3.0 Tool Interface

  • Carsten Clauss
  • Stefan Lankes
  • Thomas Bemmerl
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)

Abstract

The Single-Chip Cloud Computer (SCC) experimental processor is a 48-core concept vehicle created by Intel Labs as a platform for many-core software research. Intel provides a customized programming library for the SCC, called RCCE, that allows for fast message-passing between the cores. For that purpose, RCCE offers an application programming interface (API) with semantics derived from the well-established MPI standard. However, while the MPI standard offers a very broad range of functions, the RCCE API is consciously kept small and far from implementing all of the standard's features. For this reason, we have implemented an SCC-customized MPI library, called SCC-MPICH, which in turn is based upon an extension to the SCC-native RCCE communication library. In this contribution, we present SCC-MPICH and show how performance analysis as well as performance tuning for this library can be conducted by means of a prototype of the proposed MPI-3.0 tool information interface.

Keywords

Single-Chip Cloud Computer · SCC · MPI 3.0 · Tools Support

References

  1. Intel Corporation: SCC External Architecture Specification (EAS), Revision 1.1 (November 2010), http://communities.intel.com/docs/DOC-5852
  2. Mattson, T., van der Wijngaart, R.: RCCE: A Small Library for Many-Core Communication. Intel Corporation (May 2010), http://communities.intel.com/docs/DOC-5628
  3. Message Passing Interface Forum: MPI: A Message-Passing Interface Standard, Version 2.2. High-Performance Computing Center Stuttgart (HLRS) (September 2009)
  4. Mattson, T., van der Wijngaart, R., Riepen, M., et al.: The 48-core SCC Processor: The Programmer's View. In: Proceedings of the 2010 ACM/IEEE Conference on Supercomputing (SC 2010), New Orleans, LA, USA (November 2010)
  5. Clauss, C., Lankes, S., Reble, P., Bemmerl, T.: Evaluation and Improvements of Programming Models for the Intel SCC Many-core Processor. In: Proceedings of the International Conference on High Performance Computing and Simulation (HPCS 2011), Istanbul, Turkey (July 2011)
  6. Urena, I.A.C.: RCKMPI Manual. Intel Braunschweig (February 2011), http://communities.intel.com/docs/DOC-6628
  7. Bierbaum, B., Clauss, C., Finocchiaro, R., Schuch, S., Pöppe, M., Worringen, J.: MP-MPICH – User Documentation and Technical Notes (2009), http://www.lfbs.rwth-aachen.de/users/global/mp-mpich/mp-mpich_manual.pdf
  8. Gropp, W., Lusk, E., Doss, N., Skjellum, A.: A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Parallel Computing 22(6), 789–828 (1996)
  9. MPI 3.0 Tools Support Working Group: "Tool Interfaces", Current Draft of the Chapter Proposal (June 2011), https://svn.mpi-forum.org/trac/mpi-forum-web/attachment/wiki/MPI3Tools/mpit/mpi-report.18.pdf
  10. Clauss, C., Lankes, S., Bemmerl, T.: Use Case Evaluation of the Proposed MPIT Configuration and Performance Interface. In: Keller, R., Gabriel, E., Resch, M., Dongarra, J. (eds.) EuroMPI 2010. LNCS, vol. 6305, pp. 285–288. Springer, Heidelberg (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Carsten Clauss¹
  • Stefan Lankes¹
  • Thomas Bemmerl¹
  1. Chair for Operating Systems, RWTH Aachen University, Germany