The Design of Seamless MPI Computing Environment for Commodity-Based Clusters

  • Shinji Sumimoto
  • Kohta Nakashima
  • Akira Naruse
  • Kouichi Kumon
  • Takashi Yasui
  • Yoshikazu Kamoshida
  • Hiroya Matsuba
  • Atsushi Hori
  • Yutaka Ishikawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5759)

Abstract

This paper describes the design and implementation of a seamless MPI runtime environment, called MPI-Adapter, that realizes binary portability of MPI programs across different MPI runtime environments. MPI-Adapter enables an MPI binary to run on different MPI implementations. It is implemented as a dynamically loadable module that captures all MPI function calls and invokes the corresponding functions of a different MPI implementation using data-type translation techniques. A prototype system was implemented for Linux PC clusters to evaluate the effectiveness of MPI-Adapter. Evaluation results on a Xeon processor (3.8 GHz) based cluster show that the translation overhead of an MPI send (receive) is around 0.028 μs, and that the performance degradation of MPI-Adapter is negligibly small on the NAS Parallel Benchmark IS.

Keywords

MPI ABI translation · Portable MPI programs · Dynamically linked library · MPICH2 · Open MPI

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Shinji Sumimoto (1)
  • Kohta Nakashima (1)
  • Akira Naruse (1)
  • Kouichi Kumon (1)
  • Takashi Yasui (2)
  • Yoshikazu Kamoshida (3)
  • Hiroya Matsuba (3)
  • Atsushi Hori (3)
  • Yutaka Ishikawa (3)
  1. Fujitsu Laboratories, Ltd., Kawasaki, Japan
  2. Hitachi Ltd., Tokyo, Japan
  3. The University of Tokyo, Tokyo, Japan