An Efficient Kernel-Level Blocking MPI Implementation

  • Atsushi Hori
  • Toyohisa Kameyama
  • Yuichi Tsujita
  • Mitaro Namiki
  • Yutaka Ishikawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7490)

Abstract

The technique of user-level communication, in which a process waits for incoming messages in a busy loop, is used in most MPI implementations to achieve high communication performance. In some cases, however, a kernel-level blocking receive is preferable. Some MPI implementations offer an option to switch from user-level polling to kernel-level blocking, at the cost of communication performance. This paper identifies the problems that arise when implementing kernel-level blocking receives and proposes several techniques to avoid them. Evaluations show that the proposed kernel-level blocking techniques can achieve performance comparable to that of user-level communication.
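To illustrate the two-phase wait idea named in the keywords below, the following minimal C sketch (not the authors' implementation; the names two_phase_wait, msg_ready, and spin_limit are illustrative assumptions) first polls a completion flag in user space for a bounded number of iterations and then falls back to a kernel-level blocking wait on a condition variable.

/*
 * Two-phase wait sketch (illustrative only, not the paper's code):
 * phase 1 polls in user space, phase 2 blocks in the kernel via a
 * condition variable. Compile with: cc -pthread two_phase.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int msg_ready = 0;               /* set by the "sender" */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void two_phase_wait(long spin_limit)
{
    for (long i = 0; i < spin_limit; i++) {    /* phase 1: user-level busy-wait */
        if (atomic_load(&msg_ready))
            return;                            /* message arrived while spinning */
    }
    pthread_mutex_lock(&lock);                 /* phase 2: kernel-level blocking */
    while (!atomic_load(&msg_ready))
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

static void *sender(void *arg)
{
    (void)arg;
    sleep(1);                                  /* simulate a late message */
    pthread_mutex_lock(&lock);
    atomic_store(&msg_ready, 1);
    pthread_cond_signal(&cond);                /* wake the blocked receiver */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, sender, NULL);
    two_phase_wait(100000);                    /* spin briefly, then block */
    printf("message received\n");
    pthread_join(t, NULL);
    return 0;
}

In a real MPI runtime the blocking phase would typically wait on a network completion event (for example, an InfiniBand completion channel) rather than a condition variable; the spin limit governs the trade-off between receive latency and CPU consumption.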

Keywords

user-level communication, kernel-level blocking, MVAPICH, two-phase wait, NAS Parallel Benchmarks


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Atsushi Hori (1)
  • Toyohisa Kameyama (1)
  • Yuichi Tsujita (2)
  • Mitaro Namiki (3)
  • Yutaka Ishikawa (1, 4)
  1. RIKEN AICS, Kobe, Japan
  2. Kinki University, Higashi-Hiroshima, Japan
  3. Tokyo University of Agriculture and Technology, Fuchu, Japan
  4. The University of Tokyo, Bunkyo-ku, Japan
