Implementing MPI on Windows: Comparison with Common Approaches on Unix

  • Jayesh Krishna
  • Pavan Balaji
  • Ewing Lusk
  • Rajeev Thakur
  • Fabian Tillier
Conference paper

DOI: 10.1007/978-3-642-15646-5_17

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6305)
Cite this paper as:
Krishna J., Balaji P., Lusk E., Thakur R., Tillier F. (2010) Implementing MPI on Windows: Comparison with Common Approaches on Unix. In: Keller R., Gabriel E., Resch M., Dongarra J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2010. Lecture Notes in Computer Science, vol 6305. Springer, Berlin, Heidelberg

Abstract

Commercial HPC applications are often run on clusters that use the Microsoft Windows operating system and need an MPI implementation that runs efficiently in the Windows environment. The MPI developer community, however, is more familiar with the issues involved in implementing MPI in a Unix environment. In this paper, we discuss some of the differences in implementing MPI on Windows and Unix, particularly with respect to issues such as asynchronous progress, process management, shared-memory access, and threads. We describe how we implement MPICH2 on Windows and exploit these Windows-specific features while still maintaining large parts of the code common with the Unix version. We also present performance results comparing MPICH2 on Unix and Windows on the same hardware. For zero-byte MPI messages, we measured excellent shared-memory latencies of 240 and 275 nanoseconds on Unix and Windows, respectively.
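The paper's implementation itself is not reproduced here, but the notion of OS-assisted asynchronous progress that distinguishes the Windows port can be illustrated with a small standalone sketch. The C program below is an illustrative assumption rather than MPICH2 code: the loopback socket pair, the variable names, and the omission of error handling are all choices made for brevity. It posts a receive through Windows overlapped I/O and an I/O completion port, so the kernel advances the transfer and delivers a completion event without the application polling the socket.

    /* async_progress_sketch.c -- illustrative only, not MPICH2 source.
       Build (MSVC):  cl async_progress_sketch.c ws2_32.lib
       Error checks are omitted for brevity. */
    #include <winsock2.h>
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        /* Build a connected loopback socket pair so the sketch is self-contained. */
        SOCKET lsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                               /* let the OS pick a port */
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        int alen = (int)sizeof(addr);
        getsockname(lsock, (struct sockaddr *)&addr, &alen);
        listen(lsock, 1);

        SOCKET sender = socket(AF_INET, SOCK_STREAM, 0);
        connect(sender, (struct sockaddr *)&addr, sizeof(addr));
        SOCKET receiver = accept(lsock, NULL, NULL);

        /* Associate the receiving socket with an I/O completion port. */
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        CreateIoCompletionPort((HANDLE)receiver, iocp, (ULONG_PTR)1, 0);

        /* Post an overlapped (asynchronous) receive.  The call returns at once;
           the kernel makes progress on it without the application polling. */
        char data[64];
        WSABUF wbuf = { (ULONG)sizeof(data), data };
        OVERLAPPED ov;
        memset(&ov, 0, sizeof(ov));
        DWORD flags = 0;
        WSARecv(receiver, &wbuf, 1, NULL, &flags, &ov, NULL);

        /* Peer side: an ordinary blocking send. */
        send(sender, "hello", 5, 0);

        /* The completion is delivered through the port rather than being
           discovered by repeatedly polling the socket. */
        DWORD nbytes;
        ULONG_PTR key;
        OVERLAPPED *pov;
        GetQueuedCompletionStatus(iocp, &nbytes, &key, &pov, INFINITE);
        printf("received %lu bytes asynchronously\n", (unsigned long)nbytes);

        closesocket(sender);
        closesocket(receiver);
        closesocket(lsock);
        WSACleanup();
        return 0;
    }

On Unix, an MPI progress engine would typically discover the same incoming data by polling the socket (for example with poll() or epoll) from within MPI calls; the completion-port model shown above moves that work into the operating system, which is the kind of Windows-specific asynchronous progress the paper discusses.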

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Jayesh Krishna (1)
  • Pavan Balaji (1)
  • Ewing Lusk (1)
  • Rajeev Thakur (1)
  • Fabian Tillier (2)
  1. Argonne National Laboratory, Argonne
  2. Microsoft Corporation, Redmond
