Chapter

Recent Advances in the Message Passing Interface

Lecture Notes in Computer Science, Volume 6305, pp. 160–169

Implementing MPI on Windows: Comparison with Common Approaches on Unix

  • Jayesh Krishna (Argonne National Laboratory)
  • Pavan Balaji (Argonne National Laboratory)
  • Ewing Lusk (Argonne National Laboratory)
  • Rajeev Thakur (Argonne National Laboratory)
  • Fabian Tillier (Microsoft Corporation)



Abstract

Commercial HPC applications are often run on clusters that use the Microsoft Windows operating system and need an MPI implementation that runs efficiently in the Windows environment. The MPI developer community, however, is more familiar with the issues involved in implementing MPI in a Unix environment. In this paper, we discuss some of the differences in implementing MPI on Windows and Unix, particularly with respect to issues such as asynchronous progress, process management, shared-memory access, and threads. We describe how we implement MPICH2 on Windows and exploit these Windows-specific features while still maintaining large parts of the code in common with the Unix version. We also present performance results comparing MPICH2 on Unix and Windows on the same hardware. For zero-byte MPI messages, we measured excellent shared-memory latencies of 240 and 275 nanoseconds on Unix and Windows, respectively.
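Zero-byte latencies such as those quoted above are conventionally obtained with a ping-pong microbenchmark: two ranks exchange empty messages many times, and the one-way latency is half the average round-trip time. The following is a minimal sketch of such a benchmark, not the actual code used for the paper's measurements; the iteration count and output format are illustrative choices.

```c
/* Hypothetical zero-byte ping-pong latency sketch (not the paper's benchmark).
   Build:  mpicc pingpong.c -o pingpong
   Run:    mpiexec -n 2 ./pingpong                                          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i;
    const int iters = 100000;   /* illustrative; large enough to amortize timer overhead */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Synchronize both ranks before starting the clock. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends a zero-byte message, then waits for the reply. */
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes each message back. */
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    t1 = MPI_Wtime();

    /* Each iteration is one round trip; divide by 2*iters for one-way latency. */
    if (rank == 0)
        printf("one-way zero-byte latency: %.1f ns\n",
               (t1 - t0) / (2.0 * iters) * 1e9);

    MPI_Finalize();
    return 0;
}
```

Running both ranks on cores of the same node exercises the shared-memory path that the paper's 240 ns (Unix) and 275 ns (Windows) figures refer to.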