Extending the MPI-2 Generalized Request Interface

  • Robert Latham
  • William Gropp
  • Robert Ross
  • Rajeev Thakur
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4757)

Abstract

The MPI-2 standard added a new feature to MPI called generalized requests. Generalized requests allow users to add new nonblocking operations to MPI while still using many pieces of MPI infrastructure such as request objects and the progress notification routines (MPI_Test, MPI_Wait). The generalized request design as it stands, however, has deficiencies regarding typical use cases. These deficiencies are particularly evident in environments that do not support threads or signals, such as the leading petascale systems (IBM Blue Gene/L, Cray XT3 and XT4). This paper examines these shortcomings, proposes extensions to the interface to overcome them, and presents implementation results.
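For context, the sketch below is written against the standard MPI-2 generalized request interface the abstract refers to, not against the extensions this paper proposes. It shows how a user-level library typically adds a new nonblocking operation with MPI_Grequest_start and completes it from a helper thread with MPI_Grequest_complete; the trivial callbacks and the pthread-based worker are illustrative assumptions. The need for such a thread to drive progress is exactly the limitation the abstract points to on systems without thread support.

    /* Minimal sketch (not from the paper) of the MPI-2 generalized request
     * interface.  The worker thread and trivial callbacks are illustrative
     * assumptions; MPI_Grequest_start, MPI_Grequest_complete, and MPI_Wait
     * are the standard MPI-2 routines. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stddef.h>

    /* Invoked by MPI_Test/MPI_Wait to fill in the status once the
     * generalized request has completed. */
    static int query_fn(void *extra_state, MPI_Status *status)
    {
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG    = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }

    /* Invoked when MPI frees the request (e.g., inside MPI_Wait). */
    static int free_fn(void *extra_state)
    {
        return MPI_SUCCESS;
    }

    /* Invoked if the user cancels the request; nothing to undo here. */
    static int cancel_fn(void *extra_state, int complete)
    {
        return MPI_SUCCESS;
    }

    /* Hypothetical worker: carries out the new nonblocking operation and
     * then marks the generalized request complete.  A separate thread is
     * required to make progress -- the dependency the paper identifies as
     * a problem on systems without thread support. */
    static void *worker(void *arg)
    {
        MPI_Request *req = (MPI_Request *) arg;
        /* ... perform the actual operation here (e.g., file I/O) ... */
        MPI_Grequest_complete(*req);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        MPI_Request req;
        MPI_Status  status;
        pthread_t   tid;
        int         provided;

        /* MPI_THREAD_MULTIPLE because the worker thread calls
         * MPI_Grequest_complete while the main thread is in MPI_Wait. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
        pthread_create(&tid, NULL, worker, &req);

        MPI_Wait(&req, &status);  /* returns once worker completes the request */
        pthread_join(tid, NULL);

        MPI_Finalize();
        return 0;
    }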

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Robert Latham¹
  • William Gropp¹
  • Robert Ross¹
  • Rajeev Thakur¹

  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, USA
