Hierarchical Collectives in MPICH2

  • Hao Zhu
  • David Goodell
  • William Gropp
  • Rajeev Thakur
Conference paper

DOI: 10.1007/978-3-642-03770-2_41

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5759)
Cite this paper as:
Zhu H., Goodell D., Gropp W., Thakur R. (2009) Hierarchical Collectives in MPICH2. In: Ropo M., Westerholm J., Dongarra J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2009. Lecture Notes in Computer Science, vol 5759. Springer, Berlin, Heidelberg

Abstract

Most parallel systems on which MPI is used are now hierarchical, such as systems with SMP nodes. Many papers have shown algorithms that exploit shared memory to optimize collective operations to good effect. But how much of the performance benefit comes from tailoring the algorithm to the hierarchical topology of the system? We describe an implementation of many of the MPI collectives based entirely on message-passing primitives that exploits the two-level hierarchy. Our results show that exploiting shared memory directly usually gives only a small additional benefit, and they suggest design approaches for the cases where the benefit is large.
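The two-level structure the abstract refers to can be illustrated with a small simulation. The sketch below models a hierarchical broadcast over ranks grouped into nodes: the root first sends to one "leader" rank per node (inter-node phase), and each leader then relays the value to the other ranks on its node (intra-node phase). This is only an illustration of the general technique, not the actual MPICH2 implementation; the rank-to-node mapping and leader choice here are assumptions.

```python
def node_of(rank, ranks_per_node):
    """Assumed mapping: ranks are assigned to nodes in contiguous blocks."""
    return rank // ranks_per_node

def hierarchical_bcast(root, nranks, ranks_per_node, value):
    """Simulate a two-level broadcast; return a dict rank -> received value."""
    data = {root: value}
    root_node = node_of(root, ranks_per_node)
    nnodes = (nranks + ranks_per_node - 1) // ranks_per_node

    # Phase 1 (inter-node): the root sends to one leader per remote node.
    # We pick the first rank of each node as its leader; on the root's own
    # node, the root itself plays that role.
    leaders = {}
    for node in range(nnodes):
        leaders[node] = root if node == root_node else node * ranks_per_node
        if node != root_node:
            data[leaders[node]] = data[root]  # one root -> leader message

    # Phase 2 (intra-node): each leader relays the value to its local ranks.
    for rank in range(nranks):
        if rank not in data:
            data[rank] = data[leaders[node_of(rank, ranks_per_node)]]
    return data

# 16 ranks on 4-rank nodes, broadcasting from rank 5:
result = hierarchical_bcast(root=5, nranks=16, ranks_per_node=4, value=42)
```

In a real MPI implementation the two phases would each be an ordinary collective over a subcommunicator (one spanning the node leaders, one per node), which is how the message-passing-only approach described in the paper can reuse existing algorithms at each level.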

Keywords

MPI, Collective Communication

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Hao Zhu
    • 1
  • David Goodell
    • 2
  • William Gropp
    • 1
  • Rajeev Thakur
    • 2
  1. Department of Computer Science, University of Illinois, Urbana, USA
  2. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
