Computer Science - Research and Development, Volume 24, Issue 1–2, pp 11–19

Toward message passing for a million processes: characterizing MPI on a massive scale Blue Gene/P

  • Pavan Balaji
  • Anthony Chan
  • Rajeev Thakur
  • William Gropp
  • Ewing Lusk
Special Issue Paper

Abstract

Upcoming exascale-capable systems are expected to comprise more than a million processing elements. As researchers continue to work toward architecting these systems, it is becoming increasingly clear that they will share a significant amount of hardware between processing units, including caches, memory, and network components. Understanding how effectively current message-passing and communication infrastructure ties these processing elements together is therefore critical to making educated guesses about what we can expect from such future machines. In this paper, we characterize the communication performance of the Message Passing Interface (MPI) implementation on 32 racks (131072 cores) of the largest Blue Gene/P (BG/P) system in the United States (80% of the total system size) and reveal several interesting insights.
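
To make concrete what such a characterization involves, the sketch below shows a minimal MPI ping-pong microbenchmark of the kind commonly used to measure point-to-point latency. It is an illustrative assumption, not the benchmark suite used in the paper; the message size and iteration count are arbitrary choices.

    /* Illustrative MPI ping-pong latency sketch (not the authors' code):
     * ranks 0 and 1 exchange a fixed-size message repeatedly and rank 0
     * reports the average one-way latency. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_SIZE   1024    /* bytes per message (assumed) */
    #define ITERATIONS 10000   /* number of round trips (assumed) */

    int main(int argc, char **argv)
    {
        int rank, size;
        char *buf;
        double start, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 processes\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        buf = malloc(MSG_SIZE);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();

        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        elapsed = MPI_Wtime() - start;

        if (rank == 0)
            printf("Average one-way latency: %f microseconds\n",
                   elapsed * 1e6 / (2.0 * ITERATIONS));

        free(buf);
        MPI_Finalize();
        return 0;
    }

In practice, such a kernel would be swept over message sizes, process counts, and process placements to expose the effects of the shared caches, memory, and network components noted above.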


Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Pavan Balaji (1)
  • Anthony Chan (1)
  • Rajeev Thakur (1)
  • William Gropp (2)
  • Ewing Lusk (1)
  1. Mathematics and Computer Science, Argonne National Laboratory, Argonne, USA
  2. University of Illinois, Urbana-Champaign, USA