
Toward message passing for a million processes: characterizing MPI on a massive scale Blue Gene/P

  • Special Issue Paper
  • Computer Science - Research and Development

Abstract

Upcoming exascale-capable systems are expected to comprise more than a million processing elements. As researchers continue to work toward architecting these systems, it is becoming increasingly clear that they will rely on a significant amount of hardware shared between processing units, including shared caches, memory, and network components. Understanding how effectively current message-passing and communication infrastructure ties these processing elements together is therefore critical to making educated guesses about what we can expect from such future machines. In this paper, we characterize the communication performance of the Message Passing Interface (MPI) implementation on 32 racks (131,072 cores) of the largest Blue Gene/P (BG/P) system in the United States (80% of the total system size) and reveal several interesting insights.
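The full text is not reproduced on this page, but the kind of characterization the abstract describes is typically driven by point-to-point microbenchmarks. The sketch below is a minimal MPI ping-pong latency test, with an assumed message size and iteration count chosen purely for illustration; it shows the general measurement technique, not the specific benchmarks used in the paper.

```c
/*
 * Illustrative sketch only: a minimal MPI ping-pong latency microbenchmark
 * of the kind commonly used to characterize point-to-point communication.
 * MSG_SIZE and NITERS are assumed values, not taken from the paper.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NITERS   1000   /* assumed iteration count */
#define MSG_SIZE 1024   /* assumed message size in bytes */

int main(int argc, char **argv)
{
    int rank;
    char *buf;
    double t_start, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc(MSG_SIZE);

    /* Ranks 0 and 1 bounce a message back and forth; other ranks idle. */
    if (rank == 0) {
        t_start = MPI_Wtime();
        for (int i = 0; i < NITERS; i++) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        t_end = MPI_Wtime();
        printf("avg round-trip latency: %f us\n",
               (t_end - t_start) * 1e6 / NITERS);
    } else if (rank == 1) {
        for (int i = 0; i < NITERS; i++) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

At the scale discussed in the abstract, such a benchmark would typically be repeated across many rank pairs and message sizes to expose the effects of shared caches, memory, and network links.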



Author information

Corresponding author: Pavan Balaji.


About this article

Cite this article

Balaji, P., Chan, A., Thakur, R. et al. Toward message passing for a million processes: characterizing MPI on a massive scale Blue Gene/P. Comp. Sci. Res. Dev. 24, 11–19 (2009). https://doi.org/10.1007/s00450-009-0095-3

