A New Approach to MPI Collective Communication Implementations

  • Torsten Hoefler
  • Jeffrey M. Squyres
  • Graham E. Fagg
  • George Bosilca
  • Wolfgang Rehm
  • Andrew Lumsdaine

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Torsten Hoefler, Open Systems Lab, Indiana University, Bloomington, USA
  • Jeffrey M. Squyres, Cisco Systems, San Jose, USA
  • Graham E. Fagg, Dept. of Computer Science, University of Tennessee, Knoxville, USA
  • George Bosilca, Dept. of Computer Science, University of Tennessee, Knoxville, USA
  • Wolfgang Rehm, Dept. of Computer Science, Technical University of Chemnitz, Germany
  • Andrew Lumsdaine, Open Systems Lab, Indiana University, Bloomington, USA