Evaluating and modeling communication overhead of MPI primitives on the Meiko CS-2

  • Gianluigi Folino
  • Giandomenico Spezzano
  • Domenico Talia
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1497)

Abstract

MPI (Message Passing Interface) is a standard communication library implemented on a large number of parallel computers and used for the development of portable parallel software. This paper presents, evaluates and compares the performance of the point-to-point and broadcast communication primitives of the MPI standard library on the Meiko CS-2 parallel machine. Furthermore, the paper proposes a benchmark model of MPI communications based on the size of the messages exchanged and the number of processors involved. Finally, the MPI performance results on the CS-2 are compared with the performance of the Meiko Elan Widget library and the IBM SP2.
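A benchmark model of the kind the abstract describes is commonly expressed in the Hockney form t(m) = t0 + m / r∞, where t0 is the startup latency and r∞ the asymptotic bandwidth, with a tree broadcast on P processors costing roughly ⌈log2 P⌉ point-to-point steps. The sketch below, a hedged illustration and not the paper's actual model or data, fits these two parameters to synthetic timing samples (the function names and the numbers are assumptions for illustration only):

```python
# Sketch of a Hockney-style linear communication model,
#   t(m) = t0 + m / r_inf,
# fitted by least squares. Timings here are synthetic, not CS-2 measurements.
import math
import numpy as np

def fit_hockney(sizes, times):
    """Fit t(m) = t0 + m / r_inf to (message size, time) samples.

    Returns (t0, r_inf): startup latency in seconds, bandwidth in bytes/s.
    """
    slope, t0 = np.polyfit(sizes, times, 1)  # degree-1 least-squares fit
    return t0, 1.0 / slope

def bcast_time(m, p, t0, r_inf):
    """Estimate a tree broadcast of m bytes to p processors:
    ceil(log2 p) sequential point-to-point steps (an assumed topology)."""
    steps = math.ceil(math.log2(p)) if p > 1 else 0
    return steps * (t0 + m / r_inf)

# Synthetic samples: 50 us startup, 40 MB/s asymptotic bandwidth.
sizes = np.array([1e3, 1e4, 1e5, 1e6])   # message sizes in bytes
times = 50e-6 + sizes / 40e6             # "measured" times in seconds
t0, r_inf = fit_hockney(sizes, times)
print(t0, r_inf)
print(bcast_time(1e5, 8, t0, r_inf))     # 3 steps for 8 processors
```

On exactly linear data the fit recovers the two parameters, so the same routine applied to real point-to-point timings would yield the latency/bandwidth pair that parameterizes the model.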

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Gianluigi Folino¹
  • Giandomenico Spezzano¹
  • Domenico Talia¹
  1. ISI-CNR, c/o DEIS, UNICAL, Rende, Italy