Performance metrics and measurement techniques of collective communication services

  • Natawut Nupairoj
  • Lionel M. Ni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1199)

Abstract

The performance of collective communication is critical to overall system performance. In general, the performance of collective communication depends not only on the underlying hardware but also on its implementation. To evaluate the performance of collective communication accurately, two important issues must be considered: identifying the most representative metrics and using correct measurement techniques. This paper focuses on measurement techniques for collective communication services. The proposed techniques provide an accurate evaluation of completion time without requiring a global clock and without knowledge of the detailed implementations of the collective communication services. Experimental results obtained on the IBM/SP at Argonne National Laboratory are presented.
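The core idea of measuring completion time without a global clock can be illustrated with a small simulation. The sketch below is a hypothetical illustration, not the paper's actual protocol: a root process timestamps the start of a broadcast-like operation on its own local clock, each receiver acknowledges back to the root, and the root subtracts an assumed estimate of the one-way acknowledgement latency from the elapsed time to the last acknowledgement. The function name, thread-based "processes", and `link_delay` parameter are all invented for this example.

```python
import threading
import time
import queue

def measure_broadcast_completion(n_procs=4, link_delay=0.01):
    """Estimate the completion time of a simulated broadcast using
    only the root's local clock (no globally synchronized clock)."""
    acks = queue.Queue()

    def receiver(rank, start_event):
        start_event.wait()             # "receive" the broadcast
        time.sleep(link_delay * rank)  # staggered delivery delays
        acks.put(rank)                 # acknowledge back to the root

    start_event = threading.Event()
    workers = [threading.Thread(target=receiver, args=(r, start_event))
               for r in range(1, n_procs)]
    for w in workers:
        w.start()

    t0 = time.perf_counter()           # root's local clock only
    start_event.set()                  # root initiates the collective
    for _ in range(n_procs - 1):
        acks.get()                     # block until the last ack arrives
    elapsed = time.perf_counter() - t0
    for w in workers:
        w.join()

    # Subtract an (assumed) estimate of the one-way ack latency so the
    # measured round trip approximates the collective's completion time.
    return elapsed - link_delay

print(measure_broadcast_completion())
```

Because both timestamps are taken on the root's clock, clock skew between processes never enters the measurement; the accuracy then hinges on how well the return-path latency can be estimated, which is where techniques like those in the paper come in.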

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Natawut Nupairoj
  • Lionel M. Ni
  1. Department of Computer Science, Michigan State University, East Lansing