Performance Analysis and Workload Characterization with IPM

  • Karl Fürlinger
  • Nicholas J. Wright
  • David Skinner
Conference paper

Abstract

IPM is a profiling and workload characterization tool for MPI applications. IPM minimizes monitoring overhead by recording performance data in a fixed-size, memory-resident hash table and by carefully optimizing time-critical operations, while still offering detailed, user-centric performance metrics. IPM's performance data is delivered as an XML file from which a detailed profiling report in HTML format can be generated, avoiding the need for custom GUI applications. Among IPM's most widely used metrics are the pairwise communication volume and communication topology between processes, the breakdown of communication time across ranks, MPI operation timings, and MPI message sizes (buffer lengths). IPM is free software, distributed under the LGPL license.
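
To make the recording mechanism described in the abstract concrete, the following minimal sketch shows how MPI calls can be intercepted through the standard PMPI profiling interface and accumulated in a fixed-size, memory-resident hash table. This is not IPM's actual source code; the table size, key layout, call identifiers, and the ipm_record helper are illustrative assumptions.

    /*
     * Minimal sketch (not IPM's implementation) of overhead-conscious MPI
     * profiling: intercept calls via the PMPI interface and accumulate
     * per-call statistics in a fixed-size, memory-resident hash table.
     */
    #include <mpi.h>

    #define TABLE_SIZE 8192          /* fixed size: no allocation at run time */

    typedef struct {
        int    used;
        int    call_id;              /* which MPI routine                */
        int    bytes;                /* message size (buffer length)     */
        int    peer;                 /* communication partner rank       */
        long   count;                /* number of occurrences            */
        double time;                 /* accumulated wallclock time       */
    } ipm_entry_t;

    static ipm_entry_t table[TABLE_SIZE];

    /* Hypothetical helper: fold (call, bytes, peer) into a table slot and
     * accumulate.  Open addressing keeps lookups cheap and allocation-free. */
    static void ipm_record(int call_id, int bytes, int peer, double t)
    {
        unsigned h = (unsigned)(call_id * 2654435761u ^ bytes * 40503u ^ peer);
        for (unsigned i = 0; i < TABLE_SIZE; i++) {
            ipm_entry_t *e = &table[(h + i) % TABLE_SIZE];
            if (!e->used ||
                (e->call_id == call_id && e->bytes == bytes && e->peer == peer)) {
                e->used = 1; e->call_id = call_id; e->bytes = bytes; e->peer = peer;
                e->count++; e->time += t;
                return;
            }
        }
        /* table full: a real tool would drop or merge the event here */
    }

    /* PMPI wrapper (MPI-3 prototype): the application's MPI_Send calls land
     * here when the profiling library is linked in front of the MPI library. */
    int MPI_Send(const void *buf, int count, MPI_Datatype dt, int dest,
                 int tag, MPI_Comm comm)
    {
        int size;
        MPI_Type_size(dt, &size);
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, dt, dest, tag, comm);
        ipm_record(/* illustrative id for MPI_Send */ 1,
                   count * size, dest, MPI_Wtime() - t0);
        return rc;
    }

Because the table is statically sized and probed with open addressing, the per-call bookkeeping cost stays small and roughly constant, which is the property the abstract refers to as minimizing monitoring overhead.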

Keywords

Communication Topology · Workload Characterization · Monitoring Overhead · Wallclock Time · Conjugate Gradient Inversion
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Karl Fürlinger (1)
  • Nicholas J. Wright
  • David Skinner
  1. Computer Science Division, EECS Department, University of California at Berkeley, Berkeley, USA
