Performance Instrumentation and Measurement for Terascale Systems

  • Jack Dongarra
  • Allen D. Malony
  • Shirley Moore
  • Philip Mucci
  • Sameer Shende
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2660)

Abstract

As computer systems grow in size and complexity, tool support is needed to facilitate the efficient mapping of large-scale applications onto these systems. To help achieve this mapping, performance analysis tools must provide robust performance observation capabilities at all levels of the system, as well as map low-level behavior to high-level program constructs. Instrumentation and measurement strategies, developed over the last several years, must evolve together with performance analysis infrastructure to address the challenges of new scalable parallel systems.
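
The counter-based measurement that these strategies rely on is typically accessed through a portable library such as PAPI, which exposes processor hardware counters to tools and applications. As a rough illustration only (not code taken from the paper), the C sketch below instruments a region of work with the PAPI low-level API; the preset events PAPI_TOT_CYC and PAPI_FP_OPS are assumptions about what the target processor supports, and error handling is kept minimal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <papi.h>

    int main(void)
    {
        int event_set = PAPI_NULL;
        long long counts[2];
        double sum = 0.0;

        /* Initialize the PAPI library; the return value reports version mismatches. */
        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
            fprintf(stderr, "PAPI initialization failed\n");
            return EXIT_FAILURE;
        }

        /* Build an event set with two preset events (availability is platform-dependent). */
        if (PAPI_create_eventset(&event_set) != PAPI_OK ||
            PAPI_add_event(event_set, PAPI_TOT_CYC) != PAPI_OK ||
            PAPI_add_event(event_set, PAPI_FP_OPS) != PAPI_OK) {
            fprintf(stderr, "could not set up hardware counter events\n");
            return EXIT_FAILURE;
        }

        PAPI_start(event_set);

        /* Instrumented region: a simple loop standing in for application work. */
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / (double)i;

        PAPI_stop(event_set, counts);

        printf("result = %f\n", sum);
        printf("cycles = %lld, floating-point ops = %lld\n", counts[0], counts[1]);
        return EXIT_SUCCESS;
    }

Measurements of this kind supply the low-level behavior that higher-level analysis infrastructure must then map back onto program constructs such as routines, loops, and parallel regions.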

Keywords

Measurement Strategy, Executable Code, Performance Instrumentation, Dynamic Instrumentation, Hardware Counter

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Jack Dongarra (1)
  • Allen D. Malony (2)
  • Shirley Moore (1)
  • Philip Mucci (2)
  • Sameer Shende (2)
  1. Innovative Computing Laboratory, University of Tennessee, Knoxville, USA
  2. Computer Science Department, University of Oregon, Eugene, USA
