DynTG: A Tool for Interactive, Dynamic Instrumentation

  • Martin Schulz
  • John May
  • John Gyllenhaal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3515)

Abstract

With the increasing complexity of today's systems, detailed performance analysis is more important than ever. We have developed DynTG, a tool for interactive, dynamic instrumentation. It uses performance-module plugins to reconfigure data acquisition, and it provides a source browser that lets users dynamically insert any probe functionality offered by the modules into the target application. Instrumentation can be added both before and during the application's execution, and the acquired data is presented in real time within the source viewer. This enables users to monitor their applications' progress and to interactively control and adapt the instrumentation based on their observations.
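The abstract outlines DynTG's architecture: probe functionality lives in loadable performance modules, and the tool patches selected probes into the running target. As a purely illustrative sketch in C, the following shows one way a module could export a table of named probe functions. The abstract does not give DynTG's real plugin API, so probe_t, timestamp_probe, the probes table, and the simulated insertion in main are all hypothetical names, not DynTG code.

/* Illustrative only: DynTG's actual plugin interface is not described
 * in the abstract. All names here are hypothetical stand-ins for the
 * idea of a performance module exporting insertable probe functions. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* A probe pairs a name (as it might appear in the source browser)
 * with a function the tool could patch into the running target. */
typedef struct {
    const char *name;
    void (*fire)(const char *site);  /* invoked at the instrumented site */
} probe_t;

/* Example probe body: report a timestamp for the source site it hits. */
static void timestamp_probe(const char *site)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    printf("[probe] %s at %ld.%09ld s\n", site, (long)ts.tv_sec, ts.tv_nsec);
}

/* The table a module would export so the tool can list its probes. */
static const probe_t probes[] = {
    { "timestamp", timestamp_probe },
};

int main(void)
{
    /* Stand-in for dynamic insertion: in DynTG the user would pick the
     * probe and a source location in the GUI; here we call it directly. */
    probes[0].fire("solver.c:42");
    probes[0].fire("solver.c:87");
    return 0;
}

In the tool itself, insertion and removal happen interactively before or during the run, and results stream back to the source viewer; the direct calls above merely stand in for that runtime patching step.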

Keywords

Lawrence Livermore National Laboratory, High Performance Computing, Application Source Viewer, Dynamic Instrumentation, Hardware Counter


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Martin Schulz¹
  • John May¹
  • John Gyllenhaal¹

  1. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
