A Community Databank for Performance Tracefiles

  • Ken Ferschweiler
  • Mariacarla Calzarossa
  • Cherri Pancake
  • Daniele Tessera
  • Dylan Keon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2131)

Abstract

Tracefiles provide a convenient record of the behavior of HPC programs, but are not generally archived because of their storage requirements. This has hindered the developers of performance analysis tools, who must create their own tracefile collections in order to test tool functionality and usability. This paper describes a shared databank where members of the HPC community can deposit tracefiles for use in studying the performance characteristics of HPC platforms as well as in tool development activities. We describe how the Tracefile Testbed was designed and implemented to facilitate flexible searching and retrieval of tracefiles. A Web-based interface provides a convenient mechanism for browsing and downloading collections of tracefiles and tracefile segments based on a variety of characteristics. The paper discusses the key implementation challenges.
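The paper does not publish the Tracefile Testbed's actual schema or API, but the kind of metadata-driven search and retrieval the abstract describes can be illustrated with a small sketch. Everything below is hypothetical: the record fields, the search() helper, and the sample entries were chosen only to show how archived tracefiles might be selected by platform, process count, and recorded event types, not to reproduce the testbed's real interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TracefileRecord:
    """Hypothetical metadata record describing one archived tracefile."""
    name: str
    platform: str                 # e.g. "IBM SP", "Cray T3E"
    application: str              # application or benchmark that produced the trace
    num_processes: int
    events: List[str] = field(default_factory=list)   # event types recorded in the trace
    url: str = ""                 # download location of the tracefile

def search(records, platform=None, min_processes=0, event=None):
    """Return the records that match the requested characteristics."""
    hits = []
    for r in records:
        if platform is not None and r.platform != platform:
            continue
        if r.num_processes < min_processes:
            continue
        if event is not None and event not in r.events:
            continue
        hits.append(r)
    return hits

# Usage sketch: find traces with MPI point-to-point events, collected on an
# IBM SP run with at least 64 processes (archive contents are invented).
archive = [
    TracefileRecord("sweep3d_128", "IBM SP", "Sweep3D", 128,
                    ["MPI_Send", "MPI_Recv"], "http://example.org/sweep3d_128.trc"),
    TracefileRecord("cfd_32", "Cray T3E", "CFD kernel", 32,
                    ["MPI_Bcast"], "http://example.org/cfd_32.trc"),
]
for hit in search(archive, platform="IBM SP", min_processes=64, event="MPI_Send"):
    print(hit.name, hit.url)
```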

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Ken Ferschweiler (1)
  • Mariacarla Calzarossa (2)
  • Cherri Pancake (1)
  • Daniele Tessera (2)
  • Dylan Keon (1)
  1. Northwest Alliance for Computational Science & Engineering, Oregon State University, USA
  2. Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy
