Monitor Overhead Measurement with SKaMPI

  • Dieter Kranzlmüller
  • Ralf Reussner
  • Christian Schaubschläger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1697)

Abstract

Testing and tuning, two activities of the software lifecycle, are concerned with analyzing program executions. Such analysis relies on state information generated by monitoring tools during program runs. Unfortunately, the monitor's overhead intrudes on the observed program. The resulting influence manifests itself as altered temporal behavior and possible reordering of nondeterministic events, which is called the probe effect. Consequently, correct analysis data requires keeping the perturbation to a minimum, which defines the need for monitors with small overhead. The actual overhead of monitors for MPI programs can be measured with the benchmarking suite SKaMPI. Its results serve as a main characteristic of the quality of the applied tool and additionally increase the user's awareness of the monitoring crisis. Beyond that, the measurements of SKaMPI can be used in correction algorithms that remove the monitoring overhead from perturbed traces.
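
To make the measurement idea concrete, here is a minimal, hypothetical C sketch (not the authors' code, and far simpler than SKaMPI's actual methodology): it times an MPI ping-pong once with plain MPI_Send and once through a trivial monitoring wrapper built on the standard MPI profiling interface (PMPI_Send), so that the timing difference approximates the monitor overhead per round trip. The wrapper name monitored_send and its event counter are illustrative stand-ins for real trace logging.

    /*
     * Hypothetical sketch: estimate the overhead a monitor adds to MPI
     * communication. A ping-pong is timed once with plain MPI_Send and
     * once through a wrapper built on the MPI profiling interface
     * (PMPI_Send); the difference approximates the monitor overhead per
     * round trip (two monitored sends). Run with at least two processes,
     * e.g. mpirun -np 2.
     */
    #include <mpi.h>
    #include <stdio.h>

    #define REPS 10000

    static long event_count = 0;  /* stand-in for a real monitor's trace buffer */

    /* Minimal "monitor": record an event, then forward to the real send. */
    static int monitored_send(const void *buf, int count, MPI_Datatype type,
                              int dest, int tag, MPI_Comm comm)
    {
        event_count++;
        return PMPI_Send(buf, count, type, dest, tag, comm);
    }

    /* Average time of one ping-pong round trip between ranks 0 and 1. */
    static double time_pingpong(int rank, int use_monitor)
    {
        char byte = 0;
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                if (use_monitor)
                    monitored_send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                else
                    MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (use_monitor)
                    monitored_send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                else
                    MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        return (MPI_Wtime() - t0) / REPS;
    }

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double plain     = time_pingpong(rank, 0);
        double monitored = time_pingpong(rank, 1);

        if (rank == 0)
            printf("plain: %g s  monitored: %g s  overhead: %g s per round trip\n",
                   plain, monitored, monitored - plain);

        MPI_Finalize();
        return 0;
    }

SKaMPI itself controls for clock resolution, outliers, and statistical error rather than taking a single average over a fixed repetition count, so this sketch only illustrates the principle of differential overhead measurement.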

Keywords

Arrival Order, Wild Card, Message Transfer, Software Lifecycle, Monitoring Functionality

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Dieter Kranzlmüller (1)
  • Ralf Reussner (2)
  • Christian Schaubschläger (1)
  1. GUP Linz, Joh. Kepler University, Linz, Austria
  2. LIIN, University of Karlsruhe, Germany
