Performance measurement using system monitors

  • Erwin M. Thurner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 729)


System monitors record the inner states of computing systems. They are required for debugging computer systems as well as for measuring performance, and they are also used to verify system models. This paper first discusses the areas of application of system monitors, and then introduces the measurement principles of the different monitor types:
  • Software monitors, which either analyze the accounting log or operate as event-driven monitors, samplers, or profiling monitors.

  • Hardware monitors, which use the measurement principles of logic analyzers, events, sampling, and the construction of classes of states.

  • Hybrid monitors, which apply the measurement principles of hardware monitors in their hardware part, but differ in the software part that generates the signals to be measured.
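The contrast between the event-driven and sampling measurement principles named above can be sketched in a few lines of code. This is an illustrative sketch only, not an implementation from the paper; the class names `EventMonitor` and `SamplingMonitor` and the toy workload are assumptions made for the example.

```python
import time
from collections import Counter

class EventMonitor:
    """Event-driven software monitor: the instrumented program calls
    record() at every point of interest, so no occurrence is missed,
    at the cost of probe overhead on each event."""
    def __init__(self):
        self.events = []

    def record(self, name):
        self.events.append((time.perf_counter(), name))

class SamplingMonitor:
    """Sampling monitor: the current state is inspected only at
    periodic sample points; the result is a statistical profile
    rather than a complete event trace."""
    def __init__(self):
        self.samples = Counter()

    def sample(self, current_state):
        self.samples[current_state] += 1

# Hypothetical instrumented workload: alternates between two states.
ev = EventMonitor()
sm = SamplingMonitor()
for i in range(100):
    state = "io_wait" if i % 4 == 0 else "compute"
    ev.record(state)       # event-driven: every occurrence is recorded
    if i % 10 == 0:        # sampler fires only on every 10th step
        sm.sample(state)

print(len(ev.events))            # 100 — complete trace
print(sum(sm.samples.values()))  # 10  — statistical profile only
```

The trade-off shown here is the one the monitor types above embody: the event-driven probe yields a complete, ordered trace but perturbs every measured occurrence, while the sampler bounds its overhead by the sampling rate and yields only relative frequencies of states.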

Finally, connections to model-based performance analysis are discussed.

Key words

System Monitor · Bus Monitor · Software Monitor · Hardware Monitor · Hybrid Monitor · Measurement of Performance · Debugging · Performance Models · Event Monitor · Sampling Monitor · Monitor Based on Classes of States



Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Erwin M. Thurner
  1. ZFE ST SN 13, Siemens AG, München
