
Performance Measurement and Tuning of Interactive Information Systems

  • I. Mistrik
  • D. A. Nelson

Abstract

We describe a strategy for instrumenting an Interactive Information System (IIS) and for measuring and tuning its performance. Under this approach, a hierarchy of instrumentation nodes is superimposed on a multi-layer IIS. Each instrumentation node accumulates and records the changes in the parameter values of all instrumentation nodes subordinate to it. This technique of selective accumulation ensures that the performance behavior of the system can be understood, and that the data needed to construct event frequency profiles and process frequency and duration profiles can be obtained. Data are collected only when they are needed as input for analysis. Frequency and duration profiles of system service functions provide the principal information needed for selecting system components and processing paths for performance tuning.
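
As a rough illustration of the scheme just described, the sketch below models a hierarchy of instrumentation nodes in which each node rolls up the parameter values of its subordinates only when a profile is requested. All names here (InstrumentationNode, record_event, collect, and the example layer names) are illustrative assumptions, not interfaces from the paper.

```python
# Minimal sketch, assuming a tree of measuring nodes superimposed on
# the layers of an IIS. Each node counts events and accumulates process
# durations locally; roll-up happens only on demand (selective
# accumulation), so data are collected only when analysis needs them.

from collections import Counter

class InstrumentationNode:
    """One measuring node in the hierarchy over an IIS layer."""

    def __init__(self, name, parent=None):
        self.name = name
        self.counts = Counter()   # event frequencies observed at this node
        self.durations = {}       # accumulated seconds per named process
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def record_event(self, event):
        """Count one occurrence of an event (e.g. a user service request)."""
        self.counts[event] += 1

    def record_duration(self, process, seconds):
        """Accumulate elapsed time for a process or execution path."""
        self.durations[process] = self.durations.get(process, 0.0) + seconds

    def collect(self):
        """Selective accumulation: merge this node's data with that of
        all subordinate nodes, producing frequency and duration profiles."""
        totals, durations = Counter(self.counts), dict(self.durations)
        for child in self.children:
            child_counts, child_durations = child.collect()
            totals.update(child_counts)
            for process, t in child_durations.items():
                durations[process] = durations.get(process, 0.0) + t
        return totals, durations

# Usage: a two-layer hierarchy; profiles are built only when asked for.
root = InstrumentationNode("user-interface")
storage = InstrumentationNode("bulk-storage", parent=root)
root.record_event("user-service-request")
storage.record_event("read")
storage.record_duration("read", 0.012)
frequencies, durations = root.collect()
```

Deferring the roll-up to collect() is what confines data gathering to the moments when a frequency or duration profile is actually needed as analysis input.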

A side effect of installing performance measurement instrumentation in a software-based system is that it significantly enhances the error detection and fault isolation facilities available to the system's developers. It also provides a means of measuring and evaluating software maintenance performed on the system. This enhanced capability for process-oriented handling of errors, faults, and maintenance forms the foundation of an engineering approach to attaining high-quality software systems.
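
The sketch below suggests, under assumed names and an arbitrary threshold, how the same collected profiles might support fault isolation and maintenance evaluation: a duration profile gathered after a change is compared against a baseline, and processes whose timings regressed are flagged as candidates for inspection or tuning. This is a hedged illustration, not the authors' procedure.

```python
# Illustrative sketch (assumed, not from the paper): compare duration
# profiles before and after maintenance to localize faults or evaluate
# the effect of a change.

def flag_regressions(baseline, current, tolerance=0.25):
    """Return processes whose accumulated duration grew by more than
    `tolerance` (25% here, an arbitrary assumed threshold)."""
    flagged = []
    for process, base_time in baseline.items():
        new_time = current.get(process, 0.0)
        if base_time > 0 and (new_time - base_time) / base_time > tolerance:
            flagged.append((process, base_time, new_time))
    return flagged

# Usage: profiles taken before and after a maintenance change.
baseline = {"read": 0.012, "parse-query": 0.030}
current = {"read": 0.020, "parse-query": 0.031}
for process, before, after in flag_regressions(baseline, current):
    print(f"{process}: {before:.3f}s -> {after:.3f}s")  # tuning candidate
```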

Keywords

Execution Path, Program Step, Performance Tuning, Bulk Storage, User Service Request


Copyright information

© Plenum Press, New York 1987

Authors and Affiliations

  • I. Mistrik (1)
  • D. A. Nelson (2)
  1. Gesellschaft für Information und Dokumentation mbH (GID), Heidelberg 1, F. R. Germany
  2. Information Engineering, Santa Barbara, USA
