Performance Measurement and Tuning of Interactive Information Systems
We describe a strategy for instrumenting an Interactive Information System (IIS) and for measuring and tuning its performance. In this approach, a hierarchy of instrumentation nodes is superimposed on a multi-layer IIS. Each instrumentation node accumulates and records the changes in parameter values of all instrumentation nodes subordinate to it. This technique of selective accumulation ensures that the performance behavior of the system can be understood and that the data needed to construct event frequency profiles and process frequency and duration profiles can be obtained. Data are collected only when they are needed as input for analysis. Frequency and duration profiles of system service functions provide the principal information for selecting the system components and processing paths to be tuned.
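The hierarchical scheme described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: node names, the `record` and `profile` methods, and the event keys are all hypothetical. It shows a tree of instrumentation nodes in which each node keeps only its local counts and accumulated durations, and the roll-up across subordinate nodes happens only when a profile is actually requested (the selective-accumulation idea).

```python
from collections import defaultdict

class InstrumentationNode:
    """One node in the measurement hierarchy superimposed on an IIS layer."""

    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.counts = defaultdict(int)       # event frequency, local to this node
        self.durations = defaultdict(float)  # accumulated duration per event, local
        if parent is not None:
            parent.children.append(self)

    def record(self, event, duration=0.0):
        """Record one occurrence of an event and its elapsed time."""
        self.counts[event] += 1
        self.durations[event] += duration

    def _descendants(self):
        out = []
        for child in self.children:
            out.append(child)
            out.extend(child._descendants())
        return out

    def profile(self):
        """Selective accumulation: roll up subordinate nodes only when a
        frequency/duration profile is requested for analysis."""
        freq = defaultdict(int)
        dur = defaultdict(float)
        for node in [self] + self._descendants():
            for event, n in node.counts.items():
                freq[event] += n
            for event, t in node.durations.items():
                dur[event] += t
        return dict(freq), dict(dur)

# Hypothetical three-layer IIS: a root node with a user-interface layer
# and a bulk-storage layer subordinate to it.
root = InstrumentationNode("IIS")
ui = InstrumentationNode("user-interface", parent=root)
storage = InstrumentationNode("bulk-storage", parent=root)

ui.record("query", duration=0.12)
storage.record("query", duration=0.30)

freq, dur = root.profile()  # rolled up across the whole hierarchy
```

In this sketch `freq["query"]` is 2 and `dur["query"]` is the summed elapsed time, giving exactly the event-frequency and duration profiles the tuning step would consume.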
A side-effect of installing performance measurement instrumentation in a software-based system is that it significantly enhances the error detection and fault isolation facilities available to the system developers. It also provides the means for measuring and evaluating software maintenance performed on the system. This enhanced capability for process-oriented handling of errors, faults, and maintenance forms the foundation for an engineering approach to attaining high-quality software systems.
Keywords: Execution Path, Program Step, Performance Tuning, Bulk Storage, User Service Request