Performance measurement

  • C. C. Gotlieb
Chapter 4: Practical Aspects
Part of the Lecture Notes in Computer Science book series (LNCS, volume 30)


Keywords: Software Monitor, Multiprogramming System, Strip Recorder, Disk Head Movement, Synthetic Program



Copyright information

© Springer-Verlag Berlin Heidelberg 1975

Authors and Affiliations

  • C. C. Gotlieb
    1. Department of Computer Science, University of Toronto, Canada
