Abstract
Computer evaluation, comparison, and selection is essentially a decision process. Decision making is based on a number of worth indicators, including various computer performance indicators, which are obtained through a computer performance measurement procedure. Consequently, in this environment the measurement procedure should be completely conditioned by the decision process. This paper investigates various aspects of the computer performance measurement and evaluation procedure within the context of the computer evaluation, comparison, and selection process based on the Logic Scoring of Preference method. A set of elementary criteria for performance evaluation is proposed and the corresponding set of performance indicators is defined. The necessary performance measurements are based on a standardized set of synthetic benchmark programs and comprise three separate measurements: monoprogramming performance measurement, multiprogramming performance measurement, and multiprogramming efficiency measurement. Using the proposed elementary criteria, the measured performance indicators can be transformed into elementary preferences and then aggregated with the other, nonperformance elementary preferences obtained through the evaluation process. The applicability of the presented elementary criteria is illustrated by numerical examples.
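The Logic Scoring of Preference (LSP) approach outlined above can be sketched in code: an elementary criterion maps a raw performance indicator onto a preference in [0, 1], and the resulting elementary preferences are aggregated with a weighted power mean, whose exponent tunes the aggregator between conjunctive and disjunctive behavior. This is a minimal illustrative sketch, not the paper's exact criteria; all function names, indicator ranges, and weights below are hypothetical.

```python
# Illustrative sketch of LSP-style scoring (assumed, simplified forms).

def elementary_criterion(x, x_min, x_max):
    """Piecewise-linear elementary criterion: preference 0 at or below
    x_min, 1 at or above x_max, linear in between."""
    if x <= x_min:
        return 0.0
    if x >= x_max:
        return 1.0
    return (x - x_min) / (x_max - x_min)

def weighted_power_mean(prefs, weights, r):
    """Aggregate elementary preferences: (sum w_i * e_i**r)**(1/r),
    with weights summing to 1.  r < 1 gives conjunctive (and-like)
    behavior, r > 1 disjunctive (or-like); r = 0 is the geometric-mean
    limit."""
    if r == 0:
        prod = 1.0
        for e, w in zip(prefs, weights):
            prod *= e ** w
        return prod
    return sum(w * e ** r for e, w in zip(prefs, weights)) ** (1.0 / r)

# Hypothetical example: two performance preferences and one
# nonperformance preference combined into a global score.
e = [
    elementary_criterion(120, 50, 200),   # e.g. a throughput indicator
    elementary_criterion(0.8, 0.2, 1.0),  # e.g. multiprogramming efficiency
    0.9,                                  # a nonperformance preference
]
w = [0.4, 0.3, 0.3]
score = weighted_power_mean(e, w, r=-0.5)  # mildly conjunctive aggregation
```

The key property, reflected in the abstract, is that heterogeneous indicators (performance and nonperformance alike) become commensurable once mapped to the common preference scale, after which a single aggregation structure produces the overall worth used for comparison and selection.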
Dujmović, J.J. Computer selection and criteria for computer performance evaluation. International Journal of Computer and Information Sciences 9, 435–458 (1980). https://doi.org/10.1007/BF01417937