Criteria and Techniques for the Objective Measurement of Human-Computer Performance

  • Peter Jagodzinski
  • David D. Clarke


The experiments described in this paper compare four prototype versions of a user interface, each with a different screen-dialogue design. The test procedures covered the selection and stratification of groups of test subjects, the design of transaction sets, the pre-training and pre-testing of subjects, the conduct of test runs, and the post-testing of subjects’ comprehension, attitudes, and perceptions.
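By way of illustration only (the abstract does not give the actual assignment procedure, subject counts, or strata), a stratified allocation of subjects to the four prototypes might look like the following Python sketch; the strata and pool size are assumptions:

    import random
    from collections import defaultdict

    # Hypothetical pool of 40 subjects, stratified here by prior
    # computing experience; real strata would come from the study design.
    subjects = [(i, "novice" if i % 2 == 0 else "experienced") for i in range(40)]
    versions = ["A", "B", "C", "D"]  # the four prototype interface versions

    # Group subjects by stratum, shuffle within each stratum, then deal
    # them out round-robin so every version receives a comparable mix
    # of novice and experienced users.
    strata = defaultdict(list)
    for subject_id, stratum in subjects:
        strata[stratum].append(subject_id)

    assignment = defaultdict(list)
    random.seed(1987)  # fixed seed so the sketch is reproducible
    for members in strata.values():
        random.shuffle(members)
        for i, subject_id in enumerate(members):
            assignment[versions[i % len(versions)]].append(subject_id)

    for version in versions:
        print(version, sorted(assignment[version]))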

The analysis of test results was carried out mainly with univariate statistics and Analysis of Variance (ANOVA). These techniques are relatively simple to apply and revealed clear, statistically significant distinctions between alternative versions of the interface. They also showed relationships between effects, some of which supported the original hypothesis that dialogue, task, and job elements are closely interconnected. For example, there was a significant positive correlation between perceived ease of use and expectation of improved job prospects.
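As a rough illustration (not the paper's own analysis, and with hypothetical data and variable names), an ANOVA across the four versions and a correlation of the kind reported above could be computed with scipy as follows:

    from scipy import stats

    # Hypothetical task-completion times (seconds) for subjects who used
    # each of the four prototype dialogue designs; real values would come
    # from the logged test runs.
    version_a = [412, 388, 455, 401, 430]
    version_b = [365, 350, 372, 340, 361]
    version_c = [498, 470, 512, 489, 505]
    version_d = [420, 399, 441, 410, 428]

    # One-way ANOVA: do mean completion times differ across the versions?
    f_stat, p_value = stats.f_oneway(version_a, version_b, version_c, version_d)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Correlation between two post-test questionnaire scores, e.g.
    # perceived ease of use and expectation of improved job prospects
    # (hypothetical 1-7 ratings, one pair per subject).
    ease_of_use = [6, 5, 7, 4, 6, 5, 7, 3, 6, 5]
    job_prospects = [5, 5, 6, 4, 6, 4, 7, 3, 5, 5]
    r, p = stats.pearsonr(ease_of_use, job_prospects)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")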

The conclusions of the research were that techniques of this sort provide a valuable means of interface evaluation. For the most part they are also easy enough to apply to be cost-effective and practical for medium- and large-scale computer systems analysis, design, and implementation projects, in which their cost would be a relatively small fraction of the total.


Keywords: Discriminant Function Analysis · System Analyst · Manual Registration · Interface Evaluation · Naive User



Copyright information

© Plenum Press, New York 1987

Authors and Affiliations

  • Peter Jagodzinski (1)
  • David D. Clarke (2)

  1. Department of Computing, Plymouth Polytechnic, Plymouth, UK
  2. Department of Experimental Psychology, University of Oxford, Oxford, UK
