Criteria and Techniques for the Objective Measurement of Human-Computer Performance
The experiments described in this paper compare four prototype versions of a user interface, each with a different design of screen dialogue. Test procedures included the selection and stratification of groups of test subjects, the design of transaction sets, the pre-training and pre-testing of subjects, the conduct of test runs, and the post-testing of subjects’ comprehension, attitudes, and perceptions.
The test results were analysed mainly with univariate statistics and Analysis of Variance (ANOVA). These techniques are relatively simple to apply and revealed clear, statistically significant distinctions between alternative versions of the interface. They also showed relationships between effects, some of which supported the original hypothesis that dialogue, task, and job elements are closely interconnected; for example, there was a significant positive correlation between perceived ease of use and expectation of improved job prospects.
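The two analyses named above, a one-way ANOVA across interface versions and a correlation between two attitude measures, can be sketched in a few lines. This is a minimal illustration, not the study's actual procedure: all data values and variable names (completion times, rating scales) are hypothetical.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)                      # number of groups (interface versions)
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation explained by the version)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (residual variation)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical task-completion times (seconds) for four dialogue versions
times = [[52, 48, 55, 50], [61, 65, 58, 63], [49, 47, 51, 50], [70, 68, 72, 75]]
f_stat = one_way_anova_f(times)

# Hypothetical 5-point ratings: perceived ease of use vs. job-prospect expectation
ease = [4, 5, 3, 4, 2, 5]
prospects = [4, 4, 3, 5, 2, 5]
r = pearson_r(ease, prospects)
```

A large F statistic here would indicate that mean completion times differ between versions by more than within-version noise; a positive r corresponds to the kind of ease-of-use / job-prospects association reported above.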
The research concluded that techniques of this sort provide a valuable means of interface evaluation. They are also, for the most part, easy enough to use to be cost-effective and practical for medium- and large-scale systems analysis, design, and implementation projects, in which their cost would be a relatively small fraction of the total.
Keywords: Discriminant Function Analysis, System Analyst, Manual Registration, Interface Evaluation, Naive User