Evaluation of Human-Computer Interface Development Tools: Problems and Promises

  • Deborah Hix

Abstract

The computer industry has seen an explosive emergence of user interface management system (UIMS) toolkits in the last few years. However, there are no standards for the components of such toolkits, and no procedure for systematically evaluating or comparing these toolkits. With their proliferation, ad hoc evaluations and comparisons are constantly being done, without a formal, structured approach.

This paper will describe several of the problems involved in developing an evaluation procedure for UIMSs, and will report on research that is showing promise as an evaluation procedure producing quantifiable criteria for evaluating and comparing UIMSs. Such a procedure could be used, for example, for choosing a UIMS for a particular human-computer interface development environment.

The procedure we have developed generates ratings for two dimensions:
  • Functionality of the UIMS being evaluated, and

  • Usability of the UIMS being evaluated.

Functionality refers to what the UIMS can do; that is, what interface styles, techniques, and features it can be used to produce. Usability refers to how well the UIMS does what it can do in terms of ease of use (a subjective, qualitative rating of how easy the UIMS is to use) and human performance (an objective, quantitative rating of how efficiently the UIMS can be used to perform a task).
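To make the two-dimensional scheme concrete, the following sketch shows how such ratings might be represented and combined. The class, the 0-to-1 scales, and the equal weighting are illustrative assumptions for exposition; they are not the scales or weights of the published procedure.

```python
from dataclasses import dataclass

@dataclass
class UIMSRating:
    """Hypothetical two-dimensional rating for a UIMS (all scores 0.0-1.0)."""
    functionality: float  # what the UIMS can do (styles, techniques, features)
    ease_of_use: float    # subjective, qualitative usability component
    performance: float    # objective, quantitative human-performance component

    def usability(self) -> float:
        # Usability combines the subjective and objective components;
        # equal weighting here is an assumption.
        return (self.ease_of_use + self.performance) / 2

    def overall(self, w_func: float = 0.5) -> float:
        # Weighted combination of the functionality and usability dimensions.
        return w_func * self.functionality + (1 - w_func) * self.usability()

toolkit = UIMSRating(functionality=0.8, ease_of_use=0.6, performance=0.7)
print(round(toolkit.usability(), 3))  # 0.65
print(round(toolkit.overall(), 3))    # 0.725
```

Keeping the two dimensions separate, as the procedure does, lets an evaluator trade them off explicitly (via `w_func` here) rather than collapsing them into a single opaque score.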

A significant by-product of this research is a practical taxonomy of types of human-computer interfaces, including interaction styles, features, and hardware, as well as a taxonomy of the types of interface development support provided by UIMSs and of general UIMS characteristics.

Keywords

Usability Rating, Evaluation Procedure, Functionality Rating, Interface Development, Textual Programming

Copyright information

© Plenum Press, New York 1990

Authors and Affiliations

  • Deborah Hix
    Department of Computer Science, Virginia Tech, Blacksburg, USA
