Software Quality Journal, Volume 14, Issue 2, pp. 159–178

Usability measurement and metrics: A consolidated model

  • Ahmed Seffah
  • Mohammad Donyaee
  • Rex B. Kline
  • Harkirat K. Padda

Abstract

Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUI-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reason why encompassing interactive systems (computers plus people) fail in actual use. Designing this diversity of applications so that they actually achieve their intended purposes in terms of ease of use is not an easy task. Although there are many individual methods for evaluating usability, they are not well integrated into a single conceptual framework that facilitates their use by developers who are not trained in the field of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO/IEC 9126, IEEE Std. 610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighting the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement called Quality in Use Integrated Measurement (QUIM). The QUIM model comprises 10 factors, each of which corresponds to a specific facet of usability identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria, which are further decomposed into 127 specific metrics. The paper also explains how a consolidated model such as QUIM can help in developing a usability measurement theory.
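
To make the factor-criterion-metric hierarchy described in the abstract concrete, here is a minimal sketch (in Python, not from the paper) of how such a consolidated model could be represented and scored by simple aggregation. The class names, the "Time behavior" criterion, the sample metrics, the normalization to [0, 1], and the equal-weight averaging are illustrative assumptions only; they are not QUIM's actual factors, criteria, metrics, or aggregation rules.

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of a hierarchical quality-in-use model in the spirit of QUIM:
# factors decompose into measurable criteria, which decompose into metrics.
# All names, weights, and values below are illustrative placeholders.

@dataclass
class Metric:
    name: str
    value: float  # assumed normalized to [0, 1] for aggregation

@dataclass
class Criterion:
    name: str
    metrics: List[Metric] = field(default_factory=list)

    def score(self) -> float:
        # Unweighted mean of metric values; a real model might weight them.
        return sum(m.value for m in self.metrics) / len(self.metrics)

@dataclass
class Factor:
    name: str
    criteria: List[Criterion] = field(default_factory=list)

    def score(self) -> float:
        return sum(c.score() for c in self.criteria) / len(self.criteria)

# Example: an "Efficiency" factor (one of the facets named in the keywords)
# measured through a hypothetical "Time behavior" criterion.
efficiency = Factor("Efficiency", [
    Criterion("Time behavior", [
        Metric("task completion time (normalized)", 0.8),
        Metric("proportion of productive time", 0.7),
    ]),
])

print(f"{efficiency.name}: {efficiency.score():.2f}")  # -> Efficiency: 0.75
```

In QUIM itself the decomposition is 10 factors, 26 criteria, and 127 metrics; the point of the sketch is only the hierarchical roll-up of measured data into higher-level usability facets.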

Keywords

Usability, Measurement, Metrics, Effectiveness, Efficiency, User satisfaction, Software engineering quality models

References

  1. Ahuja, V. 2000. Building trust in electronic commerce, IT Professional 2: 61–63.
  2. Atif, Y. 2002. Building trust in e-commerce, IEEE Internet Computing 6: 18–24.
  3. Bevan, N. 1995. Measuring usability as quality of use, Software Quality Journal 4: 115–130.
  4. Bevan, N. and Azuma, M. 1997. Quality in Use: Incorporating human factors into the software engineering lifecycle, Proceedings of the Third IEEE International Symposium and Forum on Software Engineering Standards, Walnut Creek, CA, pp. 169–179.
  5. Bevan, N. and Macleod, M. 1994. Usability measurement in context, Behaviour and Information Technology 13: 132–145.
  6. Bevan, N. and Schoeffel, R. 2001. A proposed standard for consumer product usability, Proceedings of the 1st International Conference on Universal Access in Human Computer Interaction, New Orleans, LA, pp. 557–561.
  7. Boehm, B.W., Abts, C., Brown, A.W., Chulani, S., Clark, B.F., and Steece, B. 2000. Software Cost Estimation with COCOMO II, New York: Prentice-Hall.
  8. Boehm, B.W., Brown, J.R., Kaspar, H., Lipow, M., Macleod, G., and Merritt, M.J. 1978. Characteristics of Software Quality, New York: North Holland.
  9. Caldwell, B., Chisholm, W., Vanderheiden, G., and White, J. (Eds.), 2004. Web Content Accessibility Guidelines 2.0, W3C Working Draft 30 July 2004, World Wide Web Consortium. Retrieved July 3, 2005 from http://www.w3.org/TR/2004/WD-WCAG20-20040730/.
  10. Cheskin Research and Studio Archetype/Sapient 1999. e-Commerce trust study. Retrieved June 30, 2005 from http://www.cheskin.com/docs/sites/1/report-eComm%20Trust1999.pdf.
  11. Constantine, L.L. and Lockwood, L.A.D. 1999. Software for Use: A Practical Guide to the Models and Methods of Usage-Centred Design, New York: Addison-Wesley.
  12. Council of the European Union, 1990. Council Directive 90/270/EEC on the Minimum Safety and Health Requirements for Work with Display Screen Equipment, Official Journal of the European Communities L 156: 14–18.
  13. Curtis, B., 1980. Measurement and experimentation in software engineering, Proceedings of the IEEE 68: 1144–1157.
  14. Fenton, N.E. and Whitty, R., 1995. Software Quality Assurance and Measurement: A Worldwide Perspective, London: International Thomson Computer Press.
  15. Friedman, B., Kahn, P.H., Jr., and Howe, D.C. 2000. Trust online, Communications of the ACM 43: 34–40.
  16. Hyatt, L.E. and Rosenberg, L.H. 1996. A software quality model and metrics for identifying project risks and assessing software quality. Retrieved July 3, 2005 from http://satc.gsfc.nasa.gov/support/STC_APR96/qualtiy/stc_qual.PDF.
  17. Institute of Electrical and Electronics Engineers, 1990. 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology, Los Alamitos, CA: Author.
  18. Institute of Electrical and Electronics Engineers, 1998. 1061-1998, Standard for a Software Quality Metrics Methodology, Los Alamitos, CA: Author.
  19. International Electrotechnical Commission, 2004. IEC 60300-3-9, Ed. 2.0, Dependability Management, Part 3-9: Application Guide, Risk Analysis of Technological Systems, Geneva: Author.
  20. International Organization for Standardization, 1998. ISO 9241-11, Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 11: Guidance on Usability, Geneva: Author.
  21. International Organization for Standardization, 1999. ISO 13407:1999, Human-Centered Design Processes for Interactive Systems, Geneva: Author.
  22. International Organization for Standardization/International Electrotechnical Commission, 1991. ISO/IEC 9126, Information Technology, Software Product Evaluation, Quality Characteristics and Guidelines for their Use, Geneva: Author.
  23. International Organization for Standardization/International Electrotechnical Commission, 1995. ISO/IEC 12207, Information Technology, Software Life Cycle Processes, Geneva: Author.
  24. International Organization for Standardization/International Electrotechnical Commission, 1999. ISO/IEC 14598-1, Information Technology, Software Product Evaluation, Part 1: General Overview, Geneva: Author.
  25. International Organization for Standardization/International Electrotechnical Commission, 2001. ISO/IEC 9126-1, Software Engineering, Product Quality, Part 1: Quality Model, Geneva: Author.
  26. International Organization for Standardization/International Electrotechnical Commission, 2001. ISO/IEC 9126-4, Software Engineering, Product Quality, Part 4: Quality in Use Metrics, Geneva: Author.
  27. Ivory, M.Y. and Hearst, M.A. 2001. The state of the art in automating usability evaluation of user interfaces, ACM Computing Surveys 33: 470–516.
  28. Jarrar, M., Demey, J., and Meersman, R. 2003. On using conceptual data modeling for ontology engineering, Journal on Data Semantics 2800: 185–207.
  29. John, B.E. and Kieras, D.E. 1996. Using GOMS for user interface design and evaluation: Which technique? ACM Transactions on Computer-Human Interaction 3: 287–319.
  30. Kirakowski, J. and Corbett, M., 1993. SUMI: The Software Usability Measurement Inventory, British Journal of Educational Technology 24: 210–212.
  31. Lin, H.X., Choong, Y.-Y., and Salvendy, G., 1997. A proposed index of usability: A method for comparing the relative usability of different software systems, Behaviour and Information Technology 16: 267–277.
  32. Macleod, M., 1994. Usability: Practical Methods for testing and Improvement, Proceedings of the Norwegian Computer Society Software Conference, Sandvika, Norway. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/mm-us94.pdf.
  33. Macleod, M., and Rengger, R., 1993. The development of DRUM: A software tool for video-assisted usability evaluation. Retrieved July 3, 2005 from http://www.usability.serco.com/papers/drum93.pdf
  34. Macleod, M., Bowden, R., Bevan, N., and Curson, I., 1997. The MUSiC performance measurement method, Behaviour and Information Technology 16: 279–293.
  35. McCall, J.A., Richards, P.K., and Walters, G.F., 1977. Factors in Software Quality, Springfield, VA: National Technical Information Service.
  36. Nielsen, J., 1993. Usability Engineering, London, UK: Academic Press.
  37. Nunnally, J.C. and Bernstein, I.H., 1994. Psychometric Theory (3rd ed.), New York: McGraw-Hill.
  38. Olsina, L., Lafuente, G., and Rossi, G., 2001. Specifying quality characteristics and attributes for websites, in S. Murugesan and Y. Deshpande (Eds.), Web Engineering, Software Engineering and Web Application Development, London: Springer-Verlag, pp. 266–278.
  39. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., and Carey, T. 1994. Human-Computer Interaction, Wokingham, UK: Addison-Wesley.
  40. Rubin, J., 1994. Handbook of Usability Testing, New York: John Wiley.
  41. Shneiderman, B., 1992. Designing the User Interface: Strategies for Effective Human-Computer Interaction (2nd ed.), Reading, MA: Addison-Wesley.
  42. Schneidewind, N.F., 1992. Methodology for validating software metrics, IEEE Transactions on Software Engineering 18: 410–422.
  43. Scholtz, J. and Laskowski, S., 1998. Developing usability tools and techniques for designing and testing web sites, Proceedings of the Fourth Conference on Human Factors & the Web, Basking Ridge, NJ. Retrieved July 3, 2005 from http://www.research.att.com/conf/hfweb/proceedings/scholtz/index.html.
  44. Sears, A., 1995. AIDE: A step toward metric-based interface development tools, Proceedings of the ACM Symposium on User Interface Software and Technology, New York: ACM Press, pp. 101–110.
  45. Shackel, B., 1991. Usability—Context, framework, definition, design and evaluation, in B. Shackel and S. Richardson (Eds.), Human Factors for Informatics Usability, Cambridge: Cambridge University Press, pp. 21–38.
  46. Stevens, S.S., 1959. Measurement, psychophysics, and utility, in C.W. Churchman and P. Ratoosh (Eds.), Measurement: Definitions and Theories, New York: John Wiley, pp. 18–63.
  47. Tilson, R., Dong, J., Martin, S., and Kieke, E., 1998. Factors and principles affecting the usability of four e-commerce sites, Proceedings of the 4th Conference on Human Factors & the Web, Basking Ridge, New Jersey. Retrieved July 3, 2005 from http://www.research.att.com/conf/hfweb/proceedings/tilson/index.html.
  48. Yamada, S., Hong, J.-K., and Sugita, S. 1995. Development and evaluation of hypermedia for museum education: Validation of metrics, ACM Transactions on Computer-Human Interaction 12: 410–422.
  49. Landauer, T.K., 1995. The Trouble with Computers: Usefulness, Usability and Productivity, Cambridge, MA: MIT Press.
  50. Mayhew, D.J., 1999. The Usability Engineering Lifecycle: A Practitioner’s Handbook for User Interface Design, San Francisco: Morgan Kaufmann.

Copyright information

© Springer Science + Business Media, Inc. 2006

Authors and Affiliations

  • Ahmed Seffah (1)
  • Mohammad Donyaee (1)
  • Rex B. Kline (2)
  • Harkirat K. Padda (1)
  1. Human-Centered Software Engineering Group, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
  2. Department of Psychology (PY 151-6), Concordia University, Montreal, Canada
