Usage-Based Automatic Detection of Usability Smells

  • Patrick Harms
  • Jens Grabowski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8742)


With an increasing number of supported devices, usability evaluation of websites becomes a laborious task. It should therefore be automated as far as possible. In this paper, we present a summative method for the automated usability evaluation of websites. The approach records user actions and transforms them into task trees. The task trees are then checked for usability smells to identify potential usability issues. The approach was applied in two case studies and showed promising results in identifying four types of usability smells.


Keywords: task trees, usage-based, automatic usability evaluation
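The pipeline sketched in the abstract — recording user actions and scanning the resulting traces for recurring problem patterns — can be illustrated with a minimal heuristic. The function name, the "required retry" smell, and the threshold below are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Hedged sketch of the detection step: each recorded action trace is scanned
# for long runs of the same repeated action, a pattern that may hint at a
# "required retry" usability smell (e.g. users clicking submit repeatedly).
# Names and the threshold are assumptions, not the paper's actual heuristics.

def detect_required_retries(action_traces, threshold=2):
    """Return (trace, action) pairs where an action repeats more than
    `threshold` times in a row within a recorded trace."""
    smells = []
    for trace in action_traces:
        run_length = 1
        for prev, curr in zip(trace, trace[1:]):
            run_length = run_length + 1 if curr == prev else 1
            if run_length > threshold:
                smells.append((tuple(trace), curr))
                break  # one finding per trace is enough for this sketch
    return smells

traces = [
    ["open_form", "click_submit", "click_submit", "click_submit"],
    ["open_form", "fill_field", "click_submit"],
]
# Only the first trace repeats "click_submit" more than twice in a row.
print(detect_required_retries(traces))
```

In the paper's approach the input would not be raw traces but task trees generated from them; this sketch only conveys the general idea of matching recorded behaviour against a smell pattern.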





Copyright information

© IFIP International Federation for Information Processing 2014

Authors and Affiliations

  • Patrick Harms
  • Jens Grabowski

  Institute of Computer Science, University of Göttingen, Göttingen, Germany
