A Comparative Analysis of Usability Evaluation Methods on Their Versatility in the Face of Diversified User Input Methods

  • Daiju Ishikawa
  • Takashi Kato
  • Chigusa Kita
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 528)


Every command consists of an action and an object, suggesting that a usability problem can occur whenever the user is unable to identify the action and/or the object appropriate to his or her current goal. The recent shift from mouse-based to touch-based interaction demands that any usability evaluation method be sensitive not only to object-related but also to action-related usability problems. This study involved a total of 32 participants, four kinds of tasks differing in the difficulty of identifying objects and executing actions, and four qualitative methods of usability evaluation. Analyses of sets of observation data with concurrent and retrospective protocols by the same participant and interpretive protocols by a new participant indicate that, while the oral instruction method seems least appropriate, the newly devised narration method holds better prospects than the observation and think-aloud methods for the usability evaluation of touch-based interaction.


Usability evaluation method · Qualitative method · Touch-based interaction · Mouse-based interaction



This research was supported by JSPS KAKENHI Grant Number 25510018. Daiju Ishikawa is currently at Marubeni Information Systems Co., Ltd.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Graduate School of Informatics, Kansai University, Takatsuki, Japan
