Cognitive Processing, Volume 11, Issue 3, pp 263–272

Web usability evaluation with screen reader users: implementation of the partial concurrent thinking aloud technique

  • Stefano Federici
  • Simone Borsci
  • Gianluca Stamerra
Research Report


A verbal protocol technique adopted for web usability evaluation requires that users perform a double task: surfing and talking. However, when blind users surf with a screen reader and talk about how they interact with the computer, the evaluation is affected by a structural interference: users are forced to think aloud and listen to the screen reader at the same time. The aim of this study is to develop a verbal protocol technique for visually impaired users that overcomes the limits of concurrent and retrospective protocols. The technique we developed, called partial concurrent thinking aloud (PCTA), integrates a modified set of concurrent verbalizations with retrospective analysis. One group of 6 blind users and another group of 6 sighted users evaluated the usability of a website using PCTA. By estimating the number of necessary users by means of an asymptotic test, we found that the two groups had an equivalent ability to identify usability problems, both over 80%. This result suggests that PCTA, while respecting the properties of classic verbal protocols, also overcomes the structural interference and the limits of concurrent and retrospective protocols when used with screen reader users. In this way, PCTA reduces the difference in usability-evaluation efficiency between blind and sighted users.


Keywords: Asymptotic test · Human–computer interaction · Thinking aloud · Usability evaluation
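The asymptotic estimation of the necessary number of users mentioned in the abstract follows the problem-discovery curve of Nielsen and Landauer (1993), in which the expected proportion of distinct usability problems found grows asymptotically with sample size. A minimal sketch of that model, assuming an illustrative per-user discovery probability p = 0.31 (a value often cited by Nielsen, not a figure reported in this paper):

```python
import math

def problems_found(n_users, total_problems, p=0.31):
    """Expected number of distinct problems uncovered by n_users evaluators,
    per the discovery model found(n) = N * (1 - (1 - p)^n)."""
    return total_problems * (1 - (1 - p) ** n_users)

def users_needed(target_fraction, p=0.31):
    """Smallest n such that the expected discovery rate reaches target_fraction."""
    return math.ceil(math.log(1 - target_fraction) / math.log(1 - p))
```

Under this assumed p, roughly five users suffice to exceed an 80% discovery rate, which is consistent with the abstract's finding that groups of six users each identified over 80% of the problems.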

Supplementary material

Supplementary material 1: 10339_2009_347_MOESM1_ESM.doc (DOC, 233 kb)



Copyright information

© Marta Olivetti Belardinelli and Springer-Verlag 2009

Authors and Affiliations

  • Stefano Federici
    • 1
  • Simone Borsci
    • 2
  • Gianluca Stamerra
    • 2
  1. Department of Human and Education Sciences, University of Perugia, Perugia, Italy
  2. ECoNA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, University of Rome ‘La Sapienza’, Rome, Italy
