Experiences of usability evaluation of the IMAGINE speech-based interaction system


Abstract

IMAGINE is a recently completed multi-partner, multi-national European 5th Framework project aimed at developing systems for interaction with e-business applications through a multilingual natural language interface from mobile and other devices. The vision embodied in IMAGINE is universal access to electronic services by anyone (including disabled users) at any time. Since this long-term vision is not achievable with current capabilities in speech and natural language technologies, we concentrated initially on a monolingual (English) application, namely a speech-enabled interface to an Online Shop. This paper is primarily concerned with usability evaluation of a prototype system for this application. The principal finding is that considerable challenges and barriers remain to the successful deployment of speech systems in real applications. The problems are not only technical but also systemic (arising from the nature of speech itself), and are exacerbated by a lack of shared background and by difficulties of communication among the different professionals involved in the design, commissioning and maintenance of speech systems.

Keywords

Interactive speech systems · Usability evaluation · E-business · Natural language dialogue



Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. Information: Signals, Images, Systems (ISIS) Research Group, School of Electronics and Computer Science, University of Southampton, Southampton, UK
  2. Deaf Education through Listening and Talking (DELTA), Peterborough, UK
