Personal and Ubiquitous Computing, Volume 10, Issue 5, pp 269–283

Can we do without GUIs? Gesture and speech interaction with a patient information system

  • Eamonn O’Neill
  • Manasawee Kaenampornpan
  • Vassilis Kostakos
  • Andrew Warr
  • Dawn Woodgate
Original Article

DOI: 10.1007/s00779-005-0048-1

Cite this article as:
O’Neill, E., Kaenampornpan, M., Kostakos, V. et al. Pers Ubiquit Comput (2006) 10: 269. doi:10.1007/s00779-005-0048-1

Abstract

We have developed a gesture input system that provides a common interaction technique across mobile, wearable and ubiquitous computing devices of diverse form factors. In this paper, we combine our gestural input technique with speech output and test whether the absence of a visual display impairs usability in this kind of multimodal interaction. This question is of particular relevance to mobile, wearable and ubiquitous systems, where visual displays may be restricted or unavailable. We conducted the evaluation using a prototype system combining gesture input and speech output to provide information to patients in a hospital Accident and Emergency Department. One group of participants was instructed to access various services using gestural inputs; the services were delivered by automated speech output. Throughout their tasks, these participants could see a visual display on which a GUI presented the available services and their corresponding gestures. A second group of participants performed the same tasks without this visual display. We predicted that the participants without the visual display would make more incorrect gestures and take longer to perform correct gestures than the participants with the visual display. In fact, we found no significant difference in the number of incorrect gestures made, and the participants with the visual display took longer than those without it. These results suggest that, for a small set of semantically distinct services with memorable and distinct gestures, the absence of a GUI visual display does not impair the usability of a system with gesture input and speech output.

Keywords

Multimodal interaction · Gesture input · Speech output · GUI · Mobile · Ubiquitous

Copyright information

© Springer-Verlag London Limited 2005

Authors and Affiliations

  • Eamonn O’Neill (1)
  • Manasawee Kaenampornpan (1)
  • Vassilis Kostakos (1)
  • Andrew Warr (1)
  • Dawn Woodgate (1)

  1. Human-Computer Interaction Group, Department of Computer Science, University of Bath, Bath, UK