Abstract
We discuss issues in the design and implementation of multimodal spoken dialogue systems with wireless client devices. In particular, we discuss the design of a usable interface that exploits the complementary features of the audio and visual channels to enhance usability. We then describe two client-server architectures in which we implemented applications for mapping and navigating to points of interest.
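The complementarity of the audio and visual channels mentioned above can be illustrated with a minimal sketch of multimodal fusion: a spoken deictic reference ("here") is resolved against a touch gesture on the map display. All names here (`MultimodalRequest`, `fuse_inputs`) are hypothetical and not taken from the chapter's actual system.

```python
# Illustrative sketch of audio/visual channel fusion for a map application.
# A spoken query carries the intent; a tap on the visual channel supplies
# the location referent that speech alone leaves ambiguous.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class MultimodalRequest:
    utterance: str                       # recognized speech (audio channel)
    tap: Optional[Tuple[float, float]]   # tap mapped to lat/lon (visual channel)


def fuse_inputs(req: MultimodalRequest) -> dict:
    """Resolve a deictic spoken reference using the tap location, if any."""
    if "here" in req.utterance and req.tap is not None:
        location = req.tap    # visual channel resolves the spoken "here"
    else:
        location = None       # no referent: the dialogue manager must clarify
    return {"query": req.utterance, "location": location}


result = fuse_inputs(MultimodalRequest("restaurants near here", (40.7, -74.0)))
print(result)
```

In a client-server arrangement of the kind the chapter describes, this fusion step would run server-side, with the wireless client forwarding the recognized utterance and the tap coordinates.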
Copyright information
© 2005 Springer
Cite this chapter
Pieraccini, R. et al. (2005). Multimodal Spoken Dialogue with Wireless Devices. In: Minker, W., Bühler, D., Dybkjær, L. (eds) Spoken Multimodal Human-Computer Dialogue in Mobile Environments. Text, Speech and Language Technology, vol 28. Springer, Dordrecht. https://doi.org/10.1007/1-4020-3075-4_10
Print ISBN: 978-1-4020-3073-4
Online ISBN: 978-1-4020-3075-8