Multimodal Input for Meeting Browsing and Retrieval Interfaces: Preliminary Findings

  • Agnes Lisowska
  • Susan Armstrong
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4299)

Abstract

In this paper we discuss the results of user-based experiments to determine whether multimodal input to an interface for browsing and retrieving multimedia meetings gives users added value in their interactions. We focus on interaction with the Archivus interface using mouse, keyboard, voice and touchscreen input. We find that voice input in particular appears to give added value, especially when used in combination with more familiar modalities such as the mouse and keyboard. We conclude with a discussion of some of the contributing factors to these findings and directions for future work.

Keywords

Input Modality · Language Input · Multimodal Interaction · Traditional Paradigm · Keyboard Input

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Agnes Lisowska (1)
  • Susan Armstrong (1)

  1. ISSCO/TIM/ETI, University of Geneva, Geneva, Switzerland
