Toward the Development of an Integrative Framework for Multimodal Dialogue Processing

  • Conference paper
On the Move to Meaningful Internet Systems: OTM 2008 Workshops (OTM 2008)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5333)

Abstract

The concept of “universal accessibility” is acquiring an important role in human-computer interaction (HCI) research. It is driven by the need to simplify access to technological devices, such as mobile phones, PDAs and portable PCs, by making human-computer interaction more similar to human-human communication. In this direction, multimodal interaction has emerged as a new interaction paradigm that advances the implementation of universal accessibility. The main challenge of multimodal interaction, which is also the main topic of this paper, lies in developing a framework that can acquire information from any input modality, give each input an appropriate representation with a common meaning, integrate these individual representations into a joint semantic interpretation, and determine the most suitable way to react to the interpreted multimodal sentence by activating the appropriate output devices. A detailed description of this framework and its functionalities is given in this paper, along with some preliminary application details.
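
The abstract does not fix a concrete interface, but the pipeline it outlines — capturing inputs from arbitrary modalities, mapping each to a common semantic representation, fusing them into a joint interpretation, and selecting output devices for the response — can be sketched roughly as below. This is a minimal illustrative sketch only; every class, method and parameter name is hypothetical and does not come from the paper.

```python
# Minimal sketch of the pipeline described in the abstract.
# All names are hypothetical illustrations, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SemanticFrame:
    """Common meaning representation shared by all modalities."""
    intent: str                                   # e.g. "move_object"
    slots: Dict[str, str] = field(default_factory=dict)
    confidence: float = 1.0


class MultimodalFramework:
    """Acquire inputs from any modality, fuse them, and choose an output device."""

    def __init__(self) -> None:
        self._interpreters: Dict[str, Callable[[object], SemanticFrame]] = {}
        self._renderers: Dict[str, Callable[[SemanticFrame], None]] = {}

    def register_modality(self, name: str,
                          interpreter: Callable[[object], SemanticFrame]) -> None:
        # Each input modality (speech, gesture, pen, ...) supplies its own
        # mapping into the shared semantic representation.
        self._interpreters[name] = interpreter

    def register_output(self, name: str,
                        renderer: Callable[[SemanticFrame], None]) -> None:
        self._renderers[name] = renderer

    def fuse(self, frames: List[SemanticFrame]) -> SemanticFrame:
        # Naive late fusion: keep the most confident intent and merge slots,
        # letting complementary modalities fill each other's gaps.
        if not frames:
            return SemanticFrame("unknown", {}, 0.0)
        best = max(frames, key=lambda f: f.confidence)
        merged = dict(best.slots)
        for frame in frames:
            for key, value in frame.slots.items():
                merged.setdefault(key, value)
        return SemanticFrame(best.intent, merged, best.confidence)

    def process(self, inputs: Dict[str, object], output: str = "speech") -> SemanticFrame:
        # 1) interpret each raw input, 2) fuse into a joint interpretation,
        # 3) react on the requested output device.
        frames = [self._interpreters[m](raw) for m, raw in inputs.items()
                  if m in self._interpreters]
        joint = self.fuse(frames)
        self._renderers.get(output, lambda f: None)(joint)
        return joint
```

In this sketch the naive `fuse` step merely stands in for the paper's joint semantic interpretation stage; a real framework of the kind the abstract describes would also have to handle temporal alignment of the modalities and cross-modal ambiguity when the individual representations conflict.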



Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

D’Ulizia, A., Ferri, F., Grifoni, P. (2008). Toward the Development of an Integrative Framework for Multimodal Dialogue Processing. In: Meersman, R., Tari, Z., Herrero, P. (eds) On the Move to Meaningful Internet Systems: OTM 2008 Workshops. OTM 2008. Lecture Notes in Computer Science, vol 5333. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88875-8_74

  • DOI: https://doi.org/10.1007/978-3-540-88875-8_74

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-88874-1

  • Online ISBN: 978-3-540-88875-8

  • eBook Packages: Computer Science, Computer Science (R0)
