Multimodal Architectures: Issues and Experiences

  • Giovanni Frattini
  • Luigi Romano
  • Vladimiro Scotto di Carlo
  • Pierpaolo Petriccione
  • Gianluca Supino
  • Giuseppe Leone
  • Ciro Autiero
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4277)

Abstract

The penetration of mobile devices in Western countries is still increasing. The Italian case is particularly striking: there is more than one mobile terminal per inhabitant. Given this large potential audience, there is a real need for innovation and new services. In this context, usable multimodal services could have an unexpected impact on social behaviour. Nevertheless, the research community has yet to propose a framework for building generic multimodal services that covers their entire lifecycle. We are currently defining an architecture for building coordinated, simultaneous multimodal applications, relying as much as possible on open source software: our goal is to define a set of tools enabling the rapid deployment of a generic multimodal service. In our opinion, a platform based on open source software could meet the expectations of a large number of service developers. A special effort to enable the mass diffusion of mobile multimodal services should be focused on the client side, where the situation is still evolving.

Keywords

Mobile Device · Open Source Software · Mobile Terminal · Mobile Client · User Context

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Giovanni Frattini
  • Luigi Romano
  • Vladimiro Scotto di Carlo
  • Pierpaolo Petriccione
  • Gianluca Supino
  • Giuseppe Leone
  • Ciro Autiero

  1. AtosOrigin Italia S.p.A., Pozzuoli (NA), Italy
