This paper outlines a research plan for combining model-based design methodology with multimodal interaction. The work takes up frameworks such as modality theory, TYCOON, and CARE and relates them to approaches for modelling the context of use, such as the interaction constraints model and the unifying reference framework for multi-target user interfaces. The research aims to produce methodological design support for multimodal interaction: the resulting framework will comprise a design pattern language for multimodal interaction and a set of model-based notational elements.


Keywords: Design Pattern · Modality Theory · Context Model · Modality Property · Output Modality




References

  1. Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I., Angel, S.: A Pattern Language. Oxford University Press, Oxford (1977)
  2. Benoît, C., et al.: Audio-visual and Multimodal Speech Systems. In: Handbook of Standards and Resources for Spoken Language Systems - Supplement Volume (2000)
  3. Bernsen, N.O.: Modality Theory: Supporting Multimodal Interface Design. In: Proc. ERCIM (1993)
  4. Bernsen, N.O.: A Toolbox of Output Modalities. Representing Output Information in Multimodal Interfaces. WPCS-95-10. Centre for Cognitive Science, Roskilde University (1995)
  5. Bernsen, N.O.: Towards a tool for predicting speech functionality. Speech Communication 23, 181–210 (1997)
  6. Bernsen, N.O.: Multimodality in language and speech systems - from theory to design support tool. In: Granström, B., House, D., Karlsson, I. (eds.) Multimodality in Language and Speech Systems, pp. 93–148. Kluwer Academic Publishers, Dordrecht (2002)
  7. Borchers, J.O.: A Pattern Approach to Interaction Design. AI & Society: Journal of Human-Centred Systems and Machine Intelligence 15(4), 359–376 (2001)
  8. Bürgy, C.: An Interaction Constraints Model for Mobile and Wearable Computer-Aided Engineering Systems in Industrial Applications. Doctoral Dissertation, University of Pittsburgh, Pennsylvania, USA (2002)
  9. Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., Vanderdonckt, J.: A unifying reference framework for multi-target user interfaces. Interacting with Computers 15(3), 289–308 (2003)
  10. Coutaz, J., Nigay, L., Salber, D., Blandford, A., May, J., Young, R.M.: Four Easy Pieces for Assessing the Usability of Multimodal Interaction: The CARE Properties. In: Proc. Interact 1995, pp. 115–120. Chapman & Hall, London (1995)
  11. Eco, U.: Einführung in die Semiotik [Introduction to Semiotics]. Fink, München (1994)
  12. Martin, J.-C.: Towards "intelligent" cooperation between modalities: the example of a system enabling multimodal interaction with a map. In: Proc. IJCAI 1997 Workshop on Intelligent Multimodal Systems, Nagoya, Japan (1997)
  13. Nigay, L., Coutaz, J.: A Design Space for Multimodal Systems: Concurrent Processing and Data Fusion. In: Proc. INTERCHI 1993. ACM Press, New York (1993)
  14. Hiyoshi, M., Shimazu, H.: Drawing pictures with natural language and direct manipulation. In: Proc. Coling 1994, Kyoto, Japan (1994)
  15. Rousseau, C., Bellik, Y., Vernier, F.: Multimodal Output Specification/Simulation Platform. In: Proc. ICMI 2005, Trento, Italy (2005)
  16. Suhm, B., Myers, B., Waibel, A.: Multimodal error correction for speech user interfaces. ACM Trans. Comput.-Hum. Interact. 8(1), 60–98 (2001)
  17. Tan, Y.K., Sherkat, N., Allen, T.: Error recovery in a blended style eye gaze and speech interface. In: Proc. ICMI 2003, pp. 196–202. ACM Press, New York (2003)
  18. Vernier, F., Nigay, L.: A Framework for the Combination and Characterization of Output Modalities. In: Palanque, P., Paternò, F. (eds.) DSV-IS 2000. LNCS, vol. 1946, pp. 35–50. Springer, Heidelberg (2001)
  19. Van Welie, M., van der Veer, G.C.: Pattern Languages in Interaction Design: Structure and Organization. In: Proc. Interact 2003 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

Andreas Ratzka, Lehrstuhl für Informationswissenschaft, Universität Regensburg, Regensburg, Germany
