User Interaction Adaptation within Ambient Environments

  • G. Pruvost
  • T. Heinroth
  • Y. Bellik
  • W. Minker


Ambient environments raise new user interaction issues. The interaction environment, formerly static and closed, becomes open, heterogeneous and dynamic. The variety of users, devices and physical environments leads to a more complex interaction context. Consequently, the interface has to adapt itself to preserve its utility and usability. It is no longer reasonable to propose static, rigid interfaces while users, systems and environments grow ever more diverse. The user interface must respond to the dynamic nature of the interaction context introduced by ambient environments with an equally dynamic adaptation. Thanks to the interaction richness it offers, multimodality represents an interesting solution to this adaptation problem. The objective is to exploit all the interaction capabilities available to the system at a given moment to instantiate and evolve user interfaces. In this chapter, we start with a survey of the state of the art in user interaction adaptation. After discussing the limitations of existing approaches, we present our proposals for achieving user interaction adaptation within ambient environments. We then describe the derived software architecture and the user evaluation it led to. We conclude with some directions for future work.
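The core idea of the abstract, exploiting whatever interaction capabilities are currently available to decide how to present information, can be illustrated with a minimal sketch. This is not the chapter's architecture; all names (`Device`, `choose_modalities`, the context keys) are hypothetical, and real systems would use far richer context models and ranking rules.

```python
# Illustrative sketch: picking output modalities from the devices
# currently available in an ambient environment, filtered by a
# simple interaction context. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    modalities: frozenset  # e.g. frozenset({"speech", "display"})
    available: bool = True

def choose_modalities(devices, context):
    """Return the modalities usable under the given interaction context."""
    candidates = set()
    for d in devices:
        if d.available:
            candidates |= d.modalities
    # Context-driven filtering: drop speech in noisy surroundings,
    # drop visual output when the user is far from any screen.
    if context.get("noise_level", 0.0) > 0.7:
        candidates.discard("speech")
    if context.get("user_distance_m", 0.0) > 3.0:
        candidates.discard("display")
    return sorted(candidates)

devices = [
    Device("wall_screen", frozenset({"display"})),
    Device("smart_speaker", frozenset({"speech"})),
]
print(choose_modalities(devices, {"noise_level": 0.2, "user_distance_m": 4.0}))
# → ['speech']
```

As the device set or context changes at runtime, re-running the selection yields a different modality combination, which is the kind of dynamic adaptation the chapter argues for.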


Keywords: User Interface, Behavioural Model, Interaction Task, Interaction Agent, Interaction Adaptation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. National Center for Scientific Research (LIMSI-CNRS), Orsay cedex, France
  2. Institute of Information Technology, Ulm University, Ulm, Germany
