Designing Cognition-Centric Smart Room Predicting Inhabitant Activities

  • A. L. Ronzhin
  • A. A. Karpov
  • I. S. Kipyatkova
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5638)


Providing easy-to-use, well-timed services while remaining invisible to the user is one of the key features of ambient intelligence. A multimodal user interface capable of perceiving the speech, movements, poses and gestures of participants in order to determine their needs provides a natural and intuitively understandable way of interacting with the developed intelligent meeting room. The room's awareness of the participants' spatial positions, their current activities, their roles in the current event, and their preferences helps to predict the participants' intentions and needs more accurately. The technological framework, equipment and technologies applied in the intelligent meeting room are presented. Some scenarios and data structures used to formalize context and behavior information from practical human-human, human-machine and machine-machine interaction are discussed.
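The abstract mentions data structures that formalize context information (position, activity, role, preferences) so the room can predict participant needs. The paper itself does not show its schema; the following is a minimal hypothetical sketch in Python of what such a per-participant context record and a toy rule-based prediction step might look like. The field names and rules are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantContext:
    """Hypothetical record of what the room knows about one participant."""
    participant_id: str
    position: tuple          # assumed (x, y) coordinates in the room, metres
    activity: str            # e.g. "speaking", "writing", "listening"
    role: str                # e.g. "chair", "presenter", "attendee"
    preferences: dict = field(default_factory=dict)

def predict_need(ctx: ParticipantContext) -> str:
    """Toy rule-based mapping from context to a proactive room service."""
    if ctx.role == "presenter" and ctx.activity == "speaking":
        return "activate microphone and slide control"
    if ctx.activity == "writing":
        return "enable note-sharing service"
    return "no action"

ctx = ParticipantContext("p1", (2.5, 1.0), "speaking", "presenter")
print(predict_need(ctx))  # -> activate microphone and slide control
```

In practice such rules would be replaced by statistical models trained on the observed multimodal data, but the record structure illustrates the kind of context the room must maintain per participant.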


Keywords: ambient intelligence, cognitive-centric design, multimodal interfaces, context awareness, smart home, intelligent meeting room





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • A. L. Ronzhin (1)
  • A. A. Karpov (1)
  • I. S. Kipyatkova (1)
  1. St. Petersburg Institute for Informatics and Automation, St. Petersburg, Russia