Auto-Adaptive Interactive Systems for Active and Assisted Living Applications

Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 470)

Abstract

The objective of this work is to improve the efficacy, acceptance, adaptability and overall performance of Human-Machine Interaction (HMI) applications using a context-based approach. In HMI, we aim to define a general human model that may lead to principles and algorithms allowing more natural and effective interaction between humans and artificial agents. This is paramount for applications in the field of Active and Assisted Living (AAL). The challenge of user acceptance is of vital importance for future solutions, and is still one of the major reasons for reluctance to adopt cyber-physical systems in this domain. Our hypothesis is that we can overcome the limitations of current interaction functionalities by integrating contextual information, both to improve the accuracy of algorithms performing under very different conditions and to adapt interfaces and interaction patterns according to user intentions and emotional states.
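As a minimal sketch of what such context-driven adaptation could look like in practice, the following Python fragment selects an interaction pattern from a few contextual cues. All names, fields and thresholds here are hypothetical illustrations, not taken from the paper; a deployed AAL system would learn such rules from user feedback rather than hard-code them.

```python
from dataclasses import dataclass

# Hypothetical context record; the paper does not prescribe a schema.
@dataclass
class Context:
    ambient_noise: float      # 0..1, e.g. from a microphone array
    user_attention: float     # 0..1, e.g. from gaze estimation
    estimated_valence: float  # -1..1, e.g. from an emotion classifier

def select_interaction_pattern(ctx: Context) -> dict:
    """Pick output modality and pacing from the current context.

    Illustrative thresholds only: speech output is avoided in noisy
    rooms, pacing is slowed for negative affect, and a distracted
    user triggers a proactive prompt.
    """
    use_speech = ctx.ambient_noise < 0.4          # speech is unreliable in noise
    pace = "slow" if ctx.estimated_valence < 0.0 else "normal"
    prompt_user = ctx.user_attention < 0.3        # re-engage a distracted user
    return {
        "output_modality": "speech" if use_speech else "screen",
        "dialogue_pace": pace,
        "proactive_prompt": prompt_user,
    }

if __name__ == "__main__":
    # Noisy room, distracted user, slightly negative affect:
    ctx = Context(ambient_noise=0.7, user_attention=0.2, estimated_valence=-0.3)
    print(select_interaction_pattern(ctx))
    # -> {'output_modality': 'screen', 'dialogue_pace': 'slow', 'proactive_prompt': True}
```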

Keywords

Human-machine interaction · Context · Active and Assisted Living · Social agents · Adaptive systems


Copyright information

© IFIP International Federation for Information Processing 2016

Authors and Affiliations

  1. Laboratory of Automatic and Systems, Instituto Pedro Nunes, Coimbra, Portugal
  2. Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, Portugal
  3. Khalifa University of Science, Technology and Research, Abu Dhabi, UAE
