Multimodal Recognition of Emotions Using Physiological Signals with the Method of Decision-Level Fusion for Healthcare Applications

  • Chaka Koné
  • Imen Meftah Tayari
  • Nhan Le-Thanh
  • Cecile Belleudy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9102)

Abstract

Automatic emotion recognition greatly enhances the development of human/machine dialogue: it allows computers to determine the emotion felt by the user and to adapt their behavior accordingly. This paper presents a new method for fusing signals for the multimodal recognition of eight basic emotions from physiological signals. After a learning phase in which an emotion database is constructed, we apply the recognition algorithm to each modality separately. We then merge these individual decisions with a decision-level fusion approach to improve the recognition rate. Experiments show that the proposed method yields high-accuracy emotion recognition, reaching a recognition rate of 81.69% under certain conditions.
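
The abstract describes applying a unimodal recognition algorithm to each physiological signal and then merging the per-modality decisions at the decision level. As a minimal sketch of such a fusion step, assuming weighted majority voting, the snippet below combines hypothetical per-modality emotion labels into one final label; the modality names, weights, and labels are illustrative assumptions, not the authors' actual algorithm.

    # Minimal sketch of decision-level fusion by weighted majority voting.
    # Modality names, weights, and emotion labels are illustrative assumptions;
    # the paper's exact fusion rule is not specified in this abstract.
    from collections import Counter
    from typing import Dict, Optional

    def fuse_decisions(per_modality_labels: Dict[str, str],
                       weights: Optional[Dict[str, float]] = None) -> str:
        """Merge one emotion decision per modality (e.g. ECG, EMG, skin conductance)
        into a single label by weighted majority voting."""
        scores = Counter()
        for modality, label in per_modality_labels.items():
            w = 1.0 if weights is None else weights.get(modality, 1.0)
            scores[label] += w
        # The label with the highest accumulated weight wins; ties resolve arbitrarily.
        return scores.most_common(1)[0][0]

    # Usage: two of three unimodal classifiers agree, so fusion returns their label.
    decisions = {"ECG": "joy", "EMG": "joy", "skin_conductance": "surprise"}
    print(fuse_decisions(decisions))  # -> joy

In a weighted variant, each modality's weight could reflect its standalone recognition accuracy measured during the learning phase, so that more reliable signals dominate the vote.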

Keywords

Signal fusion method · Basic emotions · Multimodal detection · Physiological signals

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Chaka Koné (1)
  • Imen Meftah Tayari (2)
  • Nhan Le-Thanh (3)
  • Cecile Belleudy (1)
  1. University of Nice Sophia Antipolis, LEAT Laboratory, CNRS UMR 7248, Sophia Antipolis, France
  2. REGIM Laboratory, University of Sfax, Sfax, Tunisia
  3. University of Nice Sophia Antipolis, I3S Laboratory, CNRS UMR 7271, Sophia Antipolis, France