Hybrid Fusion Approach for Detecting Affects from Multichannel Physiology

  • Md. Sazzad Hussain
  • Rafael A. Calvo
  • Payam Aghaei Pour
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6974)

Abstract

Bringing emotional intelligence to computer interfaces is one of the primary goals of affective computing. Achieving this goal requires detecting emotions, often from multichannel physiological and/or behavioral modalities. While most affective computing studies report high affect detection rates from physiological data, there is no consensus on which methodology, in terms of feature selection or classification, works best for this type of data. This study presents a framework for fusing physiological features from multiple channels using machine learning techniques to improve the accuracy of affect detection. A hybrid fusion model is proposed, based on a weighted majority vote technique that integrates the decisions of individual channels with feature-level fusion. The results show that decision fusion achieves higher classification accuracy for affect detection than either the individual channels or feature-level fusion; the highest performance, however, is achieved by the hybrid fusion model.
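
The following is a minimal, hypothetical sketch of the fusion scheme described above: one classifier is trained per physiological channel and their decisions are combined by a weighted majority vote; a feature-level classifier trained on the concatenated features then joins the vote in the hybrid model. The channel names, the choice of classifier, and the accuracy-based weights are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of decision, feature-level, and hybrid fusion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 2

# Synthetic stand-ins for features extracted from three physiological channels.
channels = {
    "ECG": rng.normal(size=(n_samples, 8)),
    "EMG": rng.normal(size=(n_samples, 6)),
    "GSR": rng.normal(size=(n_samples, 4)),
}
y = rng.integers(0, n_classes, size=n_samples)  # affect labels (e.g. high vs. low arousal)
idx_train, idx_val = train_test_split(np.arange(n_samples), test_size=0.3, random_state=0)

# Train one classifier per channel; each channel votes for its predicted class,
# weighted by its validation accuracy (in practice the weights would come from
# a separate held-out fold).
tally = np.zeros((len(idx_val), n_classes))
for name, X in channels.items():
    clf = LogisticRegression(max_iter=1000).fit(X[idx_train], y[idx_train])
    pred = clf.predict(X[idx_val])
    weight = accuracy_score(y[idx_val], pred)
    tally[np.arange(len(idx_val)), pred] += weight

# Feature-level fusion: a single classifier on the concatenated feature vector.
X_all = np.hstack(list(channels.values()))
fusion = LogisticRegression(max_iter=1000).fit(X_all[idx_train], y[idx_train])
fusion_pred = fusion.predict(X_all[idx_val])
fusion_weight = accuracy_score(y[idx_val], fusion_pred)

# Decision fusion: weighted majority vote over the per-channel classifiers only.
decision_pred = tally.argmax(axis=1)

# Hybrid fusion: the feature-level classifier casts one additional weighted vote.
hybrid_tally = tally.copy()
hybrid_tally[np.arange(len(idx_val)), fusion_pred] += fusion_weight
hybrid_pred = hybrid_tally.argmax(axis=1)

print("decision fusion accuracy:", accuracy_score(y[idx_val], decision_pred))
print("hybrid fusion accuracy:  ", accuracy_score(y[idx_val], hybrid_pred))
```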

Keywords

Affective computing, physiology, multimodality, feature extraction, machine learning, information fusion


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Md. Sazzad Hussain (1, 2)
  • Rafael A. Calvo (2)
  • Payam Aghaei Pour (2)

  1. National ICT Australia (NICTA), Eveleigh, Australia
  2. School of Electrical and Information Engineering, University of Sydney, Australia