Hybrid Fusion Approach for Detecting Affects from Multichannel Physiology
Bringing emotional intelligence to computer interfaces is one of the primary goals of affective computing. This goal requires detecting emotions, often from multichannel physiological and/or behavioral modalities. While most affective computing studies report high affect detection rates from physiological data, there is no consensus on which methodology, in terms of feature selection or classification, works best for this type of data. This study presents a framework for fusing physiological features from multiple channels using machine learning techniques to improve the accuracy of affect detection. A hybrid fusion scheme is proposed, based on a weighted majority vote that integrates the decisions from the individual channels with the decision from feature-level fusion. The results show that decision fusion achieves higher classification accuracy for affect detection than either the individual channels or feature-level fusion alone; the highest performance, however, is achieved by the hybrid fusion model.
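The weighted majority vote described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each channel's classifier emits a discrete affect label, that each channel carries a weight (e.g., its validation accuracy), and that the hybrid scheme adds the feature-level fusion classifier as one more weighted voter. All names, labels, and weights below are illustrative.

```python
# Hypothetical sketch of decision-level fusion by weighted majority vote.
# Each physiological channel contributes one predicted label and a weight.
from collections import defaultdict

def weighted_majority_vote(decisions, weights):
    """Return the label whose supporting channels have the largest total weight."""
    scores = defaultdict(float)
    for label, w in zip(decisions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Illustrative per-channel decisions (e.g., EDA, ECG, EMG) and weights.
channel_decisions = ["aroused", "neutral", "aroused"]
channel_weights = [0.72, 0.65, 0.70]

# Hybrid fusion: the feature-level fusion classifier votes alongside
# the individual channels (weight is again illustrative).
feature_fusion_decision = "aroused"
feature_fusion_weight = 0.78

hybrid_label = weighted_majority_vote(
    channel_decisions + [feature_fusion_decision],
    channel_weights + [feature_fusion_weight],
)
# "aroused" wins: 0.72 + 0.70 + 0.78 = 2.20 vs. 0.65 for "neutral"
```

The same `weighted_majority_vote` function covers both plain decision fusion (channels only) and the hybrid scheme (channels plus the feature-level classifier), differing only in the voter list passed in.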
Keywords: Affective computing; physiology; multimodality; feature extraction; machine learning; information fusion