Affect representation and recognition in 3D continuous valence–arousal–dominance space
Research on human affect recognition has shifted its focus from the six basic emotions to complex affect recognition in continuous two- or three-dimensional space, driven by the following challenges: (i) the difficulty of representing and analyzing a large number of emotions in one framework, (ii) the problem of representing complex emotions within such a framework, (iii) the lack of validation of the framework through measured signals, and (iv) the limited applicability of existing frameworks to other aspects of affective computing. This paper presents a Valence–Arousal–Dominance (VAD) framework for representing emotions, capable of placing complex emotions in a continuous 3D space. To validate the model, an affect recognition technique is proposed that analyzes spontaneous physiological (EEG) and visual cues. The DEAP dataset, a multimodal emotion dataset containing video and physiological signals together with valence, arousal, and dominance ratings, is used for multimodal analysis and recognition of human emotions. The results support the correctness and sufficiency of the proposed framework. The model is also compared with two-dimensional models, and its capacity to represent many more complex emotions is discussed.
Keywords: Affect representation · Emotion recognition · Valence · Arousal · Dominance · Physiological signals · EEG · Classification and clustering of emotions
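The core idea of the VAD framework is that every emotion, simple or complex, corresponds to a point in a continuous 3D valence–arousal–dominance space, so recognized ratings can be mapped to emotion labels by proximity. A minimal sketch of this idea follows; the prototype coordinates below are illustrative placeholders on the 1–9 self-assessment scale used by DEAP, not values taken from the paper.

```python
import math

# Hypothetical prototype coordinates (valence, arousal, dominance) on a
# 1-9 rating scale; chosen for illustration only, not from the paper.
PROTOTYPES = {
    "happy":   (7.5, 6.5, 6.0),
    "angry":   (2.0, 7.5, 6.5),
    "sad":     (2.5, 3.0, 3.0),
    "relaxed": (7.0, 2.5, 5.5),
}

def nearest_emotion(vad):
    """Label a (valence, arousal, dominance) triple with the emotion whose
    prototype point is closest in Euclidean distance in the 3D space."""
    return min(PROTOTYPES, key=lambda name: math.dist(vad, PROTOTYPES[name]))

# Example: a high-valence, moderately aroused, dominant rating
print(nearest_emotion((7.8, 6.0, 6.2)))  # -> happy
```

Because the space is continuous, intermediate or blended affective states simply fall between prototypes, which is the representational advantage the paper claims over discrete six-emotion schemes.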