Abstract
In this work, we conduct two experiments: the first addresses recognition of emotional (sadness, happiness, disgust, fear, anger, shame) and neutral facial expressions, and the second investigates how emotional audio (crying, laughing) affects recognition of emotional (sad, happy) and neutral facial expressions. Eye-movement data in both experiments are recorded by an SMI eye tracker at a 120 Hz sampling frequency. A hidden Markov model (HMM) is then applied to extract the fixation patterns from the data. For each emotional block, two HMMs are learned, corresponding to the emotional and neutral faces, respectively. The HMMs are trained on 70% of the saccade data and tested on the remaining 30%. The first experiment indicates that saccade patterns relate not only to the task but also to the emotional stimuli and context. In particular, males and females differ significantly: females are more easily affected by sad facial expressions (a negative emotion). The second experiment shows that emotional audio has a greater impact on females than on males when subjects recognize neutral facial expressions. This implies that men and women also differ significantly in audio-modulated facial expression cognition. The findings of this study may still be inconclusive, possibly as a result of methodological differences.
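The classification step described above, in which a held-out fixation sequence is scored under both the emotional-face HMM and the neutral-face HMM, can be sketched with the forward algorithm. The snippet below is a minimal illustration, not the paper's implementation: it assumes fixations have already been coded into discrete region-of-interest (ROI) symbols (0 = eyes, 1 = nose, 2 = mouth), and all model parameters are invented for demonstration rather than estimated from the experimental data.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm for numerical stability."""
    alpha = pi * B[:, obs[0]]          # initial step
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                     # rescale to avoid underflow
    return loglik

# Illustrative parameters: 2 hidden states, 3 ROI symbols (eyes, nose, mouth).
pi = np.array([0.6, 0.4])
A_emotional = np.array([[0.7, 0.3], [0.2, 0.8]])
B_emotional = np.array([[0.2, 0.2, 0.6],   # state favouring the mouth ROI
                        [0.5, 0.3, 0.2]])
A_neutral = np.array([[0.9, 0.1], [0.3, 0.7]])
B_neutral = np.array([[0.7, 0.2, 0.1],     # state favouring the eyes ROI
                      [0.2, 0.6, 0.2]])

seq = [2, 2, 0, 2, 2]  # a fixation sequence dominated by the mouth ROI
ll_e = forward_loglik(seq, pi, A_emotional, B_emotional)
ll_n = forward_loglik(seq, pi, A_neutral, B_neutral)
label = "emotional" if ll_e > ll_n else "neutral"
print(label)  # mouth-heavy scanning scores higher under the emotional model
```

In the actual study the transition and emission matrices would be learned from the 70% training portion of the saccade data (e.g. via Baum-Welch, i.e. EM for HMMs), and each test sequence would be assigned to whichever model gives the higher likelihood.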
Acknowledgements
This work was supported by the 973 Program (2015CB351703) and the National Natural Science Foundation of China (No. 61372152).
© 2017 Springer Nature Singapore Pte Ltd.
Han, Y., Chen, B., Zhang, X. (2017). Sex Difference of Saccade Patterns in Emotional Facial Expression Recognition. In: Sun, F., Liu, H., Hu, D. (eds) Cognitive Systems and Signal Processing. ICCSIP 2016. Communications in Computer and Information Science, vol 710. Springer, Singapore. https://doi.org/10.1007/978-981-10-5230-9_16
DOI: https://doi.org/10.1007/978-981-10-5230-9_16
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-5229-3
Online ISBN: 978-981-10-5230-9
eBook Packages: Computer Science, Computer Science (R0)