
Sex Difference of Saccade Patterns in Emotional Facial Expression Recognition

Conference paper: Cognitive Systems and Signal Processing (ICCSIP 2016)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 710)

Abstract

In this work, we conduct two experiments: the first examines recognition of emotional (sadness, happiness, disgust, fear, anger, shame) and neutral facial expressions; the second investigates how emotional audio (crying, laughing) affects recognition of emotional (sadness, happiness) and neutral facial expressions. Eye movement data in both experiments are recorded with an SMI eye tracker at a 120 Hz sampling frequency. A hidden Markov model (HMM) is then applied to extract fixation patterns from the data. For each emotional block, two HMMs are learned, corresponding to the emotional and neutral faces, respectively. The HMMs are trained on 70% of the saccade data and tested on the remaining 30%. The first experiment indicates that saccade patterns relate not only to the task but also to the emotional stimuli and context. In particular, males and females differ significantly: females are more easily affected by sad facial expressions (a negative emotion). The second experiment shows that emotional audio has a greater impact on females than on males when subjects recognize neutral facial expressions, implying that men and women also differ significantly in audio-modulated facial expression cognition. The findings of this study may still be inconclusive, possibly as a result of methodological differences.
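The classification scheme the abstract describes (learn one HMM per face type, then assign a held-out fixation sequence to whichever model gives it the higher likelihood) can be sketched as follows. This is a minimal illustration, not the paper's method: the region-of-interest (ROI) alphabet, the hand-set transition and emission parameters, and the test sequence are all hypothetical stand-ins for models that the authors actually fit with EM on real saccade data.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

# Hypothetical ROI alphabet: 0 = eyes, 1 = nose, 2 = mouth.
pi = np.array([0.6, 0.2, 0.2])          # fixations tend to start at the eyes
A_emotional = np.array([[0.2, 0.2, 0.6],   # stand-in "emotional faces" model:
                        [0.2, 0.2, 0.6],   # saccades drawn toward the mouth
                        [0.1, 0.1, 0.8]])
A_neutral   = np.array([[0.7, 0.2, 0.1],   # stand-in "neutral faces" model:
                        [0.6, 0.3, 0.1],   # fixations dwell on the eyes
                        [0.5, 0.3, 0.2]])
B = np.full((3, 3), 0.1)                # noisy emission: state i mostly
np.fill_diagonal(B, 0.8)                # emits its own ROI

# A held-out fixation sequence that lingers on the mouth.
test_seq = [0, 2, 2, 2, 1, 2, 2]
ll_emo = forward_loglik(test_seq, pi, A_emotional, B)
ll_neu = forward_loglik(test_seq, pi, A_neutral, B)
label = "emotional" if ll_emo > ll_neu else "neutral"
print(label)  # the mouth-heavy sequence fits the emotional model better
```

In the study itself the per-block models would be trained on 70% of the recorded saccade sequences (e.g. with Baum-Welch, as in the Bilmes tutorial cited below) and the likelihood comparison applied to the held-out 30%.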


References

  • Aviezer, H., et al.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)

    Article  Google Scholar 

  • Beaupré, M.G., Hess, U.: Cross-cultural emotion recognition among Canadian ethnic groups. J. Cross Cult. Psychol. 36(3), 355–370 (2005)

    Article  Google Scholar 

  • Bilmes, J.A.: A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Int. Comput. Sci. Inst. 4(510), 126 (1998)

    Google Scholar 

  • Broverman, I.K., et al.: Sex-role stereotypes: a current appraisal1. J. Soc. Issues 28(2), 59–78 (1972)

    Article  Google Scholar 

  • Chuk, T., Chan, A.B., Hsiao, J.: Hidden Markov model analysis reveals better eye movement strategies in face recognition. In: Proceedings of the 37th Annual Conference of the Cognitive Science Society (2015)

    Google Scholar 

  • Chuk, T., et al.: Understanding eye movements in face recognition using hidden Markov models. J. Vis. 14(11), 8 (2014)

    Article  Google Scholar 

  • Chuk, T., et al.: Understanding eye movements in face recognition with hidden Markov model. In: Proceedings of the 35th Annual Conference of the Cognitive Science Society (2013)

    Google Scholar 

  • Greene, M.R., et al.: Reconsidering Yarbus: a failure to predict observers’ task from eye movement patterns. Vis. Res. 62, 1–8 (2012)

    Article  Google Scholar 

  • Haji-Abolhassani, A., Clark, J.J.: An inverse Yarbus process: predicting observers’ task from eye movement patterns. Vis. Res. 103, 127–142 (2014)

    Article  Google Scholar 

  • Lavan, N., Lima, C.F., Harvey, H., et al.: I thought that I heard you laughing: contextual facial expressions modulate the perception of authentic laughter and crying. Cog. Emot. 29(5), 935–944 (2015)

    Article  Google Scholar 

  • Noller, P.: Video primacy—A further look. J. Nonverbal Behav. 9(1), 28–47 (1985)

    Article  Google Scholar 

  • Oates, T., et al.: Clustering time series with hidden markov models and dynamic time warping. In: Proceedings of the IJCAI-99 workshop on neural, symbolic and reinforcement learning methods for sequence learning. Citeseer (1999)

    Google Scholar 

  • Paulmann, S., Pell, M.D.: Contextual influences of emotional speech prosody on face processing: how much is enough? Cogn. Affect. Behav. Neurosci. 10(2), 230–242 (2010)

    Article  Google Scholar 

  • Petitjean, F., et al.: Dynamic Time Warping averaging of time series allows faster and more accurate classification. In: 2014 IEEE International Conference on Data Mining (ICDM). IEEE (2014)

    Google Scholar 

  • Robin, O., et al.: Gender influence on emotional responses to primary tastes. Physiol. Behav. 78(3), 385–393 (2003)

    Article  Google Scholar 

  • Schurgin, M., et al.: Eye movements during emotion recognition in faces. J. Vis. 14(13), 14 (2014)

    Article  Google Scholar 

  • Spalek, K., et al.: Sex-dependent dissociation between emotional appraisal and memory: a large-scale behavioral and fMRI study. J. Neurosci. 35(3), 920–935 (2015)

    Article  Google Scholar 

  • Van den Stock, J., et al.: Body expressions influence recognition of emotions in the face and voice. Emotion 7(3), 487 (2007)

    Article  Google Scholar 

  • van Hooff, J.C., et al.: The wandering mind of men: ERP evidence for gender differences in attention bias towards attractive opposite sex faces. Soc. Cogn. Affect. Neurosci. 6(4), 477–485 (2011)

    Article  Google Scholar 

  • Zeng, Z., et al.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31(1), 39–58 (2009)

    Article  Google Scholar 

Download references

Acknowledgements

This work was supported by the 973 Program (2015CB351703) and the National Natural Science Foundation of China (No. 61372152).

Corresponding author

Correspondence to Badong Chen.



Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Han, Y., Chen, B., Zhang, X. (2017). Sex Difference of Saccade Patterns in Emotional Facial Expression Recognition. In: Sun, F., Liu, H., Hu, D. (eds) Cognitive Systems and Signal Processing. ICCSIP 2016. Communications in Computer and Information Science, vol 710. Springer, Singapore. https://doi.org/10.1007/978-981-10-5230-9_16


  • DOI: https://doi.org/10.1007/978-981-10-5230-9_16

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-5229-3

  • Online ISBN: 978-981-10-5230-9

  • eBook Packages: Computer Science, Computer Science (R0)
