CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Physiological signals such as EEG and eye movements have emerged as promising modalities for emotion recognition due to their objectivity, high recognition accuracy, and cost-effectiveness. However, most existing methods fuse EEG and eye movement signals by concatenation or weighted summation, which can lose information and offers limited robustness to noise. To tackle this issue, we propose a Coordinated-representation Decision Fusion Network (CoDF-Net) to efficiently fuse the representations of EEG and eye movement signals. Specifically, CoDF-Net first learns personalized information by maximizing the correlation between the modalities. Next, a Decision-level Fusion Broad Learning System (DF-BLS) constructs multiple sub-systems and derives the final emotional state through an effective decision-making mechanism. To evaluate the proposed method, subject-dependent and subject-independent experiments are designed on two public datasets. Extensive experiments demonstrate that the proposed method outperforms both traditional approaches and current state-of-the-art methods, achieving accuracies of 94.09% and 91.62% in the subject-dependent setting and 87.04% and 83.87% in the subject-independent setting on the SEED-CHN and SEED-GER datasets, respectively. Moreover, experiments that add Gaussian noise with different standard deviations show that CoDF-Net is markedly more robust to noise.
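
The abstract describes two mechanisms at a high level: a coordinated-representation step that maximizes the correlation between the EEG and eye-movement modalities, and a decision-level fusion step (DF-BLS) that aggregates multiple sub-systems. As a rough illustration of the first step, the sketch below uses classical canonical correlation analysis (CCA) as a stand-in for the paper's correlation-maximizing objective; the feature dimensions (310-dimensional differential-entropy EEG features, 33-dimensional eye-movement features) and the number of components are illustrative assumptions, not values taken from the article.

```python
# Hedged sketch: classical CCA as a stand-in for the coordinated-representation
# step that maximizes inter-modality correlation. All shapes are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_eeg = rng.standard_normal((600, 310))  # e.g. 62 channels x 5 bands of DE features
X_eye = rng.standard_normal((600, 33))   # e.g. a 33-dim eye-movement feature vector

cca = CCA(n_components=20)
cca.fit(X_eeg, X_eye)                        # learn maximally correlated projections
Z_eeg, Z_eye = cca.transform(X_eeg, X_eye)   # coordinated representations

fused_input = np.hstack([Z_eeg, Z_eye])      # input for the decision-fusion stage
```

For the second step, the following sketch captures only the general decision-level fusion idea: train several lightweight sub-systems on bootstrap resamples and combine their outputs by majority vote. Ridge classifiers stand in for broad learning sub-systems, and plain voting stands in for the paper's decision-making mechanism; neither is the authors' exact design.

```python
# Hedged sketch of decision-level fusion: bagged sub-systems plus majority vote.
# RidgeClassifier is a placeholder for a broad learning sub-system.
import numpy as np
from sklearn.linear_model import RidgeClassifier

def fuse_and_predict(X_tr, y_tr, X_te, n_subsystems=10, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_subsystems):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap resample
        clf = RidgeClassifier(alpha=1.0).fit(X_tr[idx], y_tr[idx])
        votes.append(clf.predict(X_te))
    votes = np.stack(votes)  # shape: (n_subsystems, n_test)
    # Majority vote per test sample (labels assumed to be ints 0..K-1).
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Dummy usage with 3 emotion classes on synthetic 40-dim fused features:
rng = np.random.default_rng(1)
X = rng.standard_normal((600, 40))
y = rng.integers(0, 3, size=600)
print(fuse_and_predict(X[:500], y[:500], X[500:]).shape)  # (100,)
```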


Data Availability

The data used in this paper are sub-datasets of the SJTU Emotion EEG Dataset (SEED), which can be acquired at https://bcmi.sjtu.edu.cn/home/seed/downloads.html#seed-access-anchor.



Acknowledgements

This work was funded in part by the National Key Research and Development Program of China under Grant 2019YFA0706200, in part by the National Natural Science Foundation of China under Grants 62222603, 62076102, and 92267203, in part by the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2020B1515020041, and in part by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant 2019ZT08X214.

Author information


Corresponding author

Correspondence to Tong Zhang.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gong, X., Dong, Y. & Zhang, T. CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals. Int. J. Mach. Learn. & Cyber. 15, 1213–1226 (2024). https://doi.org/10.1007/s13042-023-01964-w

