Music-evoked emotion recognition based on cognitive principles inspired EEG temporal and spectral features

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Electroencephalography (EEG) based emotion recognition has attracted increasing attention in the field of human-computer interaction (HCI), but how to use cognitive principles to enhance emotion recognition models remains a challenge. This paper investigates the cognitive process of emotion and its application. First, a three-stage experimental paradigm with long-duration music stimuli was designed to evoke emotional responses, and EEG signals were recorded from 15 healthy adults while each listened to 16 music clips. A time-course analysis method for music-evoked emotions was then proposed to examine differences in brain activity: during music listening, spectral power increased markedly in the alpha band and decreased slightly in the high-frequency beta and gamma bands, and the time-course analysis revealed an inspiring–keeping–fading pattern across the different emotional states. Next, the most relevant EEG features were selected through temporal correlation analysis between EEG and music features. Finally, an emotion prediction system was built on these cognitively inspired EEG features. The binary classification accuracies were 66.8% for valence and 59.5% for arousal, and the 3-class accuracies were 45.9% for valence and 45.1% for arousal. These results suggest that cognitive principles can help build better emotion recognition systems, and that understanding the cognitive process can promote the development of artificial intelligence.
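The pipeline the abstract outlines — band-power EEG features, correlation-based selection against music features, and a supervised classifier — can be illustrated with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' implementation: the sampling rate, band edges, window length, classifier choice (an RBF-kernel SVM), and all variable names (X_raw, music_feature, etc.) are hypothetical placeholders.

```python
# Minimal sketch of the kind of pipeline described in the abstract
# (all parameters and data below are illustrative assumptions):
#   1) Welch band-power features per EEG window,
#   2) keep features whose time course correlates with a music feature,
#   3) classify binary valence with an SVM.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window, fs=FS):
    """window: (n_channels, n_samples) -> log band power per channel/band."""
    freqs, psd = welch(window, fs=fs, nperseg=fs, axis=-1)
    feats = [np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)  # length = n_channels * n_bands

# Random placeholder data standing in for windowed EEG recordings.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((120, 32, 2 * FS))  # 120 windows, 32 channels
y = rng.integers(0, 2, size=120)                # binary valence labels
music_feature = rng.standard_normal(120)        # e.g. loudness per window

X = np.stack([band_powers(w) for w in X_raw])

# Keep the k features whose time course best tracks the music feature,
# mirroring the temporal EEG-music correlation analysis.
k = 20
corrs = np.array([abs(pearsonr(X[:, j], music_feature)[0])
                  for j in range(X.shape[1])])
X_sel = X[:, np.argsort(corrs)[-k:]]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy: %.3f" % cross_val_score(clf, X_sel, y, cv=5).mean())
```

On real recordings one would compute these features per subject and evaluate with subject-wise cross-validation; with the random placeholder data above the score simply hovers around chance.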




Acknowledgements

This work was supported by the National Natural Science Foundation of China (61671187), Shenzhen Foundational Research Funding (JCYJ20150929143955341, JCYJ20150625142543470), and the Open Funding of the MOE-Microsoft Key Laboratory of Natural Language Processing and Speech (HIT.KLOF.20150xx, HIT.KLOF.20160xx). The authors are grateful to the anonymous reviewers for their constructive comments.

Author information

Corresponding author

Correspondence to Haifeng Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Bo, H., Ma, L., Liu, Q. et al. Music-evoked emotion recognition based on cognitive principles inspired EEG temporal and spectral features. Int. J. Mach. Learn. & Cyber. 10, 2439–2448 (2019). https://doi.org/10.1007/s13042-018-0880-z
