The Journal of Supercomputing, Volume 65, Issue 1, pp 274–286

Bridging the semantic gap in multimedia emotion/mood recognition for ubiquitous computing environment


Abstract

With the advent of the ubiquitous computing era, multimedia emotion and mood can serve as important clues for multimedia understanding, retrieval, recommendation, and other multimedia applications. Many issues in multimedia emotion recognition have been addressed by disciplines such as physiology, psychology, cognitive science, and musicology. Recently, many researchers have tried to uncover the relationship between multimedia content, such as images or music, and emotion in a variety of applications. In this paper, we introduce existing emotion models and acoustic features, and present a comparison of different emotion/mood recognition methods.
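A common family of emotion models referenced in this literature is dimensional: a stimulus is placed in a two-dimensional valence–arousal plane and then mapped to a coarse mood label. As a minimal illustrative sketch only (not the authors' method; the function name `mood_quadrant` and the quadrant labels are our own simplification of the circumplex idea), such a mapping might look like:

```python
# Illustrative sketch: map a point in the valence-arousal plane to one of
# four coarse mood quadrants. The labels below are a common simplification
# of dimensional emotion models, not a taxonomy taken from this paper.

def mood_quadrant(valence: float, arousal: float) -> str:
    """Classify a (valence, arousal) pair, each in [-1, 1], into a quadrant."""
    if valence >= 0 and arousal >= 0:
        return "happy/excited"    # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/anxious"    # negative valence, high arousal
    if valence < 0:
        return "sad/depressed"    # negative valence, low arousal
    return "calm/content"         # positive valence, low arousal

if __name__ == "__main__":
    print(mood_quadrant(0.7, 0.6))    # positive valence, high arousal
    print(mood_quadrant(-0.5, -0.4))  # negative valence, low arousal
```

In practice, the valence and arousal values themselves would be predicted from acoustic or visual features (e.g. by regression), which is where the feature sets and recognition methods surveyed in the paper come in.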

Keywords

Emotion/Mood recognition · Multimedia features · Semantic analysis · Ubiquitous computing



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. School of Electrical Engineering, Korea University, Seoul, Korea
  2. Division of Computer Engineering, Mokwon University, Daejeon, Korea
