Extracting Emotions from Music Data

  • Alicja Wieczorkowska
  • Piotr Synak
  • Rory Lewis
  • Zbigniew W. Raś
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3488)

Abstract

Music is not merely a set of sounds: it evokes emotions that listeners perceive subjectively. The growing amount of audio data available on CDs and on the Internet creates a need for content-based searching of these files, since a user may wish to find pieces in a specific mood. The goal of this paper is to develop tools for such a search. A method for the objective description (parameterization) of audio files is proposed, and experiments on a set of music pieces are described. The results are summarized in the concluding section.
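For illustration, the parameterization step described above can be sketched in a few lines of Python. The example below is not the feature set used in the paper; it is a minimal sketch assuming the open-source librosa library and a hypothetical input file example.wav, and it computes generic timbral descriptors (MFCC and spectral-centroid statistics) of the kind that content-based mood search typically relies on.

```python
import numpy as np
import librosa  # third-party audio library; assumed available


def parameterize(path: str) -> np.ndarray:
    """Return a fixed-length feature vector describing one audio file."""
    # Decode the file to a mono waveform at a fixed sample rate.
    y, sr = librosa.load(path, sr=22050, mono=True)

    # Mel-frequency cepstral coefficients: a standard timbre description.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Spectral centroid, which correlates with perceived brightness.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

    # Collapse the time axis: mean and standard deviation per descriptor,
    # so every file yields a vector of the same length.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        centroid.mean(axis=1), centroid.std(axis=1),
    ])


if __name__ == "__main__":
    vec = parameterize("example.wav")  # hypothetical file name
    print(vec.shape)                   # (28,): 2*13 MFCC stats + 2 centroid stats
```

In a full system, such a vector would be computed for every file in a collection and passed to a classifier trained on mood labels, enabling the kind of search by emotion the paper targets.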

Keywords

Single Subject, Musical Instrument, Audio Data, Music Piece, Human Listener

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Alicja Wieczorkowska (1)
  • Piotr Synak (1)
  • Rory Lewis (2)
  • Zbigniew W. Raś (1, 2)

  1. Polish-Japanese Institute of Information Technology, Warsaw, Poland
  2. Computer Science Department, University of North Carolina at Charlotte, Charlotte, USA
