
Multi-Label Classification of Emotions in Music

  • Alicja Wieczorkowska
  • Piotr Synak
  • Zbigniew W. Raś
Part of the Advances in Soft Computing book series (AINSC, volume 35)

Abstract

This paper addresses the problem of multi-label classification of emotions in musical recordings. The test data set contains 875 samples of 30 seconds each. The samples were manually labelled with 13 emotion classes, with no limit on the number of labels assigned to a sample. The experiments and test results are presented.
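To make the task setup concrete, the sketch below encodes variable-length label sets as a 13-column binary indicator matrix and trains one binary classifier per emotion (binary relevance). The synthetic feature matrix, placeholder label names, and the scikit-learn classifier are illustrative assumptions, not the features or learning scheme used in the paper.

```python
"""Minimal sketch of the multi-label setup described in the abstract.

Assumptions (not from the paper): synthetic 40-dimensional feature
vectors stand in for audio descriptors, label names are placeholders,
and binary relevance with logistic regression stands in for the
authors' actual learning scheme.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

EMOTIONS = [f"emotion_{i:02d}" for i in range(1, 14)]  # 13 placeholder classes

rng = np.random.default_rng(0)
X = rng.normal(size=(875, 40))  # one feature vector per 30-second sample

# Each sample carries a variable-size label set (here 1-3 labels, drawn
# at random); the task itself imposes no upper limit on labels per sample.
label_sets = [
    list(rng.choice(EMOTIONS, size=rng.integers(1, 4), replace=False))
    for _ in range(875)
]

# Encode the label sets as an 875 x 13 binary indicator matrix.
mlb = MultiLabelBinarizer(classes=EMOTIONS)
Y = mlb.fit_transform(label_sets)

# Binary relevance: one independent yes/no classifier per emotion, so a
# sample can be assigned any subset of the 13 classes at prediction time.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

predicted = clf.predict(X[:1])           # binary vector over the 13 emotions
print(mlb.inverse_transform(predicted))  # back to a label set, e.g. [('emotion_05',)]
```

Binary relevance is the simplest scheme that honours the "any number of labels per sample" constraint; methods that model label correlations (e.g., classifier chains) are a common refinement.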

Keywords

Audio Data · Music Information Retrieval · Audio Sample · Musical Recording · Ontology Graph



Copyright information

© Springer 2006

Authors and Affiliations

  • Alicja Wieczorkowska (1)
  • Piotr Synak (1)
  • Zbigniew W. Raś (2, 1, 3)

  1. Polish-Japanese Institute of Information Technology, Warsaw, Poland
  2. Computer Science Dept., University of North Carolina, Charlotte, USA
  3. Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
