
High-Level Libraries for Emotion Recognition in Music: A Review

  • Conference paper
Human-Computer Interaction (HCI-COLLAB 2018)

Abstract

This article presents a review of high-level libraries for recognizing emotions in digital music files. The main objective is to study and compare different high-level content-analysis libraries, presenting their main functionalities, with a focus on the extraction of the low- and high-level features relevant to classifying musical pieces with an affective classification model. In addition, we review works in which these libraries have been used to classify musical pieces emotionally, through the reconstruction of rhythmic and tonal features, and examine the automatic annotation strategies applied, which generally rely on machine learning techniques. For the comparative evaluation of the libraries, the attributes common to the chosen libraries were considered together with the attributes most representative of the music emotion recognition (MER) field. The comparison shows the current state of development of high-level libraries for MER and identifies the musical parameters related to emotions.
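Many of the affective classification models the review compares represent emotion on a two-dimensional valence/arousal plane (Russell's circumplex model) and map each quadrant to a basic emotion label. The sketch below is a hypothetical, minimal illustration of that quadrant mapping, not code from the paper or from any of the reviewed libraries; the label names and the [-1, 1] coordinate convention are assumptions.

```python
def classify_emotion(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to a quadrant label,
    following the common circumplex-style quadrant convention."""
    if valence >= 0 and arousal >= 0:
        return "happy"   # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry"   # negative valence, high arousal
    if valence < 0:
        return "sad"     # negative valence, low arousal
    return "calm"        # positive valence, low arousal

print(classify_emotion(0.7, 0.5))    # happy
print(classify_emotion(-0.6, -0.4))  # sad
```

In practice, the reviewed libraries first extract low-level audio features (tempo, mode, timbre, and so on) and use a trained model, rather than fixed thresholds, to place a piece on this plane.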



Acknowledgment

This work has been partially funded by the Spanish Government through contract TIN2015-72241-EXP.

Author information


Corresponding authors

Correspondence to Yesid Ospitia Medina, Sandra Baldassarri or José Ramón Beltrán.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Ospitia Medina, Y., Baldassarri, S., Beltrán, J.R. (2019). High-Level Libraries for Emotion Recognition in Music: A Review. In: Agredo-Delgado, V., Ruiz, P. (eds) Human-Computer Interaction. HCI-COLLAB 2018. Communications in Computer and Information Science, vol 847. Springer, Cham. https://doi.org/10.1007/978-3-030-05270-6_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-05269-0

  • Online ISBN: 978-3-030-05270-6

  • eBook Packages: Computer Science (R0)
