Abstract
This article presents a review of high-level libraries for recognizing emotions in digital music files. The main objective of the work is to study and compare different high-level content-analyzer libraries and their main functionalities, focusing on the extraction of low- and high-level features relevant to classifying musical pieces with an affective classification model. In addition, we review works in which these libraries have been used to classify musical pieces emotionally, through the reconstruction of rhythmic and tonal features, and the automatic annotation strategies applied, which generally incorporate machine learning techniques. For the comparative evaluation of the high-level libraries, we selected both the attributes common to the chosen libraries and the attributes most representative of the music emotion recognition (MER) field. The comparative evaluation makes it possible to identify the current state of development of high-level libraries in MER and to analyze the musical parameters that are related to emotions.
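The affective classification models discussed in the review typically map extracted audio features onto Russell's valence-arousal plane and assign one of four basic emotion quadrants. The sketch below illustrates that idea only; the feature weights and quadrant labels are illustrative placeholders chosen for this example, not values or functions taken from the paper or from any of the reviewed libraries.

```python
def quadrant(valence, arousal):
    """Classify a point on Russell's circumplex into a basic emotion quadrant."""
    if valence >= 0 and arousal >= 0:
        return "happy"      # high valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry"      # low valence, high arousal
    if valence < 0:
        return "sad"        # low valence, low arousal
    return "relaxed"        # high valence, low arousal

def features_to_affect(tempo_bpm, mode_major, rms_energy):
    """Toy mapping from rhythmic/tonal/dynamic features to (valence, arousal).

    The weights are hypothetical: faster tempo and higher energy raise
    arousal, while a major mode and higher energy raise valence.
    """
    arousal = (tempo_bpm - 100) / 100 + (rms_energy - 0.5)
    valence = (1.0 if mode_major else -1.0) * 0.5 + (rms_energy - 0.5) * 0.2
    return valence, arousal

# A fast, major-mode, energetic track lands in the high-valence,
# high-arousal quadrant.
v, a = features_to_affect(tempo_bpm=140, mode_major=True, rms_energy=0.7)
print(quadrant(v, a))  # happy
```

In practice the reviewed libraries compute such features from the audio signal itself (tempo, key/mode, loudness, timbre descriptors) and replace the hand-set weights above with a trained machine learning model.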
Acknowledgment
This work has been partially financed by the Spanish Government through contract TIN2015-72241-EXP.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Ospitia Medina, Y., Baldassarri, S., Beltrán, J.R. (2019). High-Level Libraries for Emotion Recognition in Music: A Review. In: Agredo-Delgado, V., Ruiz, P. (eds) Human-Computer Interaction. HCI-COLLAB 2018. Communications in Computer and Information Science, vol 847. Springer, Cham. https://doi.org/10.1007/978-3-030-05270-6_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-05269-0
Online ISBN: 978-3-030-05270-6