Symbolic and Acoustic: Multi-domain Music Emotion Modeling for Instrumental Music

  • Conference paper
  • In: Advanced Data Mining and Applications (ADMA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14179)

Abstract

Music Emotion Recognition (MER) involves the automatic identification of emotional elements within music tracks. It has garnered significant attention due to its broad applicability in the field of Music Information Retrieval, and it can also serve as an upstream task for many other human-related tasks such as emotional music generation and music recommendation. According to existing psychology research, music emotion is determined by multiple factors, such as the timbre, velocity, and structure of the music, and incorporating multiple factors into MER helps achieve more interpretable and finer-grained methods. However, most prior works were uni-domain and showed weak consistency between arousal modeling performance and valence modeling performance. Against this background, we designed a multi-domain emotion modeling method for instrumental music that combines symbolic analysis with acoustic analysis. Moreover, because music data are scarce and difficult to label, our multi-domain approach makes full use of limited data. Our approach was implemented and assessed on the publicly available piano dataset EMOPIA, yielding a notable improvement over our baseline model with a 2.4% increase in overall accuracy and establishing state-of-the-art performance.
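
To make the multi-domain idea concrete, the sketch below shows one common way to combine a symbolic branch with an acoustic branch for four-quadrant emotion classification (EMOPIA labels clips by Russell's four valence-arousal quadrants). This is a minimal illustration of late fusion, not the authors' architecture: the pooled feature inputs, the MLP branches, the concatenation-based fusion, and all dimensions are assumptions made for the example.

```python
# Minimal sketch of multi-domain (symbolic + acoustic) late fusion for
# 4-quadrant music emotion classification. NOT the paper's architecture:
# branch designs, dimensions, and the fusion scheme are illustrative
# assumptions only.
import torch
import torch.nn as nn

class MultiDomainEmotionClassifier(nn.Module):
    def __init__(self, acoustic_dim=128, symbolic_dim=64,
                 hidden_dim=256, num_classes=4):
        super().__init__()
        # Acoustic branch: e.g., pooled mel-spectrogram statistics.
        self.acoustic_branch = nn.Sequential(
            nn.Linear(acoustic_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.1)
        )
        # Symbolic branch: e.g., pooled MIDI event embeddings.
        self.symbolic_branch = nn.Sequential(
            nn.Linear(symbolic_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.1)
        )
        # Late fusion by concatenation, then a linear classifier over
        # the four Russell-quadrant emotion classes used by EMOPIA.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, acoustic_feats, symbolic_feats):
        a = self.acoustic_branch(acoustic_feats)
        s = self.symbolic_branch(symbolic_feats)
        return self.classifier(torch.cat([a, s], dim=-1))

# Usage with random stand-in features for a batch of 8 clips.
model = MultiDomainEmotionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 4])
```

In practice each branch would be a richer encoder (for instance, a CNN over spectrograms and a sequence model over MIDI events), but the point the abstract emphasizes is that both domains contribute to a single prediction, so scarce labeled data is used twice.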

K. Zhu and X. Zhang: These authors contributed equally to this work.

Acknowledgement

This work was supported by the Key Research and Development Program of Guangdong Province under Grant No. 2021B0101400003.

Author information

Corresponding author

Correspondence to Jianzong Wang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhu, K., Zhang, X., Wang, J., Cheng, N., Xiao, J. (2023). Symbolic and Acoustic: Multi-domain Music Emotion Modeling for Instrumental Music. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science, vol. 14179. Springer, Cham. https://doi.org/10.1007/978-3-031-46674-8_12

  • DOI: https://doi.org/10.1007/978-3-031-46674-8_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46673-1

  • Online ISBN: 978-3-031-46674-8

  • eBook Packages: Computer Science, Computer Science (R0)
