
Journal of Intelligent Information Systems, Volume 48, Issue 3, pp. 633–651

Labeling data and developing a supervised framework for Hindi music mood analysis

  • Braja Gopal Patra
  • Dipankar Das
  • Sivaji Bandyopadhyay

Abstract

Digitization has created a wide platform for music in the form of televisions, desktops, and other handheld devices, increasing both the reach of musical content and its impact on people. Music is often associated with distinct emotional content, generally referred to as music mood, and the literature on music content analysis frequently treats mood as an important piece of metadata. The present article addresses Hindi music mood classification, covering key issues such as taxonomy development, annotation, and automated mood classification. We annotated a total of 1540 music clips of 60 seconds each with one of five proposed mood classes derived from Russell’s circumplex model. We developed several supervised systems using classification algorithms such as Support Vector Machines and Decision Trees, as well as Feed-Forward Neural Networks. Our experiments reveal that features such as timbre, rhythm, and intensity are associated with enhanced classification accuracy. Overall, the results were satisfactory, and the Feed-Forward Neural Network based system achieved the maximum F-measure of 0.725 under 10-fold cross validation.
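The pipeline described in the abstract — precomputed audio features, a feed-forward neural network, and F-measure under 10-fold cross validation — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix here is random stand-in data for the timbre, rhythm, and intensity descriptors (which in the paper would come from a tool such as jAudio), and the network architecture is a hypothetical choice.

```python
# Sketch of the evaluation setup: 1540 clips, five mood classes,
# a feed-forward network, and macro F-measure via 10-fold CV.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1540, 30))    # stand-in for 30 audio features per clip
y = rng.integers(0, 5, size=1540)  # one of five mood-class labels per clip

# A single hidden layer is an assumption; the paper does not fix this here.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="f1_macro")
print(round(scores.mean(), 3))
```

With real audio features in place of the random matrix, `scores.mean()` would correspond to the averaged F-measure the abstract reports.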

Keywords

Arousal-valence · Feed-forward neural networks · Hindi songs · Intensity · Mood taxonomy · Music mood classification

Notes

Acknowledgments

The work reported in this paper is supported by a grant from the “Visvesvaraya Ph.D. Scheme for Electronics and IT”, funded by Media Lab Asia of the Ministry of Electronics and Information Technology (MeitY), Government of India.

References

  1. Agarwal, P., Karnick, H., & Raj, B. (2013). A Comparative Study of Indian and Western Music Forms. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 29–34).
  2. Bischoff, K., Firan, C.S., Paiu, R., Nejdl, W., Laurier, C., & Sordo, M. (2009). Music Mood and Theme Classification: A Hybrid Approach. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 657–662).
  3. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
  4. Cooper, D. (2000). The Cinema of Satyajit Ray: Between Tradition and Modernity. Cambridge University Press.
  5. Duncan, N., & Fox, M. (2005). Computer-aided music distribution: The future of selection, retrieval and transmission. First Monday, 10(4).
  6. Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384–392.
  7. Fu, Z., Lu, G., Ting, K.M., & Zhang, D. (2011). A survey of audio-based music classification and annotation. IEEE Transactions on Multimedia, 13(2), 303–319.
  8. Ghosh, M. (2002). Natyashastra (ascribed to Bharata Muni). Varanasi: Chowkhamba Sanskrit Series Office.
  9. Gopal, S., & Moorti, S. (2008). Global Bollywood: Travels of Hindi Song and Dance. University of Minnesota Press.
  10. Hampiholi, V. (2012). A method for music classification based on perceived mood detection for Indian Bollywood music. World Academy of Science, Engineering and Technology, 72, 507–514.
  11. Hevner, K. (1936). Experimental studies of the elements of expression in music. The American Journal of Psychology, 246–268.
  12. Homburg, H., Mierswa, I., Möller, B., Morik, K., & Wurst, M. (2005). A Benchmark Dataset for Audio Classification and Clustering. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 528–531).
  13. Hu, X. (2010). Music and mood: Where theory and reality meet. In Proceedings of the 2010 iConference.
  14. Hu, X., Downie, J.S., Laurier, C., Bay, M., & Ehmann, A.F. (2008). The 2007 MIREX Audio Mood Classification Task: Lessons Learned. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 462–467).
  15. Katayose, H., Imai, M., & Inokuchi, S. (1988). Sentiment extraction in music. In Proceedings of the 9th International Conference on Pattern Recognition (pp. 1083–1087).
  16. Kim, Y.E., Schmidt, E.M., Migneco, R., Morton, B.G., Richardson, P., Scott, J., Speck, J.A., & Turnbull, D. (2010). Music emotion recognition: A state of the art review. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 255–266).
  17. Koduri, G.K., & Indurkhya, B. (2010). A behavioral study of emotions in south Indian classical music and its implications in music recommendation systems. In Proceedings of the 2010 ACM Workshop on Social, Adaptive and Personalized Multimedia Interaction and Access (pp. 55–60).
  18. Kostek, B., & Plewa, M. (2013). Parametrisation and correlation analysis applied to music mood classification. International Journal of Computational Intelligence Studies, 2(1), 4–25.
  19. Laurier, C. (2011). Automatic Classification of Musical Mood by Content-Based Analysis. PhD dissertation, Universitat Pompeu Fabra, Spain.
  20. Laurier, C., Sordo, M., Serra, J., & Herrera, P. (2009). Music Mood Representations from Social Tags. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 381–386).
  21. Liebetrau, J., Schneider, S., & Jezierski, R. (2012). Application of free choice profiling for the evaluation of emotions elicited by music. In Proceedings of the 9th International Symposium on Computer Music Modeling and Retrieval (CMMR 2012): Music and Emotions (pp. 78–93).
  22. Liu, D., Lu, L., & Zhang, H.J. (2003). Automatic mood detection from acoustic music data. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
  23. Lu, L., Liu, D., & Zhang, H. (2006). Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 5–18.
  24. Wolfram Research Inc. (2005). Neural Networks: Train and Analyze Neural Networks to Fit Your Data. First Edition, Champaign. http://media.wolfram.com/documents/NeuralNetworksDocumentation.pdf.
  25. McKay, C., Fujinaga, I., & Depalle, P. (2005). jAudio: A feature extraction library. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 600–603).
  26. Patra, B.G., Das, D., & Bandyopadhyay, S. (2013a). Automatic Music Mood Classification of Hindi Songs. In Proceedings of the 3rd Workshop on Sentiment Analysis where AI meets Psychology (IJCNLP 2013), Nagoya, Japan (pp. 24–28).
  27. Patra, B.G., Das, D., & Bandyopadhyay, S. (2013b). Unsupervised Approach to Hindi Music Mood Classification. In Prasath, R., & Kathirvalavakumar, T. (Eds.), Mining Intelligence and Knowledge Exploration, LNAI 8284 (pp. 62–69).
  28. Patra, B.G., Das, D., & Bandyopadhyay, S. (2015a). Mood Classification of Hindi Songs based on Lyrics. In Proceedings of the 12th International Conference on Natural Language Processing (ICON 2015) (pp. 78–93).
  29. Patra, B.G., Das, D., & Bandyopadhyay, S. (2015b). Music Emotion Recognition System. In Proceedings of the International Symposium Frontiers of Research on Speech and Music (FRSM-2015), Indian Institute of Technology, Kharagpur, India (pp. 114–119).
  30. Patra, B.G., Maitra, P., Das, D., & Bandyopadhyay, S. (2015c). MediaEval 2015: Feed-Forward Neural Network based Music Emotion Recognition. In MediaEval 2015 Workshop, Wurzen, Germany.
  31. Plewa, M., & Kostek, B. (2013). Multidimensional Scaling Analysis Applied to Music Mood Recognition. In Audio Engineering Society Convention 134. Audio Engineering Society.
  32. Plewa, M., & Kostek, B. (2015). Music Mood Visualization Using Self-Organizing Maps. Archives of Acoustics, 40(4), 513–525.
  33. Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1988). Learning representations by back-propagating errors. Cognitive Modeling, 5(3).
  34. Russell, J. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.
  35. Scherer, K.R., & Zentner, M.R. (2001). Emotional effects of music: Production rules. In Music and Emotion: Theory and Research (pp. 361–392).
  36. Soleymani, M., Caro, M.N., Schmidt, E.M., Sha, C., & Yang, Y. (2013). 1000 Songs for Emotional Analysis of Music. In Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia (pp. 1–6). ACM.
  37. Thayer, R.E. (1989). The Biopsychology of Mood and Arousal. Oxford University Press.
  38. Trainor, L.J., & Schmidt, L.A. (2003). Processing emotions induced by music. In The Cognitive Neuroscience of Music (pp. 310–324).
  39. Trochidis, K., Delbé, C., & Bigand, E. (2011). Investigation of the relationships between audio features and induced emotions in Contemporary Western music. In Proceedings of the 8th Sound and Music Computing Conference.
  40. Tzanetakis, G., & Cook, P. (2002). Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5), 293–302.
  41. Ujlambkar, A.M., & Attar, V.Z. (2012). Mood classification of Indian popular music. In Proceedings of the CUBE International Information Technology Conference (pp. 278–283).
  42. Velankar, M.R., & Sahasrabuddhe, H.V. (2012). A Pilot Study of Hindustani Music Sentiments. In Proceedings of the 2nd Workshop on Sentiment Analysis where AI meets Psychology (COLING 2012) (pp. 91–98).
  43. Yang, Y., Lin, Y., Cheng, H., Liao, I., Ho, Y., & Chen, H.H. (2008a). Toward multi-modal music emotion classification. In Proceedings of Advances in Multimedia Information Processing (PCM 2008) (pp. 70–79).
  44. Yang, Y., Lin, Y., Su, Y., & Chen, H.H. (2008b). A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448–457.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
