Abstract
Emotion recognition has become a multi-disciplinary research area that has attracted great interest. Recognizing the emotion of audio data is useful for content-based search, mood detection, and similar applications. The goal of this paper is to describe a system that automatically recognizes the emotion of music. We apply Latent Dirichlet Allocation (LDA), a technique used for document classification, to the task of identifying emotion from music. The recognition process consists of three steps: first, ten distinct features are extracted from the music; second, the values of these features are clustered; and third, an LDA model is constructed for each emotion. Once the LDA models are constructed, the emotion of a given piece of music is identified. The model was tested on South Indian film music to recognize six emotions (happy, sad, angry, love, disgust, fear) and achieved an average accuracy of 80%.
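The three steps sketched in the abstract can be illustrated with a minimal pipeline. This is not the authors' implementation: the specific features, cluster count, topic count, and synthetic data below are all assumptions made for illustration. The idea follows the abstract: frame-level feature vectors are quantized into discrete "words" by clustering, each song becomes a bag of cluster labels, one LDA model is fit per emotion, and a new song is assigned to the emotion whose model scores it highest.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
N_FEATURES, N_WORDS = 10, 16  # ten audio features; 16 clusters act as "words"

def make_song(shift):
    # Synthetic stand-in for frame-level audio features of one song
    # (200 frames x 10 features); `shift` separates the two emotions.
    return rng.normal(loc=shift, scale=1.0, size=(200, N_FEATURES))

# Hypothetical two-emotion training set (the paper uses six emotions).
songs = {"happy": [make_song(0.0) for _ in range(5)],
         "sad":   [make_song(3.0) for _ in range(5)]}

# Step 2: cluster all feature vectors so each frame maps to a discrete word.
all_frames = np.vstack([s for group in songs.values() for s in group])
km = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0).fit(all_frames)

def bag_of_words(song):
    # Represent a song as word counts over the learned clusters.
    return np.bincount(km.predict(song), minlength=N_WORDS)

# Step 3: fit one LDA model per emotion on that emotion's songs.
models = {}
for emotion, group in songs.items():
    X = np.array([bag_of_words(s) for s in group])
    models[emotion] = LatentDirichletAllocation(
        n_components=3, random_state=0).fit(X)

def classify(song):
    # Pick the emotion whose LDA model gives the highest
    # approximate log-likelihood for the song's word counts.
    x = bag_of_words(song).reshape(1, -1)
    return max(models, key=lambda e: models[e].score(x))

print(classify(make_song(0.0)))
```

With well-separated synthetic emotions, an unseen "happy"-like song mostly uses clusters the happy-trained LDA assigns high probability to, so that model wins the likelihood comparison; the real system would substitute actual extracted audio features for the synthetic frames.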
© 2013 Springer India
Cite this paper
Arulheethayadharthani, S., Sridhar, R. (2013). Latent Dirichlet Allocation Model for Recognizing Emotion from Music. In: Kumar M., A., R., S., Kumar, T. (eds) Proceedings of International Conference on Advances in Computing. Advances in Intelligent Systems and Computing, vol 174. Springer, New Delhi. https://doi.org/10.1007/978-81-322-0740-5_58
DOI: https://doi.org/10.1007/978-81-322-0740-5_58
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-0739-9
Online ISBN: 978-81-322-0740-5