Abstract
Music plays a significant role in everyone's life. Faced with a large music library, people often struggle to decide which songs to listen to based on their current mood, and choosing manually is tedious and time-consuming. Various algorithms have been introduced to automate music-library organization, but the existing ones are slow and less accurate. The proposed system generates a playlist automatically from the user's facial expression, reducing the effort and time involved in producing one manually. The emotion-extraction algorithm achieves around 80–90% accuracy on real-time images and 95–100% on static images. It therefore yields better accuracy and computational time, and reduces preparation cost, compared with the algorithms covered in the literature survey. A playlist is then created based on the detected emotion.
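The chapter does not disclose implementation details, but the final step it describes (turning a detected facial emotion into a playlist) can be sketched as a simple lookup. The emotion labels, song titles, and fallback behavior below are illustrative assumptions, not the authors' actual method:

```python
# Hypothetical emotion-to-playlist mapping; labels and tracks are
# placeholders, not taken from the paper.
EMOTION_PLAYLISTS = {
    "happy": ["Upbeat Track A", "Upbeat Track B"],
    "sad": ["Mellow Track A", "Mellow Track B"],
    "angry": ["Calming Track A"],
    "neutral": ["Ambient Track A"],
}

def generate_playlist(detected_emotion: str) -> list[str]:
    """Return the playlist for the detected facial emotion,
    falling back to the neutral playlist for unknown labels."""
    return EMOTION_PLAYLISTS.get(detected_emotion, EMOTION_PLAYLISTS["neutral"])

# An upstream classifier (e.g. a CNN over face images) would supply the label.
print(generate_playlist("happy"))
```

In a full system the label passed to `generate_playlist` would come from the emotion-extraction model; the lookup table stands in for whatever ranking or recommendation logic the authors use.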
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Sunitha, M., Adilakshmi, T., Renuka (2023). Emotion-Based Music Recommendation System to Generate a Dynamic Playlist. In: Reddy, A.B., Nagini, S., Balas, V.E., Raju, K.S. (eds) Proceedings of Third International Conference on Advances in Computer Engineering and Communication Systems. Lecture Notes in Networks and Systems, vol 612. Springer, Singapore. https://doi.org/10.1007/978-981-19-9228-5_30
DOI: https://doi.org/10.1007/978-981-19-9228-5_30
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-9227-8
Online ISBN: 978-981-19-9228-5
eBook Packages: Intelligent Technologies and Robotics