
Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 612))


Abstract

Music plays a significant role in everyday life. Faced with a large music library, people are often unsure which songs to listen to for their current mood, and choosing manually is tedious and time-consuming. Various algorithms have been introduced to automate music library organization, but the existing approaches are slow and less accurate. The proposed system generates a playlist automatically from the user's facial expression, reducing the work and time involved in building one manually. The emotion-extraction algorithm achieves roughly 80–90% accuracy on real-time images and 95–100% on static images. It therefore yields better accuracy and computational time, and lowers design cost, compared with the algorithms discussed in the literature survey. A playlist is then generated from the detected emotion.
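The pipeline the abstract describes — detect an emotion from a face image, then build a playlist from it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_emotion` is a hypothetical stand-in for a real facial-expression classifier, and the emotion labels and track names are placeholders.

```python
# Sketch of the emotion-to-playlist mapping described in the abstract.
# Emotion labels and song titles are illustrative placeholders.

EMOTION_PLAYLISTS = {
    "happy":   ["Upbeat Track 1", "Upbeat Track 2"],
    "sad":     ["Mellow Track 1", "Mellow Track 2"],
    "angry":   ["Calming Track 1", "Calming Track 2"],
    "neutral": ["Ambient Track 1", "Ambient Track 2"],
}

def detect_emotion(image):
    """Placeholder for a facial-expression classifier (e.g. a CNN).

    A real implementation would analyze the image and return one of
    the emotion labels above; here the result is stubbed.
    """
    return "happy"

def generate_playlist(image):
    """Build a playlist from the emotion detected in a face image."""
    emotion = detect_emotion(image)
    # Fall back to a neutral playlist for any unrecognized emotion.
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])

print(generate_playlist(image=None))
```

In practice the dictionary lookup would be replaced by a query against the user's actual library, but the structure — classifier output keying into per-emotion song sets — matches the system the abstract outlines.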



Corresponding author

Correspondence to M. Sunitha.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.


Cite this paper

Sunitha, M., Adilakshmi, T., Renuka (2023). Emotion-Based Music Recommendation System to Generate a Dynamic Playlist. In: Reddy, A.B., Nagini, S., Balas, V.E., Raju, K.S. (eds) Proceedings of Third International Conference on Advances in Computer Engineering and Communication Systems. Lecture Notes in Networks and Systems, vol 612. Springer, Singapore. https://doi.org/10.1007/978-981-19-9228-5_30

