
Algorithmic composition for pop songs based on lyrics emotion retrieval

Published in Multimedia Tools and Applications

Abstract

Musical composition is difficult for most people because it requires both complex composition theory and the ability to translate artistic conception and emotion into music. A two-dimensional emotional plane defined by valence and arousal coordinates has been developed for this purpose. With the proposed algorithmic composition, music can be mapped to emotion based on emotion retrieval from a song's segments. Specifically, the proposed emotion-based algorithmic composition uses emotion retrieval from song lyrics to classify several musical idea segments and applies a mapping between musical and emotional aesthetics. To analyze the lyrics, the system automatically segments the sentences, calculates feature values from an emotional vocabulary, and then performs Support Vector Machine (SVM) classification based on a lyrics emotion dataset. The proposed Algorithmic Composition Based on Lyrics Emotion (ACBLE) for pop songs composes new songs from lyrics that have already been released. According to survey feedback, average satisfaction with the composed songs is 3.33. The system enables anyone without knowledge of music theory to compose a song easily. Several demos demonstrate the results. The proposed method can therefore be applied to pop music composition, background music, musical health, and educational music.
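For illustration, the lyric analysis pipeline summarized above (sentence segmentation, emotion-lexicon feature values, and SVM classification on the valence-arousal plane, followed by an emotion-to-music mapping) can be sketched as follows. This is a minimal sketch and not the authors' implementation: the jieba segmenter, scikit-learn's SVC, the toy emotion lexicon, the example sentences and labels, and the emotion-to-music table are all illustrative assumptions.

# Minimal sketch (assumptions, not the paper's code) of the lyric analysis
# pipeline: segment each lyric sentence, compute emotion-lexicon features on
# the valence-arousal plane, and classify the emotion with an SVM.
import jieba                                   # assumed Chinese word segmenter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC                    # Support Vector Machine classifier

# Hypothetical emotion lexicon: word -> (valence, arousal), both in [-1, 1].
EMOTION_LEXICON = {
    "快樂": (0.8, 0.6),    # happy
    "悲傷": (-0.7, -0.3),  # sad
    "憤怒": (-0.6, 0.8),   # angry
    "平靜": (0.3, -0.6),   # calm
}

def lyric_features(sentence):
    """Average valence/arousal of lexicon words found in one lyric sentence."""
    words = jieba.lcut(sentence)
    hits = [EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON]
    if not hits:
        return [0.0, 0.0]                      # no emotional vocabulary found
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return [valence, arousal]

# Toy training data standing in for the lyrics emotion dataset.
train_sentences = ["我感到快樂", "他十分悲傷", "她非常憤怒", "夜色十分平靜"]
train_labels = ["joy", "sadness", "anger", "calm"]

X = [lyric_features(s) for s in train_sentences]
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
classifier.fit(X, train_labels)

# Hypothetical emotion-to-music mapping used to select a musical idea segment.
MUSIC_MAPPING = {
    "joy":     {"tempo_bpm": 120, "mode": "major"},
    "sadness": {"tempo_bpm": 70,  "mode": "minor"},
    "anger":   {"tempo_bpm": 140, "mode": "minor"},
    "calm":    {"tempo_bpm": 80,  "mode": "major"},
}

emotion = classifier.predict([lyric_features("這首歌讓我感到快樂")])[0]
print(emotion, MUSIC_MAPPING[emotion])         # e.g. joy {'tempo_bpm': 120, ...}

In the paper's system, the classified lyric emotion analogously drives the mapping between lyric segments and pre-composed musical idea segments, which are then assembled into the pop-song arrangement.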




Acknowledgements

The authors gratefully acknowledge the support of the Ministry of Science and Technology of Taiwan under project 108-2511-H-424-001-MY3.

Author information

Corresponding author

Correspondence to Chih-Fang Huang.

Ethics declarations

Conflict of interest/competing interests

There is no conflict of interest for this article titled “Algorithmic Composition for Pop Songs Based on Lyrics Emotion Retrieval”.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Huang, CF., Yao, SH. Algorithmic composition for pop songs based on lyrics emotion retrieval. Multimed Tools Appl 81, 12421–12440 (2022). https://doi.org/10.1007/s11042-022-12408-y

