Multi-agent system application for music features extraction, meta-classification and context analysis

  • Javier Pérez-Marcos
  • Diego M. Jiménez-Bravo
  • Juan F. De Paz
  • Gabriel Villarrubia González
  • Vivian F. López
  • Ana B. Gil
Regular Paper


Manual music classification is a slow and costly process. Most recent work on automatic music classification, such as genre or emotion recognition, makes this process easier but focuses on a single task. In this work, a music multi-classification platform is presented. The platform is based on multi-agent systems, which allows the extraction, classification, and service tasks to be distributed among agents. It performs musical genre and emotion classification and provides context information about songs from social networks such as Twitter. The chosen methods, based on meta-classifiers, obtain strong results for both single-label and multi-label classification; in the multi-label case, they improve on the results of previous works.
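The multi-label approach described in the abstract (meta-classifiers applied to songs that carry several labels at once) can be sketched as follows. This is an illustrative example, not the authors' implementation: it uses scikit-learn's classifier chain with an AdaBoost base learner, in the spirit of the classifier-chain and boosting methods cited in the references, and all feature values and label counts are synthetic placeholders.

```python
# Illustrative sketch (not the paper's code): multi-label music-emotion
# tagging with a meta-classifier. A classifier chain turns one multi-label
# problem into a sequence of single-label ones, feeding each classifier's
# prediction to the next one in the chain.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)

# Pretend each row is a song described by 8 audio features
# (e.g. MFCC or spectral-contrast statistics)...
X = rng.normal(size=(200, 8))
# ...and each song carries 3 binary emotion labels simultaneously
# (synthetic labels loosely correlated with the features).
Y = (X[:, :3] + rng.normal(scale=0.5, size=(200, 3)) > 0).astype(int)

# Meta-classifier: AdaBoost as the base learner inside a classifier chain.
chain = ClassifierChain(AdaBoostClassifier(n_estimators=50), random_state=0)
chain.fit(X, Y)

pred = chain.predict(X[:5])
print(pred.shape)  # one row of 3 binary emotion tags per song
```

In a real pipeline the synthetic `X` would be replaced by extracted audio features and `Y` by annotated genre or emotion tags; the chain's label ordering can itself be tuned, since later classifiers see earlier predictions as extra features.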


Keywords: Music classification · Multi-agent system · Multi-label classification · Meta-classifiers · Musical genre · Musical emotions · Social networks



This work was supported by the Spanish Ministry of Economy and Competitiveness (Ministerio de Economía y Competitividad) and FEDER funds through the project SURF: Intelligent System for Integrated and Sustainable Management of Urban Fleets (TIN2015-65515-C4-3-R).


References

  1. Bajo J, Corchado JM (2009) Thomas: practical applications of agents and multiagent systems. Springer, Berlin, pp 512–513
  2. Baniya BK, Ghimire D, Lee J (2015) Automatic music genre classification using timbral texture and rhythmic content features. In: International conference on advanced communication technology (ICACT). IEEE, pp 434–443
  3. Bellifemine F, Poggi A, Rimassa G (2001) Developing multi-agent systems with JADE. Springer, Berlin, pp 89–103
  4. Bergstra J, Casagrande N, Erhan D, Eck D, Kégl B (2006) Aggregate features and AdaBoost for music classification. Mach Learn 65(2–3):473–484
  5. Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
  6. Chew E (2000) Towards a mathematical model of tonality. Ph.D. thesis
  7. Choi K, Fazekas G, Sandler M, Cho K (2017) Convolutional recurrent neural networks for music classification. In: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 2392–2396
  8. Cohn R (1997) Neo-Riemannian operations, parsimonious trichords, and their Tonnetz representations. J Music Theory 41(1):1
  9. Freund Y, Schapire RE (1999) A short introduction to boosting. J Jpn Soc Artif Intell 14(5):771–780
  10. Gómez LM, Cáceres MN (2018) Applying data mining for sentiment analysis in music. Springer, Cham, pp 198–205
  11. Gregori ME, Cámara JP, Bada GA (2006) A jabber-based multi-agent system platform. In: Proceedings of the fifth international joint conference on autonomous agents and multiagent systems (AAMAS '06). ACM Press, New York, p 1282
  12. Harte C, Sandler M, Gasser M (2006) Detecting harmonic change in musical audio. In: Proceedings of the 1st ACM workshop on audio and music computing multimedia (AMCMM '06), p 21
  13. Hauger D, Schedl M (2014) Exploring geospatial music listening patterns in microblog data. In: Lecture notes in computer science, vol 8382. Springer, pp 133–146
  14. Hong J, Deng H, Yan Q (2008) Tag-based artist similarity and genre classification. In: 2008 IEEE international symposium on knowledge acquisition and modeling workshop (KAM 2008). IEEE, pp 628–631
  15. Hu X, Choi K, Downie JS (2017) A framework for evaluating multimodal music mood classification. J Assoc Inf Sci Technol 68(2):273–285
  16. Hu X, Yang Y (2017) The mood of Chinese Pop music: representation and recognition. J Assoc Inf Sci Technol 68(8):1899–1910
  17. Jiang D-N, Lu L, Zhang H-J, Tao J-H, Cai L-H (2002) Music type classification by spectral contrast feature. In: Proceedings of the IEEE international conference on multimedia and expo. IEEE, pp 113–116
  18. Kim Y, Suh B, Lee K (2014) #Nowplaying the future Billboard. In: Proceedings of the first international workshop on social media retrieval and analysis (SoMeRA '14). ACM Press, New York, pp 51–56
  19. Li T, Ogihara M, Li Q (2003) A comparative study on content-based music genre classification. In: Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval (SIGIR '03). ACM Press, New York, p 282
  20. Luaces O, Díez J, Barranquero J, Del Coz JJ, Bahamonde A (2012) Binary relevance efficacy for multilabel classification. Prog Artif Intell 1:303–313
  21. Lunney GS (2014) Copyright on the internet: consumer copying and collectives. In: Frankel S, Gervais D (eds) The evolution and equilibrium of copyright in the digital age. Cambridge University Press, Cambridge, pp 285–311
  22. Mehrabian A (1997) Analysis of affiliation-related traits in terms of the PAD temperament model. J Psychol 131(1):101–117
  23. Nanni L, Costa YM, Lumini A, Kim MY, Baek SR (2016) Combining visual and acoustic features for music genre classification. Expert Syst Appl 45:108–117
  24. Ortony A, Clore GL, Collins A (1988) The cognitive structure of emotions. Cambridge University Press, Cambridge
  25. Poria S, Gelbukh A, Hussain A, Bandyopadhyay S, Howard N (2013) Music genre classification: a semi-supervised approach. In: Lecture notes in computer science, vol 7914. Springer, Berlin, Heidelberg, pp 254–263
  26. Read J, Pfahringer B, Holmes G, Frank E (2011) Classifier chains for multi-label classification. Mach Learn
  27. Rincon JA, Bajo J, Fernandez A, Julian V, Carrascosa C (2016) Using emotions for the development of human-agent societies. Front Inf Technol Electron Eng 17(4):325–337
  28. Rincon JA, Julian V, Carrascosa C (2015) An emotional-based hybrid application for human-agent societies. Springer, Cham, pp 203–213
  29. Schapire RE (2013) Explaining AdaBoost. In: Empirical inference. Springer, Berlin, pp 37–52
  30. Schedl M, Hauger D, Urbano J (2014) Harvesting microblogs for contextual music similarity estimation: a co-occurrence-based framework. Multimed Syst 20(6):693–705
  31. Seo JS, Lee S (2011) Higher-order moments for musical genre classification
  32. Tellegen A, Watson D, Clark LA (1999) On the dimensional and hierarchical structure of affect. Psychol Sci 10(4):297–303
  33. Theodoridis S, Chellappa R (2013) Academic Press library in signal processing: communications and radar signal processing. Academic Press
  34. Trohidis K, Tsoumakas G, Kalliris G, Vlahavas I (2011) Multi-label classification of music by emotion. EURASIP J Audio Speech Music Process 2011(1):4
  35. Tsoumakas G, Vlahavas I (2007) Random k-labelsets: an ensemble method for multilabel classification. In: Kok JN, Koronacki J, Mantaras RL, Matwin S, Mladenič D, Skowron A (eds) Machine learning: ECML 2007. Lecture notes in computer science, vol 4701. Springer, Berlin, Heidelberg
  36. Tzanetakis G, Essl G, Cook P (2001) Automatic musical genre classification of audio signals. In: Proceedings of the international symposium on music information retrieval (ISMIR)
  37. Yang KC, Huang CH, Yang C, Lin YS (2017) A study on music mood detection in online digital music database. In: PACIS 2017 proceedings
  38. Zato C, Villarrubia G, Sánchez A, Barri I, Rubión E, Fernández A, Rebate C, Cabo JA, Álamos T, Sanz J, Seco J, Bajo J, Corchado JM (2012) PANGEA—platform for automatic construction of organizations of intelligent agents. In: Advances in intelligent and soft computing, vol 151, pp 229–239
  39. Zheng F, Zhang G, Song Z (2001) Comparison of different implementations of MFCC. J Comput Sci Technol 16(6):582–589
  40. Zottesso RHD, Costa YMG, Bertolini D (2016) Music genre classification using visual features with feature selection. In: 2016 35th international conference of the Chilean computer science society (SCCC). IEEE, pp 1–6

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Automation, University of Salamanca, Salamanca, Spain