Interpretable Music Categorisation Based on Fuzzy Rules and High-Level Audio Features

  • Igor Vatolkin
  • Günter Rudolph
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Music classification helps to manage song collections, recommend new music, and understand the properties of genres and substyles. To date, most approaches rely either on low-level characteristics of the audio signal, which are hard to interpret, or on metadata, which are not always available and require considerable effort to filter for relevant information. A listener-friendly approach would rather benefit from high-level, meaningful characteristics. We have therefore designed a set of high-level audio features that can replace the baseline low-level feature set without a significant decrease in classification performance. However, many common classification methods transform the original feature dimensions or create complex models with low interpretability. The advantage of fuzzy classification is that it describes the properties of music categories in an intuitive, natural way. In this work, we explore the ability of a simple fuzzy classifier based on high-level features to predict six music genres and eight styles from our previous studies.
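To make the idea concrete, the sketch below shows what a rule-based fuzzy classifier over high-level audio features could look like. It is a minimal, hypothetical illustration only: the feature names, linguistic terms, membership-function parameters, and rules are invented for this example and are not taken from the paper or its experiments.

```python
# Hypothetical sketch of a fuzzy-rule genre classifier over normalised
# high-level audio features. All names, terms, and thresholds are invented
# for illustration; they do not reproduce the authors' feature set or rules.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Linguistic terms for each (normalised, 0..1) high-level feature.
TERMS = {
    "tempo":      {"slow": (0.0, 0.0, 0.3, 0.5), "fast": (0.4, 0.6, 1.0, 1.0)},
    "distortion": {"low":  (0.0, 0.0, 0.2, 0.4), "high": (0.3, 0.6, 1.0, 1.0)},
}

# Each rule: antecedent (feature -> linguistic term) and the genre it supports.
RULES = [
    ({"tempo": "fast", "distortion": "high"}, "Metal"),
    ({"tempo": "slow", "distortion": "low"},  "Classical"),
]

def classify(features):
    """Return the genre whose best-matching rule has the highest firing strength."""
    scores = {}
    for antecedent, genre in RULES:
        # Firing strength = minimum membership over all conditions (fuzzy AND).
        strength = min(
            trapezoid(features[name], *TERMS[name][term])
            for name, term in antecedent.items()
        )
        scores[genre] = max(scores.get(genre, 0.0), strength)
    return max(scores, key=scores.get)

print(classify({"tempo": 0.8, "distortion": 0.9}))  # -> "Metal"
```

Because each rule reads as an IF-THEN statement over linguistic terms ("IF tempo is fast AND distortion is high THEN Metal"), the resulting model remains directly readable by a listener, which is the interpretability argument made in the abstract.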

Keywords

Classification Model, Fuzzy Controller, Audio Signal, Linguistic Term, Fuzzy Classification

Notes

Acknowledgements

We thank the Klaus Tschira Foundation for the financial support.


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. TU Dortmund, Chair of Algorithm Engineering, Dortmund, Germany