
Similarity Clustering of Music Files According to User Preference

  • Bastian Tenbergen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4827)

Abstract

A plug-in for the machine learning environment Yale has been developed that automatically structures digital music corpora into similarity clusters using a self-organizing map (SOM), based on features extracted from the files of a test corpus. Perceptually similar music files are assigned to the same cluster. A human listener rated music files according to their subjective similarity; compared with this judgment, the system achieved a mean accuracy of 65.7%. The accuracy of the framework increases with the size of the music corpus, up to a maximum of 75%. The study shows that music files can be grouped into similarity clusters using only mathematical features extracted from the files themselves. This enables a variety of applications, such as reducing the search space in manual music comparison or content-based music recommendation.
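To make the approach concrete, the following is a minimal, self-contained sketch of the clustering idea described above: feature vectors extracted from music files are mapped onto a small self-organizing map so that similar files land on the same (or a nearby) map node. It does not reproduce the actual Yale plug-in or its feature extraction; the extract_features placeholder, the grid size, and all training parameters are illustrative assumptions.

# Sketch: cluster music files by mapping their feature vectors onto a SOM.
# The feature extractor and all parameters here are illustrative assumptions,
# not the plug-in described in the paper.

import numpy as np

def extract_features(path):
    """Placeholder for an audio feature extractor (e.g. spectral statistics).
    In the described system, features come from a Yale plug-in; here we return
    a pseudo-random vector so the sketch is runnable without audio files."""
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.normal(size=16)

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0):
    """Train a small SOM on the feature vectors (rows of `data`)."""
    rng = np.random.default_rng(0)
    h, w = grid
    dim = data.shape[1]
    weights = rng.normal(size=(h, w, dim))
    # Grid coordinates of each node, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

def assign_cluster(weights, x):
    """Return the SOM node (cluster) a feature vector is mapped to."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

if __name__ == "__main__":
    files = [f"song_{i}.mp3" for i in range(20)]   # hypothetical corpus
    data = np.array([extract_features(f) for f in files])
    som = train_som(data)
    for f, x in zip(files, data):
        print(f, "-> cluster", assign_cluster(som, x))

In the evaluation described above, the resulting node assignments would be compared against a human listener's similarity ratings; in this sketch the node coordinates simply serve as cluster labels.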

Keywords

Feature Extraction Method · Similarity Cluster · Audio Data · Test Corpus · Music Information Retrieval


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Bastian Tenbergen
  1. Human-Computer Interaction M.A. Program, State University of New York at Oswego, Oswego, NY 13126, USA. Formerly: Cognitive Science Bachelor Program, School of Human Sciences, University of Osnabrück, Germany
