Creating a Reliable Music Discovery and Recommendation System

Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 541)

Abstract

The aim of this chapter is to present problems related to creating a reliable music discovery system. The SYNAT database of audio files is used for the experiments. The files are divided into 22 classes corresponding to music genres of differing cardinalities. Of utmost importance for a reliable music recommendation system are the assignment of audio files to their appropriate genres and an optimal parameterization for music-genre recognition. Hence, the starting point is audio file filtering, which can be done automatically, but only to a limited extent, when based on low-level signal processing features. Therefore, a variety of parameterization techniques are briefly reviewed in the context of their suitability for music retrieval from a large music database. In addition, some significant problems related to choosing an excerpt of an audio file for acoustic analysis and parameterization are pointed out. Then, experiments are presented in which the songs bearing the greatest resemblance to the song in a given query are retrieved. In this way, a music recommendation system may be created that retrieves songs similar to each other in terms of their low-level feature description and genre membership. The experiments also provide a basis for more general observations and conclusions.
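
Purely as an illustration of the pipeline described above (parameterizing a fixed excerpt of each song, reducing the feature space, and retrieving the nearest neighbours of a query), a minimal Python sketch could look as follows. The MFCC-based parameterization, the excerpt offset and duration, and the use of librosa and scikit-learn are assumptions made for this sketch only; they are not taken from the chapter.

```python
# Illustrative sketch only -- not the chapter's implementation.
# Assumes: librosa for feature extraction, scikit-learn for PCA and k-NN,
# and a local list of audio file paths.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def feature_vector(path, offset=30.0, duration=15.0, n_mfcc=20):
    """Parameterize a fixed excerpt of a song with MFCC statistics."""
    y, sr = librosa.load(path, sr=22050, offset=offset, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Summarize the time-varying features by their mean and standard deviation.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def build_index(paths, n_components=10):
    """Extract features, reduce dimensionality with PCA, index them for k-NN search."""
    X = np.vstack([feature_vector(p) for p in paths])
    pca = PCA(n_components=n_components)          # requires len(paths) >= n_components
    X_reduced = pca.fit_transform(X)
    index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(X_reduced)
    return pca, index

def recommend(query_path, paths, pca, index):
    """Return the stored songs most similar to the query excerpt."""
    q = pca.transform(feature_vector(query_path).reshape(1, -1))
    _, neighbor_ids = index.kneighbors(q)
    return [paths[i] for i in neighbor_ids[0]]
```

The genre labels of the retrieved neighbours can then be compared with the genre of the query to check whether recommendations stay within the same class, in the spirit of the evaluation outlined in the abstract.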

Keywords

Music information retrieval · Music databases · Music parameterization · Feature vectors · Principal component analysis · Music classification

Acknowledgments

This research was conducted and partially funded within project No. SP/I/1/77065/10, ‘The creation of a universal, open, repository platform for the hosting and communication of networked knowledge resources for science, education and an open knowledge society’, which is part of the Strategic Research Program ‘Interdisciplinary systems of interactive scientific and technical information’ supported by the National Centre for Research and Development (NCBiR) in Poland.

The authors are very grateful to the reviewers for their comments and suggestions.


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Audio Acoustics Laboratory, Gdańsk University of Technology, Gdańsk, Poland
  2. Multimedia Systems Department, Gdańsk University of Technology, Gdańsk, Poland
