Music Recommendation

  • Òscar Celma
Chapter

Abstract

This chapter focuses on the recommendation problem in the music domain. Section 3.1 presents some common use cases in music recommendation. After that, Sect. 3.2 discusses user profiling and modelling, and how to link the elements of a user profile with music concepts. Then, Sect. 3.3 presents the main components used to describe the musical items, namely artists and songs. The existing music recommendation methods (collaborative filtering, content-based, context-based, and hybrid) and the pros and cons of each approach are presented in Sect. 3.4. Finally, Sect. 3.5 summarises the work presented and links it to the remaining chapters of the book.
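The abstract lists collaborative filtering among the recommendation methods the chapter covers. As a minimal illustration only (not from the chapter itself), the sketch below computes item-to-item similarity over a toy user/song play-count matrix using cosine similarity; all song names, users, and counts are hypothetical.

```python
import math

# Toy user-song play-count matrix. Rows are users; every user dict
# has the same song keys. All names and counts are illustrative.
plays = {
    "alice": {"song_a": 5, "song_b": 3, "song_c": 0},
    "bob":   {"song_a": 4, "song_b": 0, "song_c": 1},
    "carol": {"song_a": 1, "song_b": 1, "song_c": 5},
}

def cosine(u, v):
    """Cosine similarity between two equally-keyed count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def item_vector(song):
    # A column of the matrix: each user's play count for this song.
    return {user: counts[song] for user, counts in plays.items()}

def most_similar(song):
    # Item-based collaborative filtering in its simplest form:
    # the recommended song is the one whose play-count column is
    # closest (in cosine terms) to the seed song's column.
    songs = next(iter(plays.values())).keys()
    others = [s for s in songs if s != song]
    return max(others, key=lambda s: cosine(item_vector(song), item_vector(s)))
```

For example, `most_similar("song_b")` returns `"song_a"`, because listeners of `song_b` (alice, carol) overlap more with listeners of `song_a` than of `song_c`. Real systems replace the toy matrix with millions of users and use the neighbourhood of similar items to score unseen songs.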

Keywords

Gaussian Mixture Model · Collaborative Filter · Audio Feature · Musical Genre · Music Information Retrieval
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Preview

Unable to display preview. Download preview PDF.

References

  1. C. Baccigalupo and E. Plaza, “Poolcasting: a social web radio architecture for group customisation,” in Proceedings of the 3rd International Conference on Automated Production of Cross Media Content for Multi-Channel Distribution (AXMEDIS), pp. 115–122, IEEE Computer Society, 2007.
  2. A. Crossen, J. Budzik, and K. J. Hammond, “Flytrap: Intelligent group music recommendation,” in Proceedings of the 7th International Conference on Intelligent User Interfaces, (New York, NY, USA), pp. 184–185, ACM, 2002.
  3. S. J. Cunningham, D. Bainbridge, and A. Falconer, “More of an art than a science: Supporting the creation of playlists and mixes,” in Proceedings of 7th International Conference on Music Information Retrieval, (Victoria, Canada), pp. 240–245, 2006.
  4. T. W. Leong, F. Vetere, and S. Howard, “The serendipity shuffle,” in Proceedings of 19th Conference of the Computer-Human Interaction Special Interest Group, (Narrabundah, Australia), pp. 1–4, 2005.
  5. A. Uitdenbogerd and R. van Schyndel, “A review of factors affecting music recommender success,” in Proceedings of 3rd International Conference on Music Information Retrieval, (Paris, France), 2002.
  6. D. Jennings, Net, Blogs and Rock ’n’ Roll: How Digital Discovery Works and What it Means for Consumers. Boston, MA: Nicholas Brealey Publishing, 2007.
  7. M. Lesaffre, M. Leman, and J.-P. Martens, “A user-oriented approach to music information retrieval,” in Content-Based Retrieval, Dagstuhl Seminar Proceedings, 2006.
  8. F. Vignoli and S. Pauws, “A music retrieval system based on user driven similarity and its evaluation,” in Proceedings of the 6th International Conference on Music Information Retrieval, (London, UK), pp. 272–279, 2005.
  9. D. N. Sotiropoulos, A. S. Lampropoulos, and G. A. Tsihrintzis, “Evaluation of modeling music similarity perception via feature subset selection,” in User Modeling, vol. 4511 of Lecture Notes in Computer Science, (Berlin, Heidelberg), pp. 288–297, Springer, 2007.
  10. V. Sandvold, T. Aussenac, O. Celma, and P. Herrera, “Good vibrations: Music discovery through personal musical concepts,” in Proceedings of 7th International Conference on Music Information Retrieval, (Victoria, Canada), 2006.
  11. P. Kazienko and K. Musial, “Recommendation framework for online social networks,” in Advances in Web Intelligence and Data Mining, vol. 23 of Studies in Computational Intelligence, pp. 111–120, Springer, 2006.
  12. S. Baumann, B. Jung, A. Bassoli, and M. Wisniowski, “BluetunA: Let your neighbour know what music you like,” in CHI Extended Abstracts on Human Factors in Computing Systems, (New York, NY, USA), pp. 1941–1946, ACM, 2007.
  13. C. S. Firan, W. Nejdl, and R. Paiu, “The benefit of using tag-based profiles,” in Proceedings of the 2007 Latin American Web Conference (LA-WEB), (Washington, DC, USA), pp. 32–41, IEEE Computer Society, 2007.
  14. E. Perik, B. de Ruyter, P. Markopoulos, and B. Eggen, “The sensitivities of user profile information in music recommender systems,” in Proceedings of Privacy, Security, Trust, 2004.
  15. W. Chai and B. Vercoe, “Using user models in music information retrieval systems,” in Proceedings of 1st International Conference on Music Information Retrieval, (Berlin), 2000.
  16. B. S. Manjunath, P. Salembier, and T. Sikora, Introduction to MPEG-7: Multimedia Content Description Interface. Wiley, 2002.
  17. C. Tsinaraki and S. Christodoulakis, “Semantic user preference descriptions in MPEG-7/21,” in Hellenic Data Management Symposium, 2005.
  18. Y. Raimond, S. A. Abdallah, M. Sandler, and F. Giasson, “The music ontology,” in Proceedings of the 8th International Conference on Music Information Retrieval, (Vienna, Austria), 2007.
  19. F. Pachet, Knowledge Management and Musical Metadata. Idea Group, 2005.
  20. B. Whitman and S. Lawrence, “Inferring descriptions and similarity for music from community metadata,” in Proceedings of International Computer Music Conference, (Goteborg, Sweden), 2002.
  21. M. Schedl, P. Knees, T. Pohle, and G. Widmer, “Towards an automatically generated music information system via web content mining,” in Proceedings of the 30th European Conference on Information Retrieval (ECIR’08), (Glasgow, Scotland), 2008.
  22. P. Knees, M. Schedl, and T. Pohle, “A deeper look into web-based classification of music artists,” in Proceedings of 2nd Workshop on Learning the Semantics of Audio Signals, (Paris, France), 2008.
  23. B. Whitman, “Semantic rank reduction of music audio,” in Proceedings of the Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 135–138, 2003.
  24. S. Baumann and O. Hummel, “Enhancing music recommendation algorithms using cultural metadata,” Journal of New Music Research, vol. 34, no. 2, 2005.
  25. G. Geleijnse and J. Korst, “Web-based artist categorization,” in Proceedings of the 7th International Conference on Music Information Retrieval, (Victoria, Canada), pp. 266–271, 2006.
  26. M. Zadel and I. Fujinaga, “Web services for music information retrieval,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  27. M. Schedl, P. Knees, and G. Widmer, “A web-based approach to assessing artist similarity using co-occurrences,” in Proceedings of 4th International Workshop on Content-Based Multimedia Indexing, (Riga, Latvia), 2005.
  28. M. Schedl, P. Knees, and G. Widmer, “Improving prototypical artist detection by penalizing exorbitant popularity,” in Proceedings of 3rd International Symposium on Computer Music Modeling and Retrieval, pp. 196–200, 2005.
  29. T. Pohle, P. Knees, M. Schedl, and G. Widmer, “Building an interactive next-generation artist recommender based on automatically derived high-level concepts,” in Proceedings of the 5th International Workshop on Content-Based Multimedia Indexing, (Bordeaux, France), 2007.
  30. C. Baccigalupo, J. Donaldson, and E. Plaza, “Uncovering affinity of artists to multiple genres from social behaviour data,” in Proceedings of the 9th Conference on Music Information Retrieval, pp. 275–280, 2008.
  31. F. Pachet, G. Westermann, and D. Laigre, “Musical data mining for electronic music distribution,” in Proceedings of 1st International Conference on Web Delivering of Music, 2001.
  32. D. Ellis, B. Whitman, A. Berenzweig, and S. Lawrence, “The quest for ground truth in musical artist similarity,” in Proceedings of 3rd International Symposium on Music Information Retrieval, (Paris), pp. 170–177, 2002.
  33. G. Geleijnse, M. Schedl, and P. Knees, “The quest for ground truth in musical artist tagging in the social web era,” in Proceedings of the 8th International Conference on Music Information Retrieval, (Vienna, Austria), 2007.
  34. A. Berenzweig, B. Logan, D. Ellis, and B. Whitman, “A large-scale evaluation of acoustic and subjective music similarity measures,” in Proceedings of 4th International Symposium on Music Information Retrieval, (Baltimore, MD), 2003.
  35. D. Turnbull, L. Barrington, and G. Lanckriet, “Five approaches to collecting tags for music,” in Proceedings of the 9th International Conference on Music Information Retrieval, (Philadelphia, PA), pp. 225–230, 2008.
  36. G. Koutrika, F. A. Effendi, Z. Gyöngyi, P. Heymann, and H. Garcia-Molina, “Combating spam in tagging systems,” in Proceedings of the 3rd International Workshop on Adversarial Information Retrieval on the Web, (New York, NY), pp. 57–64, ACM, 2007.
  37. L. R. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice-Hall, 1993.
  38. J.-J. Aucouturier and F. Pachet, “A scale-free distribution of false positives for a large class of audio similarity measures,” Pattern Recognition, vol. 41, no. 1, pp. 272–284, 2008.
  39. J.-J. Aucouturier and F. Pachet, “Improving timbre similarity: How high’s the sky,” Journal of Negative Results in Speech and Audio Science, vol. 1, no. 1, 2004.
  40. P. Herrera, V. Sandvold, and F. Gouyon, “Percussion-related semantic descriptors of music audio files,” in Proceedings of 25th International AES Conference, (London, UK), 2004.
  41. K. Yoshii, M. Goto, and H. G. Okuno, “Automatic drum sound description for real-world music using template adaptation and matching methods,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  42. N. Chetry, M. Davies, and M. Sandler, “Musical instrument identification using LSF and K-means,” in Proceedings of the 118th Convention of the AES, (Barcelona, Spain), 2005.
  43. F. Gouyon and S. Dixon, “A review of automatic rhythm description systems,” Computer Music Journal, vol. 29, pp. 34–54, 2005.
  44. J. P. Bello and M. Sandler, “Phase-based note onset detection for music signals,” in Proceedings of IEEE ICASSP, 2003.
  45. J. P. Bello, C. Duxbury, M. E. Davies, and M. B. Sandler, “On the use of phase and energy for musical onset detection in the complex domain,” IEEE Signal Processing Letters, pp. 533–556, 2004.
  46. M. E. P. Davies and M. D. Plumbley, “Causal tempo tracking of audio,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  47. F. Gouyon and S. Dixon, “Dance music classification: A tempo-based approach,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  48. S. Dixon, F. Gouyon, and G. Widmer, “Towards characterization of music via rhythmic patterns,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  49. R. Dannenberg, “Toward automated holistic beat tracking, music analysis, and understanding,” in Proceedings of 6th International Conference on Music Information Retrieval, (London, UK), 2005.
  50. J. Pickens, J. P. Bello, G. Monti, T. Crawford, M. Dovey, M. Sandler, and D. Byrd, “Polyphonic score retrieval using polyphonic audio queries: A harmonic modelling approach,” in Proceedings of 3rd International Conference on Music Information Retrieval, pp. 140–149, 2002.
  51. E. Gómez, “Tonal description of polyphonic audio for music content processing,” INFORMS Journal on Computing, Special Cluster on Computation in Music, vol. 18, no. 3, 2006.
  52. E. Gómez and P. Herrera, “Estimating the tonality of polyphonic audio files: Cognitive versus machine learning modelling strategies,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  53. C. A. Harte and M. Sandler, “Automatic chord identification using a quantised chromagram,” in Proceedings of the 118th Convention of the AES, (Barcelona, Spain), 2005.
  54. J. P. Bello and J. Pickens, “A robust mid-level representation for harmonic content in music signals,” in Proceedings of 6th International Conference on Music Information Retrieval, (London, UK), 2005.
  55. E. Gómez, Tonal Description of Music Audio Signals. PhD thesis, 2006.
  56. B. Ong and P. Herrera, “Semantic segmentation of music audio contents,” in Proceedings of International Computer Music Conference, (Barcelona, Spain), 2005.
  57. A. Zils and F. Pachet, “Extracting automatically the perceived intensity of music titles,” in Proceedings of the 6th International Conference on Digital Audio Effects, (London, UK), 2003.
  58. V. Sandvold and P. Herrera, “Towards a semantic descriptor of subjective intensity in music,” in Proceedings of 5th International Conference on Music Information Retrieval, (Barcelona, Spain), 2004.
  59. F. Fabbri, “Browsing music spaces: Categories and the musical mind,” in Proceedings of the IASPM Conference, 1999.
  60. D. Brackett, Interpreting Popular Music. New York, NY: Cambridge University Press, 1995.
  61. C. McKay and I. Fujinaga, “Musical genre classification: Is it worth pursuing and how can it be improved?,” in Proceedings of the 7th International Conference on Music Information Retrieval, (Victoria, Canada), 2006.
  62. G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” IEEE Transactions on Speech and Audio Processing, vol. 10, no. 5, pp. 293–302, 2002.
  63. E. Guaus, Audio content processing for automatic music genre classification: descriptors, databases, and classifiers. PhD thesis, 2009.
  64. P. N. Juslin and P. Laukka, “Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening,” Journal of New Music Research, vol. 22, no. 1, pp. 217–238, 2004.
  65. P. N. Juslin and J. A. Sloboda, Music and Emotion: Theory and Research. Oxford: Oxford University Press, 2001.
  66. Y. Feng, Y. Zhuang, and Y. Pan, “Music information retrieval by detecting mood via computational media aesthetics,” in WI ’03: Proceedings of the 2003 IEEE/WIC International Conference on Web Intelligence, (Washington, DC, USA), p. 235, IEEE Computer Society, 2003.
  67. L. Lu, D. Liu, and H.-J. Zhang, “Automatic mood detection and tracking of music audio signals,” IEEE Transactions on Audio, Speech and Language Processing, vol. 14, pp. 5–18, 2006.
  68. C. Laurier, O. Lartillot, T. Eerola, and P. Toiviainen, “Exploring relationships between audio features and emotion in music,” in Conference of European Society for the Cognitive Sciences of Music, (Jyväskylä, Finland), 2009.
  69. K. Bischoff, C. Firan, R. Paiu, W. Nejdl, C. Laurier, and M. Sordo, “Music mood and theme classification: a hybrid approach,” in Proceedings of the 10th Conference on Music Information Retrieval, (Kobe, Japan), 2009.
  70. U. Shardanand, “Social information filtering for music recommendation,” Master’s thesis, Massachusetts Institute of Technology, September 1994.
  71. M. Anderson, M. Ball, H. Boley, S. Greene, N. Howse, D. Lemire, and S. McGrath, “RACOFI: A rule-applying collaborative filtering system,” in Proceedings of the Collaboration Agents Workshop, IEEE/WIC, (Halifax, Canada), 2003.
  72. D. Lemire and A. Maclachlan, “Slope one predictors for online rating-based collaborative filtering,” in Proceedings of SIAM Data Mining, (Newport Beach, CA), 2005.
  73. J. Foote, “Content-based retrieval of music and audio,” Multimedia Storage and Archiving Systems II, Proceedings of SPIE, pp. 138–147, 1997.
  74. J.-J. Aucouturier and F. Pachet, “Music similarity measures: What’s the use?,” in Proceedings of 3rd International Conference on Music Information Retrieval, (Paris, France), pp. 157–163, 2002.
  75. Y. Rubner, C. Tomasi, and L. J. Guibas, “The earth mover’s distance as a metric for image retrieval,” tech. rep., Stanford University, 1998.
  76. B. Logan and A. Salomon, “A music similarity function based on signal analysis,” IEEE International Conference on Multimedia and Expo (ICME 2001), pp. 745–748, 2001.
  77. G. Tzanetakis, Manipulation, Analysis and Retrieval Systems for Audio Signals. PhD thesis, 2002.
  78. E. Pampalk, Computational Models of Music Similarity and their Application to Music Information Retrieval. PhD thesis, 2006.
  79. M. Slaney, K. Weinberger, and W. White, “Learning a metric for music similarity,” in Proceedings of the 9th Conference on Music Information Retrieval, pp. 313–318, 2008.
  80. Z. Cataltepe and B. Altinel, “Music recommendation based on adaptive feature and user grouping,” in Proceedings of the 22nd International Symposium on Computer and Information Sciences, (Ankara, Turkey), 2007.
  81. K. Hoashi, K. Matsumoto, and N. Inoue, “Personalization of user profiles for content-based music retrieval based on relevance feedback,” in Proceedings of the 11th ACM International Conference on Multimedia, (New York, NY, USA), pp. 110–119, ACM Press, 2003.
  82. J. J. Rocchio, “Relevance feedback in information retrieval,” in The SMART Retrieval System: Experiments in Automatic Document Processing (G. Salton, ed.), Prentice-Hall Series in Automatic Computation, ch. 14, pp. 313–323, Englewood Cliffs, NJ: Prentice-Hall, 1971.
  83. P. Cano, M. Koppenberger, and N. Wack, “An industrial-strength content-based music recommendation system,” in Proceedings of 28th International ACM SIGIR Conference, (Salvador, Brazil), 2005.
  84. K. Yoshii, M. Goto, K. Komatani, T. Ogata, and H. G. Okuno, “An efficient hybrid music recommender system using an incrementally trainable probabilistic generative model,” IEEE Transactions on Audio, Speech and Language Processing, vol. 16, no. 2, pp. 435–447, 2008.
  85. K. Yoshii, M. Goto, K. Komatani, T. Ogata, and H. G. Okuno, “Improving efficiency and scalability of model-based music recommender system based on incremental training,” in Proceedings of 8th International Conference on Music Information Retrieval, (Vienna, Austria), 2007.
  86. K. Yoshii, M. Goto, K. Komatani, T. Ogata, and H. G. Okuno, “Hybrid collaborative and content-based music recommendation using probabilistic model with latent user preferences,” in Proceedings of 7th International Conference on Music Information Retrieval, (Victoria, Canada), pp. 296–301, 2006.
  87. A. Popescul, L. Ungar, D. Pennock, and S. Lawrence, “Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments,” in 17th Conference on Uncertainty in Artificial Intelligence, (Seattle, Washington), pp. 437–444, 2001.
  88. M. Tiemann and S. Pauws, “Towards ensemble learning for hybrid music recommendation,” in Proceedings of 8th International Conference on Music Information Retrieval, (Vienna, Austria), 2007.

Copyright information

© Springer Berlin Heidelberg 2010

Authors and Affiliations

  1. BMAT, Barcelona, Spain