From Water Music to ‘Underwater Music’: Multimedia Soundtrack Retrieval with Social Mass Media Resources

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9819)

Abstract

In creative media, visual imagery is often combined with music soundtracks. In the resulting artefacts, the main goal of consumption is not the isolated music or imagery, but the combined multimedia experience. Through the frequent combination of music with non-musical information resources, and the corresponding public exposure, certain types of music become associated with certain types of non-musical contexts. As a consequence, when addressing the problem of soundtrack retrieval for non-musical media, it is appropriate to query music search engines not only in music-technical terms, but also to exploit the typical surrounding contextual and connotative associations. In this work, we make use of this information, and present and validate a search engine framework based on collaborative and social Web resources on mass media and their corresponding music usage. Using the SRBench dataset, we show that employing social folksonomic descriptions in search indices is effective for multimedia soundtrack retrieval.
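The abstract does not detail how the folksonomic descriptions are indexed. As a rough, hypothetical illustration of the general idea of ranking candidate soundtracks against a non-musical scene description via a tag-based search index, the following minimal Python sketch builds TF-IDF weighted tag vectors and scores them against query terms. All track names, tags, and function names here are invented for illustration and are not taken from the paper.

```python
import math
from collections import Counter

# Hypothetical toy data: each candidate track is represented by the social
# (folksonomic) tags attached to it on collaborative Web platforms.
# Track names and tag sets are invented for illustration.
TRACK_TAGS = {
    "Track A": ["calm", "underwater", "ambient", "documentary", "slow"],
    "Track B": ["epic", "battle", "orchestral", "trailer", "intense"],
    "Track C": ["romantic", "piano", "sunset", "slow", "intimate"],
}


def build_tag_index(track_tags):
    """Build one TF-IDF weighted tag vector per track."""
    n_tracks = len(track_tags)
    # Document frequency: in how many tracks does each tag occur?
    df = Counter(tag for tags in track_tags.values() for tag in set(tags))
    index = {}
    for track, tags in track_tags.items():
        tf = Counter(tags)
        index[track] = {
            tag: (count / len(tags)) * math.log(n_tracks / df[tag])
            for tag, count in tf.items()
        }
    return index


def retrieve(query_terms, index, top_k=3):
    """Rank tracks by the match between query terms and their tag vectors.

    The score is the dot product of the query with each tag vector,
    normalised by the tag vector's length; the query length is constant
    across tracks and therefore omitted.
    """
    scores = {}
    for track, weights in index.items():
        dot = sum(weights.get(term, 0.0) for term in query_terms)
        norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
        scores[track] = dot / norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    index = build_tag_index(TRACK_TAGS)
    # A visual scene described in everyday, non-musical vocabulary.
    print(retrieve(["underwater", "slow", "documentary"], index))
```

In a system along these lines, the tag sets would come from collaborative Web resources on mass media and music usage, and the query terms from a description of the visual material to be accompanied.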

References

  1. Cai, R., Zhang, C., Wang, C., Zhang, L., Ma, W.-Y.: MusicSense: contextual music recommendation using emotional allocation modeling. In: Proceedings of the 15th ACM International Conference on Multimedia (ACM MM), pp. 553–556, Augsburg, Germany (2007)
  2. Casey, M., Veltkamp, R., Goto, M., Leman, M., Rhodes, C., Slaney, M.: Content-based music information retrieval: current directions and future challenges. Proc. IEEE 96(4), 668–696 (2008)
  3. Cohen, A.J.: How music influences the interpretation of film and video: approaches from experimental psychology. In: Kendall, R., Savage, R.W. (eds.) Selected Reports in Ethnomusicology: Perspectives in Systematic Musicology, vol. 12, pp. 15–36. Department of Ethnomusicology, University of California, Los Angeles (2005)
  4. Cook, N.: Analysing Musical Multimedia. Oxford University Press, New York (1998)
  5. Kaminskas, M., Ricci, F.: Contextual music information retrieval: state of the art and challenges. Comput. Sci. Rev. 6(2–3), 89–119 (2012)
  6. Kuo, F.-F., Chiang, M.-F., Shan, M.-K., Lee, S.-Y.: Emotion-based music recommendation by association discovery from film music. In: Proceedings of the 13th ACM International Conference on Multimedia (ACM MM), pp. 507–510, Singapore (2005)
  7. Li, C.-T., Shan, M.-K.: Emotion-based impressionism slideshow with automatic music accompaniment. In: Proceedings of the 15th ACM International Conference on Multimedia (ACM MM), pp. 839–842, Augsburg, Germany (2007)
  8. Liem, C.C.S.: Mass media musical meaning: opportunities from the collaborative web. In: Proceedings of the 11th International Symposium on Computer Music Multidisciplinary Research (CMMR), Plymouth, UK (2015)
  9. Liem, C.C.S., Bazzica, A., Hanjalic, A.: MuseSync: standing on the shoulders of Hollywood. In: Proceedings of the 20th ACM International Conference on Multimedia (ACM MM), pp. 1383–1384, Nara, Japan (2012)
  10. Liem, C.C.S., Larson, M.A., Hanjalic, A.: When music makes a scene – characterizing music in multimedia contexts via user scene descriptions. Int. J. Multimedia Inf. Retrieval 2, 15–30 (2013)
  11. Lissa, Z.: Ästhetik der Filmmusik. Henschelverlag, Berlin (1965)
  12. Schedl, M., Gómez, E., Urbano, J.: Music information retrieval: recent developments and applications. Found. Trends Inf. Retrieval 8(2–3), 127–261 (2014)
  13. Shah, R.R., Yu, Y., Zimmermann, R.: ADVISOR: personalized video soundtrack recommendation by late fusion with heuristic rankings. In: Proceedings of the 22nd ACM International Conference on Multimedia (ACM MM), pp. 607–616, Orlando, Florida, USA (2014)
  14. Stupar, A., Michel, S.: PICASSO – to sing you must close your eyes and draw. In: Proceedings of the 34th Annual ACM SIGIR Conference, Beijing, China (2011)
  15. Stupar, A., Michel, S.: SRbench – a benchmark for soundtrack recommendation systems. In: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM), San Francisco, USA (2013)
  16. Tagg, P., Clarida, B.: Ten Little Title Tunes – Towards a Musicology of the Mass Media. The Mass Media Scholar's Press, New York and Montreal (2003)
  17. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: a neural image caption generator. In: Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA (2015)
  18. Wang, J.-C., Yang, Y.-H., Jhuo, I.-H., Lin, Y.-Y., Wang, H.-M.: The acousticvisual emotion Gaussians model for automatic generation of music video. In: Proceedings of the 20th ACM International Conference on Multimedia (ACM MM), pp. 1379–1380, Nara, Japan (2012)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Cynthia C. S. Liem, Multimedia Computing Group, Delft University of Technology, Delft, The Netherlands