Multimedia Tools and Applications, Volume 65, Issue 2, pp 259–282

Implementing situation-aware and user-adaptive music recommendation service in semantic web and real-time multimedia computing environment

  • Seungmin Rho
  • Seheon Song
  • Yunyoung Nam
  • Eenjun Hwang
  • Minkoo Kim

Abstract

With the advent of the ubiquitous computing era, many studies have been devoted to situation-aware services in the semantic web environment. One of the most challenging is a situation-aware, personalized music recommendation service that takes the user’s situation and preferences into account. Situation-aware music recommendation requires multidisciplinary effort, including low-level feature extraction and analysis, music mood classification, and human emotion prediction. In this paper, we propose a new scheme for a situation-aware, user-adaptive music recommendation service in the semantic web environment. We first discuss how domain knowledge can be used to analyze and retrieve music content semantically, and how a user-adaptive recommendation scheme based on semantic web technologies facilitates the development of that knowledge and an accompanying rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology, which models the user’s musical preferences and contexts and supports reasoning about the user’s desired emotions and preferences. COMUS defines an upper music ontology that captures general properties of music such as title, artist, and genre. For extensibility, domain-specific ontologies, such as those for music features, moods, and situations, can be attached to it in a hierarchical manner. Using this context ontology, logical reasoning rules can infer high-level (implicit) knowledge, such as the user’s situation, from low-level (explicit) knowledge. In particular, the ontology can express detailed and complicated relations among music clips, moods, and situations, which enables users to find appropriate music. We present several experiments we performed as a case study in music recommendation.
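To make the recommendation flow concrete, the following is a minimal sketch in Python using the rdflib library. All class and property names here (comus:Music, comus:hasMood, comus:hasSituation) and the situation-to-mood mapping are hypothetical placeholders, not the actual COMUS vocabulary or reasoning rules, which are defined in the paper itself.

    from rdflib import Graph, Namespace, RDF

    # Hypothetical namespace and terms for illustration only; the real
    # COMUS vocabulary is defined in the paper and not reproduced here.
    COMUS = Namespace("http://example.org/comus#")

    g = Graph()
    g.bind("comus", COMUS)

    # Low-level (explicit) facts: a clip annotated with a mood, and a
    # user whose current situation has been sensed.
    g.add((COMUS.clip42, RDF.type, COMUS.Music))
    g.add((COMUS.clip42, COMUS.hasMood, COMUS.Calm))
    g.add((COMUS.alice, RDF.type, COMUS.Person))
    g.add((COMUS.alice, COMUS.hasSituation, COMUS.Studying))

    # A toy stand-in for the paper's reasoning rules: each situation
    # implies a desired mood, and clips carrying that mood are recommended.
    SITUATION_TO_MOOD = {COMUS.Studying: COMUS.Calm}

    def recommend(user):
        """Yield music clips whose mood matches the user's situation."""
        for situation in g.objects(user, COMUS.hasSituation):
            mood = SITUATION_TO_MOOD.get(situation)
            if mood is not None:
                yield from g.subjects(COMUS.hasMood, mood)

    print(list(recommend(COMUS.alice)))  # -> [comus:clip42]

In the full system described by the paper, the situation-to-mood step would be carried out by an ontology reasoner over OWL rules rather than a hard-coded dictionary.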

Keywords

Customization · Ontology · Reasoning · Semantic web · User profiles

Notes

Acknowledgment

This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1101-0008).


Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Seungmin Rho (1)
  • Seheon Song (2)
  • Yunyoung Nam (3)
  • Eenjun Hwang (1)
  • Minkoo Kim (4)
  1. School of Electrical Engineering, Korea University, Seoul, Korea
  2. Graduate School of Information and Communication, Ajou University, Suwon, Korea
  3. Center of Excellence for Ubiquitous System, Ajou University, Suwon, Korea
  4. Division of Information and Computer Engineering, Ajou University, Suwon, Korea
