Towards a Personalised and Context-Dependent User Experience in Multimedia and Information Systems

  • Matevž Pesek
  • Gregor Strle
  • Jože Guna
  • Emilija Stojmenova
  • Matevž Pogačnik
  • Matija Marolt

Abstract

Advances in multimedia and information systems have shifted the focus from general content repositories towards personalized systems. Much effort has gone into modeling and integrating users' affective states to improve the overall user experience and functionality of such systems. In this chapter, we present a multi-modal dataset of users' emotional and visual (color) responses to music, accompanied by personal and demographic profiles, which can serve as a knowledge base for such improvements. The results show that emotional mediation of users' perceptual states can significantly improve the user experience through context-dependent personalization in multimedia and information systems.
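To make the kind of data the chapter builds on more concrete, the sketch below shows one plausible way to represent a single record of such a multi-modal dataset. The field names and value ranges (valence/arousal for the emotional response, an RGB triple for the color response, basic demographics) are illustrative assumptions only, not the dataset's actual schema, which is documented in the chapter itself.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MusicResponse:
    """One participant's response to a single music excerpt (hypothetical layout)."""
    user_id: int
    age: int                      # demographic profile
    gender: str
    excerpt_id: int               # identifier of the music excerpt
    valence: float                # perceived emotion, assumed range [-1, 1]
    arousal: float                # perceived emotion, assumed range [-1, 1]
    color: Tuple[int, int, int]   # chosen color response as an RGB triple

# Example: a pleasant, calm excerpt associated with a light blue color
record = MusicResponse(user_id=1, age=30, gender="f", excerpt_id=42,
                       valence=0.6, arousal=-0.3, color=(173, 216, 230))
print(record)
```

Linking the emotional and color responses to the demographic profile in a single record is what allows the personalization described above: responses can be grouped and compared per user or per demographic segment rather than only in aggregate.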


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Matevž Pesek (1)
  • Gregor Strle (2)
  • Jože Guna (3)
  • Emilija Stojmenova (3)
  • Matevž Pogačnik (3)
  • Matija Marolt (1)
  1. Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
  2. Scientific Research Centre of the Slovenian Academy of Sciences and Arts, Institute of Ethnomusicology, Ljubljana, Slovenia
  3. Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
