
Can Social Comments Contribute to Estimate Impression of Music Video Clips?

  • Shunki Tsuchiya
  • Naoki Ono
  • Satoshi Nakamura
  • Takehiro Yamamoto
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11000)

Abstract

The main objective of this paper is to estimate the impressions of music video clips from social comments, toward impression-based music video clip search and recommendation systems. To accomplish this, we built a dataset of music video clips with evaluation scores for individual media types and impression types. We then evaluated the precision with which each media and impression type could be estimated by analyzing social comments, and considered the possibilities and limitations of using social comments to estimate impressions of content. As a result, we found that the appropriate parts of speech to use in social comments differ depending on the media/impression type.
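The core idea of selecting parts of speech per media/impression type can be illustrated with a small sketch. The paper analyzes Japanese comments with a morphological analyzer; the function names, English tokens, and POS tags below are purely illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

# Hypothetical POS-tagged social comments as (token, part-of-speech) pairs.
comments = [
    [("this", "DET"), ("song", "NOUN"), ("is", "VERB"), ("beautiful", "ADJ")],
    [("so", "ADV"), ("sad", "ADJ"), ("melody", "NOUN")],
    [("cute", "ADJ"), ("video", "NOUN"), ("lol", "INTJ")],
]

def pos_filtered_features(tagged_comments, keep_pos):
    """Bag-of-words counts restricted to the given parts of speech."""
    counts = Counter()
    for comment in tagged_comments:
        for token, pos in comment:
            if pos in keep_pos:
                counts[token] += 1
    return counts

# For an impression type where adjectives are most informative,
# keep only ADJ tokens as features.
features = pos_filtered_features(comments, keep_pos={"ADJ"})
print(features)  # Counter({'beautiful': 1, 'sad': 1, 'cute': 1})
```

Such POS-filtered counts could then feed any scoring or classification model; the finding reported above suggests the `keep_pos` set should be tuned separately for each media/impression type.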

Keywords

Estimating impression · Music video clip · Social comments

Acknowledgments

This work was supported in part by JST ACCEL Grant Number JPMJAC1602, Japan.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Shunki Tsuchiya (1)
  • Naoki Ono (1)
  • Satoshi Nakamura (1)
  • Takehiro Yamamoto (2)
  1. Meiji University, Nakano-ku, Japan
  2. Kyoto University, Kyoto, Japan
