Impression-Aware Video Stream Retrieval System with Temporal Color-Sentiment Analysis and Visualization

  • Shuichi Kurabayashi
  • Yasushi Kiyoki
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7447)


To retrieve Web videos intuitively, the concept of “impression” is of great importance, because many users consider feelings and moods to be among the most significant factors motivating them to watch videos. In this paper, we propose an impression-aware video stream retrieval system for querying the visual impression of video streams by analyzing the temporal change in sentiment. As a metric of visual impression, we construct a 180-dimensional vector space called the color-impression space; each dimension corresponds to a specific adjective describing human color perception. The main feature of this system is a context-dependent query processing mechanism that generates a ranking by considering the temporal transition of each video’s visual impressions on viewers’ emotions. We design an impression-aware noise reduction mechanism that dynamically reduces the number of non-zero features for each item mapped into the high-dimensional color-impression space by extracting the dominant salient impression features from a video stream. This system allows users to retrieve videos by submitting emotional queries such as “Find videos whose overall impression is happy and which have several sad and cool scenes”. Through this query processing mechanism, users can effectively retrieve videos without requiring detailed information about them.
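The core pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the color-to-adjective matrix (e.g. one derived from Kobayashi's Color Image Scale), the number of colors, and the function and parameter names (`color_impression_vector`, `reduce_impression_noise`, `k`) are all hypothetical placeholders. It shows the two steps the abstract names: projecting a frame's color distribution into the 180-dimensional color-impression space, then sparsifying the result by keeping only the dominant salient features.

```python
import numpy as np

N_IMPRESSIONS = 180  # one dimension per color-perception adjective

def color_impression_vector(frame_histogram, color_to_impression):
    """Project a frame's normalized color histogram into the impression space.

    frame_histogram:     (n_colors,) color frequencies summing to 1.
    color_to_impression: (n_colors, N_IMPRESSIONS) weights relating each
                         color to the adjectives it evokes (hypothetical;
                         a real system would derive these from a color
                         psychology model such as the Color Image Scale).
    """
    return frame_histogram @ color_to_impression

def reduce_impression_noise(vector, k=10):
    """Impression-aware noise reduction: keep the k dominant impression
    features and zero out the rest, yielding a sparse vector."""
    sparse = np.zeros_like(vector)
    dominant = np.argsort(vector)[-k:]  # indices of the k largest features
    sparse[dominant] = vector[dominant]
    return sparse
```

A whole-video representation would then be a time series of such sparse vectors, one per frame or scene, over which a temporal query ("overall happy, with several sad and cool scenes") can be evaluated.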


Keywords: video search, impression visualization, sentiment analysis





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Shuichi Kurabayashi (1)
  • Yasushi Kiyoki (1)
  1. Faculty of Environment and Information Studies, Keio University, Kanagawa, Japan
