
Task-Driven Saliency Detection on Music Video

  • Shunsuke Numano
  • Naoko Enami
  • Yasuo Ariki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9009)

Abstract

We propose a saliency model that estimates task-driven eye movement. Human eye-movement patterns are affected by the observer's task and mental state [1]. However, existing saliency models are computed from low-level image features such as bright regions, edges, and colors. In this paper, a task (e.g., evaluating a piano performance) is given to an observer watching a music video. Unlike existing vision-only methods, we use musical score features together with image features to detect saliency. We show that our saliency model outperforms existing models when compared against recorded eye-movement patterns.
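To make the idea of combining the two feature types concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a crude bottom-up image-saliency map is fused with a score-driven weight map built from the notes sounding in the current frame. The fusion rule, the intensity-contrast saliency stand-in, and the helpers `score_onsets_to_map` and `task_driven_saliency` are assumptions for illustration only.

```python
# Hypothetical sketch of fusing image-feature saliency with a musical-score-driven
# map, as the abstract describes. Not the paper's method; all parameters assumed.
import numpy as np
import cv2  # OpenCV, used only for color conversion and blurring


def image_saliency(frame_bgr: np.ndarray) -> np.ndarray:
    """Crude bottom-up saliency from local intensity contrast (a stand-in for [6])."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    blur = cv2.GaussianBlur(gray, (31, 31), 0)
    sal = np.abs(gray - blur)                  # center-surround style contrast
    return sal / (sal.max() + 1e-8)


def score_onsets_to_map(shape, onset_xs, sigma=40.0) -> np.ndarray:
    """Hypothetical score-driven map: Gaussians at the x-positions (e.g., keyboard
    keys) of notes sounding in the current frame, uniform along the y-axis."""
    h, w = shape
    xs = np.arange(w, dtype=np.float32)
    col = np.zeros(w, dtype=np.float32)
    for x0 in onset_xs:
        col += np.exp(-((xs - x0) ** 2) / (2 * sigma ** 2))
    col /= col.max() + 1e-8
    return np.tile(col, (h, 1))


def task_driven_saliency(frame_bgr, onset_xs, alpha=0.5) -> np.ndarray:
    """Linear fusion of the image-feature and score-feature maps (alpha assumed)."""
    vis = image_saliency(frame_bgr)
    score = score_onsets_to_map(vis.shape, onset_xs)
    fused = alpha * vis + (1 - alpha) * score
    return fused / (fused.max() + 1e-8)
```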

Keywords

Ground Truth · Music Video · Musical Note · Saliency Model · Musical Score

References

  1. Yarbus, A.L.: Eye Movements and Vision. Plenum Press, New York (1967)
  2. Henderson, J.M., Shinkareva, S.V., Wang, J., Luke, S.G., Olejarczyk, J.: Predicting cognitive state from eye movements. PLoS One 8(5), e64937 (2013)
  3. DeAngelus, M., Pelz, J.B.: Top-down control of eye movements: Yarbus revisited. Visual Cogn. 17, 790–811 (2009)
  4. Borji, A., Itti, L.: Defending Yarbus: eye movements reveal observers’ task. J. Vis. 14(3), 29 (2014)
  5. Kunze, K., Utsumi, Y., Shiga, Y., Kise, K., Bulling, A.: I know what you are reading: recognition of document types using mobile eye tracking. In: International Symposium on Wearable Computers (2013)
  6. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. PAMI 20(11), 1254–1259 (1998)
  7. Yang, C., Zhang, L., Lu, H., Ruan, X., Yang, M.H.: Saliency detection via graph-based manifold ranking. In: CVPR, pp. 3166–3173 (2013)
  8. Riche, N., Mancas, M., Culibrk, D., Crnojevic, V., Gosselin, B., Dutoit, T.: Dynamic saliency models and human attention: a comparative study on videos. In: Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z. (eds.) ACCV 2012, Part III. LNCS, vol. 7726, pp. 586–598. Springer, Heidelberg (2013)
  9. Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: NIPS (2005)
  10. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: NIPS (2006)
  11. Seo, H.J., Milanfar, P.: Static and space-time visual saliency detection by self-resemblance. J. Vis. 9(15), 1–27 (2009)
  12. Wang, L., Xue, J., Zheng, N., Hua, G.: Automatic salient object extraction with contextual cue. In: ICCV (2011)
  13. Shi, K., Wang, K., Lu, J., Lin, L.: PISA: pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors. In: CVPR (2013)
  14. Iwatsuki, A., Hirayama, T., Mase, K.: Analysis of soccer coach’s eye gaze behavior. In: Proceedings of ASVAI (2013)
  15. Peters, R.J., et al.: Components of bottom-up gaze allocation in natural images. Vis. Res. 45(18), 2397–2416 (2005)
  16. Ouerhani, N., von Wartburg, R., Hugli, H., Muri, R.M.: Empirical validation of a saliency-based model of visual attention. Electron. Lett. Comput. Vis. Image Anal. 3(1), 13–24 (2003)
  17. Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: Advances in Neural Information Processing Systems (2005)
  18. Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: CVPR (2009)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Kobe University, Kobe, Japan
