Dynamic Saliency Models and Human Attention: A Comparative Study on Videos
- Cite this paper as:
- Riche N., Mancas M., Culibrk D., Crnojevic V., Gosselin B., Dutoit T. (2013) Dynamic Saliency Models and Human Attention: A Comparative Study on Videos. In: Lee K.M., Matsushita Y., Rehg J.M., Hu Z. (eds) Computer Vision – ACCV 2012. ACCV 2012. Lecture Notes in Computer Science, vol 7726. Springer, Berlin, Heidelberg
Significant progress has been made in computational models of bottom-up visual attention (saliency). However, efficient ways of comparing these models on still images remain an open research question, and the problem is even more challenging for videos and dynamic saliency. This paper proposes a framework for dynamic-saliency model evaluation, based on a new database of diverse videos for which eye-tracking data have been collected. In addition, we present evaluation results for four state-of-the-art dynamic-saliency models, two of which have not previously been validated against eye-tracking data.
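The abstract does not spell out which comparison metrics the framework uses, but a common way to score a saliency map against eye-tracking data is the Normalized Scanpath Saliency (NSS): standardize the map to zero mean and unit variance, then average its values at the recorded fixation locations. The sketch below is illustrative only; the function name, toy map, and fixation list are assumptions, not the paper's implementation.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: mean of the standardized saliency values at human fixation
    locations. 0 means chance-level agreement; higher is better.
    `fixations` is a list of (row, col) pixel coordinates (assumed format)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return s[list(rows), list(cols)].mean()

# Toy example (hypothetical data): a map peaked exactly where the
# single fixation lands, so the score is well above chance (0).
smap = np.zeros((4, 4))
smap[1, 1] = 1.0
score = normalized_scanpath_saliency(smap, [(1, 1)])
```

For video, such a score would typically be computed per frame against that frame's fixations and then averaged over the sequence.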