Dynamic Saliency Models and Human Attention: A Comparative Study on Videos

  • Nicolas Riche
  • Matei Mancas
  • Dubravko Culibrk
  • Vladimir Crnojevic
  • Bernard Gosselin
  • Thierry Dutoit
Conference paper

DOI: 10.1007/978-3-642-37431-9_45

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7726)
Cite this paper as:
Riche N., Mancas M., Culibrk D., Crnojevic V., Gosselin B., Dutoit T. (2013) Dynamic Saliency Models and Human Attention: A Comparative Study on Videos. In: Lee K.M., Matsushita Y., Rehg J.M., Hu Z. (eds) Computer Vision – ACCV 2012. ACCV 2012. Lecture Notes in Computer Science, vol 7726. Springer, Berlin, Heidelberg

Abstract

Significant progress has been made on computational models of bottom-up visual attention (saliency). However, efficient ways of comparing these models on still images remain an open research question, and the problem is even more challenging for videos and dynamic saliency. This paper proposes a framework for dynamic-saliency model evaluation, based on a new database of diverse videos for which eye-tracking data has been collected. In addition, we present evaluation results for four state-of-the-art dynamic-saliency models, two of which have not previously been validated against eye-tracking data.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Nicolas Riche (1, 2)
  • Matei Mancas (1, 2)
  • Dubravko Culibrk (1, 2)
  • Vladimir Crnojevic (1, 2)
  • Bernard Gosselin (1, 2)
  • Thierry Dutoit (1, 2)

  1. University of Mons, Belgium
  2. University of Novi Sad, Serbia
