
Contribution of color in saliency model for videos

  • Original Paper

Signal, Image and Video Processing

Abstract

Much research has addressed the contribution of the low-level features of a visual scene to the deployment of visual attention, and bottom-up saliency models have been developed to predict gaze locations from these features. Color, alongside intensity, contrast, and motion, is generally considered one of the primary features for computing bottom-up saliency, yet its contribution to guiding eye movements when viewing natural scenes remains debated. We investigated the contribution of color information in a bottom-up visual saliency model. The model's efficiency was tested on experimental data from 45 observers who were eye-tracked while freely exploring a large dataset of color and grayscale videos. The two sets of recorded eye positions, for grayscale and for color videos, were compared with the predictions of a luminance-based saliency model (Marat et al., Int. J. Comput. Vis. 82:231–243, 2009). We then incorporated chrominance information into this model. The results show that color information improves the model's performance in predicting eye positions.
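The abstract describes the pipeline only at a high level. To make the idea concrete, below is a minimal Python sketch (not the authors' implementation) of that kind of luminance/chrominance saliency fusion, together with scoring a map against recorded eye positions using the normalized scanpath saliency (NSS), a standard criterion in this literature. The color transform (loosely inspired by Krauskopf's cardinal directions [18]), the difference-of-Gaussians scales, the fusion weight, and the fixation format are all illustrative assumptions; the actual model extends Marat et al. [21] and also includes a motion pathway that this sketch omits.

    # Hedged sketch: per-frame luminance + chrominance saliency and NSS scoring.
    # All constants below are illustrative, not values from the paper.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Illustrative RGB -> (luminance A, red-green Cr1, yellow-blue Cr2) transform,
    # loosely inspired by Krauskopf's cardinal directions of color space [18].
    RGB_TO_ACR = np.array([
        [0.30,  0.59,  0.11],   # A:   achromatic (luminance) axis
        [0.50, -0.50,  0.00],   # Cr1: red-green opponent axis
        [0.25,  0.25, -0.50],   # Cr2: yellow-blue opponent axis
    ])

    def center_surround(channel, sigma_c=2.0, sigma_s=8.0):
        """Toy conspicuity map: rectified difference of Gaussians."""
        return np.abs(gaussian_filter(channel, sigma_c)
                      - gaussian_filter(channel, sigma_s))

    def saliency(frame_rgb, chroma_weight=0.5):
        """Fuse luminance and chrominance conspicuity into one saliency map."""
        h, w, _ = frame_rgb.shape
        acr = frame_rgb.reshape(-1, 3) @ RGB_TO_ACR.T
        a, cr1, cr2 = (acr[:, i].reshape(h, w) for i in range(3))
        sal = center_surround(a) + chroma_weight * (center_surround(cr1)
                                                    + center_surround(cr2))
        return sal / (sal.max() + 1e-12)           # normalize to [0, 1]

    def nss(sal, fixations):
        """Normalized scanpath saliency: mean z-scored saliency at fixations."""
        z = (sal - sal.mean()) / (sal.std() + 1e-12)
        rows, cols = zip(*fixations)
        return float(z[list(rows), list(cols)].mean())

    # Usage: score one frame against its recorded eye positions.
    frame = np.random.rand(240, 320, 3)         # stand-in for a video frame
    fixations = [(120, 160), (60, 200)]         # (row, col) eye positions
    print(nss(saliency(frame), fixations))

Setting chroma_weight to zero reduces the sketch to a luminance-only model, which is the comparison the study performs at full scale: the same model evaluated with and without chrominance information, against gaze recorded on grayscale and on color videos.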


References

  1. Alleysson, D., Meary, D.: Neurogeometry of color vision. J. Physiol. Paris 106, 284–296 (2012)

  2. Baddeley, R.J., Tatler, B.W.: High frequency edges (but not contrast) predict where we fixate: a Bayesian system identification analysis. Vis. Res. 46(18), 2824–2833 (2006)

  3. Beaudot, W.H.A., Mullen, K.T.: Orientation selectivity in luminance and color vision assessed using 2-D band-pass filtered spatial noise. Vis. Res. 45(6), 687–696 (2005)

  4. Benedetti, L., Corsini, M., Cignoni, P., Callieri, M., Scopigno, R.: Color to gray conversions in the context of stereo matching algorithms. Mach. Vis. Appl. 57(2), 254–348 (2010)

  5. Bruno, E., Pellerin, D.: Robust motion estimation using spatial Gabor-like filters. Signal Process. 82, 297–309 (2002)

  6. Connor, C.E., Egeth, H.E., Yantis, S.: Visual attention: bottom-up versus top-down. Curr. Biol. 14, 850–852 (2004)

  7. Coutrot, A., Guyader, N., Ionescu, G., Caplier, A.: Influence of soundtrack on eye movements during video exploration. J. Eye Mov. Res. 5(4), 1–10 (2012)

  8. Dorr, M., Martinetz, T., Barth, E.: Variability of eye movements when viewing dynamic natural scenes. J. Vis. 10(10), 1–17 (2010)

  9. Follet, B., Le Meur, O., Baccino, T.: New insights into ambient and focal visual fixations using an automatic classification algorithm. i-Perception 2(6), 592–610 (2011)

  10. Frey, H.P., Honey, C., König, P.: What's color got to do with it? The influence of color on visual attention in different categories. J. Vis. 11(3), 1–15 (2008)

  11. Frintrop, S.: VOCUS: a visual attention system for object detection and goal-directed search. Ph.D. thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, Institut für Informatik, and Fraunhofer Institut für Autonome Intelligente Systeme (2006)

  12. Gegenfurtner, K.R.: Cortical mechanisms of colour vision. Nat. Rev. Neurosci. 4(7), 563–572 (2003)

  13. Ho-Phuoc, T., Guyader, N., Guérin-Dugué, A.: When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements? J. Vis. 12(2), 1–13 (2012)

  14. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)

  15. Itti, L.: Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. Vis. Cognit. 12, 1093–1123 (2005)

  16. Itti, L., Baldi, P.: Bayesian surprise attracts human attention. Vis. Res. 49(10), 1295–1306 (2009)

  17. Klab: GBVS (graph-based visual saliency) code. http://www.klab.caltech.edu/harel/share/gbvs.php

  18. Krauskopf, J., Williams, D.R., Heeley, D.W.: Cardinal directions of color space. Vis. Res. 22, 1123–1131 (1982)

  19. Le Callet, P.: Critères objectifs avec référence de qualité visuelle des images couleur [Objective full-reference criteria for the visual quality of color images]. Ph.D. thesis, Université de Nantes (2001)

  20. Le Meur, O., Le Callet, P., Barba, D.: Predicting visual fixations on video based on low-level visual features. Vis. Res. 47(19), 2483–2498 (2007)

  21. Marat, S., Ho Phuoc, T., Granjon, L., Guyader, N., Pellerin, D., Guérin-Dugué, A.: Modelling spatio-temporal saliency to predict gaze direction for short videos. Int. J. Comput. Vis. 82(3), 231–243 (2009)

  22. Marat, S., Rahman, A., Pellerin, D., Guyader, N., Houzet, D.: Improving visual saliency by adding 'face feature map' and 'center bias'. Cognit. Comput. 5(1), 63–75 (2013)

  23. Mital, P.K., Smith, T.J., Hill, R.L., Henderson, J.M.: Clustering of gaze during dynamic scene viewing is predicted by motion. Cognit. Comput. 3(1), 5–24 (2011)

  24. Rahman, A.: Face perception in videos: contributions to a visual saliency model and its implementation on GPUs. Ph.D. thesis, Univ. Grenoble Alpes, France (2013)

  25. Rahman, A., Houzet, D., Pellerin, D., Marat, S., Guyader, N.: Parallel implementation of a spatio-temporal visual saliency model. J. Real-Time Image Process. 6(1), 3–14 (2010)

  26. Rahman, A., Pellerin, D., Houzet, D.: Influence of number, location and size of faces on gaze in video. J. Eye Mov. Res. 7(2), 1–11 (2014)

  27. Rousselet, G.A., Macé, M.J.M., Fabre-Thorpe, M.: Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. J. Vis. 3(6), 440–455 (2003)

  28. Salvucci, D., Goldberg, J.H.: Identifying fixations and saccades in eye-tracking protocols. In: Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA), pp. 71–78 (2000)

  29. Santella, A., DeCarlo, D.: Robust clustering of eye movement recordings for quantification of visual interest. In: Proceedings of the Eye Tracking Research and Applications (ETRA) Symposium (2004)

  30. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cognit. Psychol. 12, 97–136 (1980)

  31. Trémeau, A., Fernandez-Maloigne, C., Bonton, P.: Image numérique couleur: de l'acquisition au traitement [Digital color images: from acquisition to processing], chap. 2, pp. 32–43. Dunod, Paris (2004)

  32. Wolfe, J.M., Cave, K.R., Franzel, S.L.: Guided search: an alternative to the feature integration model for visual search. J. Exp. Psychol. Hum. Percept. Perform. 15, 419–433 (1989)


Author information

Correspondence to Shahrbanoo Hamel.

Additional information

This research was supported by the Rhône-Alpes region (France). We thank A. Rahman for the GPU implementation of the saliency model of Marat et al. [21]. We also thank D. Alleysson and D. Meary for providing us with spectrometer measurements.


Cite this article

Hamel, S., Guyader, N., Pellerin, D. et al. Contribution of color in saliency model for videos. SIViP 10, 423–429 (2016). https://doi.org/10.1007/s11760-015-0765-5

