Dynamic Saliency from Adaptative Whitening

  • Víctor Leborán Alvarez
  • Antón García-Díaz
  • Xosé R. Fdez-Vidal
  • Xosé M. Pardo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7931)

Abstract

This paper describes a unified framework for static and dynamic saliency detection based on whitening both the chromatic features and the spatio-temporal frequency components of the input. The approach is grounded in statistical adaptation to the input data, resembling the early coding of the human visual system. Our model, AWS-D, outperforms state-of-the-art models in predicting human eye fixations during free viewing of videos from three open-access datasets (task-free). As assessment measure we used an adaptation of the shuffled-AUC metric to spatio-temporal stimuli, together with a permutation test. Under this criterion, AWS-D not only reaches the highest AUC values but also holds statistically significant AUC values for longer periods of time (more frames) across all the videos used in the test. The model also reproduces psychophysical results from pop-out experiments, in agreement with human behavior.
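
To make the core idea concrete, the sketch below shows plain PCA whitening of a feature matrix with a distance-based saliency read-out, a common way to instantiate "saliency as statistical distinctiveness". This is an illustrative reconstruction, not the authors' AWS-D pipeline: the paper whitens chromatic and spatio-temporal frequency features adaptively per input, whereas the function names and the squared-norm read-out here are assumptions.

```python
# Minimal sketch of whitening-based saliency (illustrative, not AWS-D itself).
import numpy as np

def whiten(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """PCA-whiten an (n_samples, n_features) array: zero mean, ~identity covariance."""
    x = features - features.mean(axis=0)
    cov = x.T @ x / (len(x) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Project onto the eigenvectors, then rescale each axis to unit variance.
    return (x @ eigvecs) / np.sqrt(eigvals + eps)

def saliency(features: np.ndarray) -> np.ndarray:
    """Squared norm in the whitened space as a proxy for statistical rarity."""
    z = whiten(features)
    return (z ** 2).sum(axis=1)
```

In the whitened space all feature dimensions are decorrelated and carry unit variance, so a large norm marks a sample that is statistically rare with respect to the current input, which is what drives the pop-out behavior the abstract mentions.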
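The evaluation relies on the shuffled AUC (sAUC), which scores saliency at human fixations against saliency at fixation locations borrowed from other stimuli, thereby discounting the centre bias of eye-movement data. A minimal sketch of that computation follows, assuming fixations are given as pixel coordinates; the spatio-temporal adaptation and the permutation test reported in the paper are omitted, and `roc_auc_score` from scikit-learn stands in for any ROC implementation.

```python
# Hedged sketch of shuffled AUC for one saliency map (names are assumptions).
import numpy as np
from sklearn.metrics import roc_auc_score

def shuffled_auc(sal_map: np.ndarray,
                 fixations: np.ndarray,
                 shuffled_fixations: np.ndarray) -> float:
    """sal_map: 2-D saliency map; fixations / shuffled_fixations:
    (n, 2) arrays of (row, col) pixel coordinates."""
    pos = sal_map[fixations[:, 0], fixations[:, 1]]                    # true fixations
    neg = sal_map[shuffled_fixations[:, 0], shuffled_fixations[:, 1]]  # control set
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return roc_auc_score(labels, np.concatenate([pos, neg]))
```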

Keywords

Spatio-temporal saliency · Visual attention · Adaptive whitening · Short-term adaptation · Eye fixations


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Víctor Leborán Alvarez¹
  • Antón García-Díaz¹,²
  • Xosé R. Fdez-Vidal¹
  • Xosé M. Pardo¹

  1. Centro de Investigación en Tecnoloxías da Información (CITIUS), University of Santiago de Compostela, Spain
  2. AIMEN Technology Center, O Porriño, Spain
