Visual Saliency by Keypoints Distribution Analysis

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6978)


In this paper we introduce a new method for visual saliency detection. The goal of our method is to emphasize regions that show rare visual aspects compared with regions showing frequent ones. We propose a bottom-up approach based on the analysis of low-level image features (texture). More precisely, we use SIFT Density Maps (SDM) to study the spatial distribution of keypoints over the image at different scales of observation, and its relationship with real fixation points. The hypothesis is that the image regions whose keypoint density lies farthest from the mode (most frequent value) of the density distribution over the whole image are the ones that best capture our visual attention. Results have been compared against two other low-level approaches and a supervised method.
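The core idea above — build a keypoint density map at a given scale, then score each region by its distance from the mode of the density values — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the cell-based histogram, and the max-normalization are assumptions, and the paper's SDM additionally uses actual SIFT keypoints and several observation scales.

```python
import numpy as np

def density_map(keypoints, img_shape, cell=16):
    """Count keypoints per cell to approximate a density map at one scale.

    keypoints: (N, 2) array of (x, y) coordinates (e.g. from a SIFT detector);
    cell: side of a square cell in pixels (the 'scale of observation').
    Hypothetical sketch of the paper's SIFT Density Map (SDM).
    """
    h, w = img_shape
    density, _, _ = np.histogram2d(
        keypoints[:, 1], keypoints[:, 0],        # rows indexed by y, cols by x
        bins=[h // cell, w // cell],
        range=[[0, h], [0, w]])
    return density

def saliency_from_density(density):
    """Saliency of each cell = distance of its density from the mode
    (most frequent density value) over the whole map, scaled to [0, 1]."""
    vals, counts = np.unique(density, return_counts=True)
    mode = vals[np.argmax(counts)]               # most frequent density value
    sal = np.abs(density - mode)
    return sal / sal.max() if sal.max() > 0 else sal
```

In this sketch a cluster of keypoints in an otherwise sparse image (or an empty region in a densely textured one) deviates from the mode and is therefore marked salient, matching the rare-versus-frequent intuition of the abstract.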


saliency, visual attention, texture, SIFT



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

Dipartimento di Ingegneria Chimica, Gestionale, Informatica e Meccanica, Università degli Studi di Palermo, Palermo, Italy
