Geodesic Saliency Using Background Priors

  • Yichen Wei
  • Fang Wen
  • Wangjiang Zhu
  • Jian Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7574)


Generic object-level saliency detection is important for many vision tasks. Previous approaches are mostly built on the prior that "appearance contrast between objects and backgrounds is high". Although various computational models have been developed, the problem remains challenging, and large behavioral discrepancies between previous approaches can be observed. This suggests that the problem may still be highly ill-posed when this prior is used alone.

In this work, we tackle the problem from a different viewpoint: we focus more on the background than on the object. We exploit two common priors about backgrounds in natural images, namely the boundary and connectivity priors, to provide additional clues for the problem. Accordingly, we propose a novel saliency measure called geodesic saliency. It is intuitive, easy to interpret, and allows a fast implementation. Furthermore, it is complementary to previous approaches, because it exploits background priors that previous approaches do not.

Evaluation on two databases validates that geodesic saliency achieves superior results and outperforms previous approaches by a large margin in both accuracy and speed (2 ms per image). This illustrates that appropriate prior exploitation is helpful for the ill-posed saliency detection problem.
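The measure summarized above can be sketched in code. In this illustrative version (an assumption about the details, not the paper's exact implementation), the image is a regular grid of patches, edge weights are appearance differences between adjacent patches, a virtual background node is attached to all image-boundary patches (the boundary prior), and a patch's saliency is its shortest-path (geodesic) distance to that node; connected, homogeneous background regions accumulate little cost along such paths (the connectivity prior).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_saliency(patch_features, grid_shape):
    """Illustrative geodesic saliency on an H x W grid of image patches.

    patch_features: (H*W, d) array of per-patch appearance features
    (e.g. mean color). Returns an (H, W) saliency map where each value
    is the geodesic distance from the patch to a virtual background
    node attached to all boundary patches.
    """
    H, W = grid_shape
    n = H * W
    bg = n                       # index of the virtual background node
    g = lil_matrix((n + 1, n + 1))
    eps = 1e-6                   # small weight so zero-cost edges exist

    def idx(r, c):
        return r * W + c

    # 4-connected grid; edge weight = appearance distance between patches
    for r in range(H):
        for c in range(W):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < H and c2 < W:
                    w = float(np.linalg.norm(
                        patch_features[idx(r, c)]
                        - patch_features[idx(r2, c2)])) + eps
                    g[idx(r, c), idx(r2, c2)] = w
                    g[idx(r2, c2), idx(r, c)] = w

    # boundary prior: boundary patches attach to the background node
    # at near-zero cost
    for r in range(H):
        for c in range(W):
            if r in (0, H - 1) or c in (0, W - 1):
                g[idx(r, c), bg] = eps
                g[bg, idx(r, c)] = eps

    # geodesic distance from the virtual background to every patch
    dist = dijkstra(g.tocsr(), indices=bg)
    return dist[:n].reshape(H, W)
```

On a uniform background with a single distinct patch, the distinct patch receives a large distance (high saliency) while background patches, being connected to the boundary through cheap paths, receive near-zero values.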


Keywords: Image Patch · Salient Object · Saliency Detection · Image Boundary · Salient Object Detection



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Yichen Wei (1)
  • Fang Wen (1)
  • Wangjiang Zhu (1)
  • Jian Sun (1)
  1. Microsoft Research Asia, China
