
How important is location information in saliency detection of natural images

Published in: Multimedia Tools and Applications

Abstract

Location information, i.e., the position of content in the image plane, is considered an important supplement in saliency detection. The effect of location information is usually evaluated by integrating it with selected saliency detection methods and measuring the improvement, a procedure that is highly influenced by which methods are selected. In this paper, we provide a direct and quantitative analysis of the importance of location information for saliency detection in natural images. We first analyze the relationship between content location and saliency distribution on four public image datasets, and validate the distribution by simply treating a location-based Gaussian distribution as the saliency map. To further validate the effectiveness of location information, we propose a location-based saliency detection approach, which initializes saliency maps entirely with location information and propagates saliency among patches based on color similarity, and we discuss the robustness of location information's effect. The experimental results show that location information plays a positive role in saliency detection, and that the proposed method outperforms most state-of-the-art saliency detection methods and handles natural images with different object positions and multiple salient objects.
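To make the two ideas in the abstract concrete, the sketch below is a minimal illustration, not the authors' released code: the function names, the patch representation, and parameters such as sigma_ratio and color_sigma are illustrative assumptions. It shows how a purely location-based Gaussian prior can be used directly as a saliency map, and how such an initialization could be propagated among image patches weighted by color similarity.

```python
import numpy as np

def gaussian_location_prior(height, width, sigma_ratio=0.33):
    """Saliency map built only from location: a 2D Gaussian centered on
    the image plane. sigma_ratio (assumed parameter) sets the standard
    deviation as a fraction of each image dimension."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sy, sx = sigma_ratio * height, sigma_ratio * width
    prior = np.exp(-((ys - cy) ** 2 / (2 * sy ** 2) +
                     (xs - cx) ** 2 / (2 * sx ** 2)))
    return prior / prior.max()  # normalize to [0, 1]

def propagate_saliency(patch_colors, init_saliency,
                       color_sigma=0.1, n_iters=10):
    """Illustrative propagation: starting from a location-based
    initialization, each patch's saliency is repeatedly updated as a
    color-similarity-weighted average over all patches.
    patch_colors: (N, 3) mean colors (e.g. CIELab) of N patches.
    init_saliency: (N,) saliency sampled from the location prior."""
    diff = patch_colors[:, None, :] - patch_colors[None, :, :]
    affinity = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * color_sigma ** 2))
    weights = affinity / affinity.sum(axis=1, keepdims=True)  # row-stochastic
    saliency = init_saliency.astype(float).copy()
    for _ in range(n_iters):
        saliency = weights @ saliency
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```

In this reading, the location prior alone already acts as a baseline saliency map, and the color-based propagation only redistributes that initial saliency among similar-looking patches rather than contributing a contrast cue of its own.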



Acknowledgments

The authors would like to thank the anonymous reviewers and the associate editor for their valuable comments, which have greatly helped us to improve the paper, and Jingfan Guo for his contribution to the experiments. This work is supported by the National Science Foundation of China (No. 61321491, 61202320), the Research Project of Excellent State Key Laboratory (No. 61223003), the Natural Science Foundation of Jiangsu Province (No. BK2012304), and the National Special Fund (No. 2011ZX05035-004-004HZ).

Author information

Corresponding author

Correspondence to Gangshan Wu.

About this article


Cite this article

Ren, T., Liu, Y., Ju, R. et al. How important is location information in saliency detection of natural images. Multimed Tools Appl 75, 2543–2564 (2016). https://doi.org/10.1007/s11042-015-2875-z

