Salient Object Detection Using Window Mask Transferring with Multi-layer Background Contrast

  • Quan Zhou
  • Shu Cai
  • Shaojun Zhu
  • Baoyu Zheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9005)

Abstract

In this paper, we present a novel framework that incorporates bottom-up features and top-down guidance to identify salient objects, based on two ideas. The first automatically encodes an object location prior to predict visual saliency without requiring a center-bias assumption, while the second estimates image saliency using contrast with respect to background regions. The proposed framework consists of three basic steps. In the top-down process, we create a specific location saliency map (SLSM), identified by a set of overlapping windows likely to cover salient objects. The binary segmentation masks of training windows are treated as high-level knowledge and transferred to test-image windows that share visual similarity with the training windows. In the bottom-up process, a multi-layer segmentation framework is employed, which provides a large number of robust background candidate regions specified by the SLSM. The background contrast saliency map (BCSM) is then computed from low-level image stimulus features. Finally, the SLSM and BCSM are integrated into a pixel-accurate saliency map. Extensive experiments show that our approach achieves state-of-the-art results on the MSRA-1000 and SED datasets.
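The three-step pipeline summarized above can be illustrated with a minimal Python sketch. This is not the paper's implementation: the window retrieval, mask transfer, multi-layer segmentation, and exact fusion rule are not reproduced, and all function names (location_saliency_map, background_contrast_map, fuse) as well as the simple product fusion are illustrative placeholders.

```python
# Hypothetical sketch of the SLSM + BCSM pipeline described in the abstract.
# All names and the fusion rule are illustrative assumptions, not the paper's code.
import numpy as np

def location_saliency_map(image_shape, windows, transferred_masks):
    """Top-down step: accumulate binary masks transferred from training
    windows into visually similar test-image windows (SLSM)."""
    h, w = image_shape
    slsm = np.zeros((h, w), dtype=np.float64)
    for (x0, y0, x1, y1), mask in zip(windows, transferred_masks):
        # `mask` is a binary array with the same size as the window
        slsm[y0:y1, x0:x1] += mask
    if slsm.max() > 0:
        slsm /= slsm.max()  # normalize to [0, 1]
    return slsm

def background_contrast_map(region_labels, region_features, background_ids):
    """Bottom-up step: score each region by its feature contrast against the
    background candidate regions selected with the help of the SLSM (BCSM)."""
    bg = np.stack([region_features[i] for i in background_ids])
    bcsm = np.zeros(region_labels.shape, dtype=np.float64)
    for rid, feat in region_features.items():
        # mean feature distance to all background regions
        contrast = np.linalg.norm(bg - feat, axis=1).mean()
        bcsm[region_labels == rid] = contrast
    if bcsm.max() > 0:
        bcsm /= bcsm.max()
    return bcsm

def fuse(slsm, bcsm):
    """Combine the two cues into a pixel-accurate saliency map; a simple
    product fusion stands in for the paper's integration step."""
    fused = slsm * bcsm
    return fused / fused.max() if fused.max() > 0 else fused
```

In this sketch, region_labels would come from any superpixel or multi-layer segmentation, and background_ids would be the regions the SLSM marks as likely background; running background_contrast_map once per segmentation layer and averaging the results would mimic the multi-layer aspect.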

Keywords

Test Image · Training Image · Background Region · Salient Object · Salient Region

Notes

Acknowledgement

The authors would like to thank all the anonymous reviewers for their valuable comments. We would also like to thank Prof. Liang Zhou for his valuable comments, which improved the readability of the whole paper. This work was supported by NSFC 61201165, 61271240, 61401228, 61403350, PAPD and NY213067.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. College of Telecommunication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, People’s Republic of China
  2. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA