Saliency Detection Using Texture and Local Cues

  • Qiang Qi
  • Muwei Jian (corresponding author)
  • Yilong Yin
  • Junyu Dong
  • Wenyin Zhang
  • Hui Yu
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 773)

Abstract

In this paper, a simple but effective method is proposed for detecting salient objects by utilizing texture and local cues. In contrast to existing saliency-detection models, which mainly rely on visual features such as orientation, color, and shape, the proposed method also takes the texture cue into account to improve the accuracy of the detected salient regions. First, an effective scheme based on selective contrast (SC), which extracts the most distinguishable component information in texture, is used to compute the texture saliency map. Then, local saliency is detected using a locality-constrained linear coding (LLC) algorithm. Finally, the output saliency map is obtained by integrating the texture and local saliency cues. Experimental results on a widely used, publicly available database demonstrate that the proposed method produces competitive results and outperforms several popular existing methods.
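The abstract's pipeline (LLC-based local coding plus fusion of two saliency cues) can be illustrated with a short sketch. This is not the authors' implementation: the LLC step follows the standard approximated solver of Wang et al. (k-nearest bases, constrained least squares), and the product fusion of the two normalized maps is an assumed placeholder for the paper's actual integration rule.

```python
import numpy as np

def llc_code(x, codebook, k=5, eps=1e-4):
    """Approximated locality-constrained linear coding (LLC).

    Encodes descriptor x over its k nearest codebook bases by solving a
    small regularised least-squares system whose solution sums to one.
    """
    # Select the k codebook bases closest to the descriptor.
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = codebook[idx]                      # (k, d) local bases
    z = B - x                              # shift bases to the descriptor
    C = z @ z.T                            # local covariance matrix
    C += eps * np.trace(C) * np.eye(k)     # regularise for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

def fuse_saliency(texture_map, local_map):
    """Combine texture and local saliency maps (assumed product fusion)."""
    def norm(m):
        m = m.astype(float)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    return norm(norm(texture_map) * norm(local_map))
```

In this approximation, only the k selected entries of the code are nonzero, which is what makes LLC fast enough to encode every image patch when building the local saliency map.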

Acknowledgments

This work was supported by National Natural Science Foundation of China (NSFC) (61601427); Natural Science Foundation of Shandong Province (ZR2015FQ011); Applied Basic Research Project of Qingdao (16-5-1-4-jch); China Postdoctoral Science Foundation funded project (2016M590659); Postdoctoral Science Foundation of Shandong Province (201603045); Qingdao Postdoctoral Science Foundation funded project (861605040008) and The Fundamental Research Funds for the Central Universities (201511008, 30020084851).

Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  • Qiang Qi (1, 2)
  • Muwei Jian (1, 2), corresponding author
  • Yilong Yin (1)
  • Junyu Dong (2)
  • Wenyin Zhang (3)
  • Hui Yu (4)

  1. School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
  2. Department of Computer Science and Technology, Ocean University of China, Qingdao, China
  3. School of Information Science and Engineering, Linyi University, Linyi, China
  4. School of Creative Technologies, University of Portsmouth, Portsmouth, UK