Gradient-Induced Co-Saliency Detection

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

Co-salient object detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images. In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection (GICD) method. We first abstract a consensus representation for a group of images in the embedding space; then, by comparing each image with the consensus representation, we utilize the feedback gradient information to induce more attention to the discriminative co-salient features. In addition, due to the lack of Co-SOD training data, we design a jigsaw training strategy, with which Co-SOD networks can be trained on general saliency datasets without extra pixel-level annotations. To evaluate how well Co-SOD methods discover the co-salient object among multiple foregrounds, we construct a challenging CoCA dataset, in which each image contains at least one extraneous foreground along with the co-salient object. Experiments demonstrate that our GICD achieves state-of-the-art performance. Our code and dataset are available at https://mmcheng.net/gicd/.
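The gradient-inducing idea from the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under simplifying assumptions, not the authors' implementation: the function name `gradient_induced_attention`, the use of global average pooling as the embedding, and the positive-gradient channel weighting are all assumptions made for the sketch. A consensus embedding is taken as the normalized mean of the group's image embeddings; for each image, the gradient of its similarity to the consensus with respect to its feature map is used to re-weight the channels that agree with the consensus.

```python
import numpy as np

def l2norm(v, eps=1e-8):
    # Normalize a vector to unit length, guarding against division by zero.
    return v / (np.linalg.norm(v) + eps)

def gradient_induced_attention(feats):
    """feats: list of arrays, each (C, H, W) -- high-level features per image.

    Embedding e_i = global average pooling of F_i; similarity s_i = <e_i, e*>.
    With GAP, d s_i / d F_i[c, y, x] = e*[c] / (H * W), so in this simplified
    setting the feedback gradient reduces to a per-channel weight derived
    from the consensus embedding.
    """
    # Per-image embeddings via global average pooling, then the consensus.
    embeds = [l2norm(f.mean(axis=(1, 2))) for f in feats]   # each (C,)
    consensus = l2norm(np.mean(embeds, axis=0))             # (C,)

    attended = []
    for f in feats:
        c, h, w = f.shape
        grad = consensus / (h * w)            # ds/dF, constant per channel
        weight = np.maximum(grad, 0.0)        # keep only positive evidence
        weight = weight / (weight.max() + 1e-8)  # normalize to [0, 1]
        attended.append(f * weight[:, None, None])
    return attended, consensus

feats = [np.random.rand(8, 4, 4) for _ in range(3)]
out, consensus = gradient_induced_attention(feats)
```

In the actual method the gradient is back-propagated through a deep network rather than computed in closed form, so channels receive data-dependent weights; the sketch only conveys why similarity gradients highlight the channels shared across the group.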

Keywords

Co-saliency detection · New dataset · Gradient inducing · Jigsaw training

Notes

Acknowledgements

Zhao Zhang and Wenda Jin are the joint first authors. This research was supported by Major Project for New Generation of AI under Grant No. 2018AAA0100400, NSFC (61922046), Tianjin Natural Science Foundation (18ZXZNGX00110), and the Fundamental Research Funds for the Central Universities, Nankai University (63201169).

Supplementary material

Supplementary material 1 (PDF, 5142 KB): 504453_1_En_27_MOESM1_ESM.pdf


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. TKLNDST, CS, Nankai University, Tianjin, China
  2. College of Intelligence and Computing, Tianjin University, Tianjin, China