CSA-Net: Deep Cross-Complementary Self Attention and Modality-Specific Preservation for Saliency Detection

Abstract

Multi-modality, multi-stream convolutional neural networks are the recent trend in saliency computation and are receiving tremendous research interest. Previous models used either modality-specific independent fusion or cross-modality complementary fusion to compute saliency, which incurs inconsistency or loss of salient points and regions. Moreover, most existing models do not effectively exploit accurate localization of high-level semantic and contextual features. The proposed model combines both fusion strategies with a precise deep localization module to address these challenges. Specifically, CSA-Net produces four essential feature types: non-complementary, cross-complementary, intra-complementary, and deeply localized, improved high-level features. The designed \(2\times 3\) encoder and decoder streams produce these features and ensure modality-specific saliency preservation. The cross- and intra-complementary fusion is guided by the proposed novel cross-complementary self-attention to produce the fused saliency. The attention map is computed by two-stage additive fusion based on a Non-Local network. A novel Optimal Selective Saliency is proposed to find the two most similar saliency maps among the three stream-wise saliencies. The experimental analysis demonstrates the effectiveness of the proposed \(2\times 3\) stream network and attention map, and the results show better performance than fourteen closely related state-of-the-art methods.
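
To make the two mechanisms named in the abstract concrete, the sketch below illustrates one plausible reading in PyTorch. It is a minimal sketch reconstructed from the abstract alone, not the authors' implementation: the module and function names, the residual second fusion stage, and the use of L1 distance to compare the stream-wise saliency maps are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossComplementarySelfAttention(nn.Module):
    """Hypothetical sketch (not the authors' code): a Non-Local-style
    attention whose map comes from two-stage additive fusion of two
    modality streams, as described in the abstract."""

    def __init__(self, channels):
        super().__init__()
        inter = channels // 2  # assumes channels >= 2
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x_a, x_b):
        # Stage 1: additive fusion of the two modality features.
        fused = x_a + x_b
        b, c, h, w = fused.shape
        # Non-local affinity over all spatial positions (Wang et al. style).
        q = self.theta(fused).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.phi(fused).flatten(2)                    # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)               # B x HW x HW
        v = self.g(fused).flatten(2).transpose(1, 2)      # B x HW x C'
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        # Stage 2: additive (residual) fusion of attended and fused features.
        return fused + self.out(y)


def optimal_selective_saliency(s1, s2, s3):
    """Hypothetical reading of 'Optimal Selective Saliency': average the
    two most mutually similar of the three stream-wise saliency maps."""
    pairs = [(s1, s2), (s1, s3), (s2, s3)]
    dists = torch.stack([F.l1_loss(a, b) for a, b in pairs])
    a, b = pairs[int(dists.argmin())]
    return (a + b) / 2
```

In this reading, each decoder stage would pass its two modality features through the attention block, and the final prediction would come from optimal_selective_saliency over the three decoder outputs; both wiring choices are speculative.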

Author information

Corresponding author

Correspondence to Surya Kant Singh.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Singh, S.K., Srivastava, R. CSA-Net: Deep Cross-Complementary Self Attention and Modality-Specific Preservation for Saliency Detection. Neural Process Lett 54, 5587–5613 (2022). https://doi.org/10.1007/s11063-022-10875-w
