Ternary symmetric fusion network for camouflaged object detection

Abstract

Camouflaged object detection (COD) aims to locate objects that are “seamlessly” embedded in their surrounding environment. The task is challenging because of the high intrinsic similarity between objects and their backgrounds, as well as the low contrast along object boundaries. To address this problem, this paper proposes a new ternary symmetric fusion network (TSFNet), which detects camouflaged objects by fully fusing features across levels and scales. Specifically, the proposed network contains two key modules: the location-attention search (LAS) module and the ternary symmetric interaction fusion (TSIF) module. The LAS module makes full use of contextual information to locate potential target objects from a global perspective while enhancing feature representation and guiding feature fusion. The TSIF module consists of three branches: two bilateral branches gather rich contextual information from multi-level features, and a middle branch provides fusion attention coefficients for the other two. This strategy effectively fuses low- and high-level features and thereby refines edge details. Experimental results show that the method is an effective COD model that outperforms existing models: compared with SINetV2, TSFNet improves the weighted F-measure by 3.5% and reduces the MAE by 8.1% on COD10K.
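
The abstract only outlines the TSIF design, so the following is a minimal, hypothetical PyTorch sketch of how a fusion block of this kind could be wired: two lateral branches process low- and high-level features, while a middle branch produces attention coefficients that gate both. All names (TSIFSketch, low_branch, mid_branch, etc.) and the exact wiring are our own assumptions based on the abstract, not the paper's actual implementation.

```python
# Hypothetical sketch of a ternary symmetric interaction fusion block.
# Structure and names are assumptions inferred from the abstract only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TSIFSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Bilateral branches: gather contextual information from the
        # low-level and high-level feature maps.
        self.low_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.high_branch = nn.Conv2d(channels, channels, 3, padding=1)
        # Middle branch: produces fusion attention coefficients for the
        # other two branches from their concatenation.
        self.mid_branch = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )
        self.out_conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, f_low: torch.Tensor, f_high: torch.Tensor) -> torch.Tensor:
        # Upsample the high-level feature to the low-level spatial size.
        f_high = F.interpolate(f_high, size=f_low.shape[-2:],
                               mode="bilinear", align_corners=False)
        low = self.low_branch(f_low)
        high = self.high_branch(f_high)
        # Attention coefficients in (0, 1), applied symmetrically to
        # weight the two branches before fusion.
        alpha = self.mid_branch(torch.cat([low, high], dim=1))
        fused = alpha * low + (1.0 - alpha) * high
        return self.out_conv(fused)


if __name__ == "__main__":
    block = TSIFSketch(channels=64)
    f_low = torch.randn(1, 64, 88, 88)   # low-level, high resolution
    f_high = torch.randn(1, 64, 22, 22)  # high-level, low resolution
    print(block(f_low, f_high).shape)    # torch.Size([1, 64, 88, 88])
```

A sigmoid-gated convex combination is one common way a middle branch can supply "fusion attention coefficients" to two symmetric branches; the paper's actual module may use a different gating or interaction scheme.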

Acknowledgments

This research was funded by the National Natural Science Foundation of China (62002100).

Author information

Corresponding author

Correspondence to Li Wang.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Deng, ., Ma, J., Li, Y. et al. Ternary symmetric fusion network for camouflaged object detection. Appl Intell 53, 25216–25231 (2023). https://doi.org/10.1007/s10489-023-04898-6
