
CFA-Net: Cross-Level Feature Fusion and Aggregation Network for Salient Object Detection

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13537)


Abstract

Salient object detection (SOD) is an active yet challenging task in computer vision, aiming to locate the most visually distinctive objects in an image. Recently, several deep learning based SOD models have been developed and have achieved promising performance. However, owing to the large variations among features from different convolutional layers, effectively fusing multi-level features to obtain better predictions remains a challenge. In this paper, we propose a Cross-level Feature fusion and Aggregation Network (CFA-Net) for SOD, which effectively integrates cross-level features to boost salient object detection performance. Specifically, an Adjacent Fusion Module (AFM) is proposed to integrate the location information of low-level features with the rich semantic information of high-level features. Then, a Cascaded Feature Aggregation Module (CFAM) is proposed to further process the fused feature maps, which helps avoid introducing redundant information and makes the fused features more discriminative. Unlike existing decoders, CFAM enhances its own scale information while collecting the multi-scale information provided by the encoder, and also allows adjacent-scale features to benefit from each other. A hybrid loss is introduced to assign weights to different pixels, making the network focus on details and boundaries. Extensive experiments on several public datasets demonstrate the effectiveness of the proposed CFA-Net over other state-of-the-art SOD methods.
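The core idea behind adjacent cross-level fusion can be illustrated with a minimal numpy sketch. The paper does not specify the exact AFM operations here, so the gating-then-concatenation scheme below (upsample the high-level map, use its sigmoid response to re-weight the low-level map, then concatenate) is an assumption for illustration only, not the actual AFM definition; all function names are hypothetical.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def adjacent_fusion(low, high):
    """Illustrative cross-level fusion (assumed scheme, not the paper's AFM):
    upsample the high-level map to the low-level resolution, gate the
    low-level features with its sigmoid semantic response, and concatenate
    both along the channel axis."""
    high_up = upsample2x(high)                        # (C, H, W)
    gate = 1.0 / (1.0 + np.exp(-high_up))             # semantic gating weights
    gated_low = low * gate                            # location cues, re-weighted
    return np.concatenate([gated_low, high_up], axis=0)  # (2C, H, W)

# Low-level features: high resolution, precise location cues.
low = np.random.rand(8, 16, 16)
# High-level features: half resolution, rich semantics.
high = np.random.rand(8, 8, 8)
fused = adjacent_fusion(low, high)
print(fused.shape)  # (16, 16, 16)
```

In an actual network these maps would be convolutional features and the fusion would be followed by learned convolutions; the sketch only shows how low-level location information and high-level semantics can be combined at a shared resolution.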




Corresponding author

Correspondence to Tao Zhou.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, J., Li, H., Li, L., Zhou, T., Chen, C. (2022). CFA-Net: Cross-Level Feature Fusion and Aggregation Network for Salient Object Detection. In: Yu, S., et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13537. Springer, Cham. https://doi.org/10.1007/978-3-031-18916-6_36


  • DOI: https://doi.org/10.1007/978-3-031-18916-6_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18915-9

  • Online ISBN: 978-3-031-18916-6

  • eBook Packages: Computer Science (R0)
