Abstract
Salient object detection (SOD), which aims to locate the most visually distinctive objects in an image, is a popular yet challenging task in computer vision. Recently, several deep learning based SOD models have been developed and have achieved promising performance. However, because features vary significantly across convolutional layers, fusing multi-level features to obtain better predictions remains a challenge. In this paper, we propose a Cross-level Feature fusion and Aggregation Network (CFA-Net) for SOD, which effectively integrates cross-level features to boost detection performance. Specifically, an Adjacent Fusion Module (AFM) is proposed to integrate the location information of low-level features with the rich semantic information of high-level features. A Cascaded Feature Aggregation Module (CFAM) then further processes the fused feature maps, which avoids introducing redundant information and makes the fused features more discriminative. Unlike existing decoders, CFAM enhances the information at its own scale while collecting the multi-scale information provided by the encoder, and also allows adjacent-scale features to benefit from one another. A hybrid loss is introduced to assign different weights to different pixels, making the network focus on details and boundaries. Extensive experiments on several public datasets demonstrate the effectiveness of the proposed CFA-Net over other state-of-the-art SOD methods.
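The abstract's core idea of adjacent fusion (combining a low-level, high-resolution feature map with an upsampled high-level, semantic one) can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' actual AFM: the upsampling method, gating scheme, and channel arrangement here are all assumptions made for clarity.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def attention_map(x):
    """Channel-averaged sigmoid attention map, shape (1, H, W)."""
    return 1.0 / (1.0 + np.exp(-x.mean(axis=0, keepdims=True)))

def adjacent_fusion(low, high):
    """Hypothetical adjacent-fusion step: upsample the high-level
    (semantic) map to the low-level (spatial) resolution, gate the
    low-level features with a semantic attention map, then
    concatenate both along the channel axis."""
    high_up = upsample2x(high)                       # (C, H, W)
    gated = low * attention_map(high_up)             # spatial gating
    return np.concatenate([gated, high_up], axis=0)  # (2C, H, W)

# Example: a 16x16 low-level map fused with an 8x8 high-level map.
low = np.random.rand(8, 16, 16)
high = np.random.rand(8, 8, 8)
fused = adjacent_fusion(low, high)   # shape (16, 16, 16)
```

In a real network the nearest-neighbor upsampling would typically be bilinear interpolation or a learned transposed convolution, and the concatenation would be followed by convolutions that reduce the channel count back down.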
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, J., Li, H., Li, L., Zhou, T., Chen, C. (2022). CFA-Net: Cross-Level Feature Fusion and Aggregation Network for Salient Object Detection. In: Yu, S., et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13537. Springer, Cham. https://doi.org/10.1007/978-3-031-18916-6_36
Print ISBN: 978-3-031-18915-9
Online ISBN: 978-3-031-18916-6