
TIENet: task-oriented image enhancement network for degraded object detection

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Degraded images often suffer from low contrast, color deviations, and blurred details, which significantly affect the performance of detectors. Many previous works have applied image enhancement algorithms to obtain images that are high-quality in terms of human perception. However, these enhancement algorithms usually suppress the performance of degraded object detection. In this paper, we propose a task-oriented image enhancement network (TIENet) that directly improves the performance of degraded object detection by enhancing the degraded images. Unlike common human perception-based image-to-image methods, TIENet is a zero-reference enhancement network that produces a detection-favorable structure image, which is added to the original degraded image. In addition, this paper presents a fast Fourier transform-based structure loss for the enhancement task. With the new loss, the structure image produced by TIENet carries more detection-favorable structural information while irrelevant information is suppressed. Extensive experiments and comprehensive evaluations on underwater (URPC2020) and foggy (RTTS) datasets show that our proposed framework achieves 0.5–1.6% absolute AP improvements on classic detectors, including Faster R-CNN, RetinaNet, FCOS, ATSS, PAA, and TOOD. Moreover, our method generalizes well to the PASCAL VOC dataset, achieving 0.2–0.7% gains. We expect this study to draw more attention to high-level task-oriented degraded image enhancement. The code and pre-trained models are available at https://github.com/BIGWangYuDong/lqit/tree/main/configs/detection/tienet.
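As a rough illustration of the two ideas summarized above (a residual structure image added to the degraded input, and a Fourier transform-based structure loss), the sketch below shows one plausible PyTorch formulation. It is a minimal sketch under stated assumptions, not the authors' implementation: the names fft_structure_loss, enhance, enhancement_net, and reference_img are hypothetical, and the loss is assumed to compare amplitude spectra; the released code linked above defines the actual formulation.

```python
import torch
import torch.nn.functional as F

def fft_structure_loss(structure_img, reference_img):
    # Hypothetical Fourier-domain structure loss: compare the amplitude
    # spectra of the predicted structure image and a reference so that
    # edge-like, high-frequency content is emphasized. Illustrative
    # stand-in only, not the exact TIENet loss.
    pred_fft = torch.fft.fft2(structure_img, norm="ortho")  # (N, C, H, W) -> complex spectrum
    ref_fft = torch.fft.fft2(reference_img, norm="ortho")
    return F.l1_loss(torch.abs(pred_fft), torch.abs(ref_fft))  # L1 on amplitude spectra

def enhance(degraded_img, enhancement_net):
    # Residual-style enhancement as described in the abstract: the network
    # predicts a structure image that is added to the original degraded input.
    structure_img = enhancement_net(degraded_img)
    return torch.clamp(degraded_img + structure_img, 0.0, 1.0)
```

In a task-oriented setting, the enhanced image would then be fed to a standard detector (e.g., Faster R-CNN), with the detection loss and the structure loss optimized jointly.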


Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 62171315) and Tianjin Research Innovation Project for Postgraduate Students (No. 2021YJSB153).

Author information


Contributions

YW proposed the main ideas of the paper; his work includes methodology implementation, experimental design, programming, writing, reviewing, and editing the paper. JG proposed the main ideas of the paper; his work includes methodology implementation, experimental design, programming, writing, reviewing, and editing the paper. RW's work includes discussing ideas, programming the analysis script, comparing experiments, and reviewing and editing the paper. WH's work includes discussing ideas, collecting data, comparing experiments, and reviewing and editing the paper. CL proposed the main ideas of the paper; his work consists of discussing and conceptualizing the ideas for the paper, and reviewing and editing the paper.

Corresponding author

Correspondence to Jichang Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, Y., Guo, J., Wang, R. et al. TIENet: task-oriented image enhancement network for degraded object detection. SIViP 18, 1–8 (2024). https://doi.org/10.1007/s11760-023-02695-9

