
Underwater image enhancement via color conversion and white balance-based fusion

Research article published in The Visual Computer.

Abstract

Enhancing underwater images is a significant challenge: the refraction and absorption of light in water produce images that often appear bluish or greenish with diminished contrast. Furthermore, the scarcity of underwater datasets makes it difficult to achieve robust generalization to complex underwater scenarios. In this study, we introduce a generalized underwater image enhancement model with color-guided adaptive feature fusion (GU-CAFF), designed to rectify various types of degraded underwater images while requiring only a minimal amount of training data. GU-CAFF primarily comprises two modules: a multi-level color-feature encoder (MCE) and a white balance-based fusion (WBF) module. The MCE integrates physical models to extract features from underwater images exhibiting different color deviations, emphasizing essential features while preserving structural information. In addition, the WBF module, in conjunction with a statistical model, fuses the features extracted by the encoder and rectifies the color distortion of specific pixels in degraded images. The proposed method is trained once on our developed dataset and exhibits robust generalization on other datasets. Quantitative and qualitative comparisons with several state-of-the-art underwater image enhancement models demonstrate the superior performance of our method in enhancing underwater images. The source code will be available at https://github.com/shiningZZ/GU-CAFF.
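The abstract does not spell out the internals of the WBF module, so the following is only a minimal, hypothetical sketch of the kind of statistical white-balance correction such a module builds on: a gray-world adjustment that scales each color channel so the average scene color becomes neutral, counteracting the bluish or greenish cast. The function name and the synthetic test image below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Apply a gray-world white balance to a float RGB image in [0, 1].

    Each channel is scaled so that its mean matches the global mean,
    pushing the average scene color toward neutral gray.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
    gray = channel_means.mean()                       # target neutral intensity
    gains = gray / (channel_means + 1e-6)             # per-channel correction gains
    return np.clip(img * gains, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic "greenish" underwater frame used purely as a usage example.
    degraded = (np.random.rand(240, 320, 3) * np.array([0.4, 0.9, 0.7])).astype(np.float32)
    balanced = gray_world_white_balance(degraded)
    print(balanced.shape, balanced.mean(axis=(0, 1)))  # channel means are now roughly equal
```

In practice such a statistical correction would be only one ingredient; the paper's WBF module additionally fuses encoder features and targets color distortion at specific pixels.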



Data availability

No datasets were generated or analyzed during the current study.


Acknowledgements

This work is supported by the Natural Science Foundation of China (Grant No. 62202429) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY23F020024.

Author information


Contributions

HX contributed to writing—original draft preparation. PM contributed to conceptualization and methodology. ZL contributed to visualization and investigation. SC contributed to validation, reviewing and editing.

Corresponding author

Correspondence to Pan Mu.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, H., Mu, P., Liu, Z. et al. Underwater image enhancement via color conversion and white balance-based fusion. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03421-3

