A novel image denoising algorithm based on least square generative adversarial network

  • Research
  • Published in: Journal of Real-Time Image Processing

Abstract

In recent years, computer vision models have shown significant performance improvements on a wide range of image analysis tasks. However, these models are not robust to noisy images. Noise has many causes, including image-capture conditions such as lighting and weather, as well as the physical measurement devices themselves, such as the camera or sensor. Various works have been proposed to address noise in images, but most of them target models that run on high-compute devices such as mainframes. With recent advances in deploying AI on mobile and edge devices, however, there is a growing need for models that work equally well in such settings. In this work, a novel image-denoising algorithm based on the Least Square Generative Adversarial Network (LSGAN) is proposed, designed to operate under low-compute conditions such as those found on edge devices. The generator of the LSGAN is a UNet-based architecture that captures detailed low-level features while preserving image information. The discriminator is a fully connected network, which offers faster convergence and greater training stability when training with a composite loss function formed from the least-squares loss, a visual perception (perceptual) loss, and the mean squared error loss. The proposed model has been evaluated on publicly available denoising datasets, including PolyU, CBSD68, DIV2K, and SIDD, under low-compute conditions. The results show a considerable improvement over widely deployed methods for low-compute devices.


Data Availability Statement

The code and network models generated during the current study are available from the corresponding author upon reasonable request.


Author information


Corresponding author

Correspondence to Brindha Murugan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest to disclose. All authors participated in writing this paper; the work is original and has not been published elsewhere. This manuscript has not been submitted to, nor is it under review at, any other journal or publishing venue.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Mohammed, S.W., Murugan, B. A novel image denoising algorithm based on least square generative adversarial network. J Real-Time Image Proc 21, 79 (2024). https://doi.org/10.1007/s11554-024-01447-3

