LRB-T: local reasoning back-projection transformer for the removal of bad weather effects in images

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In computer vision, transformers have proven increasingly effective for high-level vision tasks. To extend this effectiveness to low-level vision, we propose a general framework, the local reasoning back-projection transformer (LRB-T), for removing multiple types of bad weather effects (rain, haze, rain fog, raindrops, etc.) from images. Specifically, this paper first integrates the back-projection mechanism into the transformer architecture: iterative up- and down-projection modules feed back and correct feature reconstruction errors to preserve spatial information, and reduce computational cost because nearly half of the feature maps participate in back-projection at half resolution. In addition, the proposed adaptive local reasoning block captures important neighborhood information through multiple local reasoning schemes: it aggregates adjacent tokens to produce spatially specific involution kernels, attention weights and dynamic positional encodings for local structure updating, and provides an implicit spatial feature transform to achieve spatial-wise feature modulation. A pyramid scale guidance module is also established to generate outputs of arbitrary size consistent with the input and to produce scale-dependent trainable parameters that enhance skip connections. Extensive experiments on four types of well-known bad weather datasets show that the proposed LRB-T effectively improves image deraining, dehazing, de-rain fog and de-raindrop performance in terms of PSNR and SSIM, and outperforms state-of-the-art task-specific bad weather removal methods.
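
As a concrete illustration of the back-projection mechanism described above, the following is a minimal sketch, assuming a PyTorch-style implementation in the spirit of deep back-projection networks [15, 39]; it is not the authors' code, and the module names, layer types and kernel sizes are hypothetical.

# Minimal sketch of iterative up-/down-projection (hypothetical, not the authors' code).
# An up-projection upsamples a half-resolution feature map, projects the estimate back
# down, and uses the resulting reconstruction error to correct the upsampled features;
# the down-projection mirrors this. Layer choices are assumptions for illustration.
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    """Half-resolution features -> error-corrected full-resolution features."""
    def __init__(self, channels):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.down = nn.Conv2d(channels, channels, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.act = nn.PReLU()

    def forward(self, low):
        high = self.act(self.up1(low))            # initial full-resolution estimate
        low_back = self.act(self.down(high))      # project the estimate back to half resolution
        error = low_back - low                    # reconstruction error at the cheaper resolution
        return high + self.act(self.up2(error))   # feed the error back to refine the estimate

class DownProjection(nn.Module):
    """Full-resolution features -> error-corrected half-resolution features (mirror of the above)."""
    def __init__(self, channels):
        super().__init__()
        self.down1 = nn.Conv2d(channels, channels, 4, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.down2 = nn.Conv2d(channels, channels, 4, stride=2, padding=1)
        self.act = nn.PReLU()

    def forward(self, high):
        low = self.act(self.down1(high))
        high_back = self.act(self.up(low))
        error = high_back - high
        return low + self.act(self.down2(error))

if __name__ == "__main__":
    up, down = UpProjection(32), DownProjection(32)
    x = torch.randn(1, 32, 64, 64)   # half-resolution feature map
    y = up(x)                        # torch.Size([1, 32, 128, 128])
    z = down(y)                      # torch.Size([1, 32, 64, 64])
    print(y.shape, z.shape)

The point of the sketch is that the reconstruction error is computed at the cheaper, lower resolution and then projected back to correct the estimate, which is how back-projection can preserve spatial detail while keeping roughly half of the feature maps at half resolution.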


Data availability

The authors confirm that the data supporting the findings of this study are available within the paper. The publicly available datasets used in this study are the rain datasets Rain100L [19], Rain100H [19], DDN-data [17], DID-data [18] and RealRain-data [50]; the haze datasets SOTS [51], HazeRD [52], NTIRE2018 [53] and URHI [51]; the rain fog dataset rain fog-data [35]; and the raindrop dataset RainDrop-data [24].

Notes

  1. https://xueyangfu.github.io/projects/LPNet.html.

References

  1. Fu X, Qi Q, Zha Z-J, Zhu Y, Ding X (2021) Rain streak removal via dual graph convolutional network. In: Proceedings of the AAAI conference on artificial intelligence (AAAI), vol 35, pp 1352–1360

  2. Jiang K, Wang Z, Yi P, Chen C, Huang B, Luo Y, Ma J, Jiang J (2020) Multi-scale progressive fusion network for single image deraining. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 8346–8355

  3. Wu H, Qu Y, Lin S, Zhou J, Qiao R, Zhang Z, Xie Y, Ma L (2021) Contrastive learning for compact single image dehazing. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 10551–10560

  4. Dong H, Pan J, Xiang L, Hu Z, Zhang X, Wang F, Yang M-H (2020) Multi-scale boosted dehazing network with dense feature fusion. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2157–2167

  5. Wang Y, Song Y, Ma C, Zeng B (2020) Rethinking image deraining via rain streaks and vapors. In: Proceedings of the European conference on computer vision (ECCV), pp 367–382

  6. Shao M-W, Li L, Meng D-Y, Zuo W-M (2021) Uncertainty guided multi-scale attention network for raindrop removal from a single image. IEEE Trans Image Process 30:4828–4839

  7. Zhang L, Zhou Y, Hu X, Sun F, Duan S (2022) MSL-MNN: image deraining based on multi-scale lightweight memristive neural network. Neural Comput Appl 34:7299–7309

  8. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. In: Proceedings of the advances in neural information processing systems (NIPS), pp 5998–6008

  9. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, et al (2021) An image is worth 16x16 words: transformers for image recognition at scale. In: Proceedings of the international conference on learning representations (ICLR)

  10. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE conference on computer vision (ICCV), pp 10012–10022

  11. Wang W, Xie E, Li X, Fan D-P, Song K, Liang D, Lu T, Luo P, Shao L (2021) Pyramid vision transformer: a versatile backbone for dense prediction without convolutions. In: Proceedings of the IEEE conference on computer vision (ICCV), pp 568–578

  12. Chen H, Wang Y, Guo T, Xu C, Deng Y, Liu Z, Ma S, Xu C, Xu C, Gao W (2021) Pre-trained image processing transformer. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 12299–12310

  13. Wang Z, Cun X, Bao J, Liu J (2022) Uformer: a general U-shaped transformer for image restoration. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

  14. Touvron H, Cord M, Douze M, Massa F, Sablayrolles A, Jégou H (2021) Training data-efficient image transformers and distillation through attention. In: Proceedings of the international conference on machine learning (ICML). PMLR, pp 10347–10357

  15. Haris M, Shakhnarovich G, Ukita N (2018) Deep back-projection networks for super-resolution. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1664–1673

  16. Wang X, Yu K, Dong C, Loy CC (2018) Recovering realistic texture in image super-resolution by deep spatial feature transform. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 606–615

  17. Fu X, Huang J, Zeng D, Huang Y, Ding X, Paisley J (2017) Removing rain from single images via a deep detail network. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 3855–3863

  18. Zhang H, Patel VM (2018) Density-aware single image de-raining using a multi-stream dense network. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 695–704

  19. Yang W, Tan RT, Feng J, Guo Z, Yan S, Liu J (2019) Joint rain detection and removal from a single image with contextualized deep networks. IEEE Trans Pattern Anal Mach Intell 42(6):1377–1393

  20. Deng S, Wei M, Wang J, Liang L, Xie H, Wang M (2020) DRD-Net: detail-recovery image deraining via context aggregation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 14560–14569

  21. Gao F, Mu X, Ouyang C, Yang K, Ji S, Guo J, Wei H, Wang N, Ma L, Yang B (2022) MLTDNet: an efficient multi-level transformer network for single image deraining. Neural Comput Appl 34:14013–14027

  22. Quan Y, Deng S, Chen Y, Ji H (2019) Deep learning for seeing through window with raindrops. In: Proceedings of the IEEE conference on computer vision (ICCV), pp 2463–2471

  23. Quan R, Yu X, Liang Y, Yang Y (2021) Removing raindrops and rain streaks in one go. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 9147–9156

  24. Qian R, Tan RT, Yang W, Su J, Liu J (2018) Attentive generative adversarial network for raindrop removal from a single image. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2482–2491

  25. Hao Z, You S, Li Y, Li K, Lu F (2019) Learning from synthetic photorealistic raindrop for single image raindrop removal. In: Proceedings of the IEEE conference on computer vision (ICCV)

  26. Zhang K, Li D, Luo W, Ren W (2021) Dual attention-in-attention model for joint rain streak and raindrop removal. IEEE Trans Image Process 30:7608–7619

  27. Guo T, Li X, Cherukuri V, Monga V (2019) Dense scene information estimation network for dehazing. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

  28. Guo T, Cherukuri V, Monga V (2019) Dense ‘123’ color enhancement dehazing network. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

  29. Shao Y, Li L, Ren W, Gao C, Sang N (2020) Domain adaptation for image dehazing. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2808–2817

  30. Dong J, Pan J (2020) Physics-based feature dehazing networks. In: Proceedings of the European conference on computer vision (ECCV), pp 188–204

  31. Qin X, Wang Z, Bai Y, Xie X, Jia H (2020) FFA-Net: feature fusion attention network for single image dehazing. In: Proceedings of the AAAI conference on artificial intelligence (AAAI), vol 34, pp 11908–11915

  32. Chen Z, Wang Y, Yang Y, Liu D (2021) PSD: principled synthetic-to-real dehazing guided by physical priors. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 7180–7189

  33. Yi W, Dong L, Liu M, Zhao Y, Hui M, Kong L (2022) DCNet: dual-cascade network for single image dehazing. Neural Comput Appl 34:16771–16783

  34. Chen J, Yang G, Xia M, Zhang D (2022) From depth-aware haze generation to real-world haze removal. Neural Comput Appl

  35. Li R, Cheong L-F, Tan RT (2019) Heavy rain image restoration: integrating physics model and conditional adversarial learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1633–1642

  36. Li R, Tan RT, Cheong L-F (2020) All in one bad weather removal using architectural search. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 3175–3185

  37. Chen W-T, Huang Z-K, Tsai C-C, Yang H-H, Ding J-J, Kuo S-Y (2022) Learning multiple adverse weather removal via two-stage knowledge learning and multi-contrastive regularization: toward a unified model. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 17653–17662

  38. Valanarasu JMJ, Yasarla R, Patel VM (2022) Transweather: transformer-based restoration of images degraded by adverse weather conditions. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2353–2363

  39. Haris M, Shakhnarovich G, Ukita N (2018) Deep back-projection networks for super-resolution. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1664–1673

  40. Liu Z-S, Wang L-W, Li C-T, Siu W-C (2019) Hierarchical back projection network for image super-resolution. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

  41. Chen X, Huang Y, Xu L (2021) Multi-scale hourglass hierarchical fusion network for single image deraining. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 872–879

  42. Chu X, Tian Z, Zhang B, Wang X, Wei X, Xia H, Shen C (2021) Conditional positional encodings for vision transformers. arXiv:2102.10882

  43. Han K, Xiao A, Wu E, Guo J, Xu C, Wang Y (2021) Transformer in transformer. In: Proceedings of the advances in neural information processing systems (NIPS)

  44. Yuan L, Chen Y, Wang T, Yu W, Shi Y, Jiang Z-H, Tay FE, Feng J, Yan S (2021) Tokens-to-token ViT: training vision transformers from scratch on ImageNet. In: Proceedings of the IEEE conference on computer vision (ICCV), pp 558–567

  45. Zhao D, Li J, Li H, Xu L (2021) Hybrid local-global transformer for image dehazing. arXiv:2109.07100

  46. Zamir SW, Arora A, Khan S, Hayat M, Khan FS, Yang M-H, Shao L (2020) Learning enriched features for real image restoration and enhancement. In: Proceedings of the European conference on computer vision (ECCV), pp 492–511

  47. Li D, Hu J, Wang C, Li X, She Q, Zhu L, Zhang T, Chen Q (2021) Involution: inverting the inherence of convolution for visual recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 12321–12330

  48. Xiao J, Fu X, Liu A, Wu F, Zha Z-J (2022) Image de-raining transformer. IEEE Trans Pattern Anal Mach Intell

  49. Liu X, Ma Y, Shi Z, Chen J (2019) GridDehazeNet: attention-based multi-scale network for image dehazing. In: Proceedings of the IEEE conference on computer vision (ICCV), pp 7314–7323

  50. Fu X, Liang B, Huang Y, Ding X, Paisley J (2019) Lightweight pyramid networks for image deraining. IEEE Trans Neural Netw Learn Syst 31(6):1794–1807

  51. Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z (2018) Benchmarking single-image dehazing and beyond. IEEE Trans Image Process 28(1):492–505

  52. Zhang Y, Ding L, Sharma G (2017) HazeRD: an outdoor scene dataset and benchmark for single image dehazing. In: Proceedings of the international conference on image processing (ICIP), pp 3205–3209

  53. Ancuti C, Ancuti CO, Timofte R (2018) NTIRE 2018 challenge on image dehazing: methods and results. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 891–901

  54. Mittal A, Soundararajan R, Bovik AC (2012) Making a "completely blind" image quality analyzer. IEEE Signal Process Lett 20(3):209–212

  55. Paulson RM, Gopalakrishnan S, Mahendiran S, Srambical VP, Gopan NR (2022) A hybrid fusion-based algorithm for underwater image enhancement using fog aware density evaluator and mean saturation. In: Proceedings of the international conference on innovative computing and communication (ICICC), pp 129–140

  56. Min X, Zhai G, Gu K, Yang X, Guan X (2018) Objective quality evaluation of dehazed images. IEEE Trans Intell Transp Syst 20(8):2879–2892

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 61872143.

Author information

Corresponding author

Correspondence to Hongqing Zhu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, P., Zhu, H., Zhang, H. et al. LRB-T: local reasoning back-projection transformer for the removal of bad weather effects in images. Neural Comput & Applic 36, 773–789 (2024). https://doi.org/10.1007/s00521-023-09059-x

  • DOI: https://doi.org/10.1007/s00521-023-09059-x
