
One-Pot Multi-frame Denoising

Published in: International Journal of Computer Vision

Abstract

The efficacy of learning-based denoising techniques relies heavily on the quality of clean supervision. Unfortunately, acquiring clean images is challenging in many scenarios. In contrast, capturing multiple noisy frames of the same field of view is feasible and natural in real-life settings. It is therefore worthwhile to exploit noisy data for model training and avoid the limitations imposed by clean labels. In this paper, we propose a novel unsupervised learning strategy called one-pot denoising (OPD), the first unsupervised multi-frame denoising method. OPD differs from traditional supervision schemes, such as supervised Noise2Clean (N2C), unsupervised Noise2Noise, and self-supervised Noise2Void, in that it employs mutual supervision among all of the multiple frames. This provides learning with more diverse supervision and allows models to better exploit the correlation among frames. Notably, we reveal that Noise2Noise is a special case of the proposed OPD. We provide two specific implementations, OPD-random coupling and OPD-alienation loss, which realize OPD during model training through data allocation and loss refinement, respectively. Our experiments demonstrate that OPD outperforms other unsupervised denoising methods and is comparable to non-transformer-based supervised N2C methods for several classic noise patterns, including additive white Gaussian noise, signal-dependent Poisson noise, and multiplicative Bernoulli noise. OPD also shows remarkable performance on more challenging tasks such as mixed-blind denoising, random-valued impulse noise removal, and text removal. The source code and pre-trained models are available at https://github.com/LujiaJin/One-Pot_Multi-Frame_Denoising.
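To make the training scheme concrete, the following minimal PyTorch sketch illustrates the random-coupling idea described above, assuming that several noisy captures of one scene are available as a single tensor. The names opd_random_coupling_step, denoiser, and frames are hypothetical; this is a sketch of the idea, not the authors' released implementation (see the repository linked above for that). At each step, two distinct noisy frames are drawn at random, one acting as the network input and the other as the supervision target; with only two frames per scene this reduces to Noise2Noise.

import torch
import torch.nn as nn

def opd_random_coupling_step(denoiser: nn.Module,
                             frames: torch.Tensor,
                             optimizer: torch.optim.Optimizer) -> float:
    # frames: (N, C, H, W), N noisy captures of the same field of view.
    n = frames.shape[0]
    idx = torch.randperm(n)[:2]                  # randomly couple two distinct frames
    noisy_input = frames[idx[0]].unsqueeze(0)    # one frame serves as the input
    noisy_target = frames[idx[1]].unsqueeze(0)   # another frame serves as the noisy target

    optimizer.zero_grad()
    prediction = denoiser(noisy_input)
    loss = nn.functional.mse_loss(prediction, noisy_target)  # noisy-to-noisy supervision
    loss.backward()
    optimizer.step()
    return loss.item()

Iterating this step over all scenes lets every frame eventually supervise every other frame of the same scene, which is the mutual-supervision property the abstract attributes to OPD.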




Acknowledgements

This work was supported by the Natural Science Foundation of China (82371112), Beijing Natural Science Foundation (Z210008), Peking University Medicine Sailing Program for Young Scholars’ Scientific & Technological Innovation (BMU2023YFJHMX007), and the Shenzhen Science and Technology Program, China (KQTD20180412181221912, JCYJ20200109140603831).

Author information


Corresponding author

Correspondence to Yanye Lu.

Additional information

Communicated by Guang Yang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 9756 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jin, L., Guo, Q., Zhao, S. et al. One-Pot Multi-frame Denoising. Int J Comput Vis 132, 515–536 (2024). https://doi.org/10.1007/s11263-023-01887-7

