Abstract
Aiming at the problem of image motion blur caused by handheld camera jitter and object motion during photo capture, a generative adversarial network (GAN) based on back-projection feature fusion is proposed for blind image deblurring. First, the generator is built on a U-Net structure, and a back-projection-based feature fusion residual block is designed according to the error feedback principle, which addresses the loss of spatial information in the U-Net structure. Second, a self-attention module is introduced into the generator to extract feature maps that attend more closely to details. Finally, the combination of perceptual loss, mean squared error loss, and relativistic adversarial loss effectively alleviates the mode collapse problem of traditional GANs and improves the stability of model training. Experimental results show that the proposed method achieves a peak signal-to-noise ratio of 30.183 dB and a structural similarity of 0.941 on the GoPro dataset, and 26.962 dB and 0.837 on the Köhler dataset, with the shortest running time, outperforming existing state-of-the-art methods. The restored images are clearer, yield better visual results, and are richer in texture details, effectively improving the image deblurring effect.
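The combined objective above pairs perceptual and mean-squared-error terms with a relativistic adversarial term. The following is a minimal NumPy sketch of the relativistic average discriminator loss from Jolicoeur-Martineau's formulation, in which each batch of critic scores is compared against the mean score of the opposing batch; the function name and the example logits in the usage note are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def sigmoid(x):
    """Logistic sigmoid, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-x))


def relativistic_avg_d_loss(c_real, c_fake, eps=1e-12):
    """Relativistic average discriminator loss.

    c_real: raw critic scores C(x) for a batch of sharp images.
    c_fake: raw critic scores C(x) for a batch of deblurred outputs.

    Real images should score above the *average* fake score, and
    fake images below the *average* real score; eps guards log(0).
    """
    real_term = -np.mean(np.log(sigmoid(c_real - c_fake.mean()) + eps))
    fake_term = -np.mean(np.log(1.0 - sigmoid(c_fake - c_real.mean()) + eps))
    return real_term + fake_term
```

For well-separated scores such as `c_real = [5, 5]` and `c_fake = [-5, -5]`, the loss is close to zero; with the batches swapped it grows large, which is the gradient signal the relativistic discriminator supplies to the generator.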
Data availability
The datasets generated during the current study are not publicly available due to funding restrictions but are available from the corresponding author upon reasonable request.
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grants 61772396 and 61902296, the Natural Science Foundation of Shaanxi Province of China under Grant 2022JM-369, and the Guangxi Key Laboratory of Trusted Software Research Project under Grant KX202061.
Author information
Ethics declarations
Conflict of interest
The authors declare that they have no relevant competing interests.
Ethics approval and consent to participate
This article does not involve ethical issues, and all authors consent to participate.
Consent for publication
We have read and understood the publication policy and agree to submit this manuscript.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Li, C., Kong, W., Xue, J. et al. Image blind deblurring networks with back-projection feature fusion. SIViP 17, 2063–2071 (2023). https://doi.org/10.1007/s11760-022-02420-y