Cross-resolution feature attention network for image super-resolution

  • Original article
  • Published in The Visual Computer

Abstract

In recent years, single image super-resolution (SISR) based on convolutional neural networks (CNNs) has been studied extensively. However, most CNN-based methods either mine features at a single resolution or process features at different resolutions in series, which causes a loss of broad context or of spatial details. To address these issues, we design a cross-resolution feature attention network that progressively reconstructs images at different scale factors. Specifically, the reconstruction at each scale factor contains cascaded cross-resolution residual blocks (CRRBs) and a resolution-wise attention block (RAB). A CRRB extracts features at different resolutions in parallel rather than in series, enriching both global contextual information and spatial details. The RAB adaptively captures the importance of features from different resolutions, making feature fusion more effective. We test our network on the Set5, Set14, BSD100, Urban100, and Manga109 datasets for ×2, ×4, and ×8 SR. Experimental results on the five datasets show that our method achieves 0.24 dB, 0.09 dB, and 0.21 dB higher PSNR than MS-LapSRN, MGBP, and RMUN, respectively, for ×8 SR, and also performs favorably against state-of-the-art methods for ×2 and ×4 SR.
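
The full architecture is not reproduced on this preview page, so the following is only a minimal PyTorch sketch of the two ideas named in the abstract: a CRRB-style block that extracts features at two resolutions in parallel, and a RAB-style block that learns per-resolution weights before fusing the streams. The class names, the two-branch layout, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; layer counts, channel widths, and the exact fusion
# rule are assumptions, not the published network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CRRB(nn.Module):
    """Cross-resolution residual block (sketch): two parallel convolutional
    branches, one at the input resolution and one at half resolution."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.high = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.low = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # High-resolution branch preserves spatial detail.
        h = self.high(x)
        # Low-resolution branch enlarges the receptive field (broader context),
        # then is upsampled back to the input size.
        l = self.low(F.avg_pool2d(x, 2))
        l = F.interpolate(l, size=x.shape[-2:], mode="bilinear", align_corners=False)
        # Residual connections; both resolution streams are returned for fusion.
        return x + h, x + l


class RAB(nn.Module):
    """Resolution-wise attention block (sketch): squeeze-and-excitation-style
    weights over the two resolution streams before fusing them."""

    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2), nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, high, low):
        # Global average pooling summarizes each resolution stream.
        desc = torch.cat([high.mean(dim=(2, 3)), low.mean(dim=(2, 3))], dim=1)
        w = self.fc(desc)                       # (B, 2) per-stream weights
        w_h = w[:, 0].view(-1, 1, 1, 1)
        w_l = w[:, 1].view(-1, 1, 1, 1)
        return self.fuse(w_h * high + w_l * low)


# Minimal usage example on a dummy feature map.
if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    high, low = CRRB()(feats)
    fused = RAB()(high, low)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

Per the abstract, the reconstruction at each scale factor cascades several such cross-resolution blocks and applies a resolution-wise attention block to weight and fuse the streams; the sketch above only illustrates the parallel extraction and the adaptive, resolution-wise weighting, not the full progressive ×2/×4/×8 pipeline.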


References

  1. Zhao, X., Hu, X., Liao, Y., He, T., Zhang, T., Zou, X., Tian, J.: Accurate MR image super-resolution via lightweight lateral inhibition network. Comput. Vis. Image Underst. 201(1), 1–9 (2020)

  2. Xu, Y., Wu, Z., Chanussot, J., Wei, Z.: Hyperspectral images super-resolution via learning high-order coupled tensor ring representation. IEEE Trans. Neural Netw. Learn. Syst. 31(11), 4747–4760 (2020)

  3. Pang, Y., Cao, J., Wang, J., Han, J.: JCS-Net: joint classification and super-resolution network for small-scale pedestrian detection in surveillance images. IEEE Trans. Inf. Forensics Secur. 14(12), 3322–3331 (2019)

  4. Anwar, S., Khan, S., Barnes, N.: A deep journey into super-resolution: a survey. ACM Comput. Surv. 53(3), 1–34 (2020)

  5. Wang, Z., Chen, J., Hoi, S.C.: Deep learning for image super-resolution: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 43(10), 3365–3387 (2020)

  6. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision, pp. 184–199 (2014)

  7. Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654 (2016)

  8. Kim, J., Lee, J.K., Lee, K.M.: Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1637–1645 (2016)

  9. Tai, Y., Yang, J., Liu, X.: Image super-resolution via deep recursive residual network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3147–3155 (2017)

  10. Zhou, Y., Du, X., Wang, M., Huo, S., Zhang, Y., Kung, S.Y.: Cross-scale residual network: a general framework for image super-resolution, denoising, and deblocking. IEEE Trans. Cybern. (2021). https://doi.org/10.1109/TCYB.2020.3044374

  11. Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. In: European Conference on Computer Vision, pp. 391–407 (2016)

  12. Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144 (2017)

  13. Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., Fu, Y.: Image super-resolution using very deep residual channel attention networks. In: Proceedings of the European Conference on Computer Vision, pp. 286–301 (2018)

  14. Jiang, Z., Zhu, H., Lu, Y., Ju, G., Men, A.: Lightweight super-resolution using deep neural learning. IEEE Trans. Broadcast. 66(4), 814–823 (2020)

  15. Zhang, Y., Wei, D., Qin, C., Wang, H., Pfister, H., Fu, Y.: Context reasoning attention network for image super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4278–4287 (2021)

  16. Mao, X.J., Shen, C., Yang, Y.B.: Image restoration using convolutional auto-encoders with symmetric skip connections. arXiv preprint arXiv:1606.08921 (2016)

  17. Li, X., He, M., Li, H., Shen, H.: A combined loss-based multiscale fully convolutional network for high-resolution remote sensing image change detection. IEEE Geosci. Remote Sens. Lett. 19(1), 1–5 (2021)

  18. Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1664–1673 (2018)

  19. Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., Wu, W.: Feedback network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3867–3876 (2019)

  20. Michelini, P.N., Liu, H., Zhu, D.: Multigrid backprojection super-resolution and deep filter visualization. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4642–4650 (2019)

  21. Ma, T., Tian, W.: Back-projection-based progressive growing generative adversarial network for single image super-resolution. Vis. Comput. 37(5), 925–938 (2021)

  22. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)

  23. Li, X., Wang, W., Hu, X., Yang, J.: Selective kernel networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 510–519 (2019)

  24. Zhang, K., He, P., Yao, P., Chen, G., Yang, C., Li, H., Fu, L., Zheng, T.: DNANet: de-normalized attention based multi-resolution network for human pose estimation. arXiv preprint arXiv:1909.05090 (2019)

  25. Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., Wang, Z.: Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883 (2016)

  26. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017)

  27. Yang, X., Mei, H., Zhang, J., Xu, K., Yin, B., Zhang, Q., Wei, X.: DRFN: deep recurrent fusion network for single-image super-resolution with large factors. IEEE Trans. Multimed. 21(2), 328–337 (2019)

  28. Yang, X., Zhu, Y., Guo, Y., Zhou, D.: An image super-resolution network based on multi-scale convolution fusion. Vis. Comput. (2021). https://doi.org/10.1007/s00371-021-02297-x

  29. Xie, W., Song, D., Xu, C., Xu, C., Zhang, H., Wang, Y.: Learning frequency-aware dynamic network for efficient super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4308–4317 (2021)

  30. Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 624–632 (2017)

  31. Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Fast and accurate image super-resolution with deep Laplacian pyramid networks. IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2599–2613 (2019)

  32. Chudasama, V., Upla, K., Raja, K., Ramachandra, R., Busch, C.: Compact and progressive network for enhanced single image super-resolution-ComPrESRNet. Vis. Comput. (2021). https://doi.org/10.1007/s00371-021-02193-4

  33. Liu, A., Li, S., Chang, Y.: Image super-resolution using multi-resolution attention network. In: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1610–1614 (2021)

  34. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)

  35. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803 (2018)

  36. Dai, T., Cai, J., Zhang, Y., Xia, S.T., Zhang, L.: Second-order attention network for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11065–11074 (2019)

  37. Shi, W., Du, H., Mei, W., Ma, Z.: (SARN) spatial-wise attention residual network for image super-resolution. Vis. Comput. 37(6), 1569–1580 (2021)

  38. Mei, Y., Fan, Y., Zhou, Y.: Image super-resolution with non-local sparse attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3517–3526 (2021)

  39. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  40. Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016)

  41. Hui, Z., Li, J., Gao, X., Wang, X.: Progressive perception-oriented network for single image super-resolution. Inf. Sci. 546(1), 769–786 (2020)

  42. Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010)

  43. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 416–423 (2001)

  44. Timofte, R., Rothe, R., Van Gool, L.: Seven ways to improve example-based single image super resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1865–1873 (2016)

  45. Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: dataset and study. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2017)

  46. Bevilacqua, M., Roumy, A., Guillemot, C., Alberi-Morel, M.L.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: British Machine Vision Conference (2012)

  47. Zeyde, R., Elad, M., Protter, M.: On single image scale-up using sparse-representations. In: International Conference on Curves and Surfaces, pp. 711–730 (2010)

  48. Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 898–916 (2010)

  49. Huang, J.B., Singh, A., Ahuja, N.: Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5197–5206 (2015)

  50. Matsui, Y., Ito, K., Aramaki, Y., Fujimoto, A., Ogawa, T., Yamasaki, T., Aizawa, K.: Sketch-based manga retrieval using manga109 dataset. Multimed. Tools Appl. 76(20), 21811–21838 (2017)

  51. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)

  52. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 61971306.

Author information

Corresponding author

Correspondence to Sumei Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Liu, A., Li, S. & Chang, Y. Cross-resolution feature attention network for image super-resolution. Vis Comput 39, 3837–3849 (2023). https://doi.org/10.1007/s00371-022-02519-w

