
MFFN: image super-resolution via multi-level features fusion network

Original article · The Visual Computer

Abstract

Deep convolutional neural networks can effectively improve the performance of single-image super-resolution reconstruction, and deeper networks tend to achieve better reconstruction quality than shallower ones. However, deep CNNs carry a dramatic increase in parameter count, limiting their application on embedded and resource-constrained devices such as smartphones. To address common problems in image super-resolution algorithms, namely blurred image edges, inflexible selection of convolution kernel size, and slow convergence during training caused by redundant network structure, this paper proposes a lightweight single-image super-resolution network that fuses multi-level features. Its main components are two-level nested residual blocks. To extract features more effectively and reduce the number of parameters, each residual block adopts an asymmetric structure: the number of channels is first expanded twice and then compressed twice. In addition, an autocorrelation weight unit inside each residual block weights and fuses the feature information of different channels. The reconstructed images of the proposed method surpass those of existing super-resolution methods in both subjective perception and objective evaluation metrics, and the advantage grows as the upscaling factor increases.
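
To make the block design concrete, the following is a minimal PyTorch sketch of the two ideas the abstract describes: an asymmetric residual block that expands the channel count twice and then compresses it twice, and a channel-weighting unit that fuses feature information across channels before the residual addition. All module names, channel ratios, and kernel sizes here are illustrative assumptions, not the authors' published implementation; in particular, the autocorrelation weight unit is approximated by a squeeze-and-excitation-style gate.

```python
# Hypothetical sketch of the abstract's block design; names, channel
# ratios, and kernel sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelWeightUnit(nn.Module):
    """Weights each channel by a learned, input-dependent scalar
    (a squeeze-and-excitation-style stand-in for the paper's
    autocorrelation weight unit)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global per-channel statistics
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # weighted fusion across channels


class AsymmetricResidualBlock(nn.Module):
    """Expands the channel count twice, then compresses it twice,
    with a skip connection around the whole block."""

    def __init__(self, channels: int, expand: int = 2):
        super().__init__()
        mid, wide = channels * expand, channels * expand * expand
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 3, padding=1),  # first expansion
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, wide, 3, padding=1),      # second expansion
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, mid, 1),                 # first compression (1x1)
            nn.Conv2d(mid, channels, 1),             # second compression (1x1)
            ChannelWeightUnit(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                      # residual connection


# Quick shape check on a dummy feature map.
block = AsymmetricResidualBlock(channels=32)
print(block(torch.randn(1, 32, 48, 48)).shape)       # torch.Size([1, 32, 48, 48])
```

Stacking several such blocks inside an outer skip connection would give the two-level nested residual structure the abstract mentions; the expand-then-compress shape keeps the widest convolutions local to each block, which is what holds the parameter count down.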

Data availability

All data generated or analyzed during this study are included in this published article and its supplementary information files. The authors thank the providers of the DIV2K, Set5, Set14, BSD100, Urban100, and Manga109 datasets, which were used to train and evaluate the proposed model.

Funding

This work was supported by the Natural Science Foundation of Hunan Province of China under Grant 2020JJ4623; the Changsha Major Science and Technology Projects under Grants KQ2102007, KQ1703018, and KQ1706064; the Scientific Research Fund of Hunan Provincial Education Department under Grant 22A0701; the Scientific Research Project of Hunan University of Information Technology under Grant XXY02ZD01; the College Students' Innovative Entrepreneurial Training Plan Program of Hunan University of Information Technology under Grant X202213836002; the Smart Manufacturing Barcode Traceability Management System under Grants 20224301020010 and CON202204070272 (Hunan WUJO High-Tech Material Corporation Limited); the University-Industry Collaborative Education Program under Grants 202102536008 and 221003279124130; the China University Innovation Funding - Beslin Smart Education Project under Grant 2022BL055; the Teaching Reform Research Fund of Hunan Province General Higher Education Schools under Grant HNJG-2022-1335; and the 2022 Part-time Vice President of Science and Technology of Changsha Enterprises program of the Changsha Science and Technology Bureau.

Author information

Corresponding author

Correspondence to Yuantao Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chen, Y., Xia, R., Yang, K. et al. MFFN: image super-resolution via multi-level features fusion network. Vis Comput 40, 489–504 (2024). https://doi.org/10.1007/s00371-023-02795-0
