
Feature Representation Matters: End-to-End Learning for Reference-Based Image Super-Resolution

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12349)

Abstract

In this paper, we aim for a general reference-based super-resolution setting: it requires neither that the low-resolution image and the high-resolution reference image be well aligned, nor that they share similar textures. Instead, we only intend to transfer the relevant textures from the reference image to the output super-resolution image. To this end, we employ neural texture transfer to swap texture features between the low-resolution image and the high-resolution reference image. We identify the importance of designing super-resolution task-specific features, rather than classification-oriented features, for neural texture transfer; this makes the feature extractor more compatible with the image synthesis task. We develop an end-to-end training framework for the reference-based super-resolution task, in which the feature encoding network preceding the matching and swapping module is jointly trained with the image synthesis network. We also find that learning the high-frequency residual is an effective strategy for reference-based super-resolution. Without bells and whistles, the proposed method E2ENT\(^2\) outperforms the state-of-the-art method (i.e., SRNTT with five loss functions) while using only two basic loss functions. Extensive experimental results on several datasets demonstrate that E2ENT\(^2\) achieves superior performance to existing best models both quantitatively and qualitatively.
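To make the matching-and-swapping step concrete, below is a minimal PyTorch sketch of patch-wise feature matching and swapping; it is not the authors' released implementation. For brevity it matches and swaps at a single feature scale, whereas reference-based methods typically match at the low-resolution scale and then swap the corresponding high-resolution reference patches. The function name match_and_swap, the patch size, and all tensor shapes are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def match_and_swap(lr_feat, ref_feat, patch=3, eps=1e-6):
        # lr_feat, ref_feat: (1, C, H, W) feature maps produced by the
        # (jointly trained) encoder for the LR input and the HR reference.
        pad = patch // 2
        # Unfold every reference patch and use it as a correlation kernel.
        ref_cols = F.unfold(ref_feat, patch, padding=pad)            # (1, C*p*p, L)
        L = ref_cols.size(-1)
        kernels = ref_cols[0].t().reshape(L, -1, patch, patch)       # (L, C, p, p)
        norms = kernels.flatten(1).norm(dim=1).clamp_min(eps)
        # Correlate LR features against all (normalized) reference patches.
        scores = F.conv2d(lr_feat, kernels / norms.view(-1, 1, 1, 1), padding=pad)
        best = scores.argmax(dim=1).flatten()                        # (H*W,) best match per location
        # Swap: paste the best-matching reference patch at every location,
        # then average overlapping contributions with fold().
        swapped = ref_cols[0][:, best].unsqueeze(0)                  # (1, C*p*p, H*W)
        size = lr_feat.shape[-2:]
        out = F.fold(swapped, size, patch, padding=pad)
        weight = F.fold(torch.ones_like(swapped), size, patch, padding=pad)
        return out / weight

Likewise, the high-frequency residual idea from the abstract can be illustrated by a synthesis head that predicts only a residual on top of an upsampled base image. Again, this is a hedged sketch under assumed shapes, not the paper's architecture; ResidualHead is a hypothetical name.

    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualHead(nn.Module):
        # Minimal illustration of high-frequency residual learning: the
        # network output is added to a bicubic-upsampled base image.
        def __init__(self, channels=64, scale=4):
            super().__init__()
            self.scale = scale
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )

        def forward(self, lr):
            base = F.interpolate(lr, scale_factor=self.scale,
                                 mode='bicubic', align_corners=False)
            return base + self.body(base)  # base + predicted HF residual

For example, ResidualHead()(torch.randn(1, 3, 32, 32)) yields a 128×128 output at scale 4. Predicting only the residual lets the network spend its capacity on the missing high-frequency texture, since the low-frequency content is already supplied by the upsampled base.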

Keywords

Super-resolution · Reference-based · Feature matching · Feature swapping · CUFED5 · Flickr1024

Notes

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grants 61972323, 61902022, and 61876155, and by the Key Program Special Fund in XJTLU under Grants KSF-T-02, KSF-P-02, KSF-A-01, and KSF-E-26.

References

  1. Ahn, N., Kang, B., Sohn, K.A.: Fast, accurate, and lightweight super-resolution with cascading residual network. In: Proceedings of the European Conference on Computer Vision, pp. 252–268 (2018)
  2. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)
  3. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2015)
  4. Freedman, G., Fattal, R.: Image and video upscaling from local self-examples. ACM Trans. Graph. 30(2), 1–11 (2011)
  5. Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)
  6. Jeon, D.S., Baek, S.H., Choi, I., Kim, M.H.: Enhancing the spatial resolution of stereo images using a parallax prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1721–1730 (2018)
  7. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 694–711. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_43
  8. Jolicoeur-Martineau, A.: The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734 (2018)
  9. Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654 (2016)
  10. Kim, J., Kwon Lee, J., Mu Lee, K.: Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1637–1645 (2016)
  11. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017)
  12. Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144 (2017)
  13. Sajjadi, M.S., Scholkopf, B., Hirsch, M.: EnhanceNet: single image super-resolution through automated texture synthesis. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4491–4500 (2017)
  14. Salvador, J.: Example-Based Super Resolution. Academic Press (2016)
  15. Sun, L., Hays, J.: Super-resolution from internet-scale scene matching. In: 2012 IEEE International Conference on Computational Photography, pp. 1–12. IEEE (2012)
  16. Timofte, R., De Smet, V., Van Gool, L.: Anchored neighborhood regression for fast example-based super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1920–1927 (2013)
  17. Wang, L., et al.: Learning parallax attention for stereo image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12250–12259 (2019)
  18. Wang, Y., Wang, L., Yang, J., An, W., Guo, Y.: Flickr1024: a large-scale dataset for stereo image super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision Workshops (2019)
  19. Wang, Y., Lin, Z., Shen, X., Mech, R., Miller, G., Cottrell, G.W.: Event-specific image importance. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  20. Wang, Y., Liu, Y., Heidrich, W., Dai, Q.: The light field attachment: turning a DSLR into a light field camera using a low budget camera ring. IEEE Trans. Visual Comput. Graphics 23(10), 2357–2364 (2016)
  21. Yue, H., Sun, X., Yang, J., Wu, F.: Landmark image super-resolution by retrieving web images. IEEE Trans. Image Process. 22(12), 4865–4878 (2013)
  22. Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., Fu, Y.: Image super-resolution using very deep residual channel attention networks. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 294–310. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_18
  23. Zhang, Z., Wang, Z., Lin, Z., Qi, H.: Image super-resolution by neural texture transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7982–7991 (2019)
  24. Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2016)
  25. Zheng, H., Guo, M., Wang, H., Liu, Y., Fang, L.: Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 2481–2486 (2017)
  26. Zheng, H., Ji, M., Han, L., Xu, Z., Wang, H., Liu, Y., Fang, L.: Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution. In: Proceedings of the British Machine Vision Conference (2017)
  27. Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.): ECCV 2018. LNCS, vol. 11210. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01231-1

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou, China
  2. University of Science and Technology Beijing, Beijing, China
  3. Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Hangzhou, China
