PIRM2018 Challenge on Spectral Image Super-Resolution: Methods and Results

  • Mehrdad Shoeiby
  • Antonio Robles-Kelly
  • Radu Timofte
  • Ruofan Zhou
  • Fayez Lahoud
  • Sabine Süsstrunk
  • Zhiwei Xiong
  • Zhan Shi
  • Chang Chen
  • Dong Liu
  • Zheng-Jun Zha
  • Feng Wu
  • Kaixuan Wei
  • Tao Zhang
  • Lizhi Wang
  • Ying Fu
  • Koushik Nagasubramanian
  • Asheesh K. Singh
  • Arti Singh
  • Soumik Sarkar
  • Baskar Ganapathysubramanian
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11133)

Abstract

In this paper, we describe the Perceptual Image Restoration and Manipulation (PIRM) workshop challenge on spectral image super-resolution, motivate its structure and summarise the results obtained by the participants. The challenge is one of the first of its kind, aiming to leverage modern machine learning techniques for spectral image super-resolution. It comprises two tracks. The first (Track 1) addresses example-based single spectral image super-resolution; the second (Track 2) addresses colour-guided spectral image super-resolution. Track 1 thus focuses on super-resolving the spatial resolution of spectral images given training pairs of low- and high-spatial-resolution spectral images. Track 2, on the other hand, aims to leverage the inherently higher spatial resolution of colour (RGB) cameras and the link between spectral and trichromatic images of the scene. In both tracks, the challenge is to recover a super-resolved image from low-resolution input imagery. We also elaborate on the methods used by the participants, summarise their results and discuss the rankings.
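To make the two track settings concrete, the sketch below illustrates their input/output shapes with a deliberately naive baseline: per-band nearest-neighbour upsampling for Track 1, and a hypothetical luminance-modulated fusion with a registered high-resolution RGB image for Track 2. The ×3 scale, 14-band cube, and the fusion rule are illustrative assumptions, not the challenge protocol or any participant's method.

```python
import numpy as np

def upsample_spectral(cube, scale=3):
    """Track 1 sketch: naive per-band nearest-neighbour upsampling.

    cube: (H, W, C) low-resolution spectral image.
    Returns an (H*scale, W*scale, C) cube; a shape-only baseline.
    """
    # Kronecker product replicates each pixel into a scale x scale block.
    return np.kron(cube, np.ones((scale, scale, 1), dtype=cube.dtype))

def guided_upsample(cube, rgb, scale=3):
    """Track 2 sketch: modulate the naive upsampling with a luminance
    cue from a registered high-resolution RGB image (hypothetical fusion).

    rgb: (H*scale, W*scale, 3) colour image of the same scene.
    """
    up = upsample_spectral(cube, scale)
    luma = rgb.mean(axis=2, keepdims=True)       # (H*scale, W*scale, 1)
    # Scale every band by the normalised high-resolution luminance.
    return up * (luma / (luma.mean() + 1e-8))

lr = np.random.rand(16, 16, 14)                  # 14-band LR spectral cube
hr_rgb = np.random.rand(48, 48, 3)               # registered HR RGB image
print(upsample_spectral(lr).shape)               # (48, 48, 14)
print(guided_upsample(lr, hr_rgb).shape)         # (48, 48, 14)
```

Challenge entries replace these trivial operators with learned networks; only the data layout shown here matches the task description.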

Keywords

Super-resolution · Multispectral · Hyperspectral · RGB · Stereo

Notes

Acknowledgement

The PIRM2018 challenge was sponsored by CSIRO’s DATA61, Deakin University, ETH Zurich, HUAWEI, and MediaTek.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Mehrdad Shoeiby (1, corresponding author)
  • Antonio Robles-Kelly (2)
  • Radu Timofte (3)
  • Ruofan Zhou (4)
  • Fayez Lahoud (4)
  • Sabine Süsstrunk (4)
  • Zhiwei Xiong (5)
  • Zhan Shi (5)
  • Chang Chen (5)
  • Dong Liu (5)
  • Zheng-Jun Zha (5)
  • Feng Wu (5)
  • Kaixuan Wei (6)
  • Tao Zhang (6)
  • Lizhi Wang (6)
  • Ying Fu (6)
  • Koushik Nagasubramanian (7)
  • Asheesh K. Singh (7)
  • Arti Singh (7)
  • Soumik Sarkar (7)
  • Baskar Ganapathysubramanian (7)

  1. DATA61 - CSIRO, Black Mountain Laboratories, Acton, Australia
  2. Faculty of Science, Engineering and Built Environment, Deakin University, Waurn Ponds, Australia
  3. Computer Vision Laboratory, D-ITET, ETH Zurich, Zurich, Switzerland
  4. Image and Visual Representation Laboratory, EPFL, Lausanne, Switzerland
  5. University of Science and Technology of China, Hefei, China
  6. Beijing Institute of Technology, Beijing, China
  7. Lab of Mechanics, Iowa State University, Ames, USA