Abstract
Images captured under incorrect exposures unavoidably suffer from mixed degradations of lightness and structure. Most existing deep learning-based exposure correction methods restore such degradations separately in the spatial domain. In this paper, we present a new perspective on exposure correction based on spatial-frequency interaction. Specifically, we first revisit the frequency properties of differently exposed images via the Fourier transform, observing that the amplitude component carries most of the lightness information while the phase component relates to structure. Based on this observation, we propose a deep Fourier-based Exposure Correction Network (FECNet), consisting of an amplitude sub-network and a phase sub-network that progressively reconstruct the lightness and structure representations. To facilitate learning these two representations, we introduce a Spatial-Frequency Interaction (SFI) block in two formats tailored to the two sub-networks; it interactively processes local spatial features and global frequency information to encourage complementary learning. Extensive experiments demonstrate that our method achieves superior results to other approaches with fewer parameters and can be extended to other image enhancement tasks, validating its potential in a wide range of applications. Code will be available at https://github.com/KevinJ-Huang/FECNet.
J. Huang and Y. Liu—Equal contribution.
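As an informal illustration of the amplitude/phase observation in the abstract, and not the authors' FECNet implementation, the following minimal NumPy sketch decomposes two exposures of the same scene and swaps their amplitude spectra; the array names and helper functions here are hypothetical stand-ins.

import numpy as np

def amplitude_phase(img):
    # Split a single-channel image into its Fourier amplitude and phase.
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def recombine(amplitude, phase):
    # Rebuild a spatial-domain image from an amplitude/phase pair.
    spec = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))

# Hypothetical inputs: two HxW float arrays of the same scene,
# one under-exposed and one well-exposed (random data used as stand-ins).
under = np.random.rand(256, 256)
normal = np.random.rand(256, 256)

amp_u, pha_u = amplitude_phase(under)
amp_n, pha_n = amplitude_phase(normal)

# Swapping amplitudes transfers lightness: the result keeps the structures
# encoded in the phase of "under" while inheriting the lightness
# distribution carried by the amplitude of "normal".
relit = recombine(amp_n, pha_u)

On real image pairs, performing this swap per color channel reproduces the qualitative effect described in the abstract: lightness follows the amplitude donor while edges and layout follow the phase source, which is the property that FECNet's amplitude and phase sub-networks exploit.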
Acknowledgments.
This work was supported by the Anhui Provincial Natural Science Foundation under Grant 2108085UD12. We also acknowledge the support of the GPU cluster built by the MCC Lab of the Information Science and Technology Institution, USTC.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Huang, J. et al. (2022). Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13679. Springer, Cham. https://doi.org/10.1007/978-3-031-19800-7_10
DOI: https://doi.org/10.1007/978-3-031-19800-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19799-4
Online ISBN: 978-3-031-19800-7
eBook Packages: Computer Science, Computer Science (R0)