An enhanced multi-scale weight assignment strategy of two-exposure fusion

  • Original article
  • Journal: The Visual Computer

Abstract

Multi-exposure image fusion (MEF) is a convenient way to obtain high dynamic range (HDR) images. However, when the input sequence has a large difference in exposure time, existing MEF algorithms cannot reconstruct relative contrast well, resulting in a loss of detail in underexposed or overexposed regions. In order to retain as much detail from the input images as possible, an enhanced multi-scale weight assignment strategy is proposed in this paper. First, the input image, guided by itself, is decomposed at the first level into a base layer (BL) and a detail layer (DL) using guided filtering. Then, this BL is further decomposed at the second level in the same way. As the decomposition proceeds, the image is split into several DLs and one BL. Afterward, the image information contained in the BLs and DLs is extracted using exposure weights and global gradient weights, respectively, to reconstruct the fused image. Finally, 39 sets of two-exposure image sequences are selected, and the proposed method is compared with nine representative algorithms. Experimental results show that the proposed algorithm achieves good visual quality in subjective evaluation and state-of-the-art performance in objective evaluation.
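
To make the decomposition step concrete, the sketch below shows one plausible implementation of the repeated base/detail splitting: a guided filter in which the image serves as its own guide produces the base layer at each level, and the residual is kept as that level's detail layer. The filter radius, regularization eps, and number of levels are illustrative assumptions rather than the paper's settings, and the subsequent fusion with exposure weights and global gradient weights is not reproduced here because the abstract does not give its exact form.

```python
# Minimal sketch of the multi-scale decomposition described in the abstract:
# each level splits the current base layer into a smoother base layer
# (self-guided filtering) and a detail layer (the residual). The radius,
# eps, and number of levels are illustrative assumptions, not the paper's.
import numpy as np
from scipy.ndimage import uniform_filter

def self_guided_filter(img, radius=8, eps=1e-3):
    """Guided filter with the image acting as its own guide."""
    size = 2 * radius + 1
    mean_i = uniform_filter(img, size)
    mean_ii = uniform_filter(img * img, size)
    var_i = mean_ii - mean_i * mean_i
    a = var_i / (var_i + eps)      # per-pixel linear coefficient
    b = mean_i - a * mean_i
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * img + mean_b   # smoothed output: the next base layer

def multiscale_decompose(img, levels=3):
    """Split an image into `levels` detail layers (DLs) and one base layer (BL)."""
    base = img.astype(np.float64)
    details = []
    for _ in range(levels):
        smoothed = self_guided_filter(base)
        details.append(base - smoothed)  # DL of this level
        base = smoothed                  # keep decomposing the BL
    return details, base

# Example: decompose a grayscale exposure normalized to [0, 1].
if __name__ == "__main__":
    img = np.random.rand(256, 256)  # stand-in for one input exposure
    dls, bl = multiscale_decompose(img, levels=3)
    print(len(dls), bl.shape)
```

In the full method, the BLs of the two exposures would then be blended using exposure weights and the DLs using global gradient weights before the levels are recombined into the fused image.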

Data Availability

The datasets analyzed during the current study are available in the MEFB repository, https://github.com/xingchenzhang/MEFB. All data included in this study are available upon request from the corresponding author.

References

  1. Karr, B.A., Debattista, K., Chalmers, A.G.: Optical effects on HDR calibration via a multiple exposure noise-based workflow. Vis. Comput. 37, 895–910 (2021)

  2. Durand, F., Dorsey, J.: Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. 21, 257–266 (2002)

  3. Shen, R., Cheng, I., Shi, J., Basu, A.: Generalized random walks for fusion of multi-exposure images. IEEE Trans. Image Process. 20(12), 3634–3646 (2011)

  4. Han, D., Li, L., Guo, X., Ma, J.: Multi-exposure image fusion via deep perceptual enhancement. Inf. Fusion 79, 248–262 (2021)

  5. Ma, K., Wang, Z.: Multi-exposure image fusion: a patch-wise approach. In: 2015 IEEE International Conference on Image Processing (ICIP), pp. 1717–1721 (2015)

  6. Li, H., Ma, K., Yong, H., Zhang, L.: Fast multi-scale structural patch decomposition for multi-exposure image fusion. IEEE Trans. Image Process. 29, 5805–5816 (2020)

  7. Li, S., Kang, X., Hu, J.: Image fusion with guided filtering. IEEE Trans. Image Process. 22(7), 2864–2875 (2013)

  8. Ma, K., Li, H., Yong, H., Wang, Z., Meng, D., Zhang, L.: Robust multi-exposure image fusion: a structural patch decomposition approach. IEEE Trans. Image Process. 26(5), 2519–2532 (2017)

  9. Ma, K., Duanmu, Z., Yeganeh, H., Wang, Z.: Multi-exposure image fusion by optimizing a structural similarity index. IEEE Trans. Comput. Imaging 4(1), 60–72 (2018)

  10. Kou, F., Li, Z., Wen, C.: Edge-preserving smoothing pyramid based multi-scale exposure fusion. J. Vis. Commun. Image Represent. 53, 235–244 (2018)

  11. Paul, S., Sevcenco, I.S., Agathoklis, P.: Multi-exposure and multi-focus image fusion in gradient domain. J. Circuits Syst. Comput. 25(10), 1650123 (2016)

  12. Malik, M., Gilani, S., Anwaar, H.: Wavelet based exposure fusion. Lecture Notes in Engineering and Computer Science 1, 2170 (2008)

  13. Yang, Y., Cao, W., Wu, S., Li, Z.: Multi-scale fusion of two large-exposure-ratio images. IEEE Signal Process. Lett. 25(12), 1885–1889 (2018)

  14. Wang, Q., Chen, W., Wu, X., Li, Z.: Detail-enhanced multi-scale exposure fusion in YUV color space. IEEE Trans. Circuits Syst. Video Technol. 26(3), 1243–1252 (2019)

  15. Qiu, X., Li, M., Zhang, L., Yuan, X.: Guided filter-based multi-focus image fusion through focus region detection. Signal Process. Image Commun. 72, 35–46 (2019)

  16. Gan, W., Wu, X., Wu, W., Yang, X., Chao, R., He, X., Liu, K.: Infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter. Infrared Phys. Technol. 72, 37–51 (2015)

  17. Bavirisetti, D.P., Xiao, G., Zhao, J., Dhuli, R., Liu, G.: Multi-scale guided image and video fusion: a fast and efficient approach. Circuits Syst. Signal Process. 38(12), 5576–5605 (2019)

  18. Lee, S.-H., Park, J.S., Cho, N.I.: A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient. In: 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 1737–1741 (2018)

  19. Zhang, X.: Benchmarking and comparing multi-exposure image fusion algorithms. Inf. Fusion 74, 111–131 (2021)

  20. Liu, Y., Wang, Z.: Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent. 31, 208–224 (2015)

  21. Burt, P.J., Kolczynski, R.J.: Enhanced image capture through fusion. In: Fourth International Conference on Computer Vision, pp. 173–182 (1993)

  22. Mertens, T., Kautz, J., Van Reeth, F.: Exposure fusion. In: 15th Pacific Conference on Computer Graphics and Applications, pp. 382–390 (2007)

  23. Qi, Y., Yu, M., Jiang, H., Jiang, G.: Multi-exposure image fusion based on tensor decomposition and convolution sparse representation. Opto-Electron. Eng. 46(1) (2019)

  24. Karakaya, D., Ulucan, O., Turkan, M.: PAS-MEF: multi-exposure image fusion based on principal component analysis, adaptive well-exposedness and saliency map. arXiv:2105.11809 (2021)

  25. Liu, S., Zhang, Y.: Detail-preserving underexposed image enhancement via optimal weighted multi-exposure fusion. IEEE Trans. Consum. Electron. 65(3), 303–311 (2019)

  26. Cai, J., Gu, S., Zhang, L.: Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 27(4), 2049–2062 (2018)

  27. Wang, C., He, C., Xu, M.: Fast exposure fusion of detail enhancement for brightest and darkest regions. Vis. Comput. 37, 1233–1243 (2021)

  28. Shen, J., Zhao, Y., He, Y.: Detail-preserving exposure fusion using subband architecture. Vis. Comput. 28(5), 463–473 (2012)

  29. Prabhakar, K.R., Srikar, V.S., Babu, R.V.: DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 4724–4732 (2017)

  30. Xu, H., Ma, J., Jiang, J., Guo, X., Ling, H.: U2Fusion: a unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 44(1), 502–518 (2022)

  31. Huang, S.-W., Peng, Y.-T., Chen, T.-H., Yang, Y.-C.: Two-exposure image fusion based on cross attention fusion. In: 2021 55th Asilomar Conference on Signals, Systems, and Computers, pp. 867–872 (2021)

  32. Zhao, H., Zheng, J., Shang, X., Zhong, W.: Coarse-to-fine multi-scale attention-guided network for multi-exposure image fusion. Vis. Comput. 1–14 (2023)

  33. He, K., Sun, J., Tang, X.: Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)

  34. Li, H., Ma, K., Yong, H., Zhang, L.: Fast multi-scale structural patch decomposition for multi-exposure image fusion. IEEE Trans. Image Process. 29, 5805–5816 (2020)

  35. Wang, Q., Chen, W., Wu, X., Li, Z.: Detail-enhanced multi-scale exposure fusion in YUV color space. IEEE Trans. Circuits Syst. Video Technol. 30(8), 2418–2429 (2020)

  36. Liu, X.: Perceptual multi-exposure fusion. arXiv:2210.09604 (2022)

  37. Ma, K., Duanmu, Z., Yeganeh, H., Wang, Z.: Multi-exposure image fusion by optimizing a structural similarity index. IEEE Trans. Comput. Imaging 4(1), 60–72 (2017)

  38. Xydeas, C.S., Petrović, V.: Objective image fusion performance measure. Electron. Lett. 36(4), 308–309 (2000)

Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities (No. 3072022QBZ0803), the China Postdoctoral Science Foundation (No. 2018M631911), and the Heilongjiang Postdoctoral Foundation, China (No. LBH-Z18055).

Author information

Corresponding author

Correspondence to Junwei Qi.

Ethics declarations

Conflicts of interest

All authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Li, Y., Yang, Z., Qi, J. et al. An enhanced multi-scale weight assignment strategy of two-exposure fusion. Vis Comput (2024). https://doi.org/10.1007/s00371-023-03258-2
