
Halo reduction multi-exposure image fusion technique

Published in: Multimedia Tools and Applications

Abstract

Multi-exposure image fusion (MEF) combines images captured at different exposure levels into a single, well-exposed fused image. MEF has a wide range of applications, including low-light and low-contrast imaging, night photography, medical imaging, and remote sensing. However, MEF methods often suffer from artifacts, halos around edges, color inconsistencies, noise amplification, and difficulty preserving fine details. Moreover, objectively assessing the quality of fused images is complicated by the subjective nature of human perception. Addressing these challenges is essential to developing efficient MEF techniques that produce high-quality results across varied scenarios. The proposed technique introduces an approach to suppressing halo artifacts while performing MEF. The Dense Scale-Invariant Feature Transform (DSIFT) captures vital information about image brightness, texture, and edges from the source images. Three weight maps, computed from the local mean, the signal strength, and the global gradient, provide the initial weight estimate: the local mean represents the brightness of specific image regions, the signal strength preserves essential details such as textures and edges while reducing image noise, and the global gradient identifies regions with significant pixel-value changes across the multiple exposures. The weight maps are combined by a weighted average, and a Gaussian smoothing filter is applied to reduce the noise and discontinuities inherent in the raw weights. Pyramid decomposition is then performed to generate the fused image. The proposed method is extensively tested on challenging multi-exposure image sequences, and the results demonstrate its superiority in both subjective evaluation and objective metrics: the MEF Structural Similarity Index (MEF-SSIM), the Natural Image Quality Evaluator (NIQE), and the gradient-based performance measure \(Q^{AB/f}\).
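The weight-map stage described in the abstract can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the window radius, the Gaussian sigma, the mid-tone brightness weighting, and the naive per-pixel blend used here in place of pyramid decomposition are all assumptions, and the DSIFT feature extraction is omitted.

```python
import numpy as np

def box_mean(img, r=3):
    """Local mean via a separable box filter of radius r."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, tmp)

def gauss_smooth(img, sigma=2.0):
    """Separable Gaussian smoothing, used to suppress weight discontinuities (halos)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, g, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode="valid"), 1, tmp)

def fuse(images):
    """Blend grayscale exposures in [0, 1] using three weight maps per image."""
    eps = 1e-12
    weights = []
    for im in images:
        mu = box_mean(im)
        w_bright = np.exp(-((mu - 0.5) ** 2) / (2 * 0.2**2))  # favor mid-tones (assumed form)
        w_signal = box_mean(np.abs(im - mu))                  # local signal strength
        gy, gx = np.gradient(im)
        w_grad = np.hypot(gx, gy)                             # global gradient magnitude
        w = (w_bright + w_signal + w_grad) / 3.0              # weighted average of the maps
        weights.append(gauss_smooth(w) + eps)                 # smooth the combined weights
    W = np.stack(weights)
    W /= W.sum(axis=0, keepdims=True)                         # normalize per pixel
    return (W * np.stack(images)).sum(axis=0)
```

Because the normalized weights form a convex combination at every pixel, the fused result always stays within the range spanned by the input exposures; the actual method replaces the final per-pixel blend with pyramid decomposition for smoother multi-scale transitions.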




Data Availability

Data sharing does not apply to this article as no data sets were generated.



Author information

Corresponding author

Correspondence to Benish Amin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sharif, R., Amin, B. & Sukhia, K.N. Halo reduction multi-exposure image fusion technique. Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-024-19458-4

