
Reliable and robust low rank representation based noisy images multi-focus image fusion

  • 1207: Innovations in Multimedia Information Processing & Retrieval
  • Published in Multimedia Tools and Applications

Abstract

Fusion of noisy images remains a challenging multi-focus image fusion (MIF) problem, since noise is unavoidable in the input images. Most recent work, however, does not address noisy image fusion, and the task is harder still for color images: fusing noisy image pairs with existing MIF techniques compromises both robustness and reliability. We propose a novel framework, called noisy image MIF (NIMIF), for robust and reliable MIF of noisy images. NIMIF combines a hybrid denoising technique, low-rank representation (LRR), and the discrete wavelet transform (DWT). We present two NIMIF systems, one for greyscale MIF and one for color MIF. In color NIMIF, the input RGB images are first converted to the YCbCr color space for robustness. The Y channel (or the greyscale image, in greyscale NIMIF) of each input image is then decomposed by the DWT into low- and high-frequency coefficients. The low-frequency coefficients are fused using the spatial frequency (SF) criterion. Before the high-frequency coefficients are fused, hybrid thresholding is applied to suppress noise in the source images, and the denoised coefficients are fed to LRR to produce the fused high-frequency coefficients. Finally, NIMIF applies the inverse DWT to obtain the fused image. We present a comparative analysis of greyscale and color NIMIF against state-of-the-art techniques using objective metrics and visual results. The simulation results show that the combination of hybrid thresholding and LRR-based fusion forms a reliable MIF solution for noisy greyscale and color image pairs.
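To make the described pipeline concrete, the following is a minimal Python sketch of a NIMIF-style fusion of two registered greyscale (or Y-channel) images using NumPy and PyWavelets. It is an illustration under stated assumptions, not the paper's implementation: the per-band SF selection rule, the universal soft threshold standing in for the hybrid thresholding, the truncated SVD standing in for the full LRR solver, the max-absolute high-frequency selection, and the names `nimif_fuse` and `noise_sigma` are all assumptions introduced here.

```python
# Minimal NIMIF-style sketch (assumptions noted inline).
# Requires: numpy, PyWavelets (pip install PyWavelets).
import numpy as np
import pywt


def spatial_frequency(band):
    """Spatial frequency of a 2-D coefficient band: sqrt(RF^2 + CF^2)."""
    rf = np.sqrt(np.mean(np.diff(band, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(band, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)


def soft_threshold(band, sigma):
    """Stand-in for the paper's hybrid thresholding: soft thresholding
    with the universal threshold sigma * sqrt(2 * log(N))."""
    t = sigma * np.sqrt(2.0 * np.log(band.size))
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)


def low_rank_approx(band, rank=8):
    """Simplified low-rank step: truncated SVD in place of a full LRR
    solver (nuclear-norm minimisation with a sparse error term);
    'rank' is an illustrative parameter."""
    u, s, vt = np.linalg.svd(band, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt


def nimif_fuse(img_a, img_b, wavelet="db1", noise_sigma=0.05):
    """Fuse two registered greyscale (or Y-channel) images in [0, 1]."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)

    # Low-frequency fusion: keep the approximation band with higher SF
    # (an assumed whole-band rule; a block-wise rule is more common).
    ca_f = ca_a if spatial_frequency(ca_a) >= spatial_frequency(ca_b) else ca_b

    # High-frequency fusion: denoise, low-rank approximate, then keep
    # the coefficient with larger magnitude (an assumed selection rule).
    fused_high = []
    for ba, bb in zip((ch_a, cv_a, cd_a), (ch_b, cv_b, cd_b)):
        da = low_rank_approx(soft_threshold(ba, noise_sigma))
        db = low_rank_approx(soft_threshold(bb, noise_sigma))
        fused_high.append(np.where(np.abs(da) >= np.abs(db), da, db))

    # Inverse DWT reconstructs the fused image.
    return pywt.idwt2((ca_f, tuple(fused_high)), wavelet)
```

In practice the SF comparison and coefficient selection would typically be applied block-wise rather than per band, and the LRR step would solve a nuclear-norm minimisation with a sparse error term rather than a fixed-rank truncation; for color NIMIF the same procedure is applied to the Y channel after RGB-to-YCbCr conversion.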

Figures 1–11: source images and fusion results; multi-focus source images taken from the Lytro Multi-focus Dataset [43].

References

1. Abdipour M, Nooshyar M (2016) Multi-focus image fusion using sharpness criteria for visual sensor networks in wavelet domain. Comput Electr Eng 51:74–88
2. Adu J, Xie S, Gan J (2016) Image fusion based on visual salient features and the cross-contrast. J Vis Commun Image Represent 40:218–224
3. Alhayani B, Abbas ST, Mohammed HJ, Mahajan HB (2021) Intelligent secured two-way image transmission using Corvus corone module over WSN. Wirel Pers Commun. https://doi.org/10.1007/s11277-021-08484-2
4. Amin-Naji M, Aghagolzadeh A, Ezoji M (2019) CNNs hard voting for multi-focus image fusion. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-019-01199-0
5. Amin-Naji M, Aghagolzadeh A, Ezoji M (2019) Ensemble of CNN for multi-focus image fusion. Inf Fusion. https://doi.org/10.1016/j.inffus.2019.02.003
6. Aravind BN, Suresh KV (2015) An improved image denoising using wavelet transform. In: 2015 International conference on trends in automation, communications and computing technology (I-TACT-15). https://doi.org/10.1109/itact.2015.7492679
7. Bai X, Liu M, Chen Z, Wang P, Zhang Y (2016) Multi-focus image fusion through gradient-based decision map construction and mathematical morphology. IEEE Access 4:4749–4760
8. Bavirisetti DP, Dhuli R (2016) Multi-focus image fusion using multi-scale image decomposition and saliency detection. Ain Shams Eng J. https://doi.org/10.1016/j.asej.2016.06.011
9. Bouzos O, Andreadis I, Mitianoudis N (2019) Conditional random field model for robust multi-focus image fusion. IEEE Trans Image Process. https://doi.org/10.1109/TIP.2019.2922097
10. Cao L, Jin L, Tao H, Li G, Zhuang Z, Zhang Y (2015) Multi-focus image fusion based on spatial frequency in discrete cosine transform domain. IEEE Signal Process Lett 22(2):220–224
11. Chen YB, Guan JW, Cham WK (2018) Robust multi-focus image fusion using edge model and multi-matting. IEEE Trans Image Process 27(3):1526–1541
12. Cunha A, Zhou J, Do M (2006) The nonsubsampled contourlet transform: theory, design, and applications. IEEE Trans Image Process 15:3089–3101. https://doi.org/10.1109/TIP.2006.877507
13. Deshmukh V, Khaparde A, Shaikh S (2017) Multi-focus image fusion using deep belief network. In: Smart innovations, systems and technology. Springer, Cham, pp 233–241
14. Diwakar M, Kumar P, Singh AK (2020) CT image denoising using NLM and its method noise thresholding. Multimed Tools Appl 79:14449–14464. https://doi.org/10.1007/s11042-018-6897-1
15. Du CB, Gao S (2018) Multi-focus image fusion algorithm based on pulse coupled neural networks and modified decision map. Optik 157:1003–1015
16. Du C, Gao S (2018) Multi-focus image fusion with the all convolutional neural network. Optoelectron Lett 14(1):71–75
17. Gayathri N, Deepa PL (2016) Multi-focus color image fusion using NSCT and PCNN. In: 2016 International conference on communication systems and networks (ComNet). https://doi.org/10.1109/csn.2016.7824009
18. Guo X, Nie R, Cao J, Zhou D, Qian W (2018) Fully convolutional network-based multifocus image fusion. Neural Comput 30(7):1775–1800
19. Guorong G, Luping X, Dongzhu F (2013) Multi-focus image fusion based on non-subsampled shearlet transform. IET Image Process 7(6):633–639
20. Gupta V, Mahle R, Shriwas RS (2013) Image denoising using wavelet transform method. In: 2013 Tenth international conference on wireless and optical communications networks (WOCN). https://doi.org/10.1109/wocn.2013.6616235
21. Hall D, Llinas J (1997) An introduction to multisensor data fusion. Proc IEEE 85:6–23. https://doi.org/10.1109/5.554205
22. Han JG, Pauwels EJ, Zeeuw P (2013) Fast saliency-aware multi-modality image fusion. Neurocomputing 111:70–78
23. He K, Zhou D, Zhang X, Nie R (2018) Multi-focus: focused region finding and multi-scale transform for image fusion. Neurocomputing 320:157–170
24. Huang W, Jing Z (2007) Evaluation of focus measures in multi-focus image fusion. Pattern Recognit Lett 28:493–500. https://doi.org/10.1016/j.patrec.2006.09.005
25. Jiang Q, Jin X, Hou J, Lee S-J, Yao S (2018) Multi-sensor image fusion based on interval type-2 fuzzy sets and regional features in nonsubsampled shearlet transform domain. IEEE Sens J 18(6):2494–2505
26. Jin X, Nie R, Zhou D, Wang Q, He K (2016) Multifocus color image fusion based on NSST and PCNN. J Sens. https://doi.org/10.1155/2016/8359602
27. Kaur H, Koundal D, Kadyan V (2021) Image fusion techniques: a survey. Arch Comput Methods Eng. https://doi.org/10.1007/s11831-021-09540-7
28. Kekre HB, Thepade SD, Athawale A, Parkar A (2010) Using assorted color spaces and pixel window sizes for colorization of grayscale images. In: Proceedings of the international conference and workshop on emerging trends in technology—ICWET ’10. https://doi.org/10.1145/1741906.1742014
29. Li H, Wu X-J (2017) Multi-focus image fusion using dictionary learning and low-rank representation. Image Graph. https://doi.org/10.1007/978-3-319-71607-7_59
30. Li H, Li L, Zhang J (2015) Multi-focus image fusion based on sparse feature matrix decomposition and morphological filtering. Opt Commun 342:1–11
31. Li H, Li X, Yu Z, Mao C (2016) Multifocus image fusion by combining with mixed-order structure tensors and multiscale neighborhood. Inf Sci 349–350:25–49
32. Li H, Wu X-J, Durrani T (2018) Multi-focus noisy image fusion using low-rank representation
33. Li C, Zhang X, Wu H (2018) Multifocus image fusion method for image acquisition of 3D objects. Appl Opt 57(16):4514. https://doi.org/10.1364/ao.57.004514
34. Liu Y, Chen X, Ward RK, Jane Wang Z (2016) Image fusion with convolutional sparse representation. IEEE Signal Process Lett 23(12):1882–1886. https://doi.org/10.1109/lsp.2016.2618776
35. Liu Y, Chen X, Peng H, Wang Z (2017) Multi-focus image fusion with a deep convolutional neural network. Inf Fusion 36:191–207
36. Liu S, Wang J, Lu Y, Hu S, Ma X, Wu Y (2019) Multi-focus image fusion based on residual network in non-subsampled shearlet domain. IEEE Access 7:152043–152063
37. Ma J, Zhou Z, Wang B, Zong H (2017) Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys Technol 82:8–17. https://doi.org/10.1016/j.infrared.2017.02.005
38. Mahajan HB, Badarla A (2018) Application of Internet of Things for smart precision farming: solutions and challenges. Int J Adv Sci Technol 25:37–45
39. Mahajan HB, Badarla A (2019) Experimental analysis of recent clustering algorithms for wireless sensor network: application of IoT based smart precision farming. J Adv Res Dyn Control Syst. https://doi.org/10.5373/JARDCS/V11I9/20193162
40. Mahajan HB, Badarla A (2021) Cross-layer protocol for WSN-assisted IoT smart farming applications using nature inspired algorithm. Wirel Pers Commun. https://doi.org/10.1007/s11277-021-08866-6
41. Mahajan HB, Badarla A, Junnarkar AA (2021) CL-IoT: cross-layer Internet of Things protocol for intelligent manufacturing of smart farming. J Ambient Intell Humaniz Comput 12:7777–7791. https://doi.org/10.1007/s12652-020-02502-0
42. Naik AJ, Gopalakrishna MT (2021) Deep-violence: individual person violent activity detection in video. Multimed Tools Appl 80:18365–18380. https://doi.org/10.1007/s11042-021-10682-w
43. Nejati M, Samavi S, Shirani S (2015) Multi-focus image fusion using dictionary-based sparse representation. Inf Fusion 25:72–84. https://doi.org/10.1016/j.inffus.2014.10.004
44. Nejati M, Samavi S, Karimi N, Reza Soroushmehr SM, Shirani S, Roosta I, Najarian K (2017) Surface area-based focus criterion for multi-focus image fusion. Inf Fusion 36:284–295
45. Pan T, Jiang J, Yao J, Wang B, Tan B (2020) A novel multi-focus image fusion network with U-shape structure. Sensors 20(14):3901. https://doi.org/10.3390/s20143901
46. Pohl C, van Genderen J (1998) Multisensor image fusion in remote sensing: concepts, methods and applications. Int J Remote Sens 19:823–854. https://doi.org/10.1080/014311698215748
47. Rahman MA, Liu S, Wong CY, Lin SCF, Liu SC, Kwok NM (2017) Multifocal image fusion using degree of focus and fuzzy logic. Digit Signal Process 60:1–19
48. Shreyamsha Kumar BK (2013) Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process 7(6):1125–1143. https://doi.org/10.1007/s11760-012-0361-x
49. Shreyamsha Kumar BK (2013) Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process 9(5):1193–1204. https://doi.org/10.1007/s11760-013-0556-9
50. Tang H, Xiao B, Li W, Wang G (2018) Pixel convolutional neural network for multi-focus image fusion. Inf Sci 433–434:125–141
51. Tian C, Xia J, Tang J et al (2020) Deep image retrieval of large-scale vessels images based on BoW model. Multimed Tools Appl 79:9387–9401. https://doi.org/10.1007/s11042-019-7725-y
52. Wald L (1999) Some terms of reference in data fusion. IEEE Trans Geosci Remote Sens 37:1190–1193. https://doi.org/10.1109/36.763269
53. Wang J, Li Q, Jia Z, Kasabov N, Yang J (2015) A novel multi-focus image fusion method using PCNN in nonsubsampled contourlet transform domain. Optik Int J Light Electron Opt 126(20):2508–2511
54. Wirat R, Somkait U (2010) Comparative efficiency of color models for multi-focus color image fusion. Lecture Notes in Engineering and Computer Science 2181
55. Xu Z, Xiang W, Zhu S, Zeng R, Marquez Chin C, Chen Z, Chen X, Liu B, Li J (2021) LatLRR-FCNs: latent low-rank representation with fully convolutional networks for medical image fusion. Front Neurosci. https://doi.org/10.3389/fnins.2020.615435
56. Yahya AA, Tan J, Su B et al (2019) Image noise reduction based on adaptive thresholding and clustering. Multimed Tools Appl 78:15545–15573. https://doi.org/10.1007/s11042-018-6955-8
57. Yang Y, Tong S, Huang S, Lin P (2014) Multi-focus image fusion based on NSCT and focused area detection. IEEE Sens J. https://doi.org/10.1109/JSEN.2014.2380153
58. Yang Y, Tong S, Huang S, Lin P, Fang Y (2017) A hybrid method for multi-focus image fusion based on fast discrete curvelet transform. IEEE Access 5:14898–14913
59. Yang D, Hu S, Liu S, Ma X, Sun Y (2018) Multi-focus image fusion based on block matching in 3D transform domain. J Syst Eng Electron 29(2):415–428
60. Zafar R, Farid MS, Khan MH (2020) Multi-focus image fusion: algorithms, evaluation, and a library. J Imaging 6(7):60. https://doi.org/10.3390/jimaging6070060
61. Zhang X, Li X, Liu Z, Feng Y (2014) Multi-focus image fusion using image-partition-based focus detection. Signal Process 102:64–76
62. Zhang BH, Lu XQ, Pei HQ, Liu H, Zhao Y, Zhou WT (2017) Multi-focus image fusion algorithm based on focused region extraction. Neurocomputing 174:733–748
63. Zhang Y, Bai X, Wang T (2017) Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf Fusion 35:81–101
64. Zhang S, Huang F, Zhong H, Liu B, Chen Y, Wang Z (2020) Multi-modal image fusion via sparse representation and multi-scale anisotropic guided measure. IEEE Access. https://doi.org/10.1109/access.2020.2973269
65. Zhou Z, Li S, Wang B (2014) Multi-scale weighted gradient-based fusion for multi-focus images. Inf Fusion 20:60–72. https://doi.org/10.1016/j.inffus.2013.11.005

Funding

No funding.

Author information


Corresponding author

Correspondence to Sudeep D. Thepade.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jagtap, N., Thepade, S.D. Reliable and robust low rank representation based noisy images multi-focus image fusion. Multimed Tools Appl 82, 8235–8259 (2023). https://doi.org/10.1007/s11042-021-11576-7
