An Algorithm of Infrared and Visible Light Images Fusion Based on Infrared Object Extraction

  • Haichao Zhang
  • Fangfang Zhang
  • Shibao Sun
  • Wen Yang
  • Yatao Wang
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 125)

Abstract

Traditional fusion methods for infrared and visible light images neglect the differences between the background and the targets, which results in poor clarity or weak identifiability of the fused image. Based on the characteristics of infrared and visible images, an algorithm built on object extraction is proposed. The object in the infrared image is extracted by the maximum between-cluster variance (Otsu) method, and the extracted object information is then fused with the background and detail information of the visible light image using the nonsubsampled contourlet transform (NSCT), which markedly improves the visual quality of the result. Finally, the fusion results are evaluated with objective metrics. Experimental results indicate that the fused image obtained by this algorithm retains the same object as the infrared image while preserving the detail information of the visible image.
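
As a rough illustration of the pipeline described above, the following Python sketch extracts the infrared object with Otsu's maximum between-cluster variance threshold and performs a mask-guided multiscale fusion with the visible image. It is an assumption-laden stand-in, not the authors' code: since an NSCT implementation is not readily available in common libraries, a Laplacian pyramid is substituted for the NSCT decomposition, and the file names, pyramid depth, and fusion weighting are illustrative choices.

```python
# Sketch of infrared-object-guided fusion (illustrative stand-in, not the authors' NSCT method).
# Assumes two registered, same-size grayscale images: "ir.png" and "visible.png" (hypothetical names).
import cv2
import numpy as np

def extract_ir_object_mask(ir):
    """Segment hot targets with Otsu's maximum between-cluster variance threshold."""
    _, mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Light morphological cleanup of the binary object mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid (used here in place of the NSCT decomposition)."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])  # coarsest approximation band
    return lap

def fuse(ir, vis, levels=4):
    mask = extract_ir_object_mask(ir).astype(np.float32) / 255.0
    lp_ir = laplacian_pyramid(ir, levels)
    lp_vis = laplacian_pyramid(vis, levels)
    fused = []
    for li, lv in zip(lp_ir, lp_vis):
        m = cv2.resize(mask, (li.shape[1], li.shape[0]))
        # Object region taken from the infrared bands, background/details from the visible bands.
        fused.append(m * li + (1.0 - m) * lv)
    # Reconstruct by upsampling and summing the fused bands, coarsest first.
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(ir, vis))
```

In this sketch the binary object mask simply weights the infrared contribution at every scale; the paper's NSCT-based rule would instead fuse low-frequency and directional subbands according to the extracted object and the visible-image detail coefficients.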

Keywords

Image Fusion · Filter Bank · Infrared Image · Fusion Rule · Object Extraction



Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  • Haichao Zhang (1)
  • Fangfang Zhang (1)
  • Shibao Sun (1)
  • Wen Yang (1)
  • Yatao Wang (1)

  1. College of Electronics and Information Engineering, Henan University of Science and Technology, Luoyang, P.R. China
