
FIRe-GAN: a novel deep learning-based infrared-visible fusion method for wildfire imagery

  • S.I.: LatinX in AI Research
  • Published in: Neural Computing and Applications

Abstract

Wildfire detection is of paramount importance to minimize damage to the environment, property, and lives. In this regard, fusing thermal and visible information into a single image can potentially increase the robustness and accuracy of wildfire detection models. In the field of visible-infrared image fusion, there is growing interest in Deep Learning (DL)-based fusion techniques due to their reduced complexity; however, most DL-based image fusion methods have not been evaluated in the domain of fire imagery. Additionally, to the best of our knowledge, no publicly available dataset contains visible-infrared fused fire images. In the present work, we select three state-of-the-art (SOTA) DL-based image fusion techniques, evaluate them on the specific task of fire image fusion, and compare their performance on selected metrics. Finally, we present an extension to one of these methods, which we call FIRe-GAN, that improves the generation of artificial infrared and fused images.
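To illustrate the kind of metric used to compare fusion methods, the sketch below computes a Structural Similarity (SSIM) score, after Wang et al. [21], between two images such as a fused result and one of its sources. Note this is a simplified, whole-image version: standard implementations (and presumably the paper's evaluation) average SSIM over local sliding windows, so treat this global-statistics variant as an approximation for illustration only.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Simplified single-window SSIM (Wang et al., 2004) computed over
    the whole image instead of local sliding windows. L is the dynamic
    range of the pixel values (255 for 8-bit images)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1 = (0.01 * L) ** 2  # stabilizes the luminance term
    C2 = (0.03 * L) ** 2  # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2.0 * mu_x * mu_y + C1) * (2.0 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

# Demo on synthetic data: identical images score exactly 1.0, and a
# noise-corrupted copy scores strictly below 1.0.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
print(global_ssim(img, img))
print(global_ssim(img, noisy))
```

A higher score indicates that the fused image better preserves the luminance, contrast, and structure of the reference source image.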


Availability of data and material

The image fusion methods by Li et al. [9] and Ma et al. [10] are publicly available as GitHub repositories [15, 24], respectively. The RGB-NIR dataset employed for pre-training the method by Zhao et al. [11] was developed by Brown et al. [18] and is publicly available at [25]. The Corsican Fire Database [12] is available upon request to the University of Corsica [26].

References

  1. Yeung J Australia's deadly wildfires are showing no signs of stopping. Here's what you need to know. https://edition.cnn.com/2020/01/01/australia/australia-fires-explainer-intl-hnk-scli/index.html. Accessed: 2020-04-10

  2. The Visual and Data Journalism Team: California and Oregon 2020 wildfires in maps, graphics and images. https://www.bbc.com/news/world-us-canada-54180049. Accessed: 2020-09-30

  3. Yuan C, Zhang Y, Liu Z (2015) A survey on technologies for automatic forest fire monitoring, detection and fighting using UAVs and remote sensing techniques. Canadian J Forest Res 45:150312143318009. https://doi.org/10.1139/cjfr-2014-0347

  4. Çetin AE, Dimitropoulos K, Gouverneur B, Grammalidis N, Günay O, Habiboǧlu YH, Töreyin BU, Verstockt S (2013) Video fire detection—review. Digital Signal Process 23(6):1827–1843


  5. Arrue BC, Ollero A, Martinez-de Dios JR (2000) An intelligent system for false alarm reduction in infrared forest-fire detection. IEEE Intell Syst Their Appl 15(3):64–73. https://doi.org/10.1109/5254.846287

  6. Martinez-de Dios J, Arrue B, Ollero A, Merino L, Gómez-Rodríguez F (2008) Computer vision techniques for forest fire perception. Image Vision Comput 26(4):550–562


  7. Nemalidinne SM, Gupta D (2018) Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering. Fire Saf J 101:84–101


  8. Li H, Wu XJ, Kittler J (2020) MDLatLRR: a novel decomposition method for infrared and visible image fusion. IEEE Trans Image Process 29:4733–4746. https://doi.org/10.1109/TIP.2020.2975984

  9. Li H, Wu X, Kittler J (2018) Infrared and visible image fusion using a deep learning framework. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp 2705–2710

  10. Ma J, Yu W, Liang P, Li C, Jiang J (2019) FusionGAN: a generative adversarial network for infrared and visible image fusion. Information Fusion 48:11–26

  11. Zhao Y, Fu G, Wang H, Zhang S (2020) The fusion of unmatched infrared and visible images based on generative adversarial networks. Math Probl Eng 2020:3739040


  12. Toulouse T, Rossi L, Campana A, Celik T, Akhloufi MA (2017) Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf J 92:188–194


  13. Zhao Z, Xu S, Feng R, Zhang C, Liu J, Zhang J (2020) When image decomposition meets deep learning: A novel infrared and visible image fusion method

  14. Ma J, Liang P, Yu W, Chen C, Guo X, Wu J, Jiang J (2020) Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion 54:85–98


  15. Infrared and visible image fusion using a deep learning framework. https://github.com/hli1221/imagefusion_deeplearning. Accessed: 2020-08-22

  16. Miyato T, Kataoka T, Koyama M, Yoshida Y (2018) Spectral normalization for generative adversarial networks. In: International Conference on Learning Representations (ICLR)

  17. Ma J, Ma Y, Li C (2019) Infrared and visible image fusion methods and applications: a survey. Information Fusion 45:153–178


  18. Brown M, Süsstrunk S (2011) Multi-spectral SIFT for scene category recognition. In: CVPR 2011, pp 177–184

  19. Liu Z, Blasch E, Xue Z, Zhao J, Laganiere R, Wu W (2012) Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. IEEE Trans Pattern Anal Mach Intell 34(1):94–109. https://doi.org/10.1109/TPAMI.2011.109


  20. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP The SSIM index for image quality assessment. https://www.cns.nyu.edu/~lcv/ssim/. Accessed: 2020-08-26

  21. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  22. Isola P, Zhu JY, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. In: CVPR 2017

  23. Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp 6629–6640. Curran Associates Inc., Red Hook, NY, USA

  24. Code for FusionGAN, a GAN model for infrared and visible image fusion. https://github.com/jiayi-ma/FusionGAN. Accessed: 2020-09-13

  25. RGB-NIR scene dataset. https://ivrlwww.epfl.ch/supplementary_material/cvpr11/index.html. Accessed: 2020-12-17

  26. Corsican Fire Database. http://cfdb.univ-corse.fr/. Accessed: 2020-12-17


Acknowledgments

The authors would like to thank the University of Corsica for providing access to the Corsican Fire Database.

Funding

This research is supported in part by Tecnológico de Monterrey and the Mexican National Council of Science and Technology (CONACYT), and is part of project 7817-2019 funded by the Jalisco State Council of Science and Technology (COECYTJAL).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to J. F. Ciprián-Sánchez.

Ethics declarations

Code availability

The code generated by the authors implementing FIRe-GAN, an extended version of the image fusion method proposed by Zhao et al. [11], is available as an open-source GitHub repository at https://github.com/JorgeFCS/Image-fusion-GAN.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Ciprián-Sánchez, J.F., Ochoa-Ruiz, G., Gonzalez-Mendoza, M. et al. FIRe-GAN: a novel deep learning-based infrared-visible fusion method for wildfire imagery. Neural Comput & Applic 35, 18201–18213 (2023). https://doi.org/10.1007/s00521-021-06691-3

