
Deep Convolutional Generative Adversarial Networks for Flame Detection in Video

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12496)

Abstract

Real-time flame detection is crucial in video-based surveillance systems. We propose a vision-based method to detect flames using Deep Convolutional Generative Adversarial Networks (DCGANs). Many existing supervised learning approaches based on convolutional neural networks do not take temporal information into account and require a substantial amount of labeled data. To obtain a robust representation of sequences with and without flame, we propose a two-stage training of a DCGAN that exploits spatio-temporal flame evolution. In the first stage, the DCGAN is trained in the regular manner with noise vectors and real spatio-temporal images, namely temporal slice images; in the second stage, the discriminator is trained separately on the temporal flame images, without the generator. Experimental results show that the proposed method effectively detects flames in video in real time, with negligible false-positive rates.
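
A minimal sketch of this two-stage procedure, written in PyTorch purely for illustration, is given below. The 64x64 single-channel slice size, the layer widths, the optimizer settings, and the data loaders slice_loader and flame_loader (assumed to yield batches of temporal slice images extracted from video) are assumptions of this sketch, not the authors' implementation; in particular, the way the second stage labels the flame slices is only one plausible reading of the abstract.

    import torch
    import torch.nn as nn

    class Discriminator(nn.Module):
        """DCGAN-style discriminator over 64x64 single-channel temporal slice images."""
        def __init__(self, in_channels=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),   # 64 -> 32
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),           # 32 -> 16
                nn.BatchNorm2d(128),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(128, 1, 16),                                 # 16 -> 1
                nn.Sigmoid(),
            )

        def forward(self, x):
            # Probability that x is a real (flame) temporal slice image.
            return self.net(x).view(-1)

    class Generator(nn.Module):
        """DCGAN-style generator mapping a noise vector to a slice-sized image."""
        def __init__(self, z_dim=100, out_channels=1):
            super().__init__()
            self.z_dim = z_dim
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim, 128, 16),                           # 1 -> 16
                nn.BatchNorm2d(128),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),          # 16 -> 32
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1), # 32 -> 64
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z.view(z.size(0), self.z_dim, 1, 1))

    def train_stage1(G, D, slice_loader, z_dim=100, epochs=10, device="cpu"):
        """Stage 1: regular DCGAN training with real temporal slice images and noise vectors."""
        bce = nn.BCELoss()
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
        for _ in range(epochs):
            for real in slice_loader:              # batches of real temporal slice images
                real = real.to(device)
                b = real.size(0)
                ones = torch.ones(b, device=device)
                zeros = torch.zeros(b, device=device)
                fake = G(torch.randn(b, z_dim, device=device))
                # Discriminator step: real slices -> 1, generated slices -> 0.
                loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
                opt_d.zero_grad()
                loss_d.backward()
                opt_d.step()
                # Generator step: make generated slices look real to the discriminator.
                loss_g = bce(D(fake), ones)
                opt_g.zero_grad()
                loss_g.backward()
                opt_g.step()

    def train_stage2(D, flame_loader, epochs=5, device="cpu"):
        """Stage 2: refine the discriminator alone on temporal flame slices,
        without the generator (labeling here is an illustrative assumption)."""
        bce = nn.BCELoss()
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))
        for _ in range(epochs):
            for flame in flame_loader:             # batches of flame slice images
                flame = flame.to(device)
                loss = bce(D(flame), torch.ones(flame.size(0), device=device))
                opt_d.zero_grad()
                loss.backward()
                opt_d.step()

In such a setup, the refined discriminator's output on the temporal slice images of incoming video would presumably serve as the flame/no-flame score at test time.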

A. Enis Çetin’s research is partially funded by NSF under grant number 1739396 and by NVIDIA Corporation. B. Uğur Töreyin’s research is partially funded by TÜBİTAK grant 114E426 and İTÜ BAP grants MGA-2017-40964 and MOA-2019-42321.



Author information


Corresponding author

Correspondence to B. Uğur Töreyin.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Aslan, S., Güdükbay, U., Töreyin, B.U., Çetin, A.E. (2020). Deep Convolutional Generative Adversarial Networks for Flame Detection in Video. In: Nguyen, N.T., Hoang, B.H., Huynh, C.P., Hwang, D., Trawiński, B., Vossen, G. (eds.) Computational Collective Intelligence. ICCCI 2020. Lecture Notes in Computer Science, vol. 12496. Springer, Cham. https://doi.org/10.1007/978-3-030-63007-2_63


  • DOI: https://doi.org/10.1007/978-3-030-63007-2_63


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63006-5

  • Online ISBN: 978-3-030-63007-2

  • eBook Packages: Computer Science, Computer Science (R0)
