
Particle Tracking and Detection Software for Firebrands Characterization in Wildland Fires

  • Alexander Filkov
  • Sergey Prohanov

Abstract

Detection and analysis of objects in a frame or a sequence of frames (video) can be used to solve a number of problems in various fields, including fire behaviour and risk. A quantitative understanding of short-distance spotting dynamics, namely the firebrand density distribution as a function of distance from the fire front and how distinct fires coalesce in a highly turbulent environment, is still lacking. To address this, custom software was developed to detect the location and number of flying firebrands in a thermal image and then determine the temperature and size of each firebrand. The software consists of two modules, a detector and a tracker. The detector determines the location of the firebrands in the frame, and the tracker compares firebrands across frames and assigns an identification number to each firebrand. Comparison of the calculated results with data obtained by independent experts and with experimental data showed that the maximum relative error does not exceed 12% for low and medium numbers of firebrands in the frame (fewer than 30), and that the software agrees well with experimental observations for firebrands larger than 20 × 10⁻⁵ m. It was found that fireline intensity below 12,590 kW m⁻¹ does not significantly change the 2D firebrand flux for firebrands larger than 20 × 10⁻⁵ m, while occasional crowning can increase the firebrand flux several-fold. The developed software allowed us to analyse the thermograms obtained during field experiments and to measure the velocities, sizes and temperatures of the firebrands. It will help to better understand how firebrands can ignite surrounding fuel beds and could be an important tool in investigating fire propagation in communities.
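The detector/tracker pipeline described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration of that pattern, not the software described in the article: the threshold value, the SciPy-based blob labelling, the nearest-centroid matching, and the names detect_firebrands and NearestNeighbourTracker are all assumptions made purely for this example.

```python
# Minimal sketch of a detector/tracker pipeline for hot particles in thermal frames.
# Assumes each frame is a 2D NumPy array of pixel intensities; all parameters are illustrative.
import numpy as np
from scipy import ndimage


def detect_firebrands(frame, threshold=200):
    """Detector: threshold the thermal frame and return blob centroids as (row, col) tuples."""
    mask = frame >= threshold                      # pixels hot enough to be candidate firebrands
    labels, n = ndimage.label(mask)                # group hot pixels into connected blobs
    if n == 0:
        return []
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))


class NearestNeighbourTracker:
    """Tracker: match detections across consecutive frames by nearest centroid and assign IDs."""

    def __init__(self, max_dist=20.0):
        self.max_dist = max_dist                   # largest allowed frame-to-frame displacement (pixels)
        self.tracks = {}                           # id -> last known centroid
        self.next_id = 0

    def update(self, centroids):
        assigned = {}
        for c in centroids:
            c = np.asarray(c, dtype=float)
            # find the closest existing track within max_dist, if any
            best_id, best_d = None, self.max_dist
            for tid, prev in self.tracks.items():
                d = np.linalg.norm(c - prev)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:                    # no match: a new firebrand entering the frame
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = c
            assigned[best_id] = tuple(c)
        return assigned
```

A real tracker also has to handle firebrands leaving the field of view, occlusions and ambiguous matches when several detections compete for one track; the sketch above omits those cases for brevity.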

Keywords

Wildland and structural firebrands · Firebrand detection · Firebrand tracking


Acknowledgements

This work was supported by the Russian Foundation for Basic Research (Project #18-07-00548), the Tomsk State University Academic D.I. Mendeleev Fund Program and the Bushfire and Natural Hazard Cooperative Research Centre.

Supplementary material

Supplementary material 1 (AVI 210 kb)

Supplementary material 2 (AVI 462 kb)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Ecosystem and Forest Sciences, University of Melbourne, Creswick, Australia
  2. Mechanics and Mathematics Faculty, National Research Tomsk State University, Tomsk, Russia
