Particle Tracking and Detection Software for Firebrands Characterization in Wildland Fires


Detection and analysis of objects in a frame or a sequence of frames (video) can be used to solve a number of problems in various fields, including fire behaviour and risk. A quantitative understanding of short-distance spotting dynamics, namely the firebrand density distribution within a distance from the fire front and how distinct fires coalesce in a highly turbulent environment, is still lacking. To address this, custom software was developed to detect the location and number of flying firebrands in a thermal image and then determine the temperature and size of each firebrand. The software consists of two modules, a detector and a tracker. The detector determines the locations of firebrands in a frame, and the tracker compares firebrands across frames and assigns an identification number to each firebrand. Comparison of the calculated results with data obtained by independent experts and with experimental data showed that the maximum relative error does not exceed 12% for low and medium numbers of firebrands per frame (fewer than 30), and the software agrees well with experimental observations for firebrands larger than 20 × 10−5 m. It was found that a fireline intensity below 12,590 kW m−1 does not significantly change the 2D firebrand flux for firebrands larger than 20 × 10−5 m, while occasional crowning can increase the firebrand flux several-fold. The developed software allowed us to analyse the thermograms obtained during field experiments and to measure the velocities, sizes and temperatures of firebrands. It will help to better understand how firebrands can ignite surrounding fuel beds and could be an important tool for investigating fire propagation in communities.
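The two-module pipeline described above, a detector that locates hot regions in a thermal frame and a tracker that matches firebrands between frames and assigns identification numbers, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the temperature threshold, flood-fill blob grouping, and greedy nearest-neighbour matching (the functions `detect_firebrands` and `track` and the parameters `temp_threshold` and `max_dist`) are assumptions chosen for illustration.

```python
import numpy as np

def detect_firebrands(frame, temp_threshold=600.0):
    """Group pixels hotter than the threshold into 4-connected blobs and
    return each blob's centroid, pixel area and peak temperature."""
    hot = frame > temp_threshold
    visited = np.zeros_like(hot)
    blobs = []
    for y, x in zip(*np.nonzero(hot)):
        if visited[y, x]:
            continue
        stack, pixels = [(y, x)], []
        visited[y, x] = True
        while stack:  # iterative flood fill over 4-connected neighbours
            cy, cx = stack.pop()
            pixels.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < hot.shape[0] and 0 <= nx < hot.shape[1]
                        and hot[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        blobs.append({"centroid": (float(np.mean(ys)), float(np.mean(xs))),
                      "area_px": len(pixels),
                      "peak_temp": float(frame[list(ys), list(xs)].max())})
    return blobs

def track(prev_tracks, detections, next_id, max_dist=10.0):
    """Greedy nearest-neighbour matching of this frame's detections to the
    previous frame's tracks; unmatched detections start new IDs."""
    tracks = {}
    for det in detections:
        best_id, best_d = None, max_dist
        for tid, centroid in prev_tracks.items():
            d = np.hypot(det["centroid"][0] - centroid[0],
                         det["centroid"][1] - centroid[1])
            if d < best_d and tid not in tracks:
                best_id, best_d = tid, d
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        tracks[best_id] = det["centroid"]
    return tracks, next_id

# Toy thermal frames: two hot blobs, each moving one pixel between frames
frame1 = np.zeros((20, 20)); frame1[2:4, 2:4] = 800.0; frame1[10, 15] = 700.0
frame2 = np.zeros((20, 20)); frame2[3:5, 3:5] = 800.0; frame2[11, 16] = 700.0

tracks, next_id = track({}, detect_firebrands(frame1), next_id=0)
tracks, next_id = track(tracks, detect_firebrands(frame2), next_id)
```

After the second frame, each firebrand keeps the identification number it was given in the first frame, which is the property that lets per-particle velocities be computed from centroid displacement between frames.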





This work was supported by the Russian Foundation for Basic Research (Project #18-07-00548), the Tomsk State University Academic D.I. Mendeleev Fund Program and the Bushfire and Natural Hazard Cooperative Research Centre.

Author information



Corresponding author

Correspondence to Alexander Filkov.


Electronic supplementary material


Supplementary material 1 (AVI 210 kb)

Supplementary material 2 (AVI 462 kb)


Cite this article

Filkov, A., Prohanov, S. Particle Tracking and Detection Software for Firebrands Characterization in Wildland Fires. Fire Technol 55, 817–836 (2019).



Keywords

  • Wildland and structural firebrands
  • Firebrand detection
  • Firebrand tracking