
A deep neural network and rule-based technique for fire risk identification in video frames


Automatically monitoring roadside fire risk plays a significant role in ensuring road safety by reducing potential hazards posed to vehicle drivers and enabling effective roadside vegetation management. However, little work has been conducted in this field using video data collected by vehicle-mounted cameras. In this paper, a novel approach is proposed for roadside fire risk identification based on the biomass of grasses. Inspired by the manual biomass measurement method used in grass curing assessment, the proposed approach predicts biomass and identifies high-risk regions using threshold-based rules over two site-specific parameters of roadside grasses: brown grass coverage (BGC) and brown grass height (BGH). The BGC is calculated as the percentage of brown grass pixels in a sampling region, while the BGH is predicted from the connectivity characteristics of grass stems along the vertical direction. To further reduce the false alarm rate, we additionally incorporate and compare two deep learning techniques, the autoencoder and the convolutional neural network, for refining the results. Our approach demonstrates the high performance of combining threshold-based rules with deep neural networks in classifying low and high fire risk on a roadside image dataset drawn from video collected by the Department of Transport and Main Roads, Queensland, Australia.
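The rule-based stage described above can be sketched in a few lines. This is a minimal illustration only: the brown-pixel mask, the vertical-run proxy for BGH, the threshold values, and all function names are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def brown_grass_coverage(brown_mask):
    # BGC: percentage of pixels classified as brown grass in a sampling region.
    return 100.0 * brown_mask.sum() / brown_mask.size

def brown_grass_height(brown_mask):
    # BGH proxy: longest vertical run of connected brown pixels in each column,
    # averaged over columns and normalised by region height (an assumed stand-in
    # for the paper's vertical-connectivity measure).
    h, w = brown_mask.shape
    runs = []
    for col in range(w):
        best = cur = 0
        for row in range(h):
            cur = cur + 1 if brown_mask[row, col] else 0
            best = max(best, cur)
        runs.append(best)
    return sum(runs) / (w * h)

def fire_risk_rule(bgc, bgh, bgc_thresh=50.0, bgh_thresh=0.5):
    # Threshold rule: flag high risk only when both site parameters are high.
    return "high" if bgc >= bgc_thresh and bgh >= bgh_thresh else "low"

# Example: a 4x4 region where the top three rows are brown grass.
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]], dtype=bool)
bgc = brown_grass_coverage(mask)   # 75.0
bgh = brown_grass_height(mask)     # 0.75
risk = fire_risk_rule(bgc, bgh)    # "high"
```

In the full approach, regions flagged "high" by such rules are then passed to the autoencoder or CNN refinement stage to suppress false alarms.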







This research was supported under the Australian Research Council's Linkage Projects funding scheme (project number LP140100939).

Author information



Corresponding author

Correspondence to Ligang Zhang.


About this article


Cite this article

Zhang, L., Verma, B. A deep neural network and rule-based technique for fire risk identification in video frames. Pattern Anal Applic 22, 187–203 (2019).



Keywords

  • Fire risk
  • Video frame
  • Object classification
  • Rules
  • Autoencoder