An attention-based prototypical network for forest fire smoke few-shot detection

  • Original Paper
  • Published in Journal of Forestry Research

Abstract

Most existing deep learning methods rely on large amounts of annotated data, which makes them unsuitable for forest fire smoke detection with limited data. In this paper, a novel hybrid attention-based few-shot learning method, named the Attention-Based Prototypical Network, is proposed for forest fire smoke detection. Specifically, a feature extraction network incorporating a convolutional block attention module extracts high-level, discriminative features and thereby reduces the false alarm rate caused by suspected smoke areas. Moreover, a meta-learning module is designed to alleviate the overfitting caused by limited smoke images; the meta-learning network achieves effective detection by comparing the distance between the class prototypes of support images and the features of query images. A series of experiments on forest fire smoke datasets and the miniImageNet dataset show that the proposed method outperforms state-of-the-art few-shot learning approaches.
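The two core ideas in the abstract, a CBAM-style attention block inside the feature extractor and prototypical classification of query images by their distance to support-class prototypes, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the module and function names (ConvBlockAttention, proto_classify), the hyper-parameters, and the use of PyTorch are all assumptions for illustration only.

```python
# Sketch only: CBAM-style attention (Woo et al. 2018) plus a prototypical-network
# episode step (Snell et al. 2017). Names and shapes are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlockAttention(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))


def proto_classify(support_feats, support_labels, query_feats, n_way):
    """Average support embeddings per class into prototypes, then score each
    query by negative squared Euclidean distance to every prototype."""
    protos = torch.stack([support_feats[support_labels == k].mean(0)
                          for k in range(n_way)])           # (n_way, d)
    dists = torch.cdist(query_feats, protos) ** 2            # (n_query, n_way)
    return (-dists).log_softmax(dim=1)                       # log-probabilities


if __name__ == "__main__":
    # The attention block would sit inside the feature-extraction backbone.
    attn = ConvBlockAttention(64)
    fmap = attn(torch.randn(2, 64, 16, 16))                   # (2, 64, 16, 16)
    # A 2-way 5-shot episode; random vectors stand in for backbone embeddings.
    feats_s = torch.randn(10, 64)
    labels_s = torch.arange(2).repeat_interleave(5)
    feats_q = torch.randn(6, 64)
    print(proto_classify(feats_s, labels_s, feats_q, n_way=2).shape)  # (6, 2)
```

In a full episode, the attention-equipped backbone would produce `support_feats` and `query_feats` from the sampled smoke images; the random tensors above merely stand in for those embeddings.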

Author information

Corresponding author: Junguo Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Project funding: The work was supported by the National Key R&D Program of China (Grant No. 2020YFC1511601), and Fundamental Research Funds for the Central Universities (Grant No. 2019SHFWLC01).

The online version is available at http://www.springerlink.com.

Corresponding editor: Tao Xu.

Cite this article

Li, T., Zhu, H., Hu, C. et al. An attention-based prototypical network for forest fire smoke few-shot detection. J. For. Res. 33, 1493–1504 (2022). https://doi.org/10.1007/s11676-022-01457-6
