
Applied Intelligence, Volume 48, Issue 8, pp 2157–2179

A human-like visual-attention-based artificial vision system for wildland firefighting assistance

  • Kurosh Madani
  • Viachaslau Kachurka
  • Christophe Sabourin
  • Veronique Amarger
  • Vladimir Golovko
  • Lucile Rossi

Abstract

In this work we contribute to the development of a “Human-like Visual-Attention-based Artificial Vision” system for boosting firefighters’ awareness of the hostile environment in which they must operate. By taking advantage of artificial visual attention, the system’s behavior can be adapted to a firefighter’s way of gazing, acquiring a kind of human-like artificial visual acuity that supports firefighters in evaluating intervention conditions or in assessing the rescue conditions of people in distress caught within the disaster. We achieve this challenging goal by combining a statistically founded, bio-inspired saliency detection model with a machine-learning-based human-eye-fixation model. Hybridizing the two models yields a system able to tune its parameters to fit human-like gazing of the inspected environment, opening appealing perspectives in computer-aided firefighter assistance. Using several available databases of wildland fire images, as well as an implementation of the investigated concept on a 6-wheeled mobile robot equipped with communication facilities, we provide experimental results showing both the plausibility and the efficiency of the proposed system.
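The hybridization described above — a bio-inspired saliency map fused with a learned eye-fixation map, with the mixing parameter tuned against recorded human gaze — can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the function names, the convex-combination fusion, and the use of Pearson correlation (one common saliency-evaluation metric) as the tuning objective are all assumptions for the sketch.

```python
import numpy as np

def fuse_saliency(bio_map, fixation_model_map, alpha):
    """Convex combination of a bio-inspired saliency map and a learned
    eye-fixation map (both assumed normalized to [0, 1] and same shape)."""
    fused = alpha * bio_map + (1.0 - alpha) * fixation_model_map
    return fused / (fused.max() + 1e-12)  # renormalize to [0, 1]

def tune_alpha(bio_map, fixation_model_map, human_fixation_density, steps=101):
    """Grid-search the mixing weight that best matches recorded human gaze,
    scored here by Pearson correlation with a human fixation density map."""
    best_alpha, best_cc = 0.0, -np.inf
    for alpha in np.linspace(0.0, 1.0, steps):
        fused = fuse_saliency(bio_map, fixation_model_map, alpha)
        cc = np.corrcoef(fused.ravel(), human_fixation_density.ravel())[0, 1]
        if cc > best_cc:
            best_alpha, best_cc = alpha, cc
    return best_alpha, best_cc
```

In this toy form, if the human fixation density happens to coincide with one of the two component maps, the search simply recovers the corresponding extreme weight; with real gaze data the optimum lies somewhere in between, which is what lets the system adapt to an individual firefighter’s way of gazing.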

Keywords

Human-like artificial vision · Artificial visual attention · Fire region detection · Firefighters’ assistance · Robot implementation


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Kurosh Madani (1)
  • Viachaslau Kachurka (1, 2)
  • Christophe Sabourin (1)
  • Veronique Amarger (1)
  • Vladimir Golovko (2)
  • Lucile Rossi (3)

  1. Université Paris-Est, Signals, Images, and Intelligent Systems Laboratory (LISSI / EA 3956), University Paris-Est Creteil, Senart-FB Institute of Technology, Lieusaint, France
  2. Neural Networks Laboratory, Intelligent Information Technologies Department, Brest State Technical University, Brest, Belarus
  3. Sciences of Environment Laboratory (SPE / UMR-CNRS 6134), University of Corsica, Corte, France
