
Abstract

Current imaging sensor technology offers a wide variety of information that can be extracted from an observed scene. Images acquired from different sensor modalities exhibit diverse characteristics, such as the type of degradation and the salient features they capture, and can be particularly beneficial in surveillance systems. Representative sensory systems include infrared and thermal imaging cameras, which operate beyond the visible spectrum and thus remain functional under any environmental conditions. Multi-sensor information is jointly combined to provide an enhanced representation, which is particularly useful in automated surveillance systems such as monitoring robots. In this chapter, a surveillance framework based on a fusion model is presented to enhance the capabilities of unmanned vehicles for monitoring critical infrastructures. The fusion scheme multiplexes the representations acquired from different modalities by applying an image decomposition algorithm and combining the resulting sub-signals via metric optimization. The fused representations are then fed into an identification module in order to recognize the detected instances and ultimately improve surveillance of the area of interest. The proposed framework adopts recent advances in object detection, deploying a deep learning model trained on fused data. Initial results indicate that the overall scheme can accurately identify objects of interest by processing the enhanced representations produced by the fusion scheme. Since the overall processing time and resource requirements remain low, the framework can be integrated into an automated surveillance system comprising unmanned vehicles.
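The chapter's actual pipeline (sub-signal decomposition followed by metric-optimized recombination and deep-learning detection) is not reproduced in this preview. As a minimal illustrative sketch only, the snippet below shows the general shape of pixel-level visible/thermal fusion with a metric-driven parameter search: a simple weighted-average fusion stands in for the decomposition-based scheme, a correlation score stands in for the chapter's fusion metric, and a grid search stands in for the optimization step. All function names (`fuse_weighted`, `fusion_quality`, `fuse_best_weight`) and the specific metric are assumptions for illustration, not the authors' method.

```python
import numpy as np

def fuse_weighted(visible, thermal, w=0.5):
    """Pixel-level weighted-average fusion of two co-registered
    grayscale images with values in [0, 1] (a toy stand-in for a
    decomposition-based fusion scheme)."""
    if visible.shape != thermal.shape:
        raise ValueError("inputs must be co-registered (same shape)")
    return w * visible + (1.0 - w) * thermal

def fusion_quality(fused, visible, thermal):
    """Toy quality metric: mean correlation of the fused image with
    each source modality. A real system would optimize a dedicated
    fusion metric instead."""
    def corr(a, b):
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return 0.5 * (corr(fused, visible) + corr(fused, thermal))

def fuse_best_weight(visible, thermal, weights=np.linspace(0.0, 1.0, 21)):
    """Search over the fusion weight and keep the candidate that
    maximizes the quality metric -- a crude analogue of choosing
    fusion parameters by metric optimization."""
    candidates = [fuse_weighted(visible, thermal, w) for w in weights]
    scores = [fusion_quality(c, visible, thermal) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], float(weights[best])
```

In a full system, the fused image returned here would then be passed to an object detector (e.g. a region-proposal CNN) trained on fused imagery.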



Acknowledgements

This work was supported by ROBORDER and BEAWARE projects funded by the European Commission under grant agreements No 740593 and No 700475, respectively.


Corresponding author

Correspondence to Konstantinos Ioannidis.


Copyright information

© 2021 Springer Nature B.V.

About this paper


Cite this paper

Ioannidis, K., Orfanidis, G., Krestenitis, M., Vrochidis, S., Kompatsiaris, I. (2021). Sensor Data Fusion and Autonomous Unmanned Vehicles for the Protection of Critical Infrastructures. In: Pereira, M.F., Apostolakis, A. (eds) Terahertz (THz), Mid Infrared (MIR) and Near Infrared (NIR) Technologies for Protection of Critical Infrastructures Against Explosives and CBRN. NATO Science for Peace and Security Series B: Physics and Biophysics. Springer, Dordrecht. https://doi.org/10.1007/978-94-024-2082-1_1

