Abstract
Current imaging sensor technology offers a wide variety of information that can be extracted from an observed scene. Images acquired from different sensor modalities exhibit diverse characteristics, such as the type of degradation and the salient features they capture, and can be particularly beneficial in surveillance systems. Representative sensory systems include infrared and thermal imaging cameras, which operate beyond the visible spectrum and thus remain functional under adverse environmental conditions. Multi-sensor information is jointly combined to provide an enhanced representation, which is particularly useful in automated surveillance systems such as monitoring robotics. In this chapter, a surveillance framework based on a fusion model is presented in order to enhance the capabilities of unmanned vehicles for monitoring critical infrastructures. The fusion scheme multiplexes the acquired representations from different modalities by applying an image decomposition algorithm and combining the resulting sub-signals via metric optimization. Subsequently, the fused representations are fed into an identification module in order to recognize the detected instances and eventually improve the surveillance of the area of interest. The proposed framework adopts recent advancements in object detection by deploying a deep learning model trained on fused data. Initial results indicate that the overall scheme can accurately identify the objects of interest by processing the enhanced representations produced by the fusion scheme. Since both the overall processing time and the resource requirements are kept low, the framework can be integrated into an automated surveillance system composed of unmanned vehicles.
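The decompose-then-recombine idea behind the fusion scheme can be illustrated with a minimal sketch. Note that this is only a simplifying illustration, not the chapter's actual method: the chapter relies on a dedicated decomposition algorithm combined with metric optimization, whereas the sketch below assumes a basic two-scale (base/detail) split via a box blur and a per-pixel maximum-activity rule for the detail layers. The function names `box_blur` and `fuse_two_scale` are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    """Estimate the low-frequency base layer with a k x k box filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse_two_scale(visible, thermal, k=5):
    """Fuse two co-registered modalities: average the base layers,
    and keep, per pixel, the detail coefficient with larger magnitude."""
    base_v, base_t = box_blur(visible, k), box_blur(thermal, k)
    detail_v, detail_t = visible - base_v, thermal - base_t
    base = 0.5 * (base_v + base_t)              # low-frequency blend
    detail = np.where(np.abs(detail_v) >= np.abs(detail_t),
                      detail_v, detail_t)       # high-frequency selection
    return base + detail
```

The fused output preserves the shared scene structure through the averaged base layers while retaining the strongest local details from either modality, which is the intuition behind feeding fused images, rather than a single modality, into the downstream detector.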
Acknowledgements
This work was supported by the ROBORDER and BEAWARE projects, funded by the European Commission under grant agreements No 740593 and No 700475, respectively.
Copyright information
© 2021 Springer Nature B.V.
Cite this paper
Ioannidis, K., Orfanidis, G., Krestenitis, M., Vrochidis, S., Kompatsiaris, I. (2021). Sensor Data Fusion and Autonomous Unmanned Vehicles for the Protection of Critical Infrastructures. In: Pereira, M.F., Apostolakis, A. (eds) Terahertz (THz), Mid Infrared (MIR) and Near Infrared (NIR) Technologies for Protection of Critical Infrastructures Against Explosives and CBRN. NATO Science for Peace and Security Series B: Physics and Biophysics. Springer, Dordrecht. https://doi.org/10.1007/978-94-024-2082-1_1
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-024-2081-4
Online ISBN: 978-94-024-2082-1
eBook Packages: Physics and Astronomy (R0)