Abstract
Unmanned aerial vehicles (UAVs) play a vital role in natural disaster and calamity management owing to their remote sensing capabilities. In particular, UAVs/drones equipped with visual sensors can remotely access confined areas where human access is limited. Advances in deep learning improve the computational efficacy of such UAVs/drones under limited resources, enabling the visual recognition of emergency situations such as floods in urban areas, earthquakes, forest fires, and traffic accidents on busy highways. This can help mitigate the consequences of such events on the environment and people more rapidly, with minimal loss of life and property. However, most deep learning architectures that achieve high accuracy in this domain are costly in terms of memory and computational resources. This motivates us to propose a computationally efficient framework that can run on an embedded system suitable for smaller platforms. In this work, we formalize and investigate this problem and design a memory-efficient neural network for visual recognition of emergency situations, named MEConvNN. To this end, we effectively use dilated convolutions to extract spatial representations. The proposed method is experimentally evaluated on the Aerial Image Database for Emergency Response (AIDER), showing efficacy comparable to state-of-the-art methods. Specifically, the proposed method achieves accuracy within a 2% drop of state-of-the-art methods while being more memory efficient.
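The core idea behind the memory savings is dilated (atrous) convolution: inserting gaps between kernel taps enlarges the receptive field without adding parameters. A minimal NumPy sketch of a single-channel dilated 2D convolution (the function name and valid-mode choice are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2D correlation with a dilated kernel.

    Dilation inserts (dilation - 1) zeros between kernel taps, so a 3x3
    kernel with dilation=2 covers a 5x5 region while keeping 9 weights.
    """
    kh, kw = kernel.shape
    # Effective kernel extent after dilation
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks only the kernel tap positions
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a 3x3 all-ones kernel, dilation=2, and an 8x8 all-ones image, the output is a 4x4 map of 9s: the same 9 parameters now summarize a 5x5 neighborhood, which is why dilation trades spatial coverage for zero extra memory.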
Acknowledgement
This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2014-3-00077, Development of Global Multi-target Tracking and Event Prediction Techniques Based on Real-time Large-Scale Video Analysis), and by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism (No. R2022060001, Development of service robot and contents supporting children's reading activities based on artificial intelligence).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Fatima, U., Pyo, J., Ko, Y., Jeon, M. (2023). MEConvNN-Designing Memory Efficient Convolution Neural Network for Visual Recognition of Aerial Emergency Situations. In: Kahraman, C., Sari, I.U., Oztaysi, B., Cebi, S., Cevik Onar, S., Tolga, A.Ç. (eds) Intelligent and Fuzzy Systems. INFUS 2023. Lecture Notes in Networks and Systems, vol 759. Springer, Cham. https://doi.org/10.1007/978-3-031-39777-6_59
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-39776-9
Online ISBN: 978-3-031-39777-6
eBook Packages: Intelligent Technologies and Robotics (R0)