MEConvNN-Designing Memory Efficient Convolution Neural Network for Visual Recognition of Aerial Emergency Situations

  • Conference paper
Intelligent and Fuzzy Systems (INFUS 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 759)


Abstract

Unmanned aerial vehicles (UAVs) play a vital role in disaster and emergency management owing to their remote sensing capabilities. In particular, UAVs/drones equipped with visual sensors can reach confined areas where human access is limited. Advances in deep learning have expanded what such UAVs/drones can compute with limited on-board resources, making it feasible to apply this technology to visual recognition of emergency situations such as floods in urban areas, earthquakes, forest fires, and traffic accidents on busy highways. Recognizing these events quickly helps mitigate their consequences for people and the environment with minimal loss of life and property. However, most high-accuracy deep learning architectures used in this domain are costly in terms of memory and computational resources. This motivates a framework that is computationally efficient and can run on embedded systems suitable for small platforms. In this work, we formalize and investigate this problem and design a memory-efficient neural network for visual recognition of emergency situations, named MEConvNN. To this end, we use dilated convolutions to extract spatial representations. The proposed method is experimentally evaluated on the Aerial Image Database for Emergency Response (AIDER) and shows efficacy comparable to state-of-the-art methods: its accuracy drops by less than 2% relative to those methods while it is more memory efficient.
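The memory-saving idea highlighted in the abstract is the use of dilated (atrous) convolutions, which enlarge a layer's receptive field without adding parameters. The sketch below illustrates that idea in PyTorch; it is not the authors' MEConvNN architecture, and the layer widths, dilation rates, and number of output classes are assumptions made only for illustration.

# Illustrative sketch only (not the published MEConvNN): a compact classifier
# built from 3x3 dilated convolutions. Increasing the dilation rate widens the
# receptive field while the parameter count of each layer stays the same.
import torch
import torch.nn as nn


class DilatedBlock(nn.Module):
    """3x3 convolution with a configurable dilation rate, plus BN and ReLU."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for a 3x3 kernel
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))


class TinyDilatedNet(nn.Module):
    """Small CNN: stacked dilated blocks followed by global average pooling."""

    def __init__(self, num_classes: int = 5):  # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            DilatedBlock(3, 16, dilation=1),
            nn.MaxPool2d(2),
            DilatedBlock(16, 32, dilation=2),  # wider context, same #params as rate 1
            nn.MaxPool2d(2),
            DilatedBlock(32, 64, dilation=4),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)    # global pooling keeps the head tiny
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    model = TinyDilatedNet()
    logits = model(torch.randn(1, 3, 224, 224))        # dummy aerial image
    print(logits.shape)                                  # torch.Size([1, 5])
    print(sum(p.numel() for p in model.parameters()), "parameters")

Keeping the classification head to a single linear layer after global average pooling, rather than large fully connected layers, is one common way such networks stay small; the exact design choices in MEConvNN are described in the full paper.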



Acknowledgement

This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2014-3-00077, Development of Global Multi-target Tracking and Event Prediction Techniques Based on Real-time Large-Scale Video Analysis), and by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism (No. R2022060001, Development of service robot and contents supporting children’s reading activities based on artificial intelligence).

Author information


Corresponding author

Correspondence to Unse Fatima.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Fatima, U., Pyo, J., Ko, Y., Jeon, M. (2023). MEConvNN-Designing Memory Efficient Convolution Neural Network for Visual Recognition of Aerial Emergency Situations. In: Kahraman, C., Sari, I.U., Oztaysi, B., Cebi, S., Cevik Onar, S., Tolga, A.Ç. (eds) Intelligent and Fuzzy Systems. INFUS 2023. Lecture Notes in Networks and Systems, vol 759. Springer, Cham. https://doi.org/10.1007/978-3-031-39777-6_59

