System to Detect and Approach Humans from an Aerial View for the Landing Phase in a UAV Delivery Service
The ability to perform autonomous, geolocation-based missions turns Unmanned Aerial Vehicles (UAVs) into valuable tools that save time and resources in services such as delivery and surveillance. Amazon is already developing a drone delivery service, but client identification remains a limitation, which can be analyzed in three phases: the approach to the potential receiver, the authorization through the client's identification, and the delivery itself. This work presents a solution for the first of these phases. First, the receiver indicates the GPS coordinates where he wants to receive the package. The UAV flies to that location and, upon arrival, tries to locate the receiver through Computer Vision (CV) techniques, more precisely Deep Neural Networks (DNN), before continuing to the next phase, identification. After proposing the system's architecture and implementing a prototype, a test scenario was created to analyze the feasibility of the proposed techniques. The results were promising for a system that searches for one person in a limited area defined by the destination coordinates: the detection of one person was confirmed with up to 92% accuracy in low-resolution images captured from a height of 10 m and a horizontal distance of 5 m.
Keywords: Computer vision · Deep Neural Networks · Delivery services · Internet of Things · New generation services · Unmanned Aerial Vehicles
This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within project: UID/EEA/50014/2019.
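The search for the receiver is confined to a limited area around the destination coordinates. One way to enforce such a boundary is a great-circle distance check between the UAV's current GPS fix and the destination point; the sketch below is illustrative only (the function names and the 5 m default radius are assumptions, not part of the proposed system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def within_search_area(uav_pos, dest_pos, radius_m=5.0):
    """True when the UAV is inside the circular search area
    centred on the destination coordinates."""
    return haversine_m(*uav_pos, *dest_pos) <= radius_m
```

Such a test could gate the hand-off from GPS navigation to the CV-based search: person detection would only run once the UAV reports a position inside the radius.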