Abstract
A technology for the visual inspection of aircraft surfaces using programmable unmanned aerial vehicles (drones) is presented. Particular attention is paid to the drone's indoor navigation, where the signal from satellite positioning systems is weak or absent, and to the development of algorithms and software for detecting both the drone and damage to the aircraft surface by video analysis. Results of testing the technology in enclosed spaces, under conditions close to the expected operating conditions, are given.
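The abstract mentions detecting surface damage by video analysis. As a hedged illustration only (the paper's own detectors are learned models, not this heuristic), the sketch below flags pixels of a grayscale surface patch whose intensity deviates sharply from the local neighborhood mean; the function name, window size, and threshold are all assumptions for this toy example.

```python
# Toy stand-in for video-based defect detection: flag pixels that
# deviate strongly from their local neighborhood mean intensity.

def detect_defects(image, window=1, threshold=60):
    """Return (row, col) coordinates of pixels whose intensity differs
    from the mean of their (2*window+1)^2 neighborhood by > threshold."""
    h, w = len(image), len(image[0])
    hits = []
    for r in range(h):
        for c in range(w):
            # Collect the neighborhood, excluding the center pixel.
            neigh = [image[rr][cc]
                     for rr in range(max(0, r - window), min(h, r + window + 1))
                     for cc in range(max(0, c - window), min(w, c + window + 1))
                     if (rr, cc) != (r, c)]
            mean = sum(neigh) / len(neigh)
            if abs(image[r][c] - mean) > threshold:
                hits.append((r, c))
    return hits

# Uniform bright surface patch with one dark "scratch" pixel.
patch = [[200] * 5 for _ in range(5)]
patch[2][2] = 30
print(detect_defects(patch))  # → [(2, 2)]
```

In the actual system a trained convolutional detector would replace this local-contrast rule, but the interface is the same: a frame goes in, defect locations come out.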
Funding
This work was supported by the Russian Foundation for Basic Research, project no. 17-08-00191 a.
Translated by O. Pismenov
Blokhinov, Y.B., Gorbachev, V.A., Nikitin, A.D. et al. Technology for the Visual Inspection of Aircraft Surfaces Using Programmable Unmanned Aerial Vehicles. J. Comput. Syst. Sci. Int. 58, 960–968 (2019). https://doi.org/10.1134/S1064230719060042