Abstract
This paper discusses the machine vision element of a system designed to allow an Unmanned Aerial System (UAS) to perform automated taxiing around civil aerodromes using only a monocular camera. The purpose of the computer vision system is to provide direct sensor data that can be used to validate vehicle position, in addition to detecting potential collision risks. In practice, unsupervised clustering is used to segment the visual feed before descriptors of each cluster (primarily colour and texture) are used to estimate its class. As the competency of each individual estimate can vary depending on multiple factors (number of pixels, lighting conditions and even surface type), a Bayesian network is used to perform probabilistic data fusion in order to improve the classification results. The system is shown to perform accurate image segmentation in real-world conditions, providing information viable for localisation and obstacle detection.
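The segment-describe-fuse pipeline summarised above can be sketched in miniature. This is not the authors' implementation: the clustering here is a naive colour-space k-means standing in for the superpixel segmentation, the two-class likelihood tables are invented for illustration, and all class names and values are hypothetical. It only shows the shape of the idea: cluster pixels, describe each cluster, then combine per-descriptor class likelihoods with a Bayesian update.

```python
# Minimal sketch of segment -> describe -> fuse. All classes, colours and
# likelihood values below are hypothetical, chosen only for illustration.
import random


def kmeans(pixels, k=2, iters=10, seed=0):
    """Naive k-means over RGB tuples; returns (centroids, labels)."""
    rng = random.Random(seed)
    # Seed centroids from distinct colours so no cluster starts empty.
    centroids = rng.sample(sorted(set(pixels)), k)
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (squared distance).
        for i, p in enumerate(pixels):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Recompute each centroid as the mean colour of its members.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(ch) / len(members) for ch in zip(*members))
    return centroids, labels


def fuse(prior, likelihoods):
    """Bayesian fusion: multiply the prior by each descriptor's class
    likelihood and renormalise after every update."""
    post = dict(prior)
    for lk in likelihoods:
        post = {c: post[c] * lk[c] for c in post}
        z = sum(post.values())
        post = {c: v / z for c, v in post.items()}
    return post


# Synthetic "image": grey tarmac-like pixels followed by green grass-like pixels.
pixels = [(90, 90, 95)] * 50 + [(60, 140, 60)] * 50
centroids, labels = kmeans(pixels, k=2)

# Per-descriptor likelihoods for one cluster (hypothetical numbers): the
# colour descriptor favours tarmac strongly, texture more weakly.
prior = {"tarmac": 0.5, "grass": 0.5}
colour_lk = {"tarmac": 0.9, "grass": 0.1}
texture_lk = {"tarmac": 0.7, "grass": 0.3}
posterior = fuse(prior, [colour_lk, texture_lk])
```

Fusing the two descriptors sharpens the estimate beyond either alone: with the numbers above, the posterior belief in "tarmac" rises to about 0.95, illustrating how weak individual cues combine into a confident classification.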
Acknowledgments
This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) Autonomous and Intelligent Systems programme under grant number EP/J011525/1, with BAE Systems as the leading industrial partner. The work benefited greatly from a data set collected at an airfield provided by BAE Systems, and from technical advice provided by the technical officer Rob Buchanan.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Coombes, M., Eaton, W. & Chen, WH. Machine Vision for UAS Ground Operations. J Intell Robot Syst 88, 527–546 (2017). https://doi.org/10.1007/s10846-017-0542-5