Altitude Information Acquisition of UAV Based on Monocular Vision and MEMS

  • Fuxun Gao
  • Chaoli Wang (corresponding author)
  • Lin Li
  • Dongkai Zhang

Abstract

Altitude information is among the most important data required to realize autonomous flight of an unmanned aerial vehicle (UAV). At present, the barometers mounted on UAVs measure atmospheric pressure and convert the readings to altitude, but the pressure changes drastically when the UAV's propellers rotate, leading to inaccurate measurements. In this paper, a UAV altitude acquisition method based on the fusion of monocular vision and a microelectromechanical system (MEMS) is designed, which obtains accurate altitude information in indoor environments and low-altitude outdoor environments. The method computes parallax from two images taken a short time apart during flight, and the acceleration data measured by the MEMS are integrated over this interval to obtain the UAV's displacement, which serves as the stereo baseline. The angle information measured by the MEMS is used to rectify the images taken by the UAV's monocular camera. Finally, the altitude of the UAV is obtained with the theory of binocular stereo vision. Experiments show that the altitude error of this method at 2 m is approximately 4%, which is smaller than the barometer error. A new image-cropping method is also proposed to reduce computation, meeting the stability, speed, and accuracy requirements of practical applications. The experiments show that the proposed method obtains satisfactory results compared with state-of-the-art methods.
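The core computation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the sample values (focal length in pixels, sampling interval, disparity) are assumptions for demonstration. The displacement between the two exposures is estimated by double-integrating the MEMS acceleration samples, and the altitude then follows from the classic binocular-stereo relation Z = f·B/d.

```python
import numpy as np

def baseline_from_accel(accel, dt):
    """Estimate the displacement (m) between two camera exposures by
    double-integrating MEMS acceleration samples (m/s^2) taken at a
    fixed interval dt (s). This displacement serves as the stereo baseline."""
    velocity = np.cumsum(accel) * dt       # first integration: velocity samples
    displacement = np.sum(velocity) * dt   # second integration: total displacement
    return float(displacement)

def altitude_from_parallax(focal_px, baseline_m, disparity_px):
    """Binocular-stereo depth: Z = f * B / d, with f and d in pixels,
    B in meters, giving Z in meters."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 700 px focal length, a 0.10 m baseline recovered
# from the accelerometer, and a 35 px disparity give an altitude of 2.0 m.
z = altitude_from_parallax(700.0, 0.10, 35.0)
```

In practice the raw accelerometer data would first be rotated into the world frame using the MEMS attitude angles (the same angles used to rectify the images), and bias would need to be removed before integration, since double integration amplifies any constant offset quadratically.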

Keywords

UAV · Altitude information · Data fusion · Monocular vision · MEMS

Acknowledgements

This research was partly supported by the National Natural Science Foundation of China under Grants 61374040, 61673277, and 61503262, and by the Foundation for High-level Talents of Hebei Province under Grant A2016001144.

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. University of Shanghai for Science and Technology, Shanghai, China
  2. Shijiazhuang University, Shijiazhuang, China
