
Real time UAV altitude, attitude and motion estimation from hybrid stereovision

Abstract

Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle (UAV) during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig, composed of a fisheye and a perspective camera, for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods that avoid matching between the hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. Knowledge of the altitude then allows the motion to be estimated robustly at absolute scale.
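To illustrate the plane-sweep principle used for altitude estimation, the sketch below hypothesizes a set of candidate altitudes, transfers points from one view to the other through each candidate's plane-induced homography, and keeps the altitude with the lowest transfer error. The function names (`plane_homography`, `sweep_altitude`) and the pinhole, rectified two-camera setup are illustrative assumptions, not the paper's actual fisheye-perspective formulation.

```python
import numpy as np

def plane_homography(K1, K2, R, t, n, d):
    """Homography induced by the plane n . X = d (camera-1 frame),
    mapping view-1 pixels to view-2 pixels, with X2 = R @ X1 + t."""
    return K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)

def sweep_altitude(pts1, pts2, K1, K2, R, t, n, candidates):
    """Plane sweep: score each candidate altitude by the mean transfer
    error of its induced homography, return the best candidate.
    pts1, pts2 are 3xN homogeneous pixel coordinates."""
    best_d, best_err = None, np.inf
    for d in candidates:
        H = plane_homography(K1, K2, R, t, n, d)
        p = H @ pts1              # transfer view-1 points into view 2
        p = p[:2] / p[2]          # back to inhomogeneous pixels
        err = np.mean(np.linalg.norm(p - pts2[:2], axis=0))
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

With real images, the point-transfer error would be replaced by a photometric score over the warped ground region, as in classical plane-sweeping stereo (Collins 1996).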

We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
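To illustrate how a known altitude can fix the unknown scale of a vision-based translation estimate, the toy function below rescales a unit translation direction so that its vertical component matches the altitude change measured between two frames. The function name, the `up`-vector convention, and the restriction to non-horizontal motion are all illustrative assumptions; the paper's actual scale recovery relies on the perspective camera and the detected ground plane.

```python
import numpy as np

def translation_at_scale(t_dir, h_prev, h_curr, up=np.array([0.0, 0.0, 1.0])):
    """Rescale a unit translation direction so that its component along
    the vertical axis equals the measured altitude change h_curr - h_prev.
    Assumes the motion is not purely horizontal (t_dir . up != 0)."""
    t_dir = np.asarray(t_dir, dtype=float)
    t_dir = t_dir / np.linalg.norm(t_dir)
    vertical = float(t_dir @ up)
    if abs(vertical) < 1e-9:
        raise ValueError("purely horizontal motion: scale is unobservable "
                         "from altitude alone")
    return ((h_curr - h_prev) / vertical) * t_dir
```

For example, climbing from 4.2 m to 5.0 m along the direction (0.6, 0, 0.8) yields a metric translation of length 1 m.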




Acknowledgements

This work is supported by the European FEDER (Fonds Européen de Développement Régional) and Région Picardie Project ALTO (Automatic Landing and Take-Off).

Author information

Correspondence to Damien Eynard.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

(MPG 13.8 MB)


Cite this article

Eynard, D., Vasseur, P., Demonceaux, C. et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision. Auton Robot 33, 157–172 (2012). https://doi.org/10.1007/s10514-012-9285-0


Keywords

  • UAV
  • Hybrid stereovision
  • Motion
  • Attitude
  • Altitude