
Autonomous Robots, Volume 41, Issue 1, pp 133–143

Selective visual odometry for accurate AUV localization

  • Fabio Bellavia
  • Marco Fanfani
  • Carlo Colombo

Abstract

In this paper we present a stereo visual odometry system developed for autonomous underwater vehicle (AUV) localization tasks. The main idea is to use only highly reliable data in the estimation process, by employing a robust keypoint tracking approach and an effective keyframe selection strategy, so that camera movements are estimated with high accuracy even over long paths. Furthermore, in order to limit the drift error, camera pose estimation is referred to the last keyframe, which is selected by analyzing the temporal flow of the tracked features. The proposed system was tested on the KITTI evaluation framework and on the New Tsukuba stereo dataset to assess its effectiveness on long tracks and under different illumination conditions. Results from a live archaeological campaign in the Mediterranean Sea, with an AUV equipped with a stereo camera pair, show that our solution can work effectively in underwater environments.
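
As a rough illustration of the selective, keyframe-based pose chaining described in the abstract (a minimal sketch, not the authors' implementation), the Python code below estimates the pose of every frame with respect to the last keyframe and promotes a frame to a new keyframe when the fraction of features still tracked from the current keyframe drops below a threshold. The helper callables detect, track and estimate_relative_pose, as well as the 0.5 ratio, are assumptions introduced purely for illustration.

    import numpy as np

    def keyframe_vo(frames, detect, track, estimate_relative_pose,
                    min_track_ratio=0.5):
        """Chain camera poses through keyframes only.

        detect(frame)                -> keypoints found in the frame
        track(keypoints, frame)      -> subset of those keypoints re-found in frame
        estimate_relative_pose(trks) -> 4x4 transform from keyframe to frame
        All three are user-supplied stand-ins for feature tracking and robust
        (e.g. RANSAC-based) relative pose estimation.
        """
        T_world_kf = np.eye(4)            # global pose of the current keyframe
        kf_keypoints = detect(frames[0])  # features that define the keyframe
        poses = [np.eye(4)]               # estimated global pose of every frame

        for frame in frames[1:]:
            tracked = track(kf_keypoints, frame)
            T_kf_frame = estimate_relative_pose(tracked)  # referred to the keyframe
            T_world_frame = T_world_kf @ T_kf_frame       # chained only at keyframes
            poses.append(T_world_frame)

            # Keyframe selection: if too few of the keyframe's features survive,
            # the current frame becomes the new keyframe.
            if len(tracked) < min_track_ratio * max(len(kf_keypoints), 1):
                kf_keypoints = detect(frame)
                T_world_kf = T_world_frame

        return poses

Estimating each pose with respect to the keyframe, rather than chaining frame-to-frame estimates, limits drift accumulation to keyframe-to-keyframe steps.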

Keywords

Visual odometry · Stereo · Underwater · AUV · RANSAC · Feature matching · Keyframe selection

Acknowledgments

This work has been supported by the European ARROWS project, funded by the European Union’s Seventh Framework Programme for Research, Technological Development and Demonstration under grant agreement no. 308724.

References

  1. Allotta, B., Colombo, C., et al. (2013). Teams of robots for underwater archaeology: The ARROWS project. In Proceedings of the 6th international congress on science and technology for the safeguard of cultural heritage in the Mediterranean basin.
  2. Allotta, B., Bartolini, F., Conti, R., Costanzi, R., Gelli, J., Monni, N., Natalini, M., Pugi, L., & Ridolfi, A. (2014). MARTA: An AUV for underwater cultural heritage. In Proceedings of the underwater acoustics 2014.
  3. Badino, H., & Kanade, T. (2011). A head-wearable short-baseline stereo system for the simultaneous estimation of structure and motion. In IAPR conference on machine vision applications (pp. 185–189).
  4. Badino, H., Yamamoto, A., & Kanade, T. (2013). Visual odometry by multi-frame feature integration. In Proceedings of the international workshop on computer vision for autonomous driving at ICCV.
  5. Bay, H., Ess, A., Tuytelaars, T., & Van Gool, L. (2008). Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3), 346–359.
  6. Bellavia, F., Tegolo, D., & Trucco, E. (2010). Improving SIFT-based descriptors stability to rotations. In Proceedings of the international conference on pattern recognition.
  7. Bellavia, F., Tegolo, D., & Valenti, C. (2011). Improving Harris corner selection strategy. IET Computer Vision, 5(2), 87–96.
  8. Bellavia, F., Fanfani, M., Pazzaglia, F., & Colombo, C. (2013). Robust selective stereo SLAM without loop closure and bundle adjustment. In Proceedings of the 17th international conference on image analysis and processing (pp. 462–471).
  9. Bellavia, F., Tegolo, D., & Valenti, C. (2014). Keypoint descriptor matching with context-based orientation estimation. Image and Vision Computing, 32, 559–567.
  10. Botelho, S. C., Drews, P., Oliveira, G., & da Silva Figueiredo, M. (2009). Visual odometry and mapping for underwater autonomous vehicles. In Proceedings of the 2009 6th Latin American robotics symposium (pp. 1–6).
  11. Corke, P., Detweiler, C., Dunbabin, M., Hamilton, M., Rus, D., & Vasilescu, I. (2007). Experiments with underwater robot localization and tracking. In Proceedings of the 2007 IEEE international conference on robotics and automation (pp. 4556–4561).
  12. Durrant-Whyte, H. F., & Bailey, T. (2006). Simultaneous localisation and mapping (SLAM): Part I the essential algorithms. IEEE Robotics and Automation Magazine, 13(2), 99–110.
  13. Eustice, R., Pizarro, O., & Singh, H. (2008). Visually augmented navigation for autonomous underwater vehicles. IEEE Journal of Oceanic Engineering, 33(2), 103–122.
  14. Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.
  15. Fraundorfer, F., & Scaramuzza, D. (2012). Visual odometry: Part II: Matching, robustness, optimization, and applications. IEEE Robotics and Automation Magazine, 19(2), 78–90.
  16. Garro, V., Crosilla, F., & Fusiello, A. (2012). Solving the PnP problem with anisotropic orthogonal Procrustes analysis. In Second joint 3DIM/3DPVT conference: 3D imaging, processing, visualization and transmission: modeling (pp. 262–269).
  17. Geiger, A., Ziegler, J., & Stiller, C. (2011). StereoScan: Dense 3D reconstruction in real-time. In IEEE intelligent vehicles symposium.
  18. Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of computer vision and pattern recognition. http://www.cvlibs.net/datasets/kitti/eval_odometry.php.
  19. Hartley, R., & Sturm, P. (1997). Triangulation. Computer Vision and Image Understanding, 68(2), 146–157.
  20. Hartley, R. I., & Zisserman, A. (2004). Multiple view geometry in computer vision (2nd ed.). Cambridge: Cambridge University Press.
  21. Hildebrandt, M., & Kirchner, F. (2010). IMU-aided stereo visual odometry for ground-tracking AUV applications. In OCEANS 2010 IEEE—Sydney (pp. 1–8).
  22. Horn, B. K. P. (1987). Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4), 629–642.
  23. Kim, A., & Eustice, R. (2009). Pose-graph visual SLAM with geometric model selection for autonomous underwater ship hull inspection. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1559–1565).
  24. Kim, A., & Eustice, R. M. (2013). Real-time visual SLAM for autonomous underwater hull inspection using visual saliency. IEEE Transactions on Robotics, 29(3), 719–733.
  25. Lee, G. H., Fraundorfer, F., & Pollefeys, M. (2011). RS-SLAM: RANSAC sampling for visual FastSLAM. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1655–1660).
  26. Lowe, D. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
  27. Mallios, A., Ridao, P., Ribas, D., & Hernández, E. (2014). Scan matching SLAM in underwater environments. Autonomous Robots, 36(3), 181–198.
  28. Martull, S., Martorell, M. P., & Fukui, K. (2012). Realistic CG stereo image dataset with ground truth disparity maps. In Proceedings of the ICPR2012 workshop TrakMark2012 (pp. 40–42). http://www.cvlab.cs.tsukuba.ac.jp/dataset/tsukubastereo.php.
  29. Montiel, J., Civera, J., & Davison, A. (2006). Unified inverse depth parametrization for monocular SLAM. In Proceedings of robotics: Science and systems. IEEE Press.
  30. Nistér, D., Naroditsky, O., & Bergen, J. R. (2004). Visual odometry. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 652–659).
  31. Paull, L., Saeedi, S., Seto, M., & Li, H. (2014). AUV navigation and localization: A review. IEEE Journal of Oceanic Engineering, 39(1), 131–149.
  32. Sanfourche, M., Vittori, V., & Besnerais, G. L. (2013). eVO: A realtime embedded stereo odometry for MAV applications. In IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2107–2114).
  33. Scaramuzza, D., & Fraundorfer, F. (2011). Visual odometry: Part I—The first 30 years and fundamentals. IEEE Robotics and Automation Magazine, 18(4), 80–92.
  34. Shi, J., & Tomasi, C. (1994). Good features to track. In IEEE conference on computer vision and pattern recognition (CVPR’94) (pp. 593–600).
  35. Strasdat, H., Montiel, J. M. M., & Davison, A. J. (2010). Scale drift-aware large scale monocular SLAM. In Proceedings of robotics: Science and systems.
  36. Whitcomb, L., Yoerger, D., Singh, H., & Howland, J. (1999). Advances in underwater robot vehicles for deep ocean exploration: Navigation, control, and survey operations. In Proceedings of the 9th international symposium on robotics research (pp. 346–353).
  37. Wirth, S., Negre Carrasco, P., & Codina, G. (2013). Visual odometry for autonomous underwater vehicles. In Proceedings of 2013 MTS/IEEE OCEANS (pp. 1–6).

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Computational Vision Group (CVG), University of Florence, Florence, Italy
