Model-referenced pose estimation using monocular vision for autonomous intervention tasks

  • Jisung Park
  • Taeyun Kim
  • Jinwhan Kim

Abstract

This study addresses vision-based underwater navigation techniques for automating underwater intervention tasks with robotic vehicles. A systematic model-referenced pose estimation procedure is introduced to obtain the relative pose between an underwater vehicle and underwater structures whose geometry and shape are known. Combining the vision-based pose estimate with inertial navigation enables underwater robots to navigate precisely around such structures for challenging intervention work such as subsea construction, maintenance, and inspection. To demonstrate the feasibility of the proposed approach, a set of experiments was carried out in a test tank using an autonomous underwater vehicle.
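Model-referenced pose estimation relies on a known geometric model of the target structure: 3D points on the model are matched to their 2D projections in the camera image, and the relative pose is recovered from those correspondences. The snippet below is a minimal illustrative sketch of this idea (not the authors' implementation), solving a Perspective-n-Point problem with OpenCV; the model points, image detections, and camera intrinsics are hypothetical placeholders.

```python
import numpy as np
import cv2

# Known 3D points on the structure's model, in the structure frame (metres).
# These coordinates are hypothetical placeholders for illustration.
model_points = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 0.3],
    [0.5, 0.0, 0.3],
], dtype=np.float64)

# Corresponding 2D detections in the camera image (pixels), e.g. from matched
# features or tracked model edges; values here are placeholders.
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [422.0, 150.0],
    [318.0, 152.0],
    [321.0, 300.0],
    [421.0, 298.0],
], dtype=np.float64)

# Pinhole camera intrinsics (assumed already calibrated).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(4)  # assume negligible lens distortion after calibration

# Solve the Perspective-n-Point problem with RANSAC to reject outlier
# correspondences, giving the pose of the structure in the camera frame.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_points, image_points, K, dist_coeffs)

if ok:
    R, _ = cv2.Rodrigues(rvec)      # rotation: structure frame -> camera frame
    # Invert to obtain the camera (vehicle) pose relative to the structure.
    t_cam_in_model = -R.T @ tvec
    print("Camera position in structure frame:", t_cam_in_model.ravel())
```

In practice such a frame-by-frame pose estimate would be fused with inertial measurements (e.g. in a Kalman-type filter) to obtain a smooth, drift-corrected navigation solution near the structure.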

Keywords

Underwater navigation · Underwater robot · Model-referenced pose estimation · Underwater intervention task

Notes

Supplementary material

Supplementary material 1 (MP4, 117,216 KB)


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mechanical Engineering, KAIST, Daejeon, Republic of Korea