Autonomous Robots, Volume 37, Issue 2, pp. 137–156

Visual navigation of wheeled mobile robots using direct feedback of a geometric constraint

  • Héctor M. Becerra
  • Carlos Sagüés
  • Youcef Mezouar
  • Jean-Bernard Hayet


Abstract

Many applications of wheeled mobile robots demand a good solution to the autonomous mobility problem, i.e., navigation over large displacements. A promising approach to this problem is to follow a visual path extracted from a visual memory. In this paper, we propose an image-based control scheme for driving wheeled mobile robots along visual paths. Our approach is based on feedback of the information given by a geometric constraint: either the epipolar geometry or the trifocal tensor. The proposed control law requires only one measurement, which is easily computed from the image data through the geometric constraint. The approach has two main advantages: no explicit decomposition of the pose parameters is required, and the rotational velocity is smooth, or at worst piecewise constant, avoiding the discontinuities that generally appear in previous works when the target image changes. The translational velocity is adapted as demanded by the path, and the resulting motion is independent of this velocity. Furthermore, our approach is valid for all cameras with approximately central projection, including conventional, catadioptric, and some fisheye cameras. Simulations and real-world experiments illustrate the validity of the proposal.
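The core idea of the abstract, steering a nonholonomic robot by proportional feedback of a single scalar measurement derived from a geometric constraint, can be sketched in a toy planar simulation. Below, the measurement is the abscissa of the epipole of the target key image (the bearing of the target viewpoint projected onto the current image plane). This is only an illustrative sketch under a pin-hole, planar-motion assumption, not the authors' control law; the function names, gain, and stopping rule are hypothetical.

```python
import math

def unicycle_step(x, y, theta, v, w, dt):
    """One Euler step of the unicycle (wheeled mobile robot) kinematics."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

def epipole_x(robot, target, f=1.0):
    """Abscissa of the epipole in the current image: the projection of the
    target camera's position, f * tan(bearing), with the bearing measured
    from the optical axis (positive counterclockwise)."""
    x, y, theta = robot
    bearing = math.atan2(target[1] - y, target[0] - x) - theta
    return f * math.tan(bearing)

def follow(robot, target, v=0.2, k=2.0, dt=0.05, steps=400, tol=0.05):
    """Drive toward the viewpoint of a target key image using proportional
    feedback of the single epipole measurement: w = k * e_x drives the
    epipole to zero, aligning the heading with the target position.
    (Illustrative only; the gain k and tolerance are made-up values.)"""
    for _ in range(steps):
        x, y, _ = robot
        if math.hypot(target[0] - x, target[1] - y) < tol:
            break  # close enough to the key image's viewpoint
        robot = unicycle_step(*robot, v, k * epipole_x(robot, target), dt)
    return robot

start = (0.0, 0.0, 0.5)   # initial pose (x, y, theta), heading off-target
goal = (3.0, 1.0)         # position where the target key image was taken
final = follow(start, goal)
```

Note how the controller never decomposes the pose: only the image-space measurement `e_x` is fed back, and the translational velocity `v` can be changed without altering the path the robot converges to, which mirrors the two advantages claimed in the abstract.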


Keywords: Visual navigation · Visual path following · Visual memory · Epipolar geometry · Trifocal tensor



Acknowledgments

This work was supported by projects DPI 2009-08126 and DPI 2012-32100 and by grants from Banco Santander-Universidad de Zaragoza and Conacyt-México.



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Héctor M. Becerra (1)
  • Carlos Sagüés (2)
  • Youcef Mezouar (3)
  • Jean-Bernard Hayet (1)

  1. Centro de Investigación en Matemáticas (CIMAT), Guanajuato, Mexico
  2. Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, Saragossa, Spain
  3. Institut Français de Mécanique Avancée, Institut Pascal, Aubière, France
