
Photometric visual servoing for omnidirectional cameras

  • Published in: Autonomous Robots

Abstract

2D visual servoing consists of using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the benefit of taking into account the photometric information of the entire image. Until now, this approach had only been applied to images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization makes it possible to apply the method to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully, using a fisheye camera to control a six-degrees-of-freedom robot and a catadioptric camera for a mobile robot navigation task.
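The core idea of the photometric approach is to use the whole image's pixel intensities as the feature vector: the error is e = I − I*, and the camera velocity screw is computed as v = −λ L⁺ e, where the photometric interaction matrix L combines the image gradient with the classic interaction matrix of an image point. The following is a minimal sketch for a conventional perspective camera only; the paper's contribution, the extension to central/omnidirectional projection models, is not implemented here. The function name, the constant depth `Z`, and the focal length `f` are assumptions made for illustration.

```python
import numpy as np

def photometric_servo_step(I, I_star, Z=1.0, f=1.0, lam=0.5):
    """One iteration of photometric visual servoing (illustrative sketch).

    Error: e = I - I*, stacked over all pixels.
    Photometric interaction matrix: L_I = -(dI/dx * L_x + dI/dy * L_y),
    where L_x, L_y are the rows of the classic point interaction matrix
    under perspective projection, assuming a uniform depth Z.
    Control law: v = -lam * pinv(L_I) @ e.
    """
    h, w = I.shape
    # Image intensity gradients: axis 0 is rows (y), axis 1 is columns (x).
    Iy, Ix = np.gradient(I.astype(float))
    # Normalized image coordinates, centered on the principal point.
    xs = (np.arange(w) - w / 2.0) / f
    ys = (np.arange(h) - h / 2.0) / f
    x, y = np.meshgrid(xs, ys)

    Ix, Iy = Ix.ravel(), Iy.ravel()
    x, y = x.ravel(), y.ravel()

    # Classic interaction matrix of a point (x, y) at depth Z,
    # for the 6-dof velocity screw (vx, vy, vz, wx, wy, wz).
    Lx = np.stack([-np.ones_like(x) / Z, np.zeros_like(x), x / Z,
                   x * y, -(1.0 + x**2), y], axis=1)
    Ly = np.stack([np.zeros_like(y), -np.ones_like(y) / Z, y / Z,
                   1.0 + y**2, -x * y, -x], axis=1)
    # Photometric interaction matrix.
    L = -(Ix[:, None] * Lx + Iy[:, None] * Ly)

    e = (I - I_star).astype(float).ravel()
    # Least-squares pseudo-inverse solve: v = -lam * L^+ e.
    sol, *_ = np.linalg.lstsq(L, e, rcond=None)
    return -lam * sol  # camera velocity screw (vx, vy, vz, wx, wy, wz)
```

When the current image already matches the desired one, the error is zero and the computed velocity vanishes, which is the expected convergence behavior of the control law. A real implementation would run this step in a loop while acquiring new images, and would replace the perspective point interaction matrix with the one derived from the unified central projection model.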


References

  • Aranda, M., López-Nicolás, G., & Sagüés, C. (2013). Angle-based homing from a reference image set using the 1D trifocal tensor. Autonomous Robots, 34(1–2), 73–91.

  • Baker, S., & Nayar, S. K. (1999). A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2), 175–196.

  • Barreto, J., Martin, F., & Horaud, R. (2003). Visual servoing/tracking using central catadioptric images. Experimental Robotics VIII, 5, 245–254.

  • Barreto, J. P., & Araujo, H. (2001). Issues on the geometry of central catadioptric imaging. In IEEE International Conference on Computer Vision and Pattern Recognition, Hawaii, USA (pp. 422–427).

  • Becerra, H., López-Nicolás, G., & Sagüés, C. (2010). Omnidirectional visual control of mobile robots based on the 1D trifocal tensor. Robotics and Autonomous Systems, 58(6), 796–808.

  • Benhimane, S., & Malis, E. (2007). Homography-based 2D visual tracking and servoing. International Journal of Robotics Research, 26(7), 661–676.

  • Caron, G., Marchand, E., & Mouaddib, E. (2010). Omnidirectional photometric visual servoing. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (pp. 6202–6207).

  • Chaumette, F. (2004). Image moments: A general and useful set of features for visual servoing. IEEE Transactions on Robotics, 20(4), 713–723.

  • Chaumette, F., & Hutchinson, S. (2006). Visual servo control, part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4), 82–90.

  • Chen, Z., & Birchfield, S. (2009). Qualitative vision-based path following. IEEE Transactions on Robotics, 25(3), 749–754.

  • Chesi, G., & Hashimoto, K. (Eds.). (2010). Visual servoing via advanced numerical methods (LNCIS 401). Berlin: Springer.

  • Collewet, C., & Marchand, E. (2011). Photometric visual servoing. IEEE Transactions on Robotics, 27(4), 828–834.

  • Courbon, J., Mezouar, Y., Eck, L., & Martinet, P. (2007). A generic fisheye camera model for robotic applications. In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, USA (pp. 1683–1688).

  • Courbon, J., Mezouar, Y., & Martinet, P. (2009). Autonomous navigation of vehicles from a visual memory using a generic camera model. IEEE Transactions on Intelligent Transportation Systems, 10(3), 392–402.

  • Dame, A., & Marchand, E. (2011). Mutual information-based visual servoing. IEEE Transactions on Robotics, 27(5), 958–969.

  • Deguchi, K. (2000). A direct interpretation of dynamic images with camera and object motions for vision guided robot control. International Journal of Computer Vision, 37(1), 7–20.

  • Demonceaux, C., & Vasseur, P. (2009). Omnidirectional image processing using geodesic metric. In International Conference on Image Processing, Cairo, Egypt (pp. 221–224).

  • Di Stefano, L., Mattoccia, S., & Tombari, F. (2005). ZNCC-based template matching using bounded partial correlation. Pattern Recognition Letters, 26, 2129–2134.

  • Espiau, B., Chaumette, F., & Rives, P. (1992). A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 313–326.

  • Hamel, T., & Mahony, R. (2002). Visual servoing of an under-actuated dynamic rigid-body system: An image-based approach. IEEE Transactions on Robotics and Automation, 18(2), 187–198.

  • Horn, B. K. P., & Schunck, B. G. (1981). Determining optical flow. Artificial Intelligence, 17, 185–203.

  • Hutchinson, S., Hager, G., & Corke, P. (1996). A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12, 651–670.

  • Kallem, V., Dewan, M., Swensen, J., Hager, G., & Cowan, N. (2007). Kernel-based visual servoing. In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, USA (pp. 1975–1980).

  • Lébraly, P., Deymier, C., Ait-Aider, O., Royer, E., & Dhome, M. (2010). Flexible extrinsic calibration of non-overlapping cameras using a planar mirror: Application to vision-based robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (pp. 5640–5647).

  • Malis, E. (2004). Improving vision-based control using efficient second-order minimization techniques. In IEEE International Conference on Robotics and Automation, New Orleans, USA (pp. 1843–1848).

  • Marchand, E., & Chaumette, F. (2005). Feature tracking for visual servoing purposes. Robotics and Autonomous Systems, 52(1), 53–70.

  • Marchand, E., Spindler, F., & Chaumette, F. (2005). ViSP for visual servoing: A generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, 12(4), 40–52.

  • Mariottini, G., & Prattichizzo, D. (2008). Image-based visual servoing with central catadioptric cameras. International Journal of Robotics Research, 27(1), 41–56.

  • Meilland, M., Comport, A. I., & Rives, P. (2011). Dense visual mapping of large scale environments for real-time localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, USA (pp. 4242–4248).

  • Nayar, S., Nene, S., & Murase, H. (1996). Subspace methods for robot vision. IEEE Transactions on Robotics and Automation, 12(5), 750–758.

  • Royer, E., Lhuillier, M., Dhome, M., & Lavest, J. (2007). Monocular vision for mobile robot localization and autonomous navigation. International Journal of Computer Vision, 74(3), 237–260.

  • Segvic, S., Remazeilles, A., Diosi, A., & Chaumette, F. (2009). A mapping and localization framework for scalable appearance-based navigation. Computer Vision and Image Understanding, 113(2), 172–187.

  • Tahri, O., Mezouar, Y., Chaumette, F., & Corke, P. (2010). Decoupled image-based visual servoing for cameras obeying the unified projection model. IEEE Transactions on Robotics, 26(4), 684–697.

  • Tatsambon Fomena, R., Yoon, H., Cherubini, A., Chaumette, F., & Hutchinson, S. (2009). Coarsely calibrated visual servoing of a mobile robot using a catadioptric vision system. In IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, USA (pp. 5432–5437).

  • Ying, X., & Hu, Z. (2004). Can we consider central catadioptric cameras and fisheye cameras within a unified imaging model? In European Conference on Computer Vision, Prague, Czech Republic (pp. 442–455).


Author information


Corresponding author

Correspondence to Guillaume Caron.

Additional information

Part of this paper was previously published in IEEE/RSJ IROS 2010, Taipei, Taiwan.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 74138 KB)


About this article

Cite this article

Caron, G., Marchand, E. & Mouaddib, E.M. Photometric visual servoing for omnidirectional cameras. Auton Robot 35, 177–193 (2013). https://doi.org/10.1007/s10514-013-9342-3



Keywords

Navigation