Visual SLAM Based on Single Omnidirectional Views

  • David Valiente
  • Arturo Gil
  • Lorenzo Fernández
  • Óscar Reinoso
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 283)


This chapter focuses on the problem of Simultaneous Localization and Mapping (SLAM) using visual information from the environment. We exploit the versatility of a single omnidirectional camera to carry out this task. Traditionally, visual SLAM approaches concentrate on estimating a set of visual 3D points of the environment, denoted as visual landmarks. As the number of visual landmarks increases, the computation of the map becomes more complex. In this work we suggest a different representation of the environment which simplifies the computation of the map and provides a more compact representation. In particular, the map is composed of a reduced set of omnidirectional images, denoted as views, acquired at certain poses in the environment. Each view consists of a position and orientation in the map and a set of 2D interest points extracted in the image reference frame. The information gathered by these views is used to find corresponding points between the view captured at the current robot pose and the views stored in the map. Once a set of corresponding points is found, a motion transformation can be computed to retrieve the relative position of the two views. This allows us to estimate the current pose of the robot and build the map. Moreover, with the aim of obtaining a more reliable approach, we propose a new method to find correspondences, since data association is a troublesome issue in this framework. It relies on generating a Gaussian distribution to propagate the current error in the map to the matching process. We present a series of experiments with real data to validate the ideas and the SLAM solution proposed in this work.
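The motion transformation between two views can be recovered from a set of corresponding points through the epipolar constraint. The sketch below illustrates the general idea with a linear eight-point estimate of the essential matrix from unit bearing vectors, a representation that suits omnidirectional cameras; the function name and this particular formulation are illustrative assumptions, not the chapter's exact method.

```python
import numpy as np

def essential_from_bearings(p1, p2):
    """Linear (eight-point style) estimate of the essential matrix E
    satisfying p2^T E p1 = 0, where p1 and p2 are Nx3 unit bearing
    vectors of corresponding points seen from two views.
    NOTE: illustrative sketch, not the chapter's exact algorithm."""
    # Each correspondence gives one linear equation on the 9 entries of E
    # (row-major flattening): kron(b2, b1) . vec(E) = 0.
    A = np.stack([np.kron(b2, b1) for b1, b2 in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector = least-squares solution
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Decomposing the resulting E (e.g. via its SVD) yields the relative rotation and the translation direction between the two views, which is the transformation used to localize the robot with respect to a stored view.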
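Propagating the current map error into the matching process amounts to restricting the correspondence search to points that are statistically consistent with a Gaussian prediction. A common way to realize such gating is a Mahalanobis-distance test against the predicted point under an innovation covariance; the sketch below is an assumption-laden illustration (function name, 2x2 covariance form, and chi-square threshold are ours, not the chapter's).

```python
import numpy as np

def gate_matches(pred, S, candidates, chi2_thresh=5.99):
    """Keep candidate image points whose squared Mahalanobis distance to
    the predicted point `pred`, under the 2x2 innovation covariance S,
    passes a chi-square gate with 2 dof (5.99 ~ 95%).
    NOTE: generic gating sketch, not the chapter's exact procedure."""
    Sinv = np.linalg.inv(S)
    d = candidates - pred                      # N x 2 innovation vectors
    m2 = np.einsum('ni,ij,nj->n', d, Sinv, d)  # squared Mahalanobis distances
    return candidates[m2 <= chi2_thresh], m2
```

The larger the pose uncertainty, the larger S becomes and the wider the admissible search region, so matching remains possible even under significant map error while clearly inconsistent candidates are rejected.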


Keywords: Visual SLAM · Omnidirectional images



This work has been supported by the Spanish government through the project DPI2010-15308.



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • David Valiente (1)
  • Arturo Gil (1)
  • Lorenzo Fernández (1)
  • Óscar Reinoso (1)

  1. Department of Systems Engineering and Automation, Miguel Hernández University, Elche, Spain
