Compact Mapping in Plane-Parallel Environments Using Stereo Vision

  • Juan Manuel Sáez
  • Antonio Peñalver
  • Francisco Escolano
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2905)


In this paper we propose a method for transforming a 3D map of the environment, composed of a cloud of millions of points, into a compact representation in terms of basic geometric primitives, in this case 3D planes. These planes, together with their textures, yield a representation that is very useful in robot navigation tasks such as localization and motion control. Our method estimates the main planes in the environment (walls, floor and ceiling) by classifying points according to the orientation of their normals and their relative positions. Once the 3D planes have been inferred, we map their textures using the appearance information of the observations, obtaining a realistic model of the scene.
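The core classification step described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the thresholds, the height-based floor/ceiling split, and the function name `classify_points` are all assumptions introduced here for clarity.

```python
import numpy as np

def classify_points(points, normals, up=(0.0, 0.0, 1.0), angle_thresh_deg=20.0):
    """Label each 3D point as 'floor', 'ceiling', 'wall', or 'other'
    from its surface normal orientation and its relative position.
    Simplified sketch; thresholds and the height split are assumptions."""
    up = np.asarray(up, dtype=float)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    align = n @ up                          # cosine between each normal and the up axis
    cos_t = np.cos(np.radians(angle_thresh_deg))
    sin_t = np.sin(np.radians(angle_thresh_deg))
    mid_height = points[:, 2].mean()        # crude floor/ceiling separator

    labels = np.full(len(points), "other", dtype=object)
    horizontal = np.abs(align) >= cos_t     # normal nearly parallel to the up axis
    vertical = np.abs(align) <= sin_t       # normal nearly perpendicular to the up axis
    labels[horizontal & (points[:, 2] < mid_height)] = "floor"
    labels[horizontal & (points[:, 2] >= mid_height)] = "ceiling"
    labels[vertical] = "wall"
    return labels
```

In practice each labeled subset would then be fitted with a plane (e.g. by least squares) before texture mapping; that fitting step is omitted here.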


Keywords: Mobile Robot · Vertical Plane · Gaussian Mixture Model · Stereo Vision · Compact Mapping



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Juan Manuel Sáez (1)
  • Antonio Peñalver (1)
  • Francisco Escolano (1)
  1. Robot Vision Group, Departamento de Ciencia de la Computación e Inteligencia Artificial, Universidad de Alicante, Spain
