3D LIDAR- and Camera-Based Terrain Classification Under Different Lighting Conditions

  • Stefan Laible
  • Yasir Niaz Khan
  • Karsten Bohlmann
  • Andreas Zell
Conference paper
Part of the Informatik aktuell book series (INFORMAT)

Abstract

Terrain classification is a fundamental task in outdoor robot navigation to detect and avoid impassable terrain. Camera-based approaches are well studied and provide good results. A drawback of these approaches, however, is that the quality of the classification varies with the prevailing lighting conditions. 3D laser scanners, on the other hand, are largely illumination-invariant. In this work, we present easy-to-compute features for 3D point clouds based on range and intensity values. We compare the classification results obtained using only the laser-based features with those of camera-based classification and study the influence of different lighting conditions.
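
As a rough illustration of how laser-based terrain features of this kind might be computed, the following sketch aggregates simple statistics over height, range, and intensity per ground-grid cell and feeds them to a random forest, the classifier named in the keywords. It is not the authors' implementation; the 0.25 m grid, the six statistics, and the example labels are assumptions.

    # Illustrative sketch only: per-cell statistics over height, range, and
    # intensity as terrain features, classified with a random forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def cell_features(points, cell_size=0.25):
        """points: (N, 5) array with columns x, y, z, range, intensity.
        Returns one feature vector per occupied grid cell."""
        ij = np.floor(points[:, :2] / cell_size).astype(int)
        cells, inv = np.unique(ij, axis=0, return_inverse=True)
        inv = inv.ravel()  # guard against shape differences across NumPy versions
        feats = np.zeros((len(cells), 6))
        for k in range(len(cells)):
            p = points[inv == k]
            feats[k] = [
                p[:, 2].std(),                  # height roughness
                p[:, 2].max() - p[:, 2].min(),  # height spread
                p[:, 3].mean(), p[:, 3].std(),  # range statistics
                p[:, 4].mean(), p[:, 4].std(),  # intensity statistics
            ]
        return feats

    # Hypothetical usage with labeled scans (e.g. asphalt, grass, gravel):
    # clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    # y_pred = clf.predict(cell_features(new_scan))

A grid-based aggregation like this is one simple way to obtain fixed-length feature vectors from an unordered point cloud; other groupings (e.g. per scan segment) would work with the same classifier.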

Keywords

Point Cloud · Random Forest · Local Binary Pattern · Ground Plane · True Positive Rate
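
The "Local Binary Pattern" keyword refers to a common texture descriptor used for camera-based terrain classification. The brief sketch below shows how such a feature is typically computed with scikit-image; it is an assumption about a standard LBP histogram, not the paper's camera pipeline, and the parameters P and R are chosen arbitrarily.

    # Illustrative sketch only: uniform local binary pattern histogram as a
    # texture feature for a grayscale camera patch.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_patch, P=8, R=1.0):
        lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
        # "uniform" codes lie in [0, P + 1], hence P + 2 histogram bins
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist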

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Stefan Laible 1
  • Yasir Niaz Khan 1
  • Karsten Bohlmann 1
  • Andreas Zell 1

  1. Chair of Cognitive Systems, Department of Computer Science, University of Tübingen, Tübingen, Germany
