
3D Maps with Semantics

  • Andreas Nüchter
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 52)

Abstract

Understanding its environment is essential if a robotic system is to operate autonomously. This chapter proposes a complete technical cognitive system, consisting of a mobile robot, a 3D laser scanner, and a set of algorithms for semantic environment mapping. We define a semantic 3D map as follows:

A semantic 3D map for mobile robots is a metric map that contains, in addition to the geometric information of the 3D data points, assignments of these points to known structures or object classes.
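
As a minimal illustration of this definition (a hypothetical C++ sketch; the types and names are not taken from the book's software), such a map can be represented as a metric point cloud whose points carry an additional class label:

    #include <cstdint>
    #include <vector>

    // Hypothetical sketch of the definition above: a metric 3D map whose
    // points additionally store an assignment to a structure or object class.
    enum class SemanticLabel : std::uint8_t {
        Unknown, Floor, Ceiling, Wall, Door, Object
    };

    struct SemanticPoint {
        double x, y, z;       // metric coordinates in the global frame
        float reflectance;    // laser reflectance value of the measurement
        SemanticLabel label;  // assignment to a known structure or object class
    };

    using SemanticMap = std::vector<SemanticPoint>;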

Our approach uses 3D laser range and reflectance data acquired on an autonomous mobile robot to perceive 3D objects. Starting from an empty map, several 3D scans, acquired by the mobile robot Kurt3D in a stop-scan-go fashion, are merged into a global coordinate system by 6D SLAM, as described in the previous parts of this book. Then, the coarse structure of the resulting 3D scene is interpreted by extracting and labeling planes, exploiting background knowledge stored in a semantic net [89]. Afterwards, the 3D range and reflectance data are transformed into images by off-screen rendering. A cascade of classifiers, i.e., a linear decision tree, is used to detect and localize the objects [88]. Finally, the semantic map is presented using computer graphics. Figure 7.1 presents a system overview.
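
The processing chain can be summarized as the following sequence of stages. This is purely an illustrative C++ sketch, with hypothetical type and function names that do not correspond to the actual Kurt3D or 6D SLAM interfaces:

    #include <vector>

    // Illustrative-only types; names are hypothetical and not part of
    // the Kurt3D / 6D SLAM code base.
    struct Scan {};        // one 3D laser scan (range + reflectance)
    struct Pose6D {};      // 6 DoF robot pose (x, y, z, roll, pitch, yaw)
    struct Plane {};       // extracted and labeled planar patch
    struct Detection {};   // detected object with its location
    struct SemanticMap {}; // final labeled 3D map

    // Stub stages standing in for the real algorithms.
    std::vector<Pose6D> registerScans(const std::vector<Scan>&) { return {}; }
    std::vector<Plane> extractAndLabelPlanes(const std::vector<Scan>&,
                                             const std::vector<Pose6D>&) { return {}; }
    std::vector<Detection> detectObjects(const std::vector<Scan>&,
                                         const std::vector<Pose6D>&) { return {}; }
    SemanticMap buildSemanticMap(const std::vector<Plane>&,
                                 const std::vector<Detection>&) { return {}; }

    int main() {
        std::vector<Scan> scans;                            // acquired in stop-scan-go fashion
        auto poses   = registerScans(scans);                // 6D SLAM registration (ICP-based)
        auto planes  = extractAndLabelPlanes(scans, poses); // plane extraction + semantic net labeling
        auto objects = detectObjects(scans, poses);         // off-screen rendering + classifier cascade
        auto map     = buildSemanticMap(planes, objects);   // assemble the semantic 3D map
        (void)map;   // the map would finally be presented using computer graphics
        return 0;
    }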

Keywords

Point Cloud, Mobile Robot, Object Detection, Iterative Closest Point, Autonomous Mobile Robot

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Andreas Nüchter
