
KI - Künstliche Intelligenz, Volume 24, Issue 4, pp 345–348

Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments

  • Radu Bogdan Rusu
Dissertations and Habilitations

Abstract

Environment models serve as important resources for an autonomous robot, providing it with the task-relevant information it needs about its habitat. Their use enables robots to perform their tasks more reliably, flexibly, and efficiently. As autonomous robotic platforms acquire more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models: for manipulation purposes, these models have to include the objects present in the world, together with their position, form, and other attributes, as well as an interpretation of these objects with respect to the robot's tasks.
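
To make these requirements concrete, the following is a hypothetical sketch of what a single entry in such a model might carry; the type and field names are illustrative assumptions, not the dissertation's own data structures:

    // Hypothetical sketch of one entry in a semantic object model.
    // Names are illustrative assumptions, not the dissertation's types.
    #include <string>
    #include <vector>

    struct Point3D { float x, y, z; };

    struct SemanticObject {
      std::vector<Point3D> points;  // 3D points describing the object's form
      Point3D position;             // object position in the map frame
      std::string geometry;         // e.g. "planar surface", "cylinder"
      std::string interpretation;   // task-level label, e.g. "cupboard door"
    };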

The dissertation presented in this article [1] proposes Semantic 3D Object Models as a novel representation of the robot's operating environment that satisfies these requirements, and shows how these models can be automatically acquired from dense 3D range data.
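
As a flavor of one building block of such an acquisition pipeline, here is a minimal sketch of computing Fast Point Feature Histograms (FPFH) [3] on a point cloud with the Point Cloud Library (PCL), which grew out of this line of work. The input file name and search radii are assumptions for illustration, not values from the thesis:

    // A minimal sketch, assuming PCL: compute FPFH descriptors [3] on a scan.
    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/features/fpfh.h>

    int main()
    {
      // Load a dense 3D scan (hypothetical file) into a point cloud.
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
      if (pcl::io::loadPCDFile("scene.pcd", *cloud) < 0)
        return 1;

      // Estimate surface normals; FPFH needs them as input.
      pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
      ne.setInputCloud(cloud);
      ne.setRadiusSearch(0.03);  // 3 cm neighborhood (assumed sensor scale)
      pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
      ne.compute(*normals);

      // Compute a 33-bin FPFH signature per point; this radius must be
      // larger than the one used for normal estimation.
      pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
      fpfh.setInputCloud(cloud);
      fpfh.setInputNormals(normals);
      fpfh.setRadiusSearch(0.05);
      pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(
          new pcl::PointCloud<pcl::FPFHSignature33>);
      fpfh.compute(*features);
      return 0;
    }

Descriptors of this kind are what make the registration [3] and semantic labeling [5] steps of the pipeline tractable on dense scans.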

Keywords

Point Cloud · Machine Learning Classifier · Perception Problem · Robot Hexapod · Robot Operating System

References

  1. Rusu RB (2009) Semantic 3D object maps for everyday manipulation in human living environments. PhD thesis, Computer Science Department, Technische Universität München, Germany
  2. Rusu RB, Marton ZC, Blodow N, Dolha M, Beetz M (2008) Towards 3D point cloud based object maps for household environments. Robot Auton Syst J (Special Issue on Semantic Knowledge)
  3. Rusu RB, Blodow N, Beetz M (2009) Fast point feature histograms (FPFH) for 3D registration. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Kobe, Japan
  4. Rusu RB, Holzbach A, Diankov R, Bradski G, Beetz M (2009) Perception for mobile manipulation and grasping using active stereo. In: Proceedings of the 9th IEEE-RAS international conference on humanoid robots (Humanoids), Paris, France
  5. Rusu RB, Marton ZC, Blodow N, Holzbach A, Beetz M (2009) Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments. In: Proceedings of the 22nd IEEE/RSJ international conference on intelligent robots and systems (IROS), St. Louis, MO, USA
  6. Rusu RB, Meeussen W, Chitta S, Beetz M (2009) Laser-based perception for door and handle identification. In: Proceedings of the international conference on advanced robotics (ICAR), Munich, Germany (best paper award)
  7. Rusu RB, Sucan IA, Gerkey B, Chitta S, Beetz M, Kavraki LE (2009) Real-time perception-guided motion planning for a personal robot. In: Proceedings of the 22nd IEEE/RSJ international conference on intelligent robots and systems (IROS), St. Louis, MO, USA
  8. Rusu RB, Sundaresan A, Morisset B, Hauser K, Agrawal M, Latombe JC, Beetz M (2009) Leaving flatland: efficient real-time 3D navigation. J Field Robot

Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  1. Intelligent Autonomous Systems, Computer Science Department, Technische Universität München, Garching b. München, Germany
