OpenGRASP: A Toolkit for Robot Grasping Simulation

  • Beatriz León
  • Stefan Ulbrich
  • Rosen Diankov
  • Gustavo Puche
  • Markus Przybylski
  • Antonio Morales
  • Tamim Asfour
  • Sami Moisio
  • Jeannette Bohg
  • James Kuffner
  • Rüdiger Dillmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6472)

Abstract

Simulation is essential for several robotics research fields, such as mobile robotics, motion planning and grasp planning. For grasping in particular, no software simulation package provides a holistic environment that can deal with the variety of aspects associated with this problem. These aspects include the development and testing of new algorithms and the modeling of environments and robots, including actuators, sensors and contacts. In this paper, we present OpenGRASP, a new simulation toolkit for grasping and dexterous manipulation that addresses these aspects in addition to extensibility, interoperability and public availability. OpenGRASP is based on a modular architecture that supports the creation and addition of new functionality and the integration of existing, widely used technologies and standards. In addition, a dedicated editor has been created for the generation and migration of robot models. We demonstrate the current state of OpenGRASP’s development and its application in a grasp evaluation environment.
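To give an impression of the grasp evaluation workflow mentioned above, the following is a minimal sketch using the OpenRAVE Python bindings on which OpenGRASP builds. The scene file, robot and target object names are hypothetical placeholders and would be replaced by the user's own models.

from openravepy import Environment, databases

env = Environment()                      # simulation environment (physics, collision checking)
env.Load('data/lab1.env.xml')            # hypothetical scene containing a robot and objects
robot = env.GetRobots()[0]               # first robot defined in the scene
target = env.GetKinBody('mug1')          # hypothetical target object to grasp

# Grasp set for this robot/target pair: generated once, then cached on disk.
gmodel = databases.grasping.GraspingModel(robot, target)
if not gmodel.load():
    gmodel.autogenerate()                # sample approach directions, close the fingers, test force closure

# Keep only grasps that are collision-free and kinematically reachable in this scene.
validgrasps, validindices = gmodel.computeValidGrasps(returnnum=5)
print('%d valid grasps found' % len(validgrasps))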

Keywords

software toolkit, grasping simulation, robot modeling

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Beatriz León (1)
  • Stefan Ulbrich (2)
  • Rosen Diankov (5)
  • Gustavo Puche (1)
  • Markus Przybylski (2)
  • Antonio Morales (1)
  • Tamim Asfour (2)
  • Sami Moisio (3)
  • Jeannette Bohg (4)
  • James Kuffner (5)
  • Rüdiger Dillmann (2)

  1. Robotic Intelligence Laboratory, Universitat Jaume I, Castellón, Spain
  2. Institute for Anthropomatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
  3. Centre of Computational Engineering and Integrated Design, Lappeenranta University of Technology, Finland
  4. Computer Vision and Active Perception Laboratory, Royal Institute of Technology, Stockholm, Sweden
  5. Robotics Institute, Carnegie Mellon University, USA