Autonomous Robots, Volume 36, Issue 1–2, pp 51–65

Learning of grasp selection based on shape-templates

  • Alexander Herzog
  • Peter Pastor
  • Mrinal Kalakrishnan
  • Ludovic Righetti
  • Jeannette Bohg
  • Tamim Asfour
  • Stefan Schaal

Abstract

The ability to grasp unknown objects remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm based on the assumption that similarly shaped objects can be grasped in a similar way. It synthesizes good grasp poses for unknown objects by finding the best matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm improves over time by using the information from previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two platforms with very different hand kinematics, the Willow Garage PR2 and the Barrett WAM robot. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm finds good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and improves its performance over time.
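
The abstract summarizes the core idea: match the local shape of an unknown object against templates recorded from demonstrated grasps, and adapt the ranking of templates based on the outcomes of grasp attempts. The Python sketch below illustrates this idea in outline only; the class names, the height-map distance, and the additive score update are illustrative assumptions, not the paper's actual shape representation or learning rule.

```python
# Minimal sketch (not the authors' implementation): select a grasp for an
# unknown object by matching its local shape against a library of templates
# from demonstrated grasps, then adapt each template's score after attempts.
# All names (Template, GraspLibrary, shape_distance) are illustrative.
import numpy as np

class Template:
    def __init__(self, heightmap, grasp_pose, score=0.0):
        self.heightmap = heightmap    # 2D array: local shape around the demonstrated grasp
        self.grasp_pose = grasp_pose  # 6D pose of the hand relative to the object
        self.score = score            # adapted from past successes/failures

def shape_distance(a, b):
    """Simple dissimilarity between two equally sized height maps."""
    return float(np.mean(np.abs(a - b)))

class GraspLibrary:
    def __init__(self, templates):
        self.templates = templates

    def select(self, observed_heightmap):
        """Return the template that matches best, favoring well-scored ones."""
        return min(
            self.templates,
            key=lambda t: shape_distance(t.heightmap, observed_heightmap) - t.score,
        )

    def update(self, template, success, lr=0.1):
        """Shift the ranking after a grasp attempt: reward success, penalize failure."""
        template.score += lr if success else -lr

# Usage: one demonstrated grasp, then selection and adaptation on a similar object.
lib = GraspLibrary([Template(np.zeros((8, 8)), grasp_pose=(0.1, 0.0, 0.2, 0.0, 0.0, 0.0))])
best = lib.select(np.full((8, 8), 0.05))
lib.update(best, success=True)
```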

Keywords

Model-free grasping · Grasp synthesis · Template learning

Notes

Acknowledgments

The work described in this paper was partially conducted within the EU Cognitive Systems project GRASP (IST-FP7-IP-215821) funded by the European Commission and the German Humanoid Research project SFB588 funded by the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft). Alexander Herzog received support from the InterACT - International Center for Advanced Communication Technologies. This research was supported in part by National Science Foundation grants ECS-0326095, IIS-0535282, IIS-1017134, CNS-0619937, IIS-0917318, CBET-0922784, EECS-0926052, CNS-0960061, the DARPA program on Autonomous Robotic Manipulation, the Army Research Office, the Okawa Foundation, the ATR Computational Neuroscience Laboratories, and the Max-Planck-Society.

Supplementary material

Supplementary material 1: 10514_2013_9366_MOESM1_ESM.flv (FLV, 9,261 KB)


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Alexander Herzog (1)
  • Peter Pastor (2)
  • Mrinal Kalakrishnan (2)
  • Ludovic Righetti (1, 2)
  • Jeannette Bohg (1)
  • Tamim Asfour (3)
  • Stefan Schaal (1, 2)

  1. Autonomous Motion Department, Max-Planck Institute for Intelligent Systems, Tübingen, Germany
  2. Computational Learning and Motor Control Lab, University of Southern California, Los Angeles, USA
  3. High Performance Humanoid Technology Lab (H2T), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
