Learning Continuous Grasp Affordances by Sensorimotor Exploration

  • R. Detry
  • E. Başeski
  • M. Popović
  • Y. Touati
  • N. Krüger
  • O. Kroemer
  • J. Peters
  • J. Piater
Part of the Studies in Computational Intelligence book series (SCI, volume 264)


We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that is able to assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of 3D positions and orientations. Grasp densities are registered with a visual model of the object they characterize. They are exploited by aligning them to a target object using visual pose estimation. Grasp densities are refined through experience: A robot “plays” with an object by executing grasps drawn randomly from the object’s grasp density. The robot then uses the outcomes of these grasps to build a richer density through an importance sampling mechanism. Initial grasp densities, called hypothesis densities, are bootstrapped from grasps collected using a motion capture system, or from grasps generated from the visual model of the object. Refined densities, called empirical densities, represent affordances that have been confirmed through physical experience. The applicability of our method is demonstrated by producing empirical densities for two objects with a real robot and its 3-finger hand. Hypothesis densities are created from visual cues and human demonstration.
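The refinement loop described above can be sketched in simplified form. The snippet below is an illustrative 1-D stand-in for the 6-DOF pose space (the chapter's actual densities live on 3D positions and orientations): a hypothesis density is represented by particles, grasps are drawn from it, and successful outcomes are kept as the particles of the empirical density, i.e. importance weighting with binary success weights. The function `grasp_succeeds` and all numeric parameters are hypothetical, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_succeeds(x):
    # Hidden ground truth, unknown to the robot: grasps near 0.5
    # mostly succeed (purely illustrative).
    return rng.random() < np.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2))

# Hypothesis density: particles bootstrapped e.g. from human demonstration.
hypothesis = rng.normal(0.4, 0.2, size=200)

# "Play" phase: draw grasps from the hypothesis density by resampling
# its particles with kernel noise, then execute and record outcomes.
trials = rng.choice(hypothesis, size=500) + rng.normal(0, 0.05, size=500)
outcomes = np.array([grasp_succeeds(x) for x in trials])

# Refinement: successful grasps become the particles of the empirical
# density; a kernel density estimate over them approximates the pdf.
empirical = trials[outcomes]

print(f"success rate during play: {outcomes.mean():.2f}")
print(f"empirical density mean:   {empirical.mean():.2f}")
```

In the chapter's setting the particles are full gripper poses, the kernels are position-orientation kernels, and outcomes weight the samples rather than simply filtering them; this sketch only conveys the hypothesis-to-empirical structure.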





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • R. Detry (1)
  • E. Başeski (2)
  • M. Popović (2)
  • Y. Touati (2)
  • N. Krüger (2)
  • O. Kroemer (3)
  • J. Peters (3)
  • J. Piater (1)
  1. University of Liège, Belgium
  2. University of Southern Denmark
  3. MPI for Biological Cybernetics, Tübingen, Germany
