Autonomous Robots, Volume 37, Issue 3, pp 301–316

Semantic grasping: planning task-specific stable robotic grasps

Abstract

We present an example-based planning framework for generating semantic grasps: stable grasps that are functionally suitable for specific object manipulation tasks. We propose using partial object geometry, tactile contacts, and hand kinematic data as proxies to encode task-related constraints, which we call semantic constraints. We introduce a semantic affordance map, which relates local geometry to a set of predefined semantic grasps appropriate for different tasks. Using this map, the pose of a robot hand with respect to the object can be estimated so that the hand achieves the ideal approach direction required by a particular task. A grasp planner then searches along this approach direction and generates a set of final grasps with appropriate stability, tactile contacts, and hand kinematics. We present experiments that plan semantic grasps on everyday objects and apply these grasps with a physical robot.
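For concreteness, the Python sketch below illustrates the pipeline the abstract outlines: a semantic affordance map retrieves a stored semantic grasp from a partial-geometry descriptor, and candidate grasps from a stability-aware planner are then filtered by how well their tactile contacts and hand posture match the stored exemplar. All names (SemanticGrasp, SemanticAffordanceMap, plan_semantic_grasp), the nearest-neighbor descriptor matching, and the tolerances are illustrative assumptions, not the authors' implementation; candidate generation along the approach direction is left abstract.

```python
# Minimal sketch of the semantic-grasp pipeline; all identifiers are
# hypothetical, not the authors' API.
import numpy as np

class SemanticGrasp:
    """A predefined semantic grasp: the ideal approach direction plus the
    tactile pattern and joint posture recorded for a given task."""
    def __init__(self, task, approach_dir, tactile, joints):
        self.task = task
        self.approach_dir = np.asarray(approach_dir, dtype=float)
        self.tactile = np.asarray(tactile, dtype=float)
        self.joints = np.asarray(joints, dtype=float)

class SemanticAffordanceMap:
    """Relates local-geometry descriptors to predefined semantic grasps."""
    def __init__(self):
        self.entries = []  # (descriptor, semantic_grasp) pairs

    def add(self, descriptor, grasp):
        self.entries.append((np.asarray(descriptor, dtype=float), grasp))

    def lookup(self, descriptor, task):
        # Nearest-neighbor match on the partial-geometry descriptor,
        # restricted to grasps recorded for the requested task.
        cands = [(np.linalg.norm(d - descriptor), g)
                 for d, g in self.entries if g.task == task]
        return min(cands, key=lambda c: c[0])[1] if cands else None

def plan_semantic_grasp(descriptor, task, afford_map, candidate_grasps,
                        tactile_tol=0.5, joint_tol=0.5):
    """Among candidates produced by a stability-aware planner searching
    along the exemplar's approach direction, pick the grasp whose tactile
    contacts and hand kinematics best match the stored semantic grasp."""
    exemplar = afford_map.lookup(np.asarray(descriptor, dtype=float), task)
    if exemplar is None:
        return None
    best, best_cost = None, np.inf
    for g in candidate_grasps:
        # Semantic constraints: tactile pattern and joint posture should
        # resemble the exemplar recorded for this task.
        t_err = np.linalg.norm(g.tactile - exemplar.tactile)
        j_err = np.linalg.norm(g.joints - exemplar.joints)
        if t_err < tactile_tol and j_err < joint_tol and t_err + j_err < best_cost:
            best, best_cost = g, t_err + j_err
    return best

if __name__ == "__main__":
    m = SemanticAffordanceMap()
    # A "pour" exemplar demonstrated on a mug-like local geometry (toy data).
    m.add([0.1, 0.9], SemanticGrasp("pour", [0, 0, -1], [1, 0, 1], [0.2, 0.4, 0.4]))
    # Candidates as returned by a stability-aware planner (stubbed here).
    cands = [SemanticGrasp("pour", [0, 0, -1], [1, 0, 0.9], [0.25, 0.4, 0.35])]
    print(plan_semantic_grasp([0.12, 0.88], "pour", m, cands))
```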

Keywords

Grasp planning · Task-specific grasping · Semantic grasping

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Computer Science Department, Columbia University, New York, USA
