
Enabling Affordance-Guided Grasp Synthesis for Robotic Manipulation

  • Xavier Williams
  • Nihar R. Mahapatra
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1119)

Abstract

Empowering robots with the ability to use image data to understand complex object affordances will enable them to interact intuitively with any given object. An affordance is an action, such as scooping or swinging, that an object allows. A single object may have multiple affordances, each linked to one of the object's functional parts. Furthermore, different affordances such as scooping and swinging require particular grasp configurations. This is the principle behind affordance-guided grasp synthesis: once the affordances and their corresponding grasps are known, the robot is primed to learn the behaviors needed to execute those affordances. This paper presents a systematic review of recent work on affordance detection and on how detected affordances are used to guide grasp synthesis.
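To make the two-stage idea concrete, the sketch below illustrates how per-part affordance labels could be mapped to grasp configurations before behavior learning. It is a minimal, hypothetical Python sketch: the names and values (AFFORDANCE_TO_GRASP, detect_affordances, choose_grasp, the grasp parameters) are illustrative assumptions, not the method or code of any work reviewed in this paper.

```python
# Hypothetical sketch of affordance-guided grasp synthesis:
# detected per-part affordances select a grasp configuration for a given task.
from dataclasses import dataclass


@dataclass
class GraspConfig:
    approach: str           # direction of approach relative to the part (illustrative)
    wrist_roll_deg: float   # wrist orientation (illustrative)
    gripper_width_m: float  # gripper opening (illustrative)


# Each affordance implies a particular grasp configuration (assumed values).
AFFORDANCE_TO_GRASP = {
    "scoop": GraspConfig("top-down on handle", 0.0, 0.03),
    "swing": GraspConfig("side-on at handle end", 90.0, 0.04),
}


def detect_affordances(image):
    """Stand-in for a learned detector that labels an object's functional
    parts with affordances, e.g. {"handle": "swing", "bowl": "scoop"}."""
    raise NotImplementedError("placeholder for a trained affordance-detection model")


def choose_grasp(part_affordances, task):
    """Return the functional part and grasp configuration matching the task,
    or None if no part of the object affords the requested action."""
    for part, affordance in part_affordances.items():
        if affordance == task:
            return part, AFFORDANCE_TO_GRASP[affordance]
    return None
```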

Keywords

Affordance detection · Functional parts · Grasping · Object manipulation · Robotic manipulation


Acknowledgements

This material is based upon work partly supported by the U.S. National Science Foundation under Grant No. 1936857.


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Michigan State University, East Lansing, USA
