Using Affordances to Improve Robotic Understanding Based on Deep Learning

  • Conference paper
  • First Online:
Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021) (ICAUS 2021)

Part of the book series: Lecture Notes in Electrical Engineering ((LNEE,volume 861))

Abstract

Affordance theory provides a biologically inspired approach that enables a robot to act, think, and develop like a human being. Based on known affordance relationships, a robot understands its environment and task in terms of the potential actions it can execute. Deep learning allows a robot to perceive its environment efficiently. Affordance-based perception combined with deep learning therefore offers a promising route to robots that provide useful service to humans. However, affordance knowledge cannot be gained by visual perception alone, and a single object may have multiple affordances. In this paper, we propose a novel framework that combines affordance knowledge with visual perception. Our method has the following features: (i) it maps human instructions into affordance knowledge; (ii) it perceives the environment with deep neural networks and associates each object with its affordances. In our experiments with a humanoid NAO robot, the results demonstrate that affordance knowledge can improve deep-learning-based robotic understanding.
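The two steps in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object labels, instructions, and affordance tables are hypothetical, and the detected labels stand in for the output of a deep object detector.

```python
# Step (ii): affordance knowledge base. A single object label may carry
# multiple affordances, as the abstract notes.
AFFORDANCES = {
    "cup": {"contain", "grasp"},
    "knife": {"cut", "grasp"},
    "sponge": {"wipe", "grasp"},
}

# Step (i): map a human instruction to the affordance it requires.
INSTRUCTION_TO_AFFORDANCE = {
    "pour water": "contain",
    "clean the table": "wipe",
    "slice the bread": "cut",
}

def select_object(instruction, detected_labels):
    """Return the first detected object whose affordances satisfy the instruction,
    or None if no suitable object (or no mapping) is found."""
    required = INSTRUCTION_TO_AFFORDANCE.get(instruction)
    if required is None:
        return None
    for label in detected_labels:
        if required in AFFORDANCES.get(label, set()):
            return label
    return None

# Labels as they might come from a deep detector running on the robot's camera.
detections = ["cup", "knife", "sponge"]
print(select_object("clean the table", detections))  # sponge
```

In a full system the `detected_labels` would come from a trained network and the affordance table would be learned or curated, but the association logic keeps the same shape: perception yields labels, and affordance knowledge turns labels into candidate actions.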



Acknowledgment

This work was partly supported by the Humanities and Social Science Youth Foundation of the Ministry of Education of China (18YJCZH226), the Science and Technology Plan of Guangdong Province of China (2020A1414050072), the Science and Technology Plan of Foshan (1920001000529, Foshan Science and Technology Bureau), and the Postgraduate Free Exploration Fund of Foshan University (2020ZYTS07).


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yi, C., Chen, H., Zhong, J., Liu, X., Hu, X., Xu, Y. (2022). Using Affordances to Improve Robotic Understanding Based on Deep Learning. In: Wu, M., Niu, Y., Gu, M., Cheng, J. (eds) Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021). ICAUS 2021. Lecture Notes in Electrical Engineering, vol 861. Springer, Singapore. https://doi.org/10.1007/978-981-16-9492-9_243
