Abstract
Affordance theory provides a biology-inspired approach that enables a robot to act, think, and develop like a human being. Based on known affordance relationships, a robot understands its environment and task in terms of the potential actions it can execute. Deep learning allows a robot to perceive its environment efficiently. Consequently, affordance-based perception combined with deep learning offers a promising way for a robot to provide useful service. However, affordance knowledge cannot be gained through visual perception alone, and a single object may have multiple affordances. In this paper, we propose a novel framework that combines affordance knowledge with visual perception. Our method has the following features: (i) it maps human instructions into affordance knowledge; (ii) it perceives the environment with deep neural networks and associates each object with its affordances. In our experiments, a humanoid NAO robot is used, and the results demonstrate that affordance knowledge can improve deep-learning-based robotic understanding.
Acknowledgment
This work is partly supported by the Humanities and Social Science Youth Foundation of the Ministry of Education of China (18YJCZH226), the Science and Technology Plan of Guangdong Province of China (2020A1414050072), the Science and Technology Plan of Foshan (1920001000529, Foshan Science and Technology Bureau), and the Postgraduate Free Exploration Fund of Foshan University (2020ZYTS07).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Yi, C., Chen, H., Zhong, J., Liu, X., Hu, X., Xu, Y. (2022). Using Affordances to Improve Robotic Understanding Based on Deep Learning. In: Wu, M., Niu, Y., Gu, M., Cheng, J. (eds) Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021). ICAUS 2021. Lecture Notes in Electrical Engineering, vol 861. Springer, Singapore. https://doi.org/10.1007/978-981-16-9492-9_243
DOI: https://doi.org/10.1007/978-981-16-9492-9_243
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-9491-2
Online ISBN: 978-981-16-9492-9
eBook Packages: Intelligent Technologies and Robotics (R0)