Learning Affordances for Assistive Robots

  • Mohan Sridharan
  • Ben Meadows
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10652)


This paper describes an architecture that enables a robot to represent, reason about, and learn affordances. Specifically, Answer Set Prolog is used to represent and reason with incomplete domain knowledge that includes affordances modeled as relations between attributes of the robot and the object(s) in the context of specific actions. The learning of affordance relations from observations obtained through reactive execution or active exploration is formulated as a reinforcement learning problem. A sampling-based approach and decision-tree regression with the underlying relational representation are used to obtain generic affordance relations that are added to the Answer Set Prolog program for subsequent reasoning. The capabilities of this architecture are illustrated and evaluated in the context of a simulated robot assisting humans in an indoor domain.
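As a rough illustration only (not the paper's actual implementation), the learning step described above — fitting a regression tree to observed action outcomes and translating the result into an Answer Set Prolog rule for subsequent reasoning — might be sketched as follows. The `lift` action, the `weight` attribute, the simulated observations, and the rule syntax are all invented for this example; a one-level tree (a stump) stands in for full decision-tree regression.

```python
# Hypothetical sketch: learn a simple affordance relation for a "lift"
# action from sampled (attribute value, success) observations, then
# encode the learned relation as an ASP-style rule string.

def sse(ys):
    """Sum of squared errors of values around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def stump_split(samples):
    """One-level regression tree: choose the threshold on the attribute
    that minimizes squared error of the observed success signal."""
    samples = sorted(samples)
    values = [v for v, _ in samples]
    best_err, best_thr = float("inf"), None
    for i in range(1, len(samples)):
        thr = (values[i - 1] + values[i]) / 2.0
        left = [s for v, s in samples if v < thr]
        right = [s for v, s in samples if v >= thr]
        err = sse(left) + sse(right)
        if err < best_err:
            best_err, best_thr = err, thr
    return best_thr

# Simulated observations: lifting succeeds only for light objects.
obs = [(0.5, 1), (1.0, 1), (1.5, 1), (2.5, 0), (3.0, 0), (4.0, 0)]
threshold = stump_split(obs)

# Translate the learned split into a rule that could be appended to
# the ASP program (syntax is illustrative, not actual clingo output).
rule = f"affords(lift, O) :- weight(O, W), W < {threshold}."
```

With the simulated data above, the learned threshold separates the successful light objects from the failed heavy ones; the resulting rule string is the kind of generic affordance relation the architecture would add to its knowledge base.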



This work was supported in part by the US Office of Naval Research award N00014-13-1-0766 and Asian Office of Aerospace Research and Development award FA2386-16-1-4071. All conclusions are those of the authors alone.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, The University of Auckland, Auckland, New Zealand
