Autonomous Robots, Volume 42, Issue 1, pp 19–44

Relational affordances for multiple-object manipulation

  • Bogdan Moldovan
  • Plinio Moreno
  • Davide Nitti
  • José Santos-Victor
  • Luc De Raedt


The concept of affordances has been used in robotics to model the action opportunities of a robot and as a basis for making decisions involving objects. Affordances capture the interdependencies between objects and their properties, the actions executed on those objects, and the effects of those actions. However, existing affordance models cannot cope with multiple objects that may interact during action execution. Our approach is unique in that it possesses the following four characteristics. First, our model employs recent advances in probabilistic programming to learn affordance models that take into account (spatial) relations between different objects, such as relative distances. Two-object interaction models are first learned while the robot interacts with the world in a behavioural exploration stage, and are then employed in worlds with an arbitrary number of objects. The model thus generalizes over both the number of objects and the particular objects used in the exploration stage, and it also deals effectively with uncertainty. Second, rather than using a (discrete) action repertoire, the actions are parametrised according to the motor capabilities of the robot, which makes it possible to model and achieve goals at several levels of complexity; it also supports a parametrised two-arm action. Third, the relational affordance model represents the state of the world using both discrete (action and object features) and continuous (effects) random variables; the effects follow a multivariate Gaussian distribution conditioned on the correlated discrete variables (actions and object properties). Fourth, the learned model can be employed to plan for high-level goals that closely correspond to goals formulated in natural language; the goals are specified by means of (spatial) relations between the objects. The model is evaluated in real experiments in which an iCub robot is given a series of such planning goals of increasing difficulty.
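The hybrid representation described above (discrete action and object features conditioning a multivariate Gaussian over continuous effects) can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the action names, object shapes, and all numeric parameters are hypothetical, standing in for parameters that would be learned during the behavioural exploration stage.

```python
import numpy as np

# Hypothetical effect model: each (action, shape) pair selects the mean
# and covariance of a Gaussian over the continuous effect, here the
# object's displacement (dx, dy). All numbers are illustrative only.
EFFECT_MODEL = {
    ("tap_right", "box"):    (np.array([0.05, 0.00]), np.diag([1e-4, 1e-4])),
    ("tap_right", "sphere"): (np.array([0.12, 0.01]), np.diag([9e-4, 4e-4])),
}

def sample_effect(action, shape, rng):
    """Sample a continuous effect conditioned on the discrete state."""
    mu, cov = EFFECT_MODEL[(action, shape)]
    return rng.multivariate_normal(mu, cov)

rng = np.random.default_rng(0)
dx, dy = sample_effect("tap_right", "sphere", rng)
# Under this toy model, spheres roll further than boxes for the same tap.
```

In the paper's setting such conditional effect distributions are expressed within a probabilistic logic program, so that the two-object interaction models learned during exploration can be reused, via relational generalization, in scenes with any number of objects.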


Keywords: Affordances · Relational affordances · Probabilistic programming · Object manipulation · Planning



This work was partly supported by the EU FP7 Project FIRST-MM, the IWT via scholarships for BM and DN, the Research Foundation Flanders, and the KU Leuven BOF, as well as by Research Infrastructure 22084-01/SAICT/2016 - Robotics, Brain and Cognition (RBCog-Lab) and Project FCT UID/EEA/50009/2013.



Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium
  2. Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal
