Relational affordances for multiple-object manipulation


The concept of affordances has been used in robotics to model a robot's action opportunities and as a basis for making decisions involving objects. Affordances capture the interdependencies between objects and their properties, the actions executed on those objects, and the effects of those actions. However, existing affordance models cannot cope with multiple objects that may interact during action execution. Our approach is unique in that it possesses the following four characteristics. First, our model employs recent advances in probabilistic programming to learn affordance models that take into account (spatial) relations between objects, such as relative distances. Two-object interaction models are first learned from the robot interacting with the world in a behavioural exploration stage, and are then employed in worlds with an arbitrary number of objects. The model thus generalizes over both the number of objects and the particular objects used in the exploration stage, and it also deals effectively with uncertainty. Second, rather than using a (discrete) action repertoire, the actions are parametrised according to the motor capabilities of the robot, which makes it possible to model and achieve goals at several levels of complexity. It also supports a two-arm parametrised action. Third, the relational affordance model represents the state of the world using both discrete (action and object features) and continuous (effects) random variables. The effects follow a multivariate Gaussian distribution conditioned on the correlated discrete variables (actions and object properties). Fourth, the learned model can be employed for planning towards high-level goals that closely correspond to goals formulated in natural language. The goals are specified by means of (spatial) relations between the objects. The model is evaluated in real experiments in which an iCub robot is given a series of such planning goals of increasing difficulty.
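The core of the hybrid model described above can be illustrated in a minimal sketch (not the authors' implementation): discrete action and object variables index a multivariate Gaussian over the continuous effects, and a one-step planner scores parametrised actions against a relational goal by Monte Carlo sampling. All names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of a hybrid affordance model: discrete (action, shape)
# configurations index a multivariate Gaussian over continuous effects,
# and candidate actions are scored against a goal by sampling.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned parameters: for each (action, shape) pair,
# a mean and covariance over the effect vector (dx, dy) in cm.
effect_model = {
    ("tap_right", "box"):    (np.array([8.0, 0.5]),   np.diag([1.0, 0.3])),
    ("tap_right", "sphere"): (np.array([14.0, 1.0]),  np.diag([4.0, 1.5])),
    ("tap_left", "box"):     (np.array([-8.0, 0.5]),  np.diag([1.0, 0.3])),
    ("tap_left", "sphere"):  (np.array([-14.0, 1.0]), np.diag([4.0, 1.5])),
}

def sample_effects(action, shape, n=2000):
    """Draw continuous effect samples for one discrete configuration."""
    mean, cov = effect_model[(action, shape)]
    return rng.multivariate_normal(mean, cov, size=n)

def goal_probability(action, shape, goal, n=2000):
    """Estimate P(goal | action, shape) by Monte Carlo over effects."""
    return float(np.mean([goal(e) for e in sample_effects(action, shape, n)]))

# Relational goal: displace the object at least 10 cm to the right.
goal = lambda effect: effect[0] > 10.0

# A sphere rolls further than a box under the same tap (with higher
# variance), so tapping the sphere rightwards best satisfies the goal.
scores = {(a, s): goal_probability(a, s, goal) for (a, s) in effect_model}
best = max(scores, key=scores.get)
print(best)  # ('tap_right', 'sphere')
```

This captures the kind of regularity such a model learns: the effect distribution depends on the discrete configuration, and planning reduces to comparing goal probabilities across parametrised actions.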



  1.

    We make the closed-world assumption: anything that is not defined is considered false.

  2.

    Having the same normalized second central moments.

  3.

    As opposed to the two-arm affordance modelling in Moldovan and De Raedt (2014), we also include in the exploration phase two-arm simultaneous actions whose effects might not always be well modelled by the sum of the individual single-arm actions.

  4.

    An improved, but more involved, planner for DDCs is described in Nitti et al. (2015), but is outside the scope of this paper.

  5.

    The hybrid model was developed later.

  6.

    Gazebo simulator.



  1. Amant, R. S. (1999). Planning and user interface affordances. In Proceedings of the 1999 international conference on Intelligent user interfaces (pp. 135–142).

  2. Berenson, D., & Srinivasa, S. S. (2008). Grasp synthesis in cluttered environments for dexterous hands. In Humanoids.

  3. Blum, A. L., & Langford, J. C. (1999, September). Probabilistic planning in the graphplan framework. In European Conference on Planning (pp. 319–332). Berlin, Heidelberg: Springer.

  4. Brown, S., & Sammut, C. (2013). A relational approach to tool-use learning in robots. In Inductive logic programming.

  5. Browne, C., Powley, E. J., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., et al. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1–43.

  6. Cakmak, M., Dogar, M. R., Ugur, E., & Sahin, E. (2007). Affordances as a framework for robot control. In International Conference on Epigenetic Robotics (EpiRob).

  7. Calinon, S., Guenter, F., & Billard, A. (2007). On learning, representing, and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(2), 286–298.

  8. Christoudias, C. M., Georgescu, B., & Meer, P. (2002). Synergism in low level vision. In ICPR.

  9. Cosgun, A., Hermans, T., Emeli, V., & Stilman, M. (2011). Push planning for object placement on cluttered table surfaces. In IROS.

  10. De Raedt, L., Kimmig, A., & Toivonen, H. (2007). Problog: A probabilistic Prolog and its application in link discovery. In IJCAI.

  11. De Raedt, L. (2008). Logical and relational learning. Berlin: Springer.

  12. De Raedt, L., & Kersting, K. (2008). Probabilistic inductive logic programming. Berlin: Springer.

  13. Emeli, V., Kemp, C., & Stilman, M. (2011). Push planning for object placement in clutter using the PR-2. In IROS PR2 Workshop.

  14. Fikes, R. E., & Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3–4), 189–208.

  15. Fritz, G., Paletta, L., Breithaupt, R., Rome, E., & Dorffner, G. (2006). Learning predictive features in affordance based robotic perception systems. In IROS (pp. 3642–3647).

  16. Geib, C., Mourão, K., Petrick, R. P. A., Pugeault, N., Steedman, M., Krueger, N., & Wörgötter, F. (2006). Object action complexes as an interface for planning and robot control. In Workshop: Towards Cognitive Humanoid Robots at IEEE RAS.

  17. Getoor, L., & Taskar, B. (2007). Introduction to statistical relational learning. Cambridge: MIT Press.

  18. Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

  19. Gienger, M., Toussaint, M., & Goerick, C. (2008). Task maps in humanoid robot manipulation. In IROS.

  20. Gutmann, B., Thon, I., Kimmig, A., Bruynooghe, M., & De Raedt, L. (2011). The magic of logical inference in probabilistic programming. Theory and Practice of Logic Programming, 11(4–5), 663–680.

  21. Hertzberg, J., & Chatila, R. (2008). AI reasoning methods for robotics. Handbook of robotics. Berlin: Springer.

  22. Hirschmuller, H. (2008). Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2), 328–341.

  23. Jain, D., Mösenlechner, L., & Beetz, M. (2009). Equipping robot control programs with first-order probabilistic reasoning capabilities. In ICRA (pp. 3626–3631).

  24. Jetchev, N., & Toussaint, M. (2010). Trajectory prediction in cluttered voxel environments. In ICRA.

  25. Kearns, M., Mansour, Y., & Ng, A. Y. (2002). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2–3), 193–208.

  26. Kjærulff, U. B., & Madsen, A. L. (2005). Probabilistic networks–An introduction to Bayesian networks and influence diagrams. Aalborg: Aalborg University.

  27. Kjellström, H., Romero, J., & Kragić, D. (2011). Visual object-action recognition: Inferring object affordances from human demonstration. Computer Vision and Image Understanding, 115(1), 81–90.

  28. Kocsis, L., & Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In ECML (pp. 282–293).

  29. Kragic, D., Björkman, M., Christensen, H. I., & Eklundh, J. O. (2005). Vision for robotic object manipulation in domestic settings. Robotics and Autonomous Systems, 52(1), 85–100.

  30. Kruger, N., Piater, J., Worgotter, F., Geib, C., Petrick, R., Steedman, M., Asfour, T., Kraft, D., Hommel, B., Agostini, A., Kragic, D., Eklundh, J.-O., Kruger, V., Torras, C., & Dillmann, R. (2009). A formal definition of object-action complexes and examples at different levels of the processing hierarchy. In Computer and Information Science (pp. 1–39).

  31. Krüger, N., Geib, C., Piater, J., Petrick, R., Steedman, M., Wörgötter, F., et al. (2011). Object-Action complexes: Grounded abstractions of sensory-motor processes. Robotics and Autonomous Systems, 59(10), 740–757.

  32. Krunic, V., Salvi, G., Bernardino, A., Montesano, L., & Santos-Victor, J. (2009). Affordance based word-to-meaning association. In ICRA.

  33. Kushmerick, N., Hanks, S., & Weld, D. S. (1995). An algorithm for probabilistic planning. Artificial Intelligence, 76(1–2), 239–286.

  34. Lang, T., & Toussaint, M. (2009). Approximate inference for planning in stochastic relational worlds. In ICML (pp. 585–592).

  35. Lang, T., & Toussaint, M. (2010). Planning with noisy probabilistic relational rules. Journal of Artificial Intelligence Research, 39, 1–49.

  37. Lopes, M., Melo, F. S., & Montesano, L. (2007). Affordance-based imitation learning in robots. In IROS (pp. 1015–1021).

  38. Lorken, C., & Hertzberg, J. (2008). Grounding planning operators by affordances. In International Conference on Cognitive Systems.

  39. Lungarella, M., Metta, G., Pfeifer, R., & Sandini, G. (2003). Developmental robotics: A survey. Connection Science, 15, 151–190.

  40. Mcdermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., & Wilkins, D. (1998). PDDL—The planning domain definition language, Technical report, CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control.

  41. Metta, G., Sandini, G., Vernon, D., Natale, L., & Nori, F. (2008). The iCub humanoid robot: An open platform for research in embodied cognition. In PerMIS.

  42. Metta, G., Fitzpatrick, P., & Natale, L. (2006). YARP: Yet another robot platform. International Journal on Advanced Robotics Systems, 3(1), 43–48.

  43. Moldovan, B., & De Raedt, L. (2014). Learning relational affordance models for two-arm robots. In IROS.

  44. Moldovan, B., Moreno, P., & van Otterlo, M. (2013). On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics. In ICRA.

  45. Moldovan, B., Moreno, P., van Otterlo, M., Santos-Victor, J., & De Raedt, L. (2012). Learning relational affordance models for robots in multi-object manipulation tasks. In ICRA.

  46. Montesano, L., Lopes, M., Bernardino, A., & Santos-Victor, J. (2008). Learning object affordances: From sensory-motor coordination to imitation. IEEE Transactions on Robotics, 24, 15–26.

  47. Mourão, K., Petrick, R. P. A., & Steedman, M. (2010). Learning action effects in partially observable domains. In ECAI (pp. 973–974).

  48. Murphy, K. P. (2002). Dynamic Bayesian networks: Representation, inference and learning, Ph.D. thesis, University of California, Berkeley.

  49. Murphy, K., et al. (2001). The Bayes net toolbox for Matlab. Computing Science and Statistics, 33(2), 1024–1034.

  50. Nitti, D., Belle, V., & De Raedt, L. (2015). Planning in discrete and continuous Markov decision processes by probabilistic programming. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD) 2015, Part II (Vol. 9285).

  51. Nitti, D., De Laet, T., & De Raedt, L. (2014). Relational object tracking and learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (pp. 935–942).

  52. Nitti, D., De Laet, T., & De Raedt, L. (2016). Probabilistic logic programming for hybrid relational domains. Machine Learning, 103(3), 407–449.

  53. Nitti, D., De Laet, T., & De Raedt, L. (2013). A particle filter for hybrid relational domains. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2764–2771).

  54. Pasula, H. M., Zettlemoyer, L. S., & Kaelbling, L. P. (2004). Learning probabilistic relational planning rules. In ICAPS (pp. 73–82).

  55. Pattacini, U., Nori, F., Natale, L., Metta, G., & Sandini, G. (2010). An experimental evaluation of a novel minimum-jerk cartesian controller for humanoid robots. In IROS.

  56. Şahin, E., Çakmak, M., Doğar, M. R., Uğur, E., & Üçoluk, G. (2007). To afford or not to afford: A new formalization of affordances towards affordance based robot control. Adaptive Behavior, 15(4), 447–472.

  57. Sato, T. (1995). A statistical learning method for logic programs with distribution semantics. In ICLP (pp. 715–729).

  58. Sinapov, J., & Stoytchev, A. (2007). Learning and generalization of behavior-grounded tool affordances. In ICDL (pp. 19–24).

  59. Steedman, M. (2002). Formalizing affordance. In Annual Meeting of the Cognitive Science Society (pp. 834–839).

  60. Stoytchev, A. (2005). Behavior-grounded representation of tool affordances. In ICRA (pp. 3060–3065).

  61. Stulp, F., & Beetz, M. (2008). Combining declarative, procedural, and predictive knowledge to generate, execute, optimize robot plans. Robotics and Autonomous Systems, 56, 967–979.

  62. Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.

  63. Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge: MIT Press.

  64. Toussaint, M., Plath, N., Lang, T., & Jetchev, N. (2010). Integrated motor control, planning, grasping and high-level reasoning in a blocks world using probabilistic inference. In ICRA.

  65. Ugur, E., Dogar, M. R., Cakmak, M., & Sahin, E. (2007). The learning and use of traversability affordance using range images on a mobile robot. In ICRA (pp. 1721–1726).

  66. Ugur, E., Sahin, E., & Oztop, E. (2009). Affordance learning from range data for multi-step planning. In International Conference on Epigenetic Robotics (EpiRob).

  67. Vahrenkamp, N., Berenson, D., Asfour, T., Kuffner, J., & Dillmann, R. (2009). Humanoid motion planning for dual-arm manipulation and re-grasping tasks. In IROS, 2009 (pp. 2464–2470).

  68. van Otterlo, M. (2009). The logic of adaptive behavior. Amsterdam: IOS Press.

  69. Wiering, M. A., & van Otterlo, M. (2012). Reinforcement Learning: State-of-the-art. Berlin: Springer.

  70. Wörgötter, F., Agostini, A., Krüger, N., Shylo, N., & Porr, B. (2009). Cognitive agents—A procedural perspective relying on the predictability of object-action complexes (OACs). Robotics and Autonomous Systems, 57, 420–432.

  71. Younes, H. L., & Littman, M. L. (2004). PPDDL1.0: An extension to PDDL for expressing planning domains with probabilistic effects. Technical Report CMU-CS-04-162.

  72. Zamani, Z., Sanner, S., & Fang, C. (2012). Symbolic dynamic programming for continuous state and action MDPs. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence.

  73. Zettlemoyer, L. S., Pasula, H., & Kaelbling, L. P. (2005). Learning planning rules in noisy stochastic worlds. In AAAI (pp. 911–918).

  74. Zöllner, R., Asfour, T., & Dillmann, R. (2004). Programming by demonstration: dual-arm manipulation tasks for humanoid robots. In IROS (pp. 479–484).


This work was partly supported by the EU FP7 Project FIRST-MM, the IWT via scholarships for BM and DN, the Research Foundation Flanders, and the KU Leuven BOF. It was also supported by Research Infrastructure 22084-01/SAICT/2016 - Robotics, Brain and Cognition (RBCog-Lab) and Project FCT UID/EEA/50009/2013.

Author information



Corresponding author

Correspondence to Plinio Moreno.

Cite this article

Moldovan, B., Moreno, P., Nitti, D. et al. Relational affordances for multiple-object manipulation. Auton Robot 42, 19–44 (2018).


  • Affordances
  • Relational affordances
  • Probabilistic programming
  • Object manipulation
  • Planning