The concept of affordances has been used in robotics to model a robot's action opportunities and as a basis for making decisions involving objects. Affordances capture the interdependencies between objects and their properties, the actions executed on those objects, and the effects of those actions. However, existing affordance models cannot cope with multiple objects that may interact during action execution. Our approach is unique in that it possesses the following four characteristics. First, our model employs recent advances in probabilistic programming to learn affordance models that take into account (spatial) relations between different objects, such as relative distances. Two-object interaction models are first learned while the robot interacts with the world in a behavioural exploration stage, and are then employed in worlds with an arbitrary number of objects. The model thus generalizes over both the number of objects and the particular objects used in the exploration stage, and it also deals effectively with uncertainty. Secondly, rather than using a (discrete) action repertoire, the actions are parametrised according to the motor capabilities of the robot, which allows goals to be modelled and achieved at several levels of complexity; a two-arm parametrised action is also supported. Thirdly, the relational affordance model represents the state of the world using both discrete (action and object features) and continuous (effects) random variables; the effects follow a multivariate Gaussian distribution conditioned on the correlated discrete variables (actions and object properties). Fourthly, the learned model can be employed for planning towards high-level goals that closely correspond to goals formulated in natural language; the goals are specified by means of (spatial) relations between the objects. The model is evaluated in real experiments in which an iCub robot is given a series of such planning goals of increasing difficulty.
We adopt the closed-world assumption: anything that is not known to be true is considered false.
Having the same normalized second central moments.
As opposed to the two-arm affordance modelling in Moldovan and De Raedt (2014), we also include in the exploration phase two-arm simultaneous actions, whose effects might not always be well modelled by the sum of the individual single-arm actions.
An improved, but more involved, planner for DDCs is described in Nitti et al. (2015); it is outside the scope of this paper.
The hybrid model was developed later.
Amant, R. S. (1999). Planning and user interface affordances. In Proceedings of the 1999 international conference on Intelligent user interfaces (pp. 135–142).
Berenson, D., & Srinivasa, S. S. (2008). Grasp synthesis in cluttered environments for dexterous hands. In Humanoids.
Blum, A. L., & Langford, J. C. (1999). Probabilistic planning in the Graphplan framework. In European Conference on Planning (pp. 319–332). Berlin: Springer.
Brown, S., & Sammut, C. (2013). A relational approach to tool-use learning in robots. In Inductive logic programming.
Browne, C., Powley, E. J., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., et al. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1–43.
Cakmak, M., Dogar, M. R., Ugur, E., & Sahin, E. (2007). Affordances as a framework for robot control. In International Conference on Epigenetic Robotics (EpiRob).
Calinon, S., Guenter, F., & Billard, A. (2007). On learning, representing, and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(2), 286–298.
Christoudias, C. M., Georgescu, B., & Meer, P. (2002). Synergism in low level vision. In ICPR.
Cosgun, A., Hermans, T., Emeli, V., & Stilman, M. (2011). Push planning for object placement on cluttered table surfaces. In IROS.
De Raedt, L., Kimmig, A., & Toivonen, H. (2007). Problog: A probabilistic Prolog and its application in link discovery. In IJCAI.
De Raedt, L. (2008). Logical and relational learning. Berlin: Springer.
De Raedt, L., & Kersting, K. (2008). Probabilistic inductive logic programming. Berlin: Springer.
Emeli, V., Kemp, C., & Stilman, M. (2011). Push planning for object placement in clutter using the PR-2. In IROS PR2 Workshop.
Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3–4), 189–208.
Fritz, G., Paletta, L., Breithaupt, R., Rome, E., & Dorffner, G. (2006). Learning predictive features in affordance based robotic perception systems. In IROS (pp. 3642–3647).
Geib, C., Mourão, K., Petrick, R. P. A., Pugeault, N., Steedman, M., Krueger, N., & Wörgötter, F. (2006). Object action complexes as an interface for planning and robot control. In Workshop: Towards Cognitive Humanoid Robots at IEEE RAS.
Getoor, L., & Taskar, B. (2007). Introduction to statistical relational learning. Cambridge: The MIT press.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gienger, M., Toussaint, M., & Goerick, C. (2008). Task maps in humanoid robot manipulation. In IROS.
Gutmann, B., Thon, I., Kimmig, A., Bruynooghe, M., & De Raedt, L. (2011). The magic of logical inference in probabilistic programming. Theory and Practice of Logic Programming, 11(4–5), 663–680.
Hertzberg, J., & Chatila, R. (2008). AI reasoning methods for robotics. Handbook of robotics. Berlin: Springer.
Hirschmuller, H. (2008). Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2), 328–341.
Jain, D., Mösenlechner, L., & Beetz, M. (2009). Equipping robot control programs with first-order probabilistic reasoning capabilities. In ICRA (pp. 3626–3631).
Jetchev, N., & Toussaint, M. (2010). Trajectory prediction in cluttered voxel environments. In ICRA.
Kearns, M., Mansour, Y., & Ng, A. Y. (2002). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2–3), 193–208.
Kjærulff, U. B., & Madsen, A. L. (2005). Probabilistic networks–An introduction to Bayesian networks and influence diagrams. Aalborg: Aalborg University.
Kjellström, H., Romero, J., & Kragić, D. (2011). Visual object-action recognition: Inferring object affordances from human demonstration. Computer Vision and Image Understanding, 115(1), 81–90.
Kocsis, L., & Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In ECML (pp. 282–293).
Kragic, D., Björkman, M., Christensen, H. I., & Eklundh, J. O. (2005). Vision for robotic object manipulation in domestic settings. Robotics and Autonomous Systems, 52(1), 85–100.
Kruger, N., Piater, J., Worgotter, F., Geib, C., Petrick, R., Steedman, M., Asfour, T., Kraft, D., Hommel, B., Agostini, A., Kragic, D., Eklundh, J.-O., Kruger, V., Torras, C., & Dillmann, R. (2009). A formal definition of object-action complexes and examples at different levels of the processing hierarchy. In Computer and Information Science (pp. 1–39).
Krüger, N., Geib, C., Piater, J., Petrick, R., Steedman, M., Wörgötter, F., et al. (2011). Object-Action Complexes: Grounded abstractions of sensory-motor processes. Robotics and Autonomous Systems, 59(10), 740–757.
Krunic, V., Salvi, G., Bernardino, A., Montesano, L., & Santos-Victor, J. (2009). Affordance based word-to-meaning association. In ICRA.
Kushmerick, N., Hanks, S., & Weld, D. S. (1995). An algorithm for probabilistic planning. Artificial Intelligence, 76(1–2), 239–286.
Lang, T., & Toussaint, M. (2009). Approximate inference for planning in stochastic relational worlds. In ICML (pp. 585–592).
Lang, T., & Toussaint, M. (2010). Planning with noisy probabilistic relational rules. Journal of Artificial Intelligence Research, 39, 1–49.
Lopes, M., Melo, F. S., & Montesano, L. (2007). Affordance-based imitation learning in robots. In IROS (pp. 1015–1021).
Lorken, C., & Hertzberg, J. (2008). Grounding planning operators by affordances. In International Conference on Cognitive Systems.
Lungarella, M., Metta, G., Pfeifer, R., & Sandini, G. (2003). Developmental robotics: A survey. Connection Science, 15, 151–190.
Mcdermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., & Wilkins, D. (1998). PDDL—The planning domain definition language, Technical report, CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control.
Metta, G., Sandini, G., Vernon, D., Natale, L., & Nori, F. (2008). The iCub humanoid robot: An open platform for research in embodied cognition. In PerMIS.
Metta, G., Fitzpatrick, P., & Natale, L. (2006). YARP: Yet another robot platform. International Journal of Advanced Robotic Systems, 3(1), 43–48.
Moldovan, B., & De Raedt, L. (2014). Learning relational affordance models for two-arm robots. In IROS.
Moldovan, B., Moreno, P., & van Otterlo, M. (2013). On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics. In ICRA.
Moldovan, B., Moreno, P., van Otterlo, M., Santos-Victor, J., & De Raedt, L. (2012). Learning relational affordance models for robots in multi-object manipulation tasks. In ICRA.
Montesano, L., Lopes, M., Bernardino, A., & Santos-Victor, J. (2008). Learning object affordances: From sensory-motor coordination to imitation. IEEE Transactions on Robotics, 24, 15–26.
Mourão, K., Petrick, R. P. A., & Steedman, M. (2010). Learning action effects in partially observable domains. In ECAI (pp. 973–974).
Murphy, K. P. (2002). Dynamic Bayesian networks: Representation, inference and learning, Ph.D. thesis, University of California, Berkeley.
Murphy, K., et al. (2001). The Bayes net toolbox for Matlab. Computing Science and Statistics, 33(2), 1024–1034.
Nitti, D., Belle, V., & De Raedt, L. (2015). Planning in discrete and continuous Markov decision processes by probabilistic programming. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD) 2015, Part II (Vol. 9285).
Nitti, D., De Laet, T., & De Raedt, L. (2014). Relational object tracking and learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (pp. 935–942).
Nitti, D., De Laet, T., & De Raedt, L. (2016). Probabilistic logic programming for hybrid relational domains. Machine Learning, 103(3), 407–449.
Nitti, D., De Laet, T., & De Raedt, L. (2013). A particle filter for hybrid relational domains. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2764–2771).
Pasula, H. M., Zettlemoyer, L. S., & Kaelbling, L. P. (2004). Learning probabilistic relational planning rules. In ICAPS (pp. 73–82).
Pattacini, U., Nori, F., Natale, L., Metta, G., & Sandini, G. (2010). An experimental evaluation of a novel minimum-jerk cartesian controller for humanoid robots. In IROS.
Şahin, E., Çakmak, M., Doğar, M. R., Uğur, E., & Üçoluk, G. (2007). To afford or not to afford: A new formalization of affordances towards affordance based robot control. Adaptive Behavior, 15(4), 447–472.
Sato, T. (1995). A statistical learning method for logic programs with distribution semantics. In ICLP (pp. 715–729).
Sinapov, J., & Stoytchev, A. (2007). Learning and generalization of behavior-grounded tool affordances. In ICDL (pp. 19–24).
Steedman, M. (2002). Formalizing affordance. In Annual Meeting of the Cognitive Science Society (pp. 834–839).
Stoytchev, A. (2005). Behavior-grounded representation of tool affordances. In ICRA (pp. 3060–3065).
Stulp, F., & Beetz, M. (2008). Combining declarative, procedural, and predictive knowledge to generate, execute, optimize robot plans. Robotics and Autonomous Systems, 56, 967–979.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.
Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge: MIT Press.
Toussaint, M., Plath, N., Lang, T., & Jetchev, N. (2010). Integrated motor control, planning, grasping and high-level reasoning in a blocks world using probabilistic inference. In ICRA.
Ugur, E., Dogar, M. R., Cakmak, M., & Sahin, E. (2007). The learning and use of traversability affordance using range images on a mobile robot. In ICRA (pp. 1721–1726).
Ugur, E., Sahin, E., & Oztop, E. (2009). Affordance learning from range data for multi-step planning. In International Conference on Epigenetic Robotics (EpiRob).
Vahrenkamp, N., Berenson, D., Asfour, T., Kuffner, J., & Dillmann, R. (2009). Humanoid motion planning for dual-arm manipulation and re-grasping tasks. In IROS, 2009 (pp. 2464–2470).
van Otterlo, M. (2009). The logic of adaptive behavior. Amsterdam: IOS Press.
Wiering, M. A., & van Otterlo, M. (2012). Reinforcement Learning: State-of-the-art. Berlin: Springer.
Wörgötter, F., Agostini, A., Krüger, N., Shylo, N., & Porr, B. (2009). Cognitive agents—A procedural perspective relying on the predictability of object-action complexes (OACs). Robotics and Autonomous Systems, 57, 420–432.
Younes, H. L., & Littman, M. L. (2004). PPDDL1.0: An extension to PDDL for expressing planning domains with probabilistic effects. Technical Report CMU-CS-04-162.
Zamani, Z., Sanner, S., & Fang, C. (2012). Symbolic dynamic programming for continuous state and action MDPs. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence.
Zettlemoyer, L. S., Pasula, H., & Kaelbling, L. P. (2005). Learning planning rules in noisy stochastic worlds. In AAAI (pp. 911–918).
Zöllner, R., Asfour, T., & Dillmann, R. (2004). Programming by demonstration: dual-arm manipulation tasks for humanoid robots. In IROS (pp. 479–484).
This work was partly supported by the EU FP7 Project FIRST-MM, the IWT via scholarships for BM and DN, the Research Foundation Flanders, and the KU Leuven BOF. It was also supported by Research Infrastructure 22084-01/SAICT/2016 - Robotics, Brain and Cognition (RBCog-Lab) and Project FCT UID/EEA/50009/2013.
Moldovan, B., Moreno, P., Nitti, D. et al. Relational affordances for multiple-object manipulation. Auton Robot 42, 19–44 (2018). https://doi.org/10.1007/s10514-017-9637-x
Keywords
- Relational affordances
- Probabilistic programming
- Object manipulation