
Artificial Life and Robotics, Volume 18, Issue 1–2, pp 95–103

Bayesian learning of tool affordances based on generalization of functional feature to estimate effects of unseen tools

  • Raghvendra Jain
  • Tetsunari Inamura
Original Article

Abstract

To address the problem of estimating the effects of unknown tools, we propose a novel concept of tool representation based on the functional features of the tool. We argue that functional features remain distinctive and invariant across different tools used for performing similar tasks. Such a representation can be used to estimate the effects of unknown tools that share similar functional features. To learn how tools can be used to physically alter the environment, a robot should be able to reason about its capability to act, the representation of available tools, and the effects of manipulating those tools. To enable a robot to perform such reasoning, we present a novel approach, called Tool Affordances, to learn bi-directional causal relationships between actions, functional features, and the effects of tools. A Bayesian network is used to model tool affordances because of its capability to model probabilistic dependencies among these variables. To evaluate the learnt tool affordances, we conducted an inference test in which a robot inferred the functional features suitable for realizing certain effects (including novel effects) from a given action. The results show that the generalization of functional features enables the robot to estimate the effects of unknown tools that have similar functional features. We validate the accuracy of the estimation through error analysis.
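The model described above can be summarized concretely. The sketch below is a minimal, hypothetical illustration rather than the authors' implementation: it builds a discrete Bayesian network in which an action A and a tool's functional feature F are parents of the observed effect E, learns the conditional probability table from toy interaction data, and performs the bi-directional inference the abstract mentions (inferring a suitable functional feature from a given action and desired effect). All state names and trial data are invented placeholders.

```python
# Minimal sketch (assumed structure A -> E <- F, hypothetical states/data),
# not the authors' implementation.
from collections import Counter

ACTIONS  = ["pull", "push"]
FEATURES = ["hook", "flat_edge"]          # functional features of tools
EFFECTS  = ["moved_closer", "moved_away"]

# Hypothetical interaction trials: (action, functional feature, effect).
trials = [
    ("pull", "hook", "moved_closer"), ("pull", "hook", "moved_closer"),
    ("pull", "flat_edge", "moved_away"), ("push", "hook", "moved_away"),
    ("push", "flat_edge", "moved_away"), ("pull", "hook", "moved_closer"),
]
counts = Counter(trials)

def p_effect(e, a, f):
    """CPT entry P(E=e | A=a, F=f), maximum likelihood with Laplace smoothing."""
    num = counts[(a, f, e)] + 1
    den = sum(counts[(a, f, e2)] for e2 in EFFECTS) + len(EFFECTS)
    return num / den

def prior(values, index):
    """Smoothed marginal over a root node (A or F) estimated from the trials."""
    c = Counter(t[index] for t in trials)
    return {v: (c[v] + 1) / (len(trials) + len(values)) for v in values}

p_feature = prior(FEATURES, 1)

def infer_feature(action, effect):
    """Bi-directional use of the network: P(F | A, E) via Bayes' rule.
    A and F are root nodes, so P(F | A, E) is proportional to P(E | A, F) P(F)."""
    scores = {f: p_effect(effect, action, f) * p_feature[f] for f in FEATURES}
    z = sum(scores.values())
    return {f: s / z for f, s in scores.items()}

# Which functional feature should a tool have to pull an object closer?
print(infer_feature("pull", "moved_closer"))
```

Because the network is conditioned on the functional feature rather than on the identity of a specific tool, an unseen tool with, say, a hook-like feature inherits the effect predictions learned for "hook". This is the generalization step the abstract describes.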

Keywords

Tool manipulation · Tool affordances · Bayesian networks · Probabilistic modeling · Functional features of tools


Copyright information

© ISAROB 2013

Authors and Affiliations

  1. Department of Informatics, The Graduate University for Advanced Studies (Sokendai), Tokyo, Japan
  2. National Institute of Informatics, Tokyo, Japan
