On-Line Learning of the Visuomotor Transformations on a Humanoid Robot

  • Marco Antonelli
  • Eris Chinellato
  • Angel P. Del Pobil
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 193)

Abstract

In infant primates, looking and reaching to the same target are combined to establish an implicit sensorimotor representation of the peripersonal space. This representation is built incrementally by linking together correlated signals. Moreover, the map is not learned all at once, but in an order determined by the temporal dependencies between the different modalities, an order imposed by the choice of vision as the master signal. Indeed, visual feedback is used both to correct gazing movements and to improve eye-arm coordination. Inspired by these observations, we have developed a framework for building and maintaining an implicit sensorimotor map of the environment. In this work we show how this framework can be extended to allow a humanoid robot to update on-line the sensorimotor transformations among visual, oculomotor and arm-motor cues.
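
To make the kind of on-line update described above concrete, the sketch below shows a delta-rule adjustment of a radial-basis-function map from oculomotor cues (eye pan, tilt, vergence) to arm-motor coordinates, driven by the arm posture observed when the robot gazes at and reaches the same target. This is a minimal illustration under assumed dimensions, centers, and learning rate, not the authors' implementation; all names and values are hypothetical.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's implementation): an on-line
# delta-rule update of a radial-basis-function map from oculomotor cues
# (pan, tilt, vergence) to arm-motor coordinates.

class SensorimotorMap:
    def __init__(self, centers, sigma=0.2, n_out=4, lr=0.05):
        self.centers = centers                     # (K, 3) RBF centers over oculomotor space (assumed)
        self.sigma = sigma                         # RBF width (assumed)
        self.W = np.zeros((n_out, len(centers)))   # linear readout weights, start at zero
        self.lr = lr                               # delta-rule learning rate (assumed)

    def _phi(self, eye_pos):
        # Gaussian radial-basis activations for a 3-D oculomotor cue
        d2 = np.sum((self.centers - eye_pos) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, eye_pos):
        # Arm-motor command predicted for the currently fixated target
        return self.W @ self._phi(eye_pos)

    def update(self, eye_pos, arm_observed):
        # Delta rule: when the robot both gazes at and reaches the same
        # target, the observed arm posture serves as the teaching signal.
        phi = self._phi(eye_pos)
        err = arm_observed - self.W @ phi
        self.W += self.lr * np.outer(err, phi)
        return err

if __name__ == "__main__":
    # One on-line training step with made-up values.
    rng = np.random.default_rng(0)
    m = SensorimotorMap(centers=rng.uniform(-1, 1, size=(50, 3)))
    eye = np.array([0.1, -0.2, 0.3])        # pan, tilt, vergence
    arm = np.array([0.5, 0.1, -0.3, 0.2])   # observed joint angles
    print(m.update(eye, arm))               # residual shrinks as the step is repeated
```

Repeating such updates for many gazed-and-reached targets drives the residual toward zero, which reflects the incremental, on-line character of the mapping described in the abstract.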

Keywords

Humanoid Robot · Training Point · Peripersonal Space · Delta Rule · Visual Position

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Marco Antonelli (corresponding author) 1
  • Eris Chinellato 2
  • Angel P. Del Pobil 1, 3
  1. Robotic Intelligence Laboratory, Jaume I University, Castellón de la Plana, Spain
  2. Department of Electrical and Electronic Engineering, Imperial College, London, UK
  3. Department of Interaction Science, Sungkyunkwan University, Seoul, South Korea
