Integrating Epistemic Action (Active Vision) and Pragmatic Action (Reaching): A Neural Architecture for Camera-Arm Robots

  • Dimitri Ognibene
  • Christian Balkenius
  • Gianluca Baldassarre
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5040)

Abstract

The active vision and attention-for-action frameworks propose that in organisms attention and perception are closely integrated with action and learning. This work proposes a novel bio-inspired integrated neural-network architecture that, on one side, uses attention to guide action and furnish its parameters and, on the other side, uses the effects of action to train the task-oriented top-down attention components of the system. The architecture is tested with both a simulated and a real camera-arm robot engaged in a reaching task. The results highlight the computational opportunities and difficulties that arise from a close integration of attention, action and learning.
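
The following is a minimal, self-contained Python sketch of the attention-action loop the abstract describes, not a reproduction of the paper's architecture: a learned top-down attention map biases where the system fixates (the epistemic action), the fixated location supplies the target of a reaching movement (the pragmatic action), and the reaching outcome is fed back as a reinforcement signal that trains the top-down map. The grid size, the toy scene, the multiplicative gating of bottom-up saliency, and the REINFORCE-like update with a fixed baseline are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's implementation): a toy loop in which a learned
# top-down attention map biases saccade selection (epistemic action), the fixated
# location supplies the target of a reach (pragmatic action), and the reach
# outcome trains the top-down map. All constants and the toy scene are assumptions.

GRID = 8        # resolution of the retinotopic saliency map (assumed)
ALPHA = 0.2     # learning rate for the top-down weights (assumed)
EPISODES = 500  # number of trials (assumed)

rng = np.random.default_rng(0)
top_down = np.zeros((GRID, GRID))  # learned task-oriented (top-down) saliency


def make_scene():
    """Toy scene: one target and one distractor, equally salient bottom-up.
    The target always falls in the right half of the map, a regularity the
    top-down component can learn to exploit."""
    target = (int(rng.integers(GRID)), int(rng.integers(GRID // 2, GRID)))
    distractor = (int(rng.integers(GRID)), int(rng.integers(0, GRID // 2)))
    bottom_up = np.zeros((GRID, GRID))
    bottom_up[target] = 1.0
    bottom_up[distractor] = 1.0
    return bottom_up, target


def select_fixation(bottom_up):
    """Epistemic action: combine bottom-up and top-down saliency and saccade
    to a location sampled from the resulting map."""
    saliency = bottom_up * np.exp(top_down)   # simple multiplicative gating
    saliency /= saliency.sum()
    idx = rng.choice(GRID * GRID, p=saliency.ravel())
    return tuple(int(i) for i in np.unravel_index(idx, (GRID, GRID)))


def reach(fixation, target):
    """Pragmatic action: reach toward the fixated location; reward 1 if it
    is the target, 0 otherwise."""
    return 1.0 if fixation == target else 0.0


for _ in range(EPISODES):
    scene, target = make_scene()
    fixation = select_fixation(scene)
    reward = reach(fixation, target)
    # Effects of action train the top-down attention component
    # (REINFORCE-like update with a fixed 0.5 baseline, an assumption).
    top_down[fixation] += ALPHA * (reward - 0.5)

print("mean top-down weight, left vs right half:",
      top_down[:, :GRID // 2].mean(), top_down[:, GRID // 2:].mean())
```

Over the episodes the top-down weights in the half of the map where targets actually occur become positive, so fixations, and with them reaches, become increasingly task-oriented; this is the kind of attention-action-learning coupling the abstract refers to, illustrated under the simplifying assumptions above.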

Keywords

Active Vision, Real Robot, Saccade Target, Frequent Sequence, Epistemic Action

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Dimitri Ognibene (1, 2)
  • Christian Balkenius (3)
  • Gianluca Baldassarre (1)

  1. Lab. of Autonomous Robotics and Artificial Life, Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (LARAL-ISTC-CNR), Roma, Italy
  2. DIST, Dip. di Informatica, Sistemistica e Telematica, Università di Genova, Genova, Italy
  3. Lund University Cognitive Science, Lund, Sweden
