
Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention

  • Dimitri Ognibene
  • Eris Chinellato
  • Miguel Sarabia
  • Yiannis Demiris
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7375)

Abstract

Exploratory gaze movements are fundamental for gathering the most relevant information about a partner during social interactions. We have designed and implemented a dynamic attention-allocation system that actively controls gaze movements during a visual action recognition task. While observing a partner's reaching movement, the robot contextually estimates both the goal position of the partner's hand and the spatial locations of the candidate targets, directing its gaze so as to optimize the gathering of task-relevant information. Experimental results in a simulated environment show that active gaze control provides a significant advantage over typical passive observation, both in terms of estimation precision and of the time required for action recognition.
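The abstract does not spell out the underlying computation, but the mechanism it describes — saccading to whichever point is expected to reduce uncertainty about the reach goal the most — can be sketched as a greedy expected-information-gain policy. The sketch below is illustrative only: the discrete belief over candidate targets, the foveated binary-cue sensor model, and all function names and parameters are assumptions for the sake of the example, not the paper's implementation.

```python
import itertools
import numpy as np

# Minimal sketch of greedy information-gain gaze selection (illustrative
# assumptions throughout; not the authors' architecture).

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0.0]
    return float(-np.sum(p * np.log(p)))

def cue_accuracies(fixation, targets, sigma=0.6):
    """Assumed foveated sensor: each target yields a binary 'the hand is
    heading here' cue whose accuracy decays from ~0.95 at the fovea to
    chance (0.5) in the periphery."""
    d2 = np.sum((targets - fixation) ** 2, axis=1)
    return 0.5 + 0.45 * np.exp(-d2 / (2.0 * sigma ** 2))

def joint_likelihood(outcome, acc, n):
    """P(cue vector | goal = g) for every hypothesis g, with cues
    conditionally independent given the goal."""
    like = np.ones(n)
    for i, o in enumerate(outcome):
        # Cue i fires with prob acc[i] if target i is the goal,
        # and with prob 1 - acc[i] (a false positive) otherwise.
        p_fire = np.where(np.arange(n) == i, acc[i], 1.0 - acc[i])
        like *= p_fire if o == 1 else 1.0 - p_fire
    return like

def expected_information_gain(belief, fixation, targets):
    """Expected entropy reduction of the goal belief for one fixation."""
    n = len(targets)
    acc = cue_accuracies(fixation, targets)
    h_prior = entropy(belief)
    eig = 0.0
    for outcome in itertools.product((0, 1), repeat=n):
        like = joint_likelihood(outcome, acc, n)
        p_o = float(np.dot(belief, like))
        if p_o > 0.0:
            eig += p_o * (h_prior - entropy(belief * like / p_o))
    return eig

def choose_fixation(belief, targets, candidates):
    """Greedy active-gaze policy: saccade to the candidate point expected
    to shrink uncertainty about the reach goal the most."""
    gains = [expected_information_gain(belief, c, targets) for c in candidates]
    return candidates[int(np.argmax(gains))]

if __name__ == "__main__":
    targets = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    belief = np.full(3, 1.0 / 3.0)  # uniform prior over candidate goals
    # Candidate fixations: the targets plus their centroid as a compromise.
    candidates = list(targets) + [targets.mean(axis=0)]
    print("next fixation:", choose_fixation(belief, targets, candidates))
```

A one-step greedy policy of this kind ignores saccade costs and longer planning horizons; framing the same problem as a POMDP would trade those off explicitly, at a much higher computational cost.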

Keywords

active vision · social interaction · humanoid robots · attentive systems · information gain

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Dimitri Ognibene
  • Eris Chinellato
  • Miguel Sarabia
  • Yiannis Demiris
  1. Imperial College London, London, UK
