Gaze Allocation Analysis for a Visually Guided Manipulation Task

  • Jose Nunez-Varela
  • Balaraman Ravindran
  • Jeremy L. Wyatt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7426)

Abstract

Findings from eye movement research in humans have demonstrated that the task determines where to look. One hypothesis is that the purpose of looking is to reduce uncertainty about properties relevant to the task. Following this hypothesis, we define a model that poses the problem of where to look as one of maximising task performance by reducing task-relevant uncertainty. We implement and test our model on a simulated humanoid robot that has to move objects from a table into containers. Our model outperforms, and is more robust than, two baseline gaze allocation schemes in terms of task performance whilst varying three environmental conditions: reach/grasp sensitivity, observation noise, and the camera's field of view.
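The core computation the abstract describes, choosing the fixation that most reduces uncertainty about task-relevant state, can be illustrated with a minimal sketch. The Python fragment below assumes one-dimensional Gaussian position beliefs and fixed per-target relevance weights; the target names, variances, and weights are hypothetical illustrations, not the paper's actual reinforcement-learning formulation.

```python
# Minimal sketch of uncertainty-driven gaze selection (illustrative only).
# Assumes a 1-D Gaussian belief over each target's position; looking yields
# one noisy observation, updated with the standard Kalman/Gaussian rule.

def expected_uncertainty_reduction(prior_var, obs_noise_var):
    """Variance removed by one Gaussian observation:
    posterior_var = prior_var * obs_noise_var / (prior_var + obs_noise_var)."""
    posterior_var = prior_var * obs_noise_var / (prior_var + obs_noise_var)
    return prior_var - posterior_var

def choose_gaze_target(targets):
    """Look at the target whose observation most reduces task-relevant
    uncertainty, i.e. expected variance reduction weighted by how much the
    current manipulation action depends on that target."""
    return max(
        targets,
        key=lambda t: t["task_relevance"]
        * expected_uncertainty_reduction(t["prior_var"], t["obs_noise_var"]),
    )

# Hypothetical example: while reaching for an object, its pose is both more
# uncertain and more relevant to the imminent grasp than the container's.
targets = [
    {"name": "object",    "prior_var": 4.0, "obs_noise_var": 1.0, "task_relevance": 1.0},
    {"name": "container", "prior_var": 0.5, "obs_noise_var": 1.0, "task_relevance": 0.6},
]
print(choose_gaze_target(targets)["name"])  # -> object
```

Under these assumptions the robot fixates the object rather than the container; the paper's full model instead learns such choices by maximising task reward.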

Keywords

Gaze control · Reinforcement learning · Decision making

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jose Nunez-Varela (1)
  • Balaraman Ravindran (2)
  • Jeremy L. Wyatt (1)
  1. School of Computer Science, University of Birmingham, Birmingham, UK
  2. Department of Computer Science and Engineering, IIT Madras, Chennai, India
