Learning to Grasp Information with Your Own Hands

  • Dimitri Ognibene
  • Nicola Catenacci Volpi
  • Giovanni Pezzulo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6856)

Abstract

Autonomous robots immersed in a complex world can seldom access all task-relevant parts of the environment through their sensors alone. Indeed, finding the information needed for a task can require executing actions that explicitly aim at unveiling previously hidden information. The informativeness of an action depends strongly on the current environment and task, not only on the architecture of the agent. An autonomous adaptive agent therefore has to learn to exploit the epistemic (i.e., information-gathering) implications of actions that, unlike the orientation of sensors, are not architecturally designed to acquire information. The selection of these actions cannot be hardwired as general-purpose information-gathering actions because, unlike sensor-control actions, they can have effects on the environment and can affect task execution. In robotics, information-gathering actions have been used in navigation [7], active vision [4], and manipulation [3]. In all these works the informative value of each action was known and exploited at design time, while the problem of actively facing unpredicted state uncertainty has not received much attention.
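In belief-state settings such as POMDPs [1], the epistemic value of an action is often quantified as the expected reduction in belief entropy its observations yield. The following is a minimal illustrative sketch of that computation, not code from the paper; the scenario, state names, and observation models are assumptions chosen for illustration:

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over hidden states."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def belief_update(belief, obs, obs_model):
    """Bayes update: P(s | o) is proportional to P(o | s) * P(s)."""
    posterior = {s: obs_model[s][obs] * p for s, p in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

def expected_info_gain(belief, obs_model, observations):
    """Expected entropy reduction from taking an action with this
    observation model, averaged over the observations it may produce."""
    h_prior = entropy(belief)
    gain = 0.0
    for o in observations:
        p_o = sum(obs_model[s][o] * p for s, p in belief.items())
        if p_o > 0:
            gain += p_o * (h_prior - entropy(belief_update(belief, o, obs_model)))
    return gain

# Hypothetical scenario: an object is hidden either "left" or "right".
belief = {"left": 0.5, "right": 0.5}
# A "peek" action produces observations that discriminate the states well...
peek = {"left": {"L": 0.9, "R": 0.1}, "right": {"L": 0.1, "R": 0.9}}
# ...while a "blind" action produces uninformative observations.
blind = {s: {"L": 0.5, "R": 0.5} for s in belief}

print(expected_info_gain(belief, peek, ["L", "R"]))   # about 0.53 bits
print(expected_info_gain(belief, blind, ["L", "R"]))  # 0.0 bits
```

The point the abstract makes is that in open-ended settings such observation models are not known at design time, so the agent must learn which of its ordinary, environment-altering actions behave like the "peek" action above.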

Keywords

Reinforcement Learning · Belief State · Autonomous Robot · Epistemic Logic · Action Entropy

References

  1. Cassandra, A.R.: Exact and Approximate Algorithms for Partially Observable Markov Decision Processes. Ph.D. thesis, Brown University (1998)
  2. Herzig, A., Lang, J., Marquis, P.: Action representation and partially observable planning in epistemic logic. In: Proc. of IJCAI 2003 (2003)
  3. Hsiao, K., Kaelbling, L.P., Lozano-Perez, T.: Task-driven tactile exploration. In: Proceedings of Robotics: Science and Systems, RSS (2010)
  4. Kwok, C., Fox, D.: Reinforcement learning for sensing strategies. In: Proceedings of IROS 2004 (2004)
  5. McCallum, A.: Efficient exploration in reinforcement learning with hidden state. In: AAAI Fall Symposium on Model-directed Autonomous Systems (1997)
  6. Ognibene, D., Pezzulo, G., Baldassarre, G.: How can bottom-up information shape learning of top-down attention control skills? In: ICDL 2010 (2010)
  7. Roy, N., Thrun, S.: Coastal navigation with mobile robots. In: Advances in Neural Information Processing Systems, vol. 12 (2000)
  8. Whitehead, S., Lin, L.: Reinforcement learning of non-Markov decision processes. Artificial Intelligence 73(1-2), 271–306 (1995)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Dimitri Ognibene (1)
  • Nicola Catenacci Volpi (3)
  • Giovanni Pezzulo (2)
  1. Intelligent System Networks, Imperial College London, London, UK
  2. CNR, Istituto di Linguistica Computazionale “Antonio Zampolli”, Italy
  3. IMT Institute for Advanced Studies, Lucca, Italy
