Autonomous Robots, Volume 21, Issue 1, pp 3–14

How a mobile robot selects landmarks to make a decision based on an information criterion

Abstract

Most current mobile robots determine their actions according to their positions, so they must localize themselves before making a decision; their observation strategies are therefore designed mainly for self-localization. However, observation strategies should serve not only self-localization but also decision making. We propose an observation strategy that enables a mobile robot equipped with a camera of limited viewing angle to make decisions without self-localization. The robot makes decisions based on a decision tree and on prediction trees of observations, both constructed from its experiences. The trees are built according to an information criterion for the action decision, not for self-localization or state estimation. Experimental results with a four-legged robot are shown and discussed.
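To make the "information criterion for the action decision" concrete, below is a minimal, illustrative sketch (not the authors' implementation): it selects the landmark observation whose value most reduces the entropy of the recorded action labels, in the spirit of C4.5-style information gain. The data format, landmark names, and action labels are all hypothetical.

```python
# Minimal sketch (assumed data format, not the authors' code): pick the
# landmark observation that is most informative for the action decision,
# measured by information gain over recorded action labels.

import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy (in bits) of a list of action labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(experiences, landmark):
    """Reduction in action-label entropy from splitting on one landmark's observed value."""
    actions = [action for _, action in experiences]
    groups = defaultdict(list)
    for features, action in experiences:
        groups[features[landmark]].append(action)
    remainder = sum(len(g) / len(actions) * entropy(g) for g in groups.values())
    return entropy(actions) - remainder

def best_landmark(experiences, landmarks):
    """Choose the landmark whose observation best discriminates between actions."""
    return max(landmarks, key=lambda lm: information_gain(experiences, lm))

# Hypothetical toy experiences: discretized landmark observations and the action taken.
experiences = [
    ({"goal": "left",  "ball": "near"}, "turn_left"),
    ({"goal": "right", "ball": "near"}, "turn_right"),
    ({"goal": "left",  "ball": "far"},  "turn_left"),
    ({"goal": "right", "ball": "far"},  "turn_right"),
]
print(best_landmark(experiences, ["goal", "ball"]))  # -> "goal"
```

Splitting recursively on the winning landmark would grow the kind of decision tree the abstract describes; the prediction trees of observations could be induced analogously.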

Keywords

Decision making · Decision tree · Information criterion · Observation strategy · Active perception



Copyright information

© Springer Science + Business Media, LLC 2006

Authors and Affiliations

  1. Intelligent Robotics and Communication Laboratories, Advanced Telecommunications Research Institute International, Kyoto, Japan
  2. Emergent Robotics Area, Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Japan
  3. Handai Frontier Research Center, Osaka, Japan
