
Learning Obstacle Avoidance Behavior Using Multi-agent Learning with Fuzzy States

  • Ming Lin
  • Jihong Zhu
  • Zengqi Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3192)

Abstract

This paper presents an approach to learning obstacle avoidance behavior in an unknown environment. The robot learns this behavior by actively seeking collisions with potential obstacles. The field of view (FOV) of the robot's sensors is partitioned into five neighboring sectors, each associated with an agent that applies Q-learning over fuzzy states encoded in terms of distance. The five agents recommend actions independently, and an arbitration mechanism generates the final action. After hundreds of collisions, the robot achieves collision-free navigation with a high success rate by integrating the goal information with the learned obstacle avoidance behavior. Simulation results verify the effectiveness of our proposal.
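
The paper itself contains no code; the following Python sketch only illustrates the architecture the abstract describes: one Q-learning agent per FOV sector, with raw sensor distances fuzzified into discrete states and a final action chosen by arbitration. The fuzzy labels, distance thresholds, reward scheme, and the nearest-obstacle arbitration rule are assumptions made for illustration; the paper's actual membership functions and arbitration mechanism are defined in the full text.

    import random
    from collections import defaultdict

    # Fuzzy distance labels used as discrete Q-learning states (assumed
    # labels; the paper codifies states "in distance notions").
    FUZZY_STATES = ("near", "medium", "far")
    ACTIONS = ("turn_left", "go_straight", "turn_right")

    def fuzzify(distance, near=0.5, far=2.0):
        """Map a raw sensor distance to a fuzzy label (thresholds assumed)."""
        if distance < near:
            return "near"
        if distance < far:
            return "medium"
        return "far"

    class SectorAgent:
        """One Q-learning agent per FOV sector (five agents in the paper)."""

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)  # (state, action) -> Q-value
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, state):
            # Epsilon-greedy recommendation for this sector.
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(state, a)])

        def update(self, s, a, reward, s_next):
            # Standard one-step Q-learning update.
            best_next = max(self.q[(s_next, b)] for b in ACTIONS)
            self.q[(s, a)] += self.alpha * (
                reward + self.gamma * best_next - self.q[(s, a)])

    def arbitrate(agents, states):
        """Hypothetical arbitration: the agent watching the nearest obstacle
        dictates the final action; a plausible stand-in, not the paper's rule."""
        danger = {"near": 0, "medium": 1, "far": 2}
        i = min(range(len(states)), key=lambda k: danger[states[k]])
        return i, agents[i].act(states[i])

    # Training sketch: five sector agents, reward of -1 on collision and 0
    # otherwise (an assumed reward scheme consistent with learning by
    # seeking to collide with possible obstacles).
    agents = [SectorAgent() for _ in range(5)]

During training, one reasonable credit-assignment scheme (again, an assumption) is to update only the agent whose recommendation was executed; the goal information is then integrated with the learned behavior to produce collision-free navigation.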


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Ming Lin (1)
  • Jihong Zhu (1)
  • Zengqi Sun (1)

  1. Department of Computer Science and Technology, Tsinghua University, Beijing, P.R. China
