Learning Obstacle Avoidance Behavior Using Multi-agent Learning with Fuzzy States
This paper proposes a method for learning obstacle avoidance behavior in an unknown environment. The robot learns this behavior by deliberately seeking collisions with possible obstacles. The field of view (FOV) of the robot's sensors is partitioned into five neighboring sectors, each associated with an agent that applies Q-learning over fuzzy states encoded as distance notions. The five agents recommend actions independently, and an arbitration mechanism generates the final action. After hundreds of collisions, the robot achieves collision-free navigation with a high success rate by integrating the goal information with the learned obstacle avoidance behavior. Simulation results verify the effectiveness of the proposal.
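The scheme described above can be sketched in code. This is a minimal illustration only, not the paper's implementation: the fuzzy thresholds, the three-action set, and the arbitration rule (deferring to the agent whose sector reports the nearest obstacle) are all assumptions made for the sketch, since the abstract does not specify them.

```python
import random

ACTIONS = ["turn_left", "go_straight", "turn_right"]  # assumed action set
FUZZY_STATES = ["near", "medium", "far"]              # distance notions

def fuzzify(distance, near=0.5, far=2.0):
    """Map a raw sector distance (thresholds are illustrative) to a fuzzy label."""
    if distance < near:
        return "near"
    if distance < far:
        return "medium"
    return "far"

class SectorAgent:
    """One Q-learning agent for one FOV sector, with a tabular Q over fuzzy states."""
    def __init__(self, alpha=0.2, gamma=0.9, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in FUZZY_STATES for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def recommend(self, state):
        if random.random() < self.epsilon:  # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(s_next, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])

def arbitrate(agents, states):
    """Hypothetical arbitration: defer to the agent facing the nearest obstacle."""
    order = {"near": 0, "medium": 1, "far": 2}
    critical = min(range(len(agents)), key=lambda i: order[states[i]])
    return agents[critical].recommend(states[critical])

# Usage: five agents, one per sector; a negative reward would follow a collision.
agents = [SectorAgent() for _ in range(5)]
sector_distances = [1.8, 0.3, 2.5, 1.0, 3.0]
states = [fuzzify(d) for d in sector_distances]
action = arbitrate(agents, states)
```

Training would repeat this sense-arbitrate-act loop, penalizing each agent whose sector was implicated in a collision, until the Q-tables converge.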