Improving behavior arbitration using exploration and dynamic programming

  • Mohamed Salah Hamdi
  • Karl Kaiser
Applied Artificial Intelligence and Knowledge-Based Systems in Specific Domains: Connectionist and Hybrid AI Approaches to Manufacturing
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1416)

Abstract

This paper presents a self-improving reactive control system for autonomous agents. The design process consists of three main parts: first, building a self-organizing map and integrating the available knowledge about the system into the neural control structure; second, improving the performance of the agent with regard to each individual goal separately; and third, combining the obtained results to achieve an optimal overall behavior of the system. In this paper, the emphasis is on the second part. Improvement consists of identifying the dynamics of the environment through exploration and determining an optimal behavior selection policy using techniques of dynamic programming. We show the effectiveness of the improvement method and evaluate it through several simulation studies.
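The improvement step described above can be sketched as a two-phase procedure: estimate a transition and reward model from exploration data, then run value iteration (a standard dynamic-programming technique) to obtain a behavior-selection policy. The sketch below is illustrative only; the state representation, behavior set, and reward structure are assumptions for the example, not the paper's actual formulation.

```python
# Hedged sketch: tabular model estimation from exploration, followed by
# value iteration. States, behaviors, and rewards here are hypothetical.

def estimate_model(episodes):
    """Estimate P(s'|s,b) and mean reward R(s,b) from (s, b, r, s') tuples."""
    counts, reward_sum = {}, {}
    for s, b, r, s2 in episodes:
        counts.setdefault((s, b), {}).setdefault(s2, 0)
        counts[(s, b)][s2] += 1
        reward_sum[(s, b)] = reward_sum.get((s, b), 0.0) + r
    model = {}
    for (s, b), nexts in counts.items():
        n = sum(nexts.values())
        probs = {s2: c / n for s2, c in nexts.items()}
        model[(s, b)] = (probs, reward_sum[(s, b)] / n)
    return model

def value_iteration(model, states, behaviors, gamma=0.9, eps=1e-6):
    """Dynamic programming: compute V* and a greedy behavior-selection policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            qs = [r + gamma * sum(p * V[s2] for s2, p in probs.items())
                  for b in behaviors
                  if (s, b) in model
                  for probs, r in [model[(s, b)]]]
            if qs:
                v = max(qs)
                delta = max(delta, abs(v - V[s]))
                V[s] = v
        if delta < eps:
            break
    policy = {}
    for s in states:
        best = None
        for b in behaviors:
            if (s, b) in model:
                probs, r = model[(s, b)]
                q = r + gamma * sum(p * V[s2] for s2, p in probs.items())
                if best is None or q > best[0]:
                    best = (q, b)
        if best is not None:
            policy[s] = best[1]
    return V, policy
```

In this sketch, exploration supplies the `episodes` list; once the estimated model is good enough, the computed policy maps each state to the behavior maximizing expected discounted reward.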

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Mohamed Salah Hamdi (FB Informatik, Universitaet Hamburg, Hamburg, Germany)
  • Karl Kaiser (FB Informatik, Universitaet Hamburg, Hamburg, Germany)